Modeling Neural Population Spiking Activity with Gibbs Distributions

Frank Wood, Stefan Roth, and Michael J. Black
Department of Computer Science, Brown University, Providence, RI 02912
{fwood,roth,black}@cs.brown.edu

Abstract

Probabilistic modeling of correlated neural population firing activity is central to understanding the neural code and building practical decoding algorithms. No parametric models currently exist for modeling multivariate correlated neural data, and the high-dimensional nature of the data makes fully non-parametric methods impractical. To address these problems we propose an energy-based model in which the joint probability of neural activity is represented using learned functions of the 1D marginal histograms of the data. The parameters of the model are learned using contrastive divergence and an optimization procedure for finding appropriate marginal directions. We evaluate the method using real data recorded from a population of motor cortical neurons. In particular, we model the joint probability of population spiking times and 2D hand position and show that the likelihood of test data under our model is significantly higher than under other models. These results suggest that our model captures correlations in the firing activity. Our rich probabilistic model of neural population activity is a step towards both measurement of the importance of correlations in neural coding and improved decoding of population activity.

1 Introduction

Modeling population activity is central to many problems in the analysis of neural data. Traditional methods of analysis have used single cells and simple stimuli to make the problems tractable. Current multi-electrode technology, however, allows the activity of tens or hundreds of cells to be recorded simultaneously along with complex natural stimuli or behavior. Probabilistic modeling of this data is challenging due to its high-dimensional nature and the correlated firing activity of neural populations.
One can view the problem as one of learning the joint probability P(s, r) of a stimulus or behavior s and the firing activity of a neural population r. The neural activity may be in the form of firing rates or spike times. Here we focus on the latter, more challenging, problem of representing a multivariate probability distribution over spike times. Modeling P(s, r) is made challenging by the high-dimensional, correlated, and non-Gaussian nature of the data. The dimensionality means that we are unlikely to have sufficient training data for a fully non-parametric model. On the other hand, no parametric models currently exist that capture the one-sided, skewed nature of typical correlated neural data. We do, however, have sufficient data to model the marginal statistics of the data.

With that observation we draw on the FRAME model developed by Zhu and Mumford for image texture synthesis [1] to represent neural population activity. The FRAME model represents P(s, r) in terms of its marginal histograms. In particular, we seek the maximum entropy distribution that matches the observed marginals of P(s, r). The joint is represented by a Gibbs model that combines functions of these marginals, and we exploit the method of [2] to automatically choose the optimal marginal directions. To learn the parameters of the model we exploit the technique of contrastive divergence [3, 4], which has been used previously to learn the parameters of Product-of-Experts (PoE) models [5]. We observe that the FRAME model can be viewed as a Product of Experts where the experts are functions of the marginal histograms. The resulting model is more flexible than the standard PoE formulation and allows us to model the more complex, skewed distributions observed in neural data. We train and test the model on real data recorded from a monkey performing a motor control task; details of the task and the neural data are described in the following section.
We learn a variety of probabilistic models including full Gaussian, independent Gaussian, product of t-distributions [4], independent non-parametric, and the FRAME model. We evaluate the log likelihood of test data under the different models and show that the complete FRAME model outperforms the other methods (note that "complete" here means the model uses the same number of marginal directions as there are dimensions in the data). The use of energy-based models such as FRAME for modeling neural data appears novel and promising, and the results reported here are easily extended to other cortical areas. There is a need in the community for such probabilistic models of multivariate spiking processes. For example, Bell and Parra [6] formulate a simple model of correlated spiking but acknowledge that what they would really like, and do not have, is what they call a "maximum spikelihood" model. This neural modeling problem represents a new application of energy-based models and consequently suggests extensions of the basic methods. Finally, there is a need for rich probabilistic models of this type in the Bayesian decoding of neural activity [7].

2 Methods

The data used in this study consist of simultaneously recorded spike times from a population of M1 motor neurons in monkeys trained to perform a manual tracking task [8, 9]. The monkey viewed a computer monitor displaying a target and a feedback cursor. The task involved moving a 2D manipulandum so that a cursor controlled by the manipulandum came into contact with a target. When the target was acquired the monkey was rewarded, a new target appeared, and the process repeated. Several papers [9, 10, 11] have reported successfully decoding the cursor kinematics from this data using firing rates estimated from binned spike counts.
The activity of a population of cells was recorded at a rate of 30kHz and then sorted using an automated spike sorting method; from this we randomly selected five cells with which to demonstrate our method. As shown in Fig. 1, r_{i,k} = [t^{(1)}_{i,k}, t^{(2)}_{i,k}, ..., t^{(J)}_{i,k}] is a vector of time intervals t^{(j)}_{i,k} that represents the spiking activity of a single cell i at timestep k. These intervals are the elapsed time between timestep k and each of the j past spikes. Let R_k = [r_{1,k}, r_{2,k}, ..., r_{N,k}] be a vector concatenation of N such spiking activity representations. Let s_k = [x_k, y_k] be the position of the manipulandum at each timestep.

Figure 1: Representation of the data. Hand position at time k, s_k = [x_k, y_k], is regularly sampled every 50ms. Spiking activity (shown as vertical bars) is retained at full data acquisition precision (30kHz). Sections of spike trains from four cells are shown. The response of a single cell, i, is represented by the time intervals to the three preceding spikes; that is, r_{i,k} = [t^{(1)}_{i,k}, t^{(2)}_{i,k}, t^{(3)}_{i,k}].

Our training data consists of 4000 points (R_k, s_k) sampled at 50ms intervals with a history of J = 3 past spikes per neuron. Our test data is 1000 points of the same form. Various empirical marginals of the data (shown in Fig. 2) illustrate that the data are not well fit by canonical symmetric parametric distributions because the data are asymmetric and skewed. For such data traditional parametric models may not work well, so we instead apply the FRAME model of [1] to this modeling problem. FRAME is a semi-parametric energy-based model of the following form. Let d_k = [s_k, R_k], where s_k and R_k are defined as above, and let D = [d_1, ..., d_N] be a matrix of N such points.
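As a concrete illustration, the interval representation r_{i,k} can be computed from raw spike times as in the following sketch. The function name and the padding convention (np.inf when fewer than J spikes precede the sample time) are our own assumptions, not from the paper:

```python
import numpy as np

def spike_intervals(spike_times, t_k, J=3):
    """Intervals from sample time t_k back to the J most recent spikes.

    spike_times: sorted 1-D array of spike times (seconds) for one cell.
    Returns r_{i,k} = [t^(1), ..., t^(J)], most recent spike first,
    padded with np.inf if fewer than J spikes precede t_k (our convention).
    """
    past = spike_times[spike_times <= t_k]
    r = t_k - past[::-1][:J]          # elapsed time to each past spike
    if r.size < J:
        r = np.concatenate([r, np.full(J - r.size, np.inf)])
    return r

# Example: one cell's spikes, sampled at t_k = 0.40 s
spikes = np.array([0.05, 0.12, 0.31, 0.38])
print(spike_intervals(spikes, 0.40))  # intervals to the 3 preceding spikes
```

Stacking such vectors for N cells, together with s_k, yields the point d_k = [s_k, R_k] used below.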
We define

P(d_k) = (1 / Z(Θ)) exp( − Σ_e λ_e^T φ(ω_e^T d_k) )   (1)

where ω_e is a vector that projects the datum d_k onto a 1-D subspace, φ : R → I^b is a "histogramming" function that produces a vector with a single 1 in a single bin per datum according to the projected value of that datum, λ_e ∈ R^b is a weight vector, Z is a normalization constant sometimes called the partition function (as it is a function of the model parameters), b is the granularity of the histogram, and e indexes the "experts". Taken together, λ_e^T φ(·) can be thought of as a discrete representation of a function. In this view λ_e^T φ(ω_e^T d_k) is an energy function computed over a projection of the data. Models of this form are constrained maximum entropy models; in this case, by adjusting λ_e the model marginal over the projection onto ω_e is constrained to be (ideally) identical to the empirical marginal over the same projection. Fig. 3 illustrates the model. To relate this to current PoE models, if λ_e^T φ(·) were replaced with a log Student-t function then this FRAME model would take the same form as the Product-of-Student-t formulation of [12]. Distributions of this form are called Gibbs or energy-based distributions, as Σ_e λ_e^T φ(ω_e^T d_k) is analogous to the energy in a Boltzmann distribution. Minimizing this energy is equivalent to maximizing the log likelihood.

Figure 2: Histograms of various projections of single cell data. The top row shows histograms of the values of t^(1), t^(2), t^(3), x, and y respectively. The bottom row shows random projections of the same data. All these figures illustrate skew or one-sidedness, and motivate our choice of a semi-parametric Gibbs model.

Figure 3: (left) Illustration of the projection and weighting of a single point d: the data point d is projected onto the projection direction ω. The isosurfaces of a hypothetical distribution p(d) are shown in dotted gray. (right) Illustration of the projection and binning of d: the upper plot shows the empirical marginal (in dotted gray) as obtained from the projection illustrated in the left figure. The function φ(·) takes a real-valued projection and produces a vector of fixed length with a single 1 in the bin that is mapped to that range of the projection. This discretization of the projection is indicated by the spacing of the downward pointing arrows. The resulting vector is weighted by λ to produce an energy. This process is repeated for each of the projection directions in the model. The constraints induced by multiple projections result in a distribution very close to the empirical distribution.

Our model is parameterized by Θ = {{λ_e, ω_e} : 1 ≤ e ≤ E} where E is the total number of projections (or "experts"). We use gradient ascent on the log likelihood to train the λ_e's. As φ(·) is not differentiable, the ω_e's must be specified or learned in another way.

2.1 Learning the λ's

Standard gradient ascent becomes intractable for large numbers of cells because computing the partition function and its gradient becomes intractable. The gradient of the log probability with respect to λ_{1..E} is

∇_λ log P(d_k) = [∂ log P(d_k)/∂λ_1, ..., ∂ log P(d_k)/∂λ_E].   (2)

Besides not being able to normalize the distribution, the right-hand term of the partial

∂ log P(d_k)/∂λ_e = φ(ω_e^T d_k) − ∂ log Z(Θ)/∂λ_e

typically has no closed-form solution and is very hard to compute. Markov chain Monte Carlo (MCMC) techniques can be used to learn such models. Contrastive divergence [4] is an efficient learning algorithm for energy-based models that approximates the gradient as

∂ log P(d_k)/∂λ_e ≈ ⟨∂ log P(d_k)/∂λ_e⟩_{P^0} − ⟨∂ log P(d_k)/∂λ_e⟩_{P^m_Θ}   (3)

where P^0 is the training data and P^m_Θ are samples drawn according to the model.
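For this model, one contrastive-divergence update of the λ_e's reduces to comparing binned projection statistics of the training data with those of model samples. A minimal sketch follows; the `sampler` argument is a hypothetical stand-in for the m-step Metropolis sampler, and all names, the shared bin edges, and the learning rate are our own assumptions:

```python
import numpy as np

def phi_counts(data, omega, edges, n_bins):
    """Average of the one-hot features phi(omega^T d) over a dataset."""
    proj = data @ omega
    bins = np.clip(np.digitize(proj, edges) - 1, 0, n_bins - 1)
    return np.bincount(bins, minlength=n_bins) / len(data)

def cd_lambda_step(lambdas, omegas, edges, data, sampler, lr=0.1):
    """One contrastive-divergence ascent step for all lambda_e.

    With the energy sum_e lambda_e^T phi(omega_e^T d) of Eq. (1),
    ascent raises lambda (i.e. the energy) in bins where model samples
    outnumber training data, pushing model mass back toward the data.
    sampler(...) is a hypothetical MCMC routine started at the data.
    """
    samples = sampler(lambdas, omegas, edges, data)
    n_bins = lambdas.shape[1]
    new = lambdas.copy()
    for e, omega in enumerate(omegas):
        grad = (-phi_counts(data, omega, edges, n_bins)
                + phi_counts(samples, omega, edges, n_bins))
        new[e] += lr * grad
    return new
```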
The key is that the sampler is started at the training data and does not need to be run until convergence, which typically would take much more time. The superscript indicates that we use m regular Metropolis sampling steps [13] to draw samples from the model for contrastive divergence training (m = 50 in our experiments). The intuition behind this approximation is that samples drawn from the model should have the same statistics as the training data. Maximizing the log probability of training data is equivalent to minimizing the Kullback-Leibler (KL) divergence between the model and the true distribution. Contrastive divergence attempts to minimize the difference in KL divergence between the model run one step towards equilibrium and the training data. Intuitively this means that contrastive divergence opposes any tendency of the model to diverge from the true distribution.

2.2 Learning the ω's

Because φ(·) is not differentiable, we turn to the feature pursuit method of [2] to learn the projection directions ω_{1..E}. This approach involves successively searching for a new projection in a direction where a model with the new projection would differ maximally from the model without it. Their approach approximates the expected projection using a Parzen window method with Gaussian kernels. Gradient search on a KL-divergence objective function is used to find each subsequent projection. We refer readers to [2] for details. It was suggested in [2] that there are many local optima in this feature pursuit. Our experience tends to support this claim. In fact, it may be that feature pursuit is not entirely necessary. Additionally, in our experience, the most important aspect of the feature selection algorithm is how many feature pursuit starting points are considered. It may be as effective (and certainly more efficient) to simply guess a large number of projections, estimate the marginal KL-divergence for each, and select the largest as the new projection.
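The simpler alternative suggested above (guess many random projections, estimate the marginal KL divergence for each between data and model samples, keep the largest) can be sketched as follows. The function names, bin count, and smoothing constant are assumptions for illustration:

```python
import numpy as np

def marginal_kl(data, samples, omega, n_bins=20, eps=1e-6):
    """KL(empirical || model) between the 1-D marginals along omega."""
    pd = data @ omega
    ps = samples @ omega
    edges = np.histogram_bin_edges(np.concatenate([pd, ps]), bins=n_bins)
    p, _ = np.histogram(pd, bins=edges)
    q, _ = np.histogram(ps, bins=edges)
    p = p / p.sum() + eps          # smooth to avoid log(0)
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

def pick_projection(data, samples, n_candidates=500, seed=0):
    """Guess many random unit directions; keep the one whose marginal
    differs most (largest KL) between data and model samples."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    best_w, best_kl = None, -np.inf
    for _ in range(n_candidates):
        w = rng.standard_normal(dim)
        w /= np.linalg.norm(w)
        kl = marginal_kl(data, samples, w)
        if kl > best_kl:
            best_w, best_kl = w, kl
    return best_w, best_kl
```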
2.3 Normalizing the distribution

Generally speaking, the partition function is intractable to compute as it involves integration over the entire domain of the joint; however, when E (the number of experts) equals the dimensionality of d, the partition function is tractable because each expert can be normalized individually. The per-expert normalization is

Z_e = Σ_b s_e^{(b)} exp(−λ_e^{(b)})

where b indexes the elements of λ_e and s_e^{(b)} is the width of the bth bin of the eth histogramming function. Using the change of variables rule,

Z = |det(Ω)| Π_e Z_e

where the square matrix Ω = [ω_1 ω_2 ... ω_E]. This is not possible when the number of experts exceeds or is smaller than the dimensionality of the data.

Table 1: Log likelihoods of test data. The test data consists of the spiking activity of 5 cells and the x, y position behavioral variables as illustrated in Fig. 1.

  POT (Product of Student-t):                    -31849
  IG  (diagonal covariance Gaussian):            -30893
  G   (full covariance Gaussian):                -23573
  RF  (random filter FRAME):                     -23108
  I   (5 independent FRAME models, one per cell):-19155
  FP  (feature pursuit FRAME):                   -12509

Figure 4: This figure illustrates the modeling power of the semi-parametric Gibbs distribution over a number of symmetric, fully parametric distributions. Each row shows normalized 2-D histograms of samples projected onto a plane. The first column (Empirical) is the training data, column two the Gibbs (FRAME) distribution, column three a Gaussian distribution, and column four a Product-of-Student-t (PoT) distribution.

3 Results

We trained several models on several datasets. We show results for complete models of the joint neuronal response of 5 real motor cortex cells plus x, y hand kinematics (3 past spikes for each cell plus 2 behavior variables yields a 17-dimensional dataset). A complete model has the same number of experts as dimensions.
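The per-expert normalization of Section 2.3 is straightforward to compute. A sketch following the expressions above, assuming (as an illustration) that all experts share one set of bin edges:

```python
import numpy as np

def partition_function(Omega, lambdas, edges):
    """Partition function of a complete FRAME model (E equals dim of d).

    Omega:   square matrix whose columns are the projection directions
    lambdas: (E, b) weight vectors; edges: (b+1,) shared bin edges.
    Implements Z_e = sum_b s_e^(b) exp(-lambda_e^(b)) and
    Z = |det(Omega)| * prod_e Z_e, as given in the text.
    """
    widths = np.diff(edges)                        # bin widths s^(b)
    Z_e = (widths * np.exp(-lambdas)).sum(axis=1)  # one factor per expert
    return abs(np.linalg.det(Omega)) * np.prod(Z_e)
```

With Z in hand, normalized log likelihoods like those in Table 1 can be computed directly from the energies.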
Table 1 shows the log likelihood of test data under several models: Product of Student-t, a diagonal covariance multidimensional Gaussian (independent), a multivariate Gaussian, a complete FRAME model with random projection directions, a product of 5 complete FRAME single-cell models with learned projections, and a complete FRAME model with learned projection directions. Because these are all complete models, we are able to compute the partition function of each. Each model was trained on 4000 points and the log likelihood was computed using 1000 distinct test points. In Fig. 4 we show histograms of samples drawn from a full covariance Gaussian and from energy-based models with twice as many projection directions as the data dimensionality. These figures illustrate the modeling power of our approach in that it represents the irregularities common to real neural data better than Gaussian and other symmetric distributions. Note that the model using random marginal directions does not model the data as well as one using optimized directions; this is not surprising. It may well be the case, however, that with many more random directions such a model would perform significantly better. This overcomplete case, however, is unnormalized and hence cannot be directly compared here.

4 Discussion

In this work we demonstrated an approach for using Gibbs distributions to model the joint spiking activity of a population of cells and an associated behavior. We developed a novel application of contrastive divergence for learning a FRAME model, which can be viewed as a semi-parametric Product-of-Experts model. We showed that our model outperformed other models in representing complex monkey motor cortical spiking data. Previous methods for probabilistically modeling spiking processes have focused on modeling the firing rates of a population in terms of a conditional intensity function (firing rate conditioned on various correlates and previous spiking) [15, 16, 17, 18, 19].
These functions are often formulated in terms of log-linear models and hence resemble our approach. Here we take a more direct approach of modeling the joint probability using energy-based models, and exploit contrastive divergence for learning.

Information-theoretic analysis of spiking populations calls for modeling high-dimensional joint and conditional distributions. In the work of [20, 21, 22], these distributions are used to study encoding models, in particular the importance of correlation in the neural code. Our models are directly applicable to this pursuit. Given an experimental design with a relatively low-dimensional stimulus, where the entropy of that stimulus can be accurately computed, our models are applicable without modification.

Our approach may also be applied to neural decoding. A straightforward extension of our model could include hand positions (or other kinematic variables) at multiple time instants. Decoding algorithms that exploit these joint models by maximizing the likelihood of the observed firing activity over an entire data set remain to be developed. Note that it may be possible to produce more accurate models of the unnormalized joint probability by increasing the number of marginal constraints. To exploit these overcomplete models, algorithms that do not require normalized probabilities are required (particle filtering is a good example).

Not surprisingly, the FRAME model performed better on the non-symmetric neural data than the related, but symmetric, Product-of-Student-t model. We have begun exploring more flexible and asymmetric experts which would offer advantages over the discrete histogramming inherent to the FRAME model.

Acknowledgments

Thanks to J. Donoghue, W. Truccolo, M. Fellows, and M. Serruya. This work was supported by NIH-NINDS R01 NS 50967-01 as part of the NSF/NIH Collaborative Research in Computational Neuroscience Program.

References

[1] S. C. Zhu, Z. N. Wu, and D. Mumford, "Minimax entropy principle and its application to texture modeling," Neural Comp., vol. 9, no. 8, pp. 1627–1660, 1997.
[2] C. Liu, S. C. Zhu, and H. Shum, "Learning inhomogeneous Gibbs model of faces by minimax entropy," in ICCV, pp. 281–287, 2001.
[3] G. Hinton, "Training products of experts by minimizing contrastive divergence," Neural Comp., vol. 14, pp. 1771–1800, 2002.
[4] Y. Teh, M. Welling, S. Osindero, and G. E. Hinton, "Energy-based models for sparse overcomplete representations," JMLR, vol. 4, pp. 1235–1260, 2003.
[5] G. Hinton, "Product of experts," in ICANN, vol. 1, pp. 1–6, 1999.
[6] A. J. Bell and L. C. Parra, "Maximising sensitivity in a spiking network," in Advances in NIPS, vol. 17, pp. 121–128, 2005.
[7] R. S. Zemel, Q. J. M. Huys, R. Natarajan, and P. Dayan, "Probabilistic computation in spiking populations," in Advances in NIPS, vol. 17, pp. 1609–1616, 2005.
[8] M. Serruya, N. Hatsopoulos, M. Fellows, L. Paninski, and J. Donoghue, "Robustness of neuroprosthetic decoding algorithms," Biological Cybernetics, vol. 88, no. 3, pp. 201–209, 2003.
[9] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, "Brain-machine interface: Instant neural control of a movement signal," Nature, vol. 416, pp. 141–142, 2002.
[10] W. Wu, M. J. Black, Y. Gao, E. Bienenstock, M. Serruya, A. Shaikhouni, and J. P. Donoghue, "Neural decoding of cursor motion using a Kalman filter," in Advances in NIPS, vol. 15, pp. 133–140, 2003.
[11] Y. Gao, M. J. Black, E. Bienenstock, S. Shoham, and J. P. Donoghue, "Probabilistic inference of arm motion from neural activity in motor cortex," in Advances in NIPS, vol. 14, pp. 221–228, 2002.
[12] M. Welling, G. Hinton, and S. Osindero, "Learning sparse topographic representations with products of Student-t distributions," in Advances in NIPS, vol. 15, pp. 1359–1366, 2003.
[13] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis, 2nd ed. Chapman & Hall/CRC, 2004.
[14] S. Roth and M. J. Black, "Fields of experts: A framework for learning image priors," in CVPR, vol. 2, pp. 860–867, 2005.
[15] D. R. Brillinger, "The identification of point process systems," The Annals of Probability, vol. 3, pp. 909–929, 1975.
[16] E. S. Chornoboy, L. P. Schramm, and A. F. Karr, "Maximum likelihood identification of neuronal point process systems," Biological Cybernetics, vol. 59, pp. 265–275, 1988.
[17] Y. Gao, M. J. Black, E. Bienenstock, W. Wu, and J. P. Donoghue, "A quantitative comparison of linear and non-linear models of motor cortical activity for the encoding and decoding of arm motions," in First International IEEE/EMBS Conference on Neural Engineering, pp. 189–192, 2003.
[18] M. Okatan, "Maximum likelihood identification of neuronal point process systems," Biological Cybernetics, vol. 59, pp. 265–275, 1988.
[19] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown, "A point process framework for relating neural spiking activity to spiking history," J. Neurophysiology, vol. 93, pp. 1074–1089, 2005.
[20] P. E. Latham and S. Nirenberg, "Synergy, redundancy, and independence in population codes, revisited," J. Neuroscience, vol. 25, pp. 5195–5206, 2005.
[21] S. Nirenberg and P. E. Latham, "Decoding neuronal spike trains: How important are correlations?" PNAS, vol. 100, pp. 7348–7353, 2003.
[22] S. Panzeri, H. D. R. Golledge, F. Zheng, M. Tovee, and M. P. Young, "Objective assessment of the functional role of spike train correlations using information measures," Visual Cognition, vol. 8, pp. 531–547, 2001.
Searching for Character Models

Jaety Edwards
Department of Computer Science, UC Berkeley, Berkeley, CA 94720
jaety@cs.berkeley.edu

David Forsyth
Department of Computer Science, UC Berkeley, Berkeley, CA 94720
daf@cs.berkeley.edu

Abstract

We introduce a method to automatically improve character models for a handwritten script without the use of transcriptions and using a minimum of document-specific training data. We show that we can use searches for the words in a dictionary to identify portions of the document whose transcriptions are unambiguous. Using templates extracted from those regions, we retrain our character prediction model to drastically improve our search retrieval performance for words in the document.

1 Introduction

An active area of research in machine transcription of handwritten documents is reducing the amount and expense of supervised data required to train prediction models. Traditional OCR techniques require a large sample of hand-segmented letter glyphs for training. This per-character segmentation is expensive and often impractical to acquire, particularly if the corpora in question contain documents in many different scripts. Numerous authors have presented methods for reducing the expense of training data by removing the need to segment individual characters. Both Kopec et al. [3] and LeCun et al. [5] have presented models that take as input images of lines of text with their ASCII transcriptions. Training with these datasets is made possible by explicitly modelling possible segmentations in addition to having a model for character templates. In their research on "wordspotting", Lavrenko et al. [4] demonstrate that images of entire words can be highly discriminative, even when the individual characters composing the word are locally ambiguous. This implies that images of many sufficiently long words should have unambiguous transcriptions, even when the character models are poorly tuned.
In our previous work [2], the discriminatory power of whole words allowed us to achieve strong search results with a model trained on a single example per character. The above results have shown that (A) one can learn new template models given images of text lines and their associated transcriptions, without needing an explicit segmentation [3, 5], and that (B) entire words can often be identified unambiguously, even when the models for individual characters are poorly tuned [2, 4]. The first of these two points implies that given a transcription, we can learn new character models. The second implies that for at least some parts of a document, we should be able to provide that transcription "for free", by matching against a dictionary of known words.

Figure 1: A line, and the states that generate it (states s_1..s_8 span the character bigrams −d, di, ix, xe, er, ri, is, s− of the example word). Each state s_t is defined by its left and right characters c_{tl} and c_{tr} (e.g., "x" and "e" for s_4). In the image, a state spans half of each of these two characters, starting just past the center of the left character and extending to the center of the right character, i.e. the right half of the "x" and the left half of the "e" in s_4. The relative positions of the two characters are given by a displacement vector d_t (superimposed on the image as white lines). Associating states with intracharacter spaces instead of with individual characters allows the bounding boxes of characters to overlap while maintaining the independence properties of the Markov chain.

In this work we combine these two observations in order to improve character models without the need for a document-specific transcription. We provide a generic dictionary of words in the target language. We then identify "high confidence" regions of a document. These are image regions for which exactly one word from our dictionary scores highly under our model.
Given a set of high confidence regions, we effectively have a training corpus of text images with associated transcriptions. In these regions, we infer a segmentation and extract new character examples. Finally, we use these new exemplars to learn an improved character prediction model. As in [2], our document in this work is a 12th century manuscript of Terence's Comedies obtained from Oxford's Bodleian library [1].

2 The Model

Hidden Markov Models are a natural and widely used method for modeling images of text. In their simplest incarnation, a hidden state represents a character and the evidence variable is some feature vector calculated at points along the line. If all characters were known to be of a single fixed width, this model would suffice. The probability of a line under this model is given as

p(line) = p(c_1 | α) Π_{t>1} p(c_t | c_{t−1}) p(im_{[w(t−1)+1 : wt]} | c_t)   (1)

where c_t represents the tth character on the line, α represents the start state, w is the width of a character, and im_{[w(t−1)+1 : wt]} represents the columns of pixels beginning at column w(t−1)+1 of the image and ending at column wt (i.e. the set of pixels spanned by c_t).

Unfortunately, characters' widths do vary quite substantially, and so we must extend the model to accommodate different possible segmentations. A generalized HMM allows us to do this. In this model a hidden state is allowed to emit a variable-length series of evidence variables. We introduce an explicit distribution over the possible widths of a character. Letting d_t be the displacement vector associated with the tth character, and c_{tx} refer to the x location of the left edge of a character on the line, the probability of a line under this revised model is

p(line) = p(c_1 | α) Π_{t>1} p(c_t | c_{t−1}) p(d_t | c_t) p(im_{[c_{tx}+1 : c_{tx}+d_t]} | d_t, c_t)   (2)

This is the model we used in [2]. It performs far better than using an assumption of fixed widths, but it still imposes unrealistic constraints on the relative positions of characters.
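The variable-width model of Eq. (2) can be scored as in the following sketch. The argument layout (per-character positions and widths passed in explicitly, a callable emission scorer) is our own simplification for illustration:

```python
import numpy as np

def log_p_line_var_width(chars, ds, log_p_init, log_p_trans,
                         log_p_width, log_p_emit):
    """Log-probability of a line under the variable-width model of Eq. (2).

    chars: character indices c_1..c_T;  ds: widths d_1..d_T
    log_p_width[c][d] = log p(d | c); log_p_emit(x, d, c) scores the pixel
    columns im[x+1 : x+d]. All distributions are hypothetical stand-ins.
    """
    x = 0                                      # running left edge c_tx
    lp = log_p_init[chars[0]]                  # log p(c_1 | alpha)
    x += ds[0]                                 # advance past first character
    for t in range(1, len(chars)):             # factors for t > 1
        c, d = chars[t], ds[t]
        lp += (log_p_trans[chars[t - 1], c]
               + log_p_width[c][d]
               + log_p_emit(x, d, c))
        x += d
    return lp
```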
In particular, the portion of the ink generated by the current character is assumed to be independent of the preceding character. In other words, the model assumes that the bounding boxes of characters do not overlap. This constraint is obviously unrealistic. Characters routinely overlap in our documents; "f"s, for instance, form ligatures with most following characters. In previous work, we treated this overlap as noise, hurting our ability to correctly localize templates. Under this model, local errors of alignment would also often propagate globally, adversely affecting the segmentation of the whole line. For search, this noisy segmentation still provides acceptable results. In this work, however, we need to extract new templates, and thus correct localization and segmentation of templates is crucial.

In our current work, we have relaxed this constraint, allowing characters to partially overlap. We achieve this by changing hidden states to represent character bigrams instead of single characters (Figure 1). In the image, a state now spans the pixels from just past the center of the left character to the pixel containing the center of the right character. We adjust our notation somewhat to reflect this change, letting s_t now represent the tth hidden state and c_{tl} and c_{tr} the left and right characters associated with s_t. d_t is now the displacement vector between the centers of c_{tl} and c_{tr}. The probability of a line under this, our actual, model is

p(line) = p(s_1 | α) Π_{t>1} p(s_t | s_{t−1}) p(d_t | c_{tl}, c_{tr}) p(im_{[s_{tx}+1 : s_{tx}+d_t]} | c_{tl}, c_{tr}, d_t)   (3)

This model allows overlap of bounding boxes, but it does still make the assumption that the bounding box of the current character does not extend past the center of the previous character. This assumption does not fully reflect reality either. In Figure 1, for example, the left descender of the x extends back further than the center of the preceding character.
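Eq. (3) can be scored the same way once states are explicit. In this sketch a state is a hypothetical (cl, cr, x, d) tuple and the four distributions are stand-in callables, not the paper's learned models:

```python
def log_p_line_bigram(states, log_p_start, log_p_trans, log_p_disp,
                      log_p_emit):
    """Log-probability of a line under the bigram-state model of Eq. (3).

    states: hypothetical (cl, cr, x, d) tuples: left/right characters,
    left-edge x position s_tx, and center-to-center displacement d_t.
    """
    lp = log_p_start(states[0])                  # log p(s_1 | alpha)
    for prev, cur in zip(states, states[1:]):    # factors for t > 1
        cl, cr, x, d = cur
        lp += log_p_trans(prev, cur)             # log p(s_t | s_{t-1})
        lp += log_p_disp(d, cl, cr)              # log p(d_t | c_tl, c_tr)
        lp += log_p_emit(x, d, cl, cr)           # log p(im | c_tl, c_tr, d_t)
    return lp
```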
It does, however, accurately reflect the constraints within the heart of the line (excluding ascenders and descenders). In practice, it has proven to generate very accurate segmentations. Moreover, the errors we do encounter no longer tend to affect the entire line, since the model has more flexibility with which to readjust back to the correct segmentation.

2.1 Model Parameters

Our transition distribution between states is simply a 3-gram character model. We train this model using a collection of ASCII Latin documents collected from the web. This set does not include the transcriptions of our documents. Conditioned on the displacement vector, the emission model for generating an image chunk given a state is a mixture of Gaussians. We associate with each character a set of image windows extracted from various locations in the document. We initialize these sets with one example apiece from our hand-cut set (Figure 2). We adjust the probability of an image given the state to include the distribution over blocks by expanding the last term of Equation 3 to reflect this mixture. Letting b_{ck} represent the kth exemplar in the set associated with character c, the conditional probability of an image region spanning the columns from x to x′ is given as

p(im_{x:x′} | c_{tl}, c_{tr}, d_t) = Σ_{i,j} p(im_{x:x′} | b_{c_{tl} i}, b_{c_{tr} j}, d_t)   (4)

In principle, the displacement vectors should now be associated with an individual block, not a character. This is especially true when we have both upper and lower case letters. However, our model does not seem particularly sensitive to this displacement distribution and so in practice we have a single, fairly loose, displacement distribution per character. Given a displacement vector, we can generate the maximum likelihood template image under our model by compositing the correct halves of the left and right blocks.
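Because Eq. (4) is an unweighted sum over exemplar pairs, it is conveniently computed in log space as a log-sum-exp. A sketch, where `log_p_template` is a hypothetical stand-in for the composited-template scorer described above:

```python
import numpy as np

def log_p_mixture_emit(im_chunk, left_blocks, right_blocks, d,
                       log_p_template):
    """Mixture emission of Eq. (4): sum over exemplar pairs (b_i, b_j).

    left_blocks / right_blocks: exemplar sets for c_tl and c_tr.
    log_p_template(im_chunk, bl, br, d) scores the chunk against the
    template composited from the halves of bl and br (hypothetical).
    """
    logs = [log_p_template(im_chunk, bl, br, d)
            for bl in left_blocks for br in right_blocks]
    # numerically stable log-sum-exp over the mixture components
    m = max(logs)
    return m + np.log(sum(np.exp(l - m) for l in logs))
```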
Reshaping the image window into a vector, the likelihood of an image window is then modeled as a Gaussian, using the corresponding pixels in the template as the means, and assuming a diagonal covariance matrix. The covariance matrix largely serves to mask out empty regions of a character’s bounding box, so that we do not pay a penalty when the overlap of two characters’ bounding boxes contains only whitespace. 2.2 Efficiency Considerations The number of possible different templates for a state is O(|B| × |B| × |D|), where |B| is the number of different possible blocks and |D| is the number of candidate displacement vectors. To make inference in this model computationally feasible, we first restrict the domain of d. For a given pair of blocks bl and br, we consider only displacement vectors within some small x distance from a mean displacement mbl,br, and we have a uniform distribution within this region. m is initialized from the known size of our single hand cut template. In the current work, we do not relearn the m’s; these are held fixed and assumed to be the same for all blocks associated with the same letter. Even when restricting the number of d’s under consideration as discussed above, it is computationally infeasible to consider every possible location and pair of blocks. We therefore prune our candidate locations by looking at the likelihood of blocks in isolation and only considering locations where there is a local optimum in the response function and whose value is better than a given threshold. In this case our threshold for a given location is that L(block) < 0.7 L(background) (where L(x) represents the negative log likelihood of x). In other words, a location has to look at least marginally more like a given block than it looks like the background. After pruning locations in this manner, we are left with a discrete set of “sites,” where we define a site as the tuple (block type, x location, y location).
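The diagonal-Gaussian emission score described above can be sketched directly. The flat-list pixel representation and function name here are illustrative; the point is that a large per-pixel variance acts as a mask, so whitespace in an overlap region costs almost nothing:

```python
import math

def window_neg_log_lik(window, template_mean, template_var):
    """Negative log-likelihood of an image window under a diagonal Gaussian
    whose means are the composited template pixels. Pixels with large
    variance contribute little quadratic penalty, effectively masking out
    empty regions of a character's bounding box."""
    nll = 0.0
    for x, mu, var in zip(window, template_mean, template_var):
        nll += 0.5 * (math.log(2.0 * math.pi * var) + (x - mu) ** 2 / var)
    return nll
```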
We can enumerate the set of possible states by looking at every pair of sites whose displacement vector has a non-zero probability. 2.3 Inference In The Model The state space defined above is a directed acyclic graph, anchored at the left and right edges of a line of text. A path through this lattice defines both a transcription and a segmentation of the line into individual characters. Inference in this model is relatively straightforward because of our constraint that each character may overlap only one preceding and one following character, and our restriction of displacement vectors to a small discrete range. The first restriction means that we need only consider binary relations between templates. The second preserves the independence relationships of an HMM. A given state st is independent of the rest of the line given the values of all other states within dmax of either edge of st (where dmax is the legal displacement vector with the longest x component). We can therefore easily calculate the best path or explicitly calculate the posterior of a node by traversing the state graph in topological order, sorted from left to right. The literature on Weighted Finite State Transducers ([6], [5]) is a good resource for efficient algorithms on these types of state-space graphs. 3 Learning Better Character Templates We initialize our algorithm with a set of handcut templates, exactly one per character (Figure 2), and our goal is to construct more accurate character models automatically from unsupervised data.

Figure 2: Original training data. These 22 glyphs are our only document-specific training data.

As noted above, we can easily calculate the posterior of a given site under our model. (Recall that a site is a particular character template at a given (x,y) location in the line.) The traditional EM approach to estimating new templates would be to use these
sites as training examples, weighted by their posteriors. Unfortunately, the constraints imposed by 3- and even 4-gram character models seem to be insufficient. The posteriors of sites are not discriminative enough to get learning off the ground.

We use the model based on these characters to extract the new examples shown below.

Figure 3: Examples of extracted templates. We extract new templates from high confidence regions. From these, we choose a subset to incorporate into the model as new exemplars. Templates are chosen iteratively to best cover the space of training examples. Notice that for “q” and “a”, we have extracted capital letters, of which there were no examples in our original set of glyphs. This happens when the combination of constraints from the dictionary and the surrounding glyphs makes a “q” or “a” the only possible explanation for this region, even though its local likelihood is poor.

The key to successfully learning new templates lies in the observation from our previous work [2] that even when the posteriors of individual characters are not discriminative, one can still achieve very good search results with the same model. The search word in effect serves as its own language model, only allowing paths through the state graph that actually contain it, and the longer the word the more it constrains the model. Whole words impose much tighter constraints than a 2- or 3-gram character model, and it is only with this added power that we can successfully learn new character templates. We define the score for a search as the negative log likelihood of the best path containing that word. With sufficiently long words, it becomes increasingly unlikely that a spurious path will achieve a high score. Moreover, if we are given a large dictionary of words and no alternative word explains a region of ink nearly as well as the best scoring word, then we can be extremely confident that this is a true transcription of that piece of ink.
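Two pieces of the machinery just described lend themselves to a short sketch: the best-path sweep over the topologically ordered site DAG (Section 2.3), and the comparison of the best word score against the runner-up that defines a high confidence region. All names and the toy scores are illustrative:

```python
def best_path(order, succ, edge_score):
    """Highest-scoring path through a DAG of sites.

    `order`: sites in topological (left-to-right) order, starting at the
    line's left anchor; `succ[s]`: successors of site s; `edge_score(a, b)`:
    log-score of following site a with site b. Returns per-site best scores
    and back-pointers from which the segmentation can be read off."""
    best = {s: float('-inf') for s in order}
    back = {s: None for s in order}
    best[order[0]] = 0.0
    for s in order:
        if best[s] == float('-inf'):
            continue                      # site unreachable from the anchor
        for t in succ.get(s, ()):
            cand = best[s] + edge_score(s, t)
            if cand > best[t]:
                best[t], back[t] = cand, s
    return best, back

def confidence_margin(word_scores):
    """Best word for a region and its confidence margin: the gap between the
    best (lowest negative log-likelihood) word score and the runner-up. A
    large margin means no alternative dictionary word explains the ink
    nearly as well, marking a 'high confidence' region."""
    ranked = sorted(word_scores.items(), key=lambda kv: kv[1])
    (best_word, best_score), (_, second_score) = ranked[0], ranked[1]
    return best_word, second_score - best_score
```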
Starting with a weak character model, we do not expect to find many of these “high confidence” regions, but with a large enough document, we should expect to find some. From these regions, we can extract new, reliable templates with which to improve our character models. The most valuable of these new templates will be those that are significantly different from any in our current set. For example, in Figure 3, note that our system identifies capital Q’s, even though our only input template was lower case. It identifies this ink as a Q in much the same way that a person solves a crossword puzzle. We can easily infer the missing character in the string “obv-ous” because the other letters constrain us to one possible solution. Similarly, if other character templates in a word match well, then we can unambiguously identify the other, more ambiguous ones. In our Latin case, “Quid” is the only likely explanation for “-uid”. 3.1 Extracting New Templates and Updating The Model Within a high confidence region we have both a transcription and a localization of template centers. It remains only to cut out new templates. We accomplish this by creating a template image for the column of pixels from the corresponding block templates and then assigning image pixels to the nearest template character (measured by Euclidean distance).

Figure 4: Each line segment in the lower figure represents a proposed location for a word from our dictionary. Its vertical height is the score of that location under our model. A lower score represents a better fit. The dotted line is the score of our model’s best possible path. Three correct words, “nec”, “quin” and “dari”, are actually on the best path. We define the confidence margin of a location as the difference in score between the best fitting word from our dictionary and the next best.

Given a set of templates extracted from high confidence regions, we choose a subset of
templates that best explain the remaining examples. We do this in a greedy fashion by choosing the example whose likelihood is lowest under our current model and adding it to our set. Currently, we threshold the number of new templates for the sake of efficiency. Finally, given the new set of templates, we can add them to the model and rerun our searches, potentially identifying new high confidence regions.

Figure 5: Extracting templates. For a region with sufficiently high confidence margin, we construct the maximum likelihood template from our current exemplars (left), and we assign pixels from the original image to a template based on its distance to the nearest pixel in the template image, extracting new glyph exemplars (right). These new glyphs become the exemplars for our next round of training.

4 Results Our algorithm iteratively improves the character model by gathering new training data from high confidence regions. Figure 3 shows that this method finds new templates significantly different from the originals. In this document, our set of examples after one round appears to cover the space of character images well, at least those in lower case. Our templates are not perfect. The “a”, for instance, has become associated with at least one block that is in fact an “o”. These mistakes are uncommon, particularly if we restrict ourselves to longer words. Those that do occur introduce a tolerable level of noise into our model. They make certain regions of the document more ambiguous locally, but that local ambiguity can be overcome with the context provided by surrounding characters and a language model. Improved Character Models We evaluate the method more quantitatively by testing the impact of the new templates on the quality of searches performed against the document.
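The extraction and selection steps described above can be sketched as two small functions. Pixels are (x, y) tuples, glyphs are pixel lists, and the scalar likelihood callback is an illustrative stand-in for the model's likelihood of an example under a candidate exemplar set:

```python
def assign_pixels(ink_pixels, template_pixels):
    """Split a high-confidence region's ink into per-character exemplars by
    assigning each image pixel to the character whose template image
    contains the nearest pixel (Euclidean distance).
    `template_pixels` maps character -> list of template pixel positions."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    groups = {c: [] for c in template_pixels}
    for p in ink_pixels:
        owner = min(template_pixels,
                    key=lambda c: min(d2(p, q) for q in template_pixels[c]))
        groups[owner].append(p)
    return groups

def greedy_select(examples, likelihood, k):
    """Greedily pick up to k new exemplars: repeatedly add the extracted
    template whose likelihood is lowest under the current exemplar set, so
    the chosen subset best covers the space of training examples."""
    chosen, pool = [], list(examples)
    for _ in range(min(k, len(pool))):
        worst = min(pool, key=lambda e: likelihood(e, chosen))
        chosen.append(worst)
        pool.remove(worst)
    return chosen
```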
To search for a given word, we rank lines by the ratio of the score (negative log likelihood) of the best transcription/segmentation that contains the search word to the score of the best possible segmentation/transcription under our model. The lowest possible search score is 1, occurring when the search word is actually a substring of the maximum likelihood transcription. Higher scores mean that the word is increasingly unlikely under our model. In Figure 7, the figure on the left shows the improvement in ranking of the lines that truly contain selected search words. The odd rows (in red) are search results using only the original 22 glyphs, while the even rows (in green) use an additional 332 glyphs extracted from high confidence regions. Search results are markedly improved in the second model.

Figure 6: Search results with (Rnd 1) initial templates only and with (Rnd 2) templates extracted from high confidence regions. (The panels plot the search words shown in each round: Rnd 1 shows nupta, nuptiis, inquam, (v|u)ideo and videt; Rnd 2 adds iam, post and postquam.) We show results that have a score within 5% of the best path. Solid lines are the results for the correct word. Dotted lines represent other search results, where we have made a few larger in order to show those words that are the closest competitors to the true word. Many alternative searches, like the highlighted “post”, are actually portions of the correct larger words. These restrict our selection of confidence regions, but do not impinge on search quality. Each correct word has significantly improved after one round of template reestimation. “iam” has been correctly identified, and is a new high confidence region. Both “nuptiis” and “postquam” are now the highest likelihood words for their region barring smaller subsequences, and “videt” has narrowed the gap between its competitor “video”.
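One consistent reading of this scoring (with scores as negative log-likelihoods, as in Figure 4, so that "within 5% of the best path" is a ratio bound) can be sketched as follows; the ratio form and the triple layout are an interpretation, not the paper's exact code:

```python
def search_score(score_with_word, score_best):
    """Search score for a line: the ratio of the best path score constrained
    to contain the word to the unconstrained best path score, where both are
    negative log-likelihoods (lower = better fit). The minimum is 1, reached
    when the word lies on the maximum likelihood transcription."""
    return score_with_word / score_best

def rank_lines(candidates):
    """Rank (line_id, score_with_word, score_best) triples by ascending
    search score, so the lines most likely to contain the word come first."""
    return sorted(candidates, key=lambda t: search_score(t[1], t[2]))
```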
The word “est”, for instance, only had 15 of 24 of the correct lines in the top 100 under the original model, while under the learned model all 24 are not only present but also more highly ranked. Improved Search Figure 6 shows the improved performance of our refitted model for a single line. Most words have greatly improved relative to their next best alternative. “postquam” and “iam” were not even considered by the original model and now are nearly optimal. The right of Figure 7 shows the average precision/recall curve under each model for 21 words with more than 4 occurrences in the dataset. Precision is the percentage of lines truly containing a word in the top n search results, and recall is the percentage of all lines containing the word returned in the top n results. The learned model clearly dominates. The new model also greatly improves performance for rare words. For 320 words occurring just once in the dataset, 50% are correctly returned as the top ranked result under the original model. Under the learned model, this number jumps to 78%. 5 Conclusions and Future Work In most fonts, characters are quite ambiguous locally. An “n” looks like a “u”, looks like “ii”, etc. This ambiguity is the major hurdle to the unsupervised learning of character templates. Language models help, but the standard n-gram models provide insufficient constraints, giving posteriors for character sites that are too uninformative to get EM off the ground.

Figure 7: The figure on the left (“Selected Words, Top 100 Returned Lines”) shows those lines with the top 100 scores that actually contain the specified word: est (15,24)/24; nescio (1,1)/1; postquam (0,2)/2; quod (14,14)/14; moram (0,2)/2; non (8,8)/8; quid (9,9)/9. (The right panel shows aggregate precision/recall curves for the original and refit models.) The first of each set of two rows (in red) is the results from Round 1.
The second (in green) is the results for Round 2. Almost all search words in our corpus show a significant improvement. The numbers to the right (x/y) mean that out of y lines that actually contained the search word in our document, x of them made it into the top 100. On the right are average precision/recall curves for 21 high frequency words under the model with our original templates (Rnd 1) and after refitting with new extracted templates (Rnd 2). Extracting new templates vastly improves our search quality.

An entire word is a much different matter. Given a dictionary, we expect many word images to have a single likely transcription even if many characters are locally ambiguous. We show that we can identify these high confidence regions even with a poorly tuned character model. By extracting new templates only from these regions of the document, we overcome the noise problem and significantly improve our character models. We demonstrate this improvement for the task of search, where the refitted models have drastically better search responses than with the original. Our method is indifferent to the form of the actual character emission model. There is a rich literature in character prediction from isolated image windows, and we expect that incorporating more powerful character models should provide even greater returns and help us in learning less regular scripts. Finding high confidence regions to extract good training examples is a broadly applicable concept. We believe this work should extend to other problems, most notably speech recognition. Looked at more abstractly, our use of a language model in this work is actually encoding spatial constraints. The probability of a character given an image window depends not only on the identity of surrounding characters but also on their spatial configuration.
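The aggregate curves in Figure 7 use the standard definitions given earlier; a minimal sketch of precision and recall at a cutoff:

```python
def precision_recall(ranked_line_ids, relevant, n):
    """Precision and recall at cutoff n: precision is the fraction of the
    top n returned lines that truly contain the word; recall is the fraction
    of all lines containing the word that appear in the top n results."""
    top = ranked_line_ids[:n]
    hits = sum(1 for line in top if line in relevant)
    return hits / len(top), hits / len(relevant)
```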
Integrating context into recognition problems is an area of intense research in the computer vision community, and we are investigating extending the idea of confidence regions to more general object recognition problems. References [1] Early Manuscripts at Oxford University. Bodleian library ms. auct. f. 2.13. http://image.ox.ac.uk/. [2] J. Edwards, Y.W. Teh, D. Forsyth, R. Bock, M. Maire, and G. Vesom. Making latin manuscripts searchable using gHMM’s. In NIPS 17, pages 385–392. 2005. [3] G. Kopec and M. Lomelin. Document-specific character template estimation. In Proceedings, Document Image Recognition III, SPIE, 1996. [4] V. Lavrenko, T. Rath, and R. Manmatha. Holistic word recognition for handwritten historical documents. In Proceedings of DIAL, pages 278–287, 2004. [5] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [6] M. Mohri, F. Pereira, and M. Riley. Weighted finite state transducers in speech recognition. ISCA ITRW Automatic Speech Recognition, pages 97–106, 2000.
|
2005
|
82
|
2,902
|
An aVLSI cricket ear model

André van Schaik*, Craig Jin*, Tara Hamilton* (The University of Sydney, NSW 2006, Australia; andre@ee.usyd.edu.au, craig@ee.usyd.edu.au, tara@ee.usyd.edu.au) and Richard Reeve+ (University of Edinburgh, Edinburgh, UK; richardr@inf.ed.ac.uk). *School of Electrical and Information Engineering, +Institute of Perception, Action and Behaviour.

Abstract Female crickets can locate males by phonotaxis to the mating song they produce. The behaviour and underlying physiology have been studied in some depth, showing that the cricket auditory system solves this complex problem in a unique manner. We present an analogue very large scale integrated (aVLSI) circuit model of this process and show that results from testing the circuit agree with simulation and with what is known from the behaviour and physiology of the cricket auditory system. The aVLSI circuitry is now being extended for use on a robot along with previously modelled neural circuitry to better understand the complete sensorimotor pathway. 1 Introduction Understanding how insects carry out complex sensorimotor tasks can help in the design of simple sensory and robotic systems. Often insect sensors have evolved into intricate filters matched to extract highly specific data from the environment, which solves a particular problem directly with little or no need for further processing [1]. Examples include head stabilisation in the fly, which uses vision amongst other senses to estimate self-rotation and thus to stabilise its head in flight, and phonotaxis in the cricket. Because of the narrowness of the cricket body (only a few millimetres), the Interaural Time Difference (ITD) for sounds arriving at the two sides of the head is very small (10–20µs). Even with the tympanal membranes (eardrums) located, as they are, on the forelegs of the cricket, the ITD only reaches about 40µs, which is too small to detect directly from timings of neural spikes.
Because the wavelength of the cricket calling song is significantly greater than the width of the cricket body the Interaural Intensity Difference (IID) is also very low. In the absence of ITD or IID information, the cricket uses phase to determine direction. This is possible because the male cricket produces an almost pure tone for its calling song. Figure 1: The cricket auditory system. Four acoustic inputs channel sounds directly or through tracheal tubes onto two tympanal membranes. Sound from contralateral inputs has to pass a (double) central membrane (the medial septum), inducing a phase delay and reduction in gain. The sound transmission from the contralateral tympanum is very weak, making each eardrum effectively a 3 input system. The physics of the cricket auditory system is well understood [2]; the system (see Figure 1) uses a pair of sound receivers with four acoustic inputs, two on the forelegs, which are the external surfaces of the tympana, and two on the body, the prothoracic or acoustic spiracles [3]. The connecting tracheal tubes are such that interference occurs as sounds travel inside the cricket, producing a directional response at the tympana to frequencies near to that of the calling song. The amplitude of vibration of the tympana, and hence the firing rate of the auditory afferent neurons attached to them, vary as a sound source is moved around the cricket and the sounds from the different inputs move in and out of phase. The outputs of the two tympana match when the sound is straight ahead, and the inputs are bilaterally symmetric with respect to the sound source. However, when sound at the calling song frequency is off-centre the phase of signals on the closer side comes better into alignment, and the signal increases on that side, and conversely decreases on the other. It is that crossover of tympanal vibration amplitudes which allows the cricket to track a sound source (see Figure 6 for example). 
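The core of this directionality is phase-dependent summation: the tympanal response is a gain-weighted sum of delayed copies of a (nearly) pure tone, and the amplitude of that sum depends on how the phases align. A phasor sketch (the gains and delays in the test are illustrative, not the fitted values from [2]):

```python
import cmath
import math

def summed_amplitude(freq, gains, delays):
    """Amplitude of a sum of equal-frequency sinusoids with the given gains
    and delays, computed as the magnitude of a phasor sum. As a sound source
    moves around the cricket, the external arrival delays change, shifting
    how the internal paths interfere and hence the vibration amplitude."""
    w = 2.0 * math.pi * freq
    return abs(sum(g * cmath.exp(-1j * w * d) for g, d in zip(gains, delays)))
```

In-phase inputs add constructively, while a half-period delay cancels a matched input, which is exactly the crossover behaviour that lets the two tympana encode direction.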
A simplified version of the auditory system using only two acoustic inputs was implemented in hardware [4], and a simple 8-neuron network was all that was required to then direct a robot to carry out phonotaxis towards a species-specific calling song [5]. A simple simulator was also created to model the behaviour of the auditory system of Figure 1 at different frequencies [6]. Data from Michelsen et al. [2] (Figures 5 and 6) were digitised, and used together with average and “typical” values from the paper to choose gains and delays for the simulation. Figure 2 shows the model of the internal auditory system of the cricket from sound arriving at the acoustic inputs through to transmission down auditory receptor fibres. The simulator implements this model up to the summing of the delayed inputs, as well as modelling the external sound transmission. Results from the simulator were used to check the directionality of the system at different frequencies, and to gain a better understanding of its response. It was impractical to check the effect of leg movements or of complex sounds in the simulator due to the necessity of simulating the sound production and transmission. An aVLSI chip was designed to implement the same model, both allowing more complex experiments, such as leg movements, to be run, and allowing experiments to be run in the real world.

Figure 2: A model of the auditory system of the cricket, used to build the simulator and the aVLSI implementation (shown in boxes).

These experiments with the simulator and the circuits are being published in [6] and the reader is referred to those papers for more details. In the present paper we present the details of the circuits used for the aVLSI implementation.
2 Circuits The chip, implementing the aVLSI box in Figure 2, comprises two all-pass delay filters, three gain circuits, a second-order narrow-band band-pass filter, a first-order wide-band band-pass filter, a first-order high-pass filter, as well as supporting circuitry (including reference voltages, currents, etc.). A single aVLSI chip (MOSIS tiny-chip) thus includes half the necessary circuitry to model the complete auditory system of a cricket. The complete model of the auditory system can be obtained by using two appropriately connected chips. Only two all-pass delay filters need to be implemented instead of three as suggested by Figure 2, because it is only the relative delay between the three pathways arriving at the one summing node that counts. The delay circuits were implemented with fully-differential gm-C filters. In order to extend the frequency range of the delay, a first-order all-pass delay circuit was cascaded with a second-order all-pass delay circuit. The resulting addition of the first-order delay and the second-order delay allowed for an approximately flat delay response for a wider bandwidth, as the decreased delay around the corner frequency of the first-order filter cancelled with the increased delay of the second-order filter around its resonant frequency. Figure 3 shows the first- and second-order sections of the all-pass delay circuit. Two of these circuits were used and, based on data presented in [2], were designed with delays of 28µs and 62µs, by way of bias current manipulation. The operational transconductance amplifier (OTA) in Figure 3 is a standard OTA which includes the common-mode feedback necessary for fully differential designs. The buffers (Figure 3) are simple, cascoded differential pairs.

Figure 3: The first-order all-pass delay circuit (left) and the second-order all-pass delay (right).
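The flattening trick can be checked numerically. A first-order all-pass section of the form H(s) = (1 − s/ω0)/(1 + s/ω0) has a phase delay of 2/ω0 at DC that sags toward the corner, which the rising delay of the cascaded second-order section then compensates. A sketch of the first-order section's phase delay (the corner frequency in the test is illustrative):

```python
import cmath
import math

def allpass1_phase_delay(freq, f0):
    """Phase delay (seconds) of a first-order all-pass section
    H(s) = (1 - s/w0) / (1 + s/w0), evaluated at s = j*2*pi*freq. At low
    frequencies the delay approaches 2/w0; it rolls off near the corner f0,
    motivating the cascaded second-order section for a wider flat-delay band."""
    w, w0 = 2.0 * math.pi * freq, 2.0 * math.pi * f0
    h = (1.0 - 1j * w / w0) / (1.0 + 1j * w / w0)
    return -cmath.phase(h) / w
```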
The differential output of the delay circuits is converted into a current which is multiplied by a variable gain implemented as shown in Figure 4. The gain cell includes a differential pair with source degeneration via transistors N4 and N5. The source degeneration improves the linearity of the current. The three gain cells implemented on the aVLSI have default gains of 2, 3 and 0.91 which are set by holding the default input high and appropriately ratioing the bias currents through the value of vbiasp. To correct any on-chip mismatches and/or explore other gain configurations a current splitter cell [7] (p-splitter, figure 4) allows the gain to be programmed by digital means post fabrication. The current splitter takes an input current (Ibias, figure 4) and divides it into branches which recursively halve the current, i.e., the first branch gives ½ Ibias, the second branch ¼ Ibias, the third branch 1/8 Ibias and so on. These currents can be used together with digitally controlled switches as a Digital-to-Analogue converter. By holding default low and setting C5:C0 appropriately, any gain – from 4 to 0.125 – can be set. To save on output pins the program bits (C5:C0) for each of the three gain cells are set via a single 18-bit shift register in bit-serial fashion. Summing the output of the three gain circuits in the current domain simply involves connecting three wires together. Therefore, a natural option for the filters that follow is to use current domain filters. In our case we have chosen to implement log-domain filters using MOS transistors operating in weak inversion. Figure 5 shows the basic building blocks for the filters – the Tau Cell [8] and the multiplier cell – and block diagrams showing how these blocks were connected to create the necessary filtering blocks. 
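The splitter's binary branch currents make it a simple DAC for trimming gains after fabrication. A sketch of the summed output current (the MSB-first bit ordering is an assumption):

```python
def splitter_current(bits, i_bias=1.0):
    """Output current of a binary current splitter used as a DAC: branch k
    carries Ibias / 2**(k+1) (i.e. 1/2, 1/4, 1/8, ...), and the digitally
    controlled switches `bits` (MSB first) select which branches are summed."""
    return sum(i_bias / 2.0 ** (k + 1) for k, b in enumerate(bits) if b)
```

With six bits this gives any multiple of Ibias/64 up to 63/64 of Ibias, which is how arbitrary gains in the stated range can be programmed post fabrication.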
The Tau Cell is a log-domain filter which has the first-order response

Iout / Iin = 1 / (1 + τs), where τ = n Ca VT / Ia,

and n = the slope factor, VT = thermal voltage, Ca = capacitance, and Ia = bias current. In Figure 5, the input currents to the Tau Cell, Imult and A·Ia, are only used when building a second-order filter. The multiplier cell is simply a translinear loop where

Iout1 · Imult = A·Ia · Iout2, or Imult = A·Ia·Iout2 / Iout1.

The configurations of the Tau Cell to get particular responses are covered in [8] along with the corresponding equations. The high frequency filter of Figure 2 is implemented by the high-pass filter in Figure 5 with a corner frequency of 17kHz. The low frequency filter, however, is divided into two parts since the biological filter’s response (see for example Figure 3A in [9]) separates well into a narrow second-order band-pass filter with a 10kHz resonant frequency and a wide band-pass filter made from a first-order high-pass filter with a 3kHz corner frequency followed by a first-order low-pass filter with a 12kHz corner frequency. These filters are then added together to reproduce the biological filter. The filters’ responses can be adjusted post fabrication via their bias currents. This allows for compensation due to processing and matching errors.

Figure 4: The Gain Cell above is used to convert the differential voltage input from the delay cells into a single-ended current output. The gain of each cell is controllable via a programmable current cell (p_splitter).

An on-chip bias generator [7] was used to create all the necessary current biases on the chip. All the main blocks (delays, gain cells and filters), however, can have their on-chip bias currents overridden through external pins on the chip. The chip was fabricated using the MOSIS AMI 1.6µm technology and designed using the Cadence Custom IC Design Tools (5.0.33).
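The Tau Cell's bias-current tuning can be illustrated numerically: the corner frequency is 1/(2πτ) with τ = n Ca VT / Ia, so doubling Ia doubles the corner. The component values in the test below are placeholders, not the fabricated ones:

```python
import math

def tau_cell_gain(freq, n, v_t, c_a, i_a):
    """Magnitude of the Tau Cell's first-order low-pass response
    I_out / I_in = 1 / (1 + tau*s), with tau = n * C_a * V_T / I_a.
    The bias current I_a tunes the corner frequency after fabrication,
    compensating for processing and matching errors."""
    tau = n * c_a * v_t / i_a
    w = 2.0 * math.pi * freq
    return 1.0 / math.sqrt(1.0 + (tau * w) ** 2)
```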
3 Methods The chip was tested using sound generated on a computer and played through a soundcard to the chip. Responses from the chip were recorded by an oscilloscope, and uploaded back to the computer on completion. Given that the output from the chip and the gain circuits is a current, an external current-sense circuit built with discrete components was used to enable the output to be probed by the oscilloscope. Figure 5: The circuit diagrams for the log-domain filter building blocks – The Tau Cell and The Multiplier – along with the block diagrams for the three filters used in the aVLSI model. Initial experiments were performed to tune the delays and gains. After that, recordings were taken of the directional frequency responses. Sounds were generated by computer for each chip input to simulate moving the forelegs by delaying the sound by the appropriate amount of time; this was a much simpler solution than using microphones and moving them using motors. 4 Results The aVLSI chip was tested to measure its gains and delays, which were successfully tuned to the appropriate values. The chip was then compared with the simulation to check that it was faithfully modelling the system. A result of this test at 4kHz (approximately the cricket calling-song frequency) is shown in Figure 6. Apart from a drop in amplitude of the signal, the response of the circuit was very similar to that of the simulator. The differences were expected because the aVLSI circuit has to deal with real-world noise, whereas the simulated version has perfect signals. Examples of the gain versus frequency response of the two log-domain band-pass filters are shown in Figure 7. Note that the narrow-band filter peaks at 6kHz, which is significantly above the mating song frequency of the cricket which is around 4.5kHz. This is not a mistake, but is observed in real crickets as well. 
As stated in the introduction, a range of further testing results with both the circuit and the simulator are being published in [6]. 5 Discussion The aVLSI auditory sensor in this research models the hearing of the field cricket Gryllus bimaculatus. It is a more faithful model of the cricket auditory system than was previously built in [4], reproducing all the acoustic inputs, as well as the responses to frequencies of both the conspecific calling song and bat echolocation chirps. It also generates outputs corresponding to the two sets of behaviourally relevant auditory receptor fibres. Results showed that it matched the biological data well, though there were some inconsistencies due to an error in the specification that will be addressed in a future iteration of the design. A more complete implementation across all frequencies was impractical because of complexity and size issues as well as serving no clear behavioural purpose.

Figure 6: Vibration amplitude of the left (dotted) and right (solid) virtual tympana measured in decibels in response to a 4kHz tone in simulation (left) and on the aVLSI chip (right). The plot shows the amplitude of the tympanal responses as the sound source is rotated around the cricket.

Figure 7: Frequency-gain curves for the narrow-band and wide-band band-pass filters.

The long-term aim of this work is to better understand simple sensorimotor control loops in crickets and other insects. The next step is to mount this circuitry on a robot to carry out behavioural experiments, which we will compare with existing and new behavioural data (such as that in [10]). This will allow us to refine our models of the neural circuitry involved. Modelling the sensory afferent neurons in hardware is necessary in order to reduce processor load on our robot, so the next revision will include these either onboard, or on a companion chip as we have done before [11].
We will also move both sides of the auditory system onto a single chip to conserve space on the robot. It is our belief and experience that, as a result of this intelligent pre-processing carried out at the sensor level, the neural circuits necessary to accurately model the behaviour will remain simple. Acknowledgments The authors thank the Institute of Neuromorphic Engineering and the UK Biotechnology and Biological Sciences Research Council for funding the research in this paper. References [1] R. Wehner. Matched filters – neural models of the external world. J Comp Physiol A, 161: 511–531, 1987. [2] A. Michelsen, A. V. Popov, and B. Lewis. Physics of directional hearing in the cricket Gryllus bimaculatus. Journal of Comparative Physiology A, 175:153–164, 1994. [3] A. Michelsen. The tuned cricket. News Physiol. Sci., 13:32–38, 1998. [4] H. H. Lund, B. Webb, and J. Hallam. A robot attracted to the cricket species Gryllus bimaculatus. In P. Husbands and I. Harvey, editors, Proceedings of 4th European Conference on Artificial Life, pages 246–255. MIT Press/Bradford Books, MA., 1997. [5] R Reeve and B. Webb. New neural circuits for robot phonotaxis. Phil. Trans. R. Soc. Lond. A, 361:2245–2266, August 2003. [6] R. Reeve, A. van Schaik, C. Jin, T. Hamilton, B. Torben-Nielsen and B. Webb Directional hearing in a silicon cricket. Biosystems, (in revision), 2005b [7] T. Delbrück and A. van Schaik, Bias Current Generators with Wide Dynamic Range, Analog Integrated Circuits and Signal Processing 42(2), 2005 [8] A. van Schaik and C. Jin, The Tau Cell: A New Method for the Implementation of Arbitrary Differential Equations, IEEE International Symposium on Circuits and Systems (ISCAS) 2003 [9] Kazuo Imaizumi and Gerald S. Pollack. Neural coding of sound frequency by cricket auditory receptors. The Journal of Neuroscience, 19(4):1508– 1516, 1999. [10] Berthold Hedwig and James F.A. Poulet. Complex auditory behaviour emerges from simple reactive steering. 
Nature, 430:781–785, 2004. [11] R. Reeve, B. Webb, A. Horchler, G. Indiveri, and R. Quinn. New technologies for testing a model of cricket phonotaxis on an outdoor robot platform. Robotics and Autonomous Systems, 51(1):41-54, 2005.
|
2005
|
83
|
2,903
|
An Analog Visual Pre-Processing Processor Employing Cyclic Line Access in Only-Nearest-Neighbor-Interconnects Architecture Yusuke Nakashita Department of Frontier Informatics School of Frontier Sciences The University of Tokyo 5-1-5 Kashiwanoha, Kashiwa-shi, Chiba 277-8561, Japan yusuke@else.k.u-tokyo.ac.jp Yoshio Mita Department of Electrical Engineering School of Engineering The University of Tokyo 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan mita@ee.t.u-tokyo.ac.jp Tadashi Shibata Department of Frontier Informatics School of Frontier Sciences The University of Tokyo 5-1-5 Kashiwanoha, Kashiwa-shi, Chiba 277-8561, Japan shibata@ee.t.u-tokyo.ac.jp Abstract An analog focal-plane processor with a 128×128 photodiode array has been developed for directional edge filtering. It can perform 4×4-pixel kernel convolution over all pixels in only 256 steps of simple analog processing. A newly developed cyclic line access and row-parallel processing scheme, in conjunction with the “only-nearest-neighbor interconnects” architecture, has enabled a very simple implementation. A proof-of-concept chip was fabricated in a 0.35-µm 2-poly 3-metal CMOS technology, and edge filtering at a rate of 200 frames/sec. has been experimentally demonstrated. 1 Introduction Directional edge detection in an input image is the most essential operation in early visual processing [1, 2]. Such spatial filtering operations are carried out by taking the convolution between a block of pixels and a weight matrix, requiring a number of multiply-and-accumulate operations. Since the convolution operation must be repeated pixel-by-pixel to scan the entire image, the computation is very expensive and software solutions cannot meet real-time requirements. Therefore, the hardware implementation of focal-plane parallel processing is in high demand. However, there exists a hard problem, which we call the interconnects explosion, as illustrated in Fig. 1.
Figure 1: (a) Interconnects from nearest-neighbor (N.N.) and second N.N. pixels to a single pixel at the center. (b) N.N. and second N.N. interconnects for pixels in two rows, an illustrative example of the interconnects explosion. In carrying out a filtering operation for one pixel, the luminance data must be gathered from the nearest-neighbor and second nearest-neighbor pixels. The interconnects necessary for this are illustrated in Fig. 1(a). If such wiring is formed for two rows of pixels, an excessively high density of overlapping interconnects is required. If we extend this to an entire chip, it is impossible to form the wiring even with the most advanced VLSI interconnect technology. Biology has solved the problem with truly three-dimensional interconnect structures. Since VLSI technology allows only two-dimensional layouts with a limited number of stacked layers, the missing dimension is crucial. We must overcome the difficulty by introducing new architectures. In order to achieve real-time performance in image filtering, a number of VLSI chips have been developed in both digital [3, 4] and analog [5, 6, 7] technologies. A flash-convolution processor [4] allows a single 5×5-pixel convolution operation in a single clock cycle by introducing a subtle memory access scheme. However, for an N×M-pixel image, it takes N×M clock cycles to complete the processing. In the line-parallel processing scheme employed in [7], both row-parallel and column-parallel processing scan the target image several times and the entire filtering finishes in O(N+M) steps. (A single step includes several clock cycles to control the analog processing.) The purpose of this work is to present an analog focal-plane CMOS image sensor chip which carries out the directional edge filtering convolution for an N×M-pixel image in only M (or N) steps.
In order to achieve efficient processing, two key technologies have been introduced: the “only-nearest-neighbor interconnects” architecture and “cyclic line access and row-parallel processing”. The former was first developed in [8], and has enabled the convolution including second-nearest-neighbor luminance data using only nearest-neighbor interconnects, thus greatly reducing the interconnect complexity. However, the fill factor was sacrificed due to the pixel-parallel organization. The problem has been resolved in the present work by “cyclic line access and row-parallel processing.” Namely, the processing elements are separated from the array of photodiodes, and the “only-nearest-neighbor interconnects” architecture was realized as a separate module of row-parallel processing elements. The cyclic line access scheme, first introduced in the present work, has eliminated the redundant data read-out operations from the photodiode array and has established very efficient processing. As a result, it has become possible to complete the edge filtering for a 128×128-pixel image in only 128×2 steps. A proof-of-concept chip was fabricated in a 0.35-µm 2-poly 3-metal CMOS technology, and edge detection at a rate of 200 frames/sec. has been experimentally demonstrated. Figure 2: Edge filtering in the “only-nearest-neighbor interconnects” architecture: (a) first step; (b) second step; (c) all interconnects necessary for pixel-parallel processing; (d) PD's involved in the convolution. Figure 3: Edge filtering kernels realized in the “only-nearest-neighbor interconnects” architecture, written row by row: (a) [0 +1 +1 0; −1 0 +2 +1; −1 −2 0 +1; 0 −1 −1 0] and (b) [0 +1 +1 0; +1 +2 0 −1; +1 0 −2 −1; 0 −1 −1 0], the two diagonal orientations; (c) [0 +1 +1 0; +1 +2 +2 +1; −1 −2 −2 −1; 0 −1 −1 0], horizontal; (d) [0 +1 −1 0; +1 +2 −2 −1; +1 +2 −2 −1; 0 +1 −1 0], vertical. 2 System Organization The two key technologies employed in the present work are explained in the following.
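To make the kernel arithmetic concrete, the directional kernels of Fig. 3 can be transcribed as 4×4 integer arrays and applied in software. The sketch below is illustrative only: the row grouping of the kernel values is my reading of the extracted figure (the chip's exact orientation convention may differ), and the computation is a plain valid-mode correlation over 4×4 windows, not the chip's analog data flow.

```python
# 4x4 directional edge kernels transcribed from Fig. 3 (row grouping is my
# reading of the extracted values).
K_HORIZ = [[ 0, +1, +1,  0],
           [+1, +2, +2, +1],
           [-1, -2, -2, -1],
           [ 0, -1, -1,  0]]
K_VERT  = [[ 0, +1, -1,  0],
           [+1, +2, -2, -1],
           [+1, +2, -2, -1],
           [ 0, +1, -1,  0]]

def filter4x4(image, kernel):
    """Valid-mode 4x4 correlation: one output per 4x4 window."""
    h, w = len(image), len(image[0])
    return [[sum(kernel[j][i] * image[y + j][x + i]
                 for j in range(4) for i in range(4))
             for x in range(w - 3)]
            for y in range(h - 3)]

# A horizontal step edge: bright upper half, dark lower half.
img = [[10] * 8 for _ in range(4)] + [[0] * 8 for _ in range(4)]
resp_h = filter4x4(img, K_HORIZ)  # strong where the window straddles the edge
resp_v = filter4x4(img, K_VERT)   # zero everywhere: each kernel row sums to 0
```

Note that each row of the vertical kernel sums to zero, so a purely horizontal edge produces no vertical-kernel response, which is exactly the orientation selectivity the chip exploits.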
2.1 “Only-Nearest-Neighbor Interconnects” Architecture This architecture was first proposed in [8], and experimentally verified with small-scale test circuits (7×7 processing elements without photodiodes). The key feature of the architecture is that photodiodes (PD's) are placed at the four corners of each processing element (PE), and that the luminance data of each PD are shared by four PE's, as shown in Fig. 2. The edge filtering is carried out as explained below. First, as shown in Fig. 2 (a), pre-processing is carried out in each PE using the luminance data taken from the four PD's located at its corners. Then, the result is transferred to the center PE as shown in Fig. 2 (b), and the necessary computation is carried out. This accomplishes the filtering processing for half of the pixels. Then the roles of pre-processing PE's and center PE's are interchanged, and the same procedure follows to complete the processing for the rest of the pixels. The interconnects necessary for the entire parallel processing are shown in Fig. 2(c). In this manner, every PE can gather all data necessary for the processing from its nearest-neighbor and second nearest-neighbor pixels without complicated crossover interconnects. The kernels illustrated in Fig. 3 have all been realized in this architecture. The luminance data from the 12 PD's enclosed in Fig. 2 (d) are utilized to detect the edge information at the center location. Figure 4: Block diagram of the chip (a): a 131×131-pixel photodiode array with an address decoder, four rows of 130 (128 + 2) processing elements, and 128 parallel outputs; and organization of the row-parallel processing module (b): four rows of PEs with cyclic connection, analog memories, and 128 PEs for output. x in (b) represents the row number (1–131).
(c) shows the read-out circuit of a photodiode. 2.2 Cyclic Line Access and Row-Parallel Processing A block diagram of the analog edge-filtering processor is given in Fig. 4 (a). It consists of an array of 131×131 photodiodes (PD's) and a module for row-parallel processing placed at the bottom of the PD array. Figure 4(b) illustrates the organization of the row processing module, which is composed of four rows of 130 PE's and five rows of 131 analog memory cells that temporarily store the luminance data read out from the PD array. It should be noted that only three rows of PE's and four rows of PD's are sufficient to carry out single-row processing, as explained in reference to Fig. 2(d). However, one extra row of PE's and one extra row of analog memories for PD data storage were included in the row-parallel processing module. This is essential to carry out seamless data read-out from the PD array and computation without analog data shifts within the processing module. The chip yields the kernel convolution results for one of the rows in the PD array as 128 parallel outputs. Figure 5: The “cyclic line access and row-parallel processing” scheme; panels (a)-(d) show the row indices held in the analog PD memory and PEs at successive steps. Now, the operation of the row-parallel processing module is explained with reference to Fig. 4 (b) and Fig. 5. In order to carry out the convolution for the data in Rows 1-4, the PD data are temporarily stored in the analog memory array as shown in Fig. 5 (a). It is important to note that the data from Row 1 are duplicated at the bottom. The convolution operation proceeds using the upper four rows of data as explained in Fig. 5 (a). In the next step, the data from Row 5 are overwritten to the sites of the Row 1 data, as shown in Fig. 5 (b). The operation proceeds using the lower four rows of data, and the second set of outputs is produced. In the third step, the data from Row 6 are overwritten to the sites of the Row 2 data (Fig.
5 (c)), and the convolution is taken using the data in the enclosure. Although a part of the data (the top two rows) is separated from the rest, the topology of the hardware computation is identical to that explained in Fig. 5 (a). This is because the same set of data is stored in both the top and bottom PD memories, and the top and bottom PE's are connected by the “cyclic connection” illustrated in Fig. 4 (b). By introducing one such extra row of PD memories and one extra row of PE's with cyclic interconnections, row-parallel processing can be performed seamlessly with only a single-row PD data download at each step. 3 Circuit Configurations In this architecture, we need only two arithmetic operations, i.e., the four-input sum and the subtraction. Figure 6(a) shows the adder circuit using the multiple-input floating-gate source follower [9]. The substrate of M1 is connected to its source to avoid the body effect. The transistor M2 operates as a current source for fast output voltage stabilization as well as to achieve good linearity. Due to the charge redistribution in the floating gate, the average of the four input voltages appears at the output as Vout = (Vin1 + Vin2 + Vin3 + Vin4)/4 − VT, where VT represents the threshold voltage of M1. Here, the four coupling capacitors connected to the floating gate of M1 are identical, and the capacitive coupling between the floating gate and ground was assumed to be 0 for simplicity. The electrical charge in the floating gate is initialized periodically using the reset switch (M3). The coupling capacitors themselves are also utilized as temporary memories for the PD data read out from the PD array. Figure 6(b) shows the subtraction circuit, where the same source follower is used.
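Before turning to the subtraction circuit, the adder's ideal behavior can be modeled numerically: with identical coupling capacitors and no parasitic capacitance from the floating gate to ground, charge redistribution puts the capacitance-weighted average of the inputs on the floating gate, and the source follower shifts it down by one threshold voltage. The sketch below models only these idealized equations, not the actual circuit; the threshold value 0.6 V is illustrative, not from the paper.

```python
def floating_gate_adder(vins, caps=None, v_t=0.6):
    """Idealized multiple-input floating-gate source follower.

    Assumptions (as in the text): identical coupling capacitors unless
    overridden, zero parasitic capacitance from the floating gate to
    ground, and an ideal follower. v_t = 0.6 V is an illustrative
    threshold voltage, not a value from the paper.
    """
    caps = caps if caps is not None else [1.0] * len(vins)
    # Charge redistribution: the floating gate settles at the
    # capacitance-weighted average of the inputs.
    v_fg = sum(c * v for c, v in zip(caps, vins)) / sum(caps)
    # The source follower shifts the floating-gate voltage down by v_t.
    return v_fg - v_t

out = floating_gate_adder([1.0, 1.2, 0.8, 1.0])  # (1.0+1.2+0.8+1.0)/4 - 0.6 = 0.4
```

Passing unequal `caps` shows why matched capacitors matter: any mismatch turns the plain average into a weighted one and introduces a fixed-pattern gain error.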
Figure 6: Adder circuit (a) and subtraction circuit (b) using floating-gate MOS technology. When SW1 and SW2 are turned on and SW3 is turned off, a voltage difference is developed across the capacitor C5. Then, SW1 and SW2 are turned off, and SW3 is turned on. As a result, the output voltage becomes
where VT represents the threshold voltage of M1. 4 Experimental Results A proof-of-concept chip was designed and fabricated in a 0.35-µm 2-poly 3-metal CMOS technology. Figure 7 shows the photomicrograph of the chip, and the chip specifications are given in Table 1. Since the pitch of a single PE unit is larger than the pitch of the PD array, the 130 PE units are laid out as two separate L-shaped blocks at the periphery of the PD array, as seen in the chip photomicrograph. Successful operation of the chip was experimentally verified. An example is shown in Fig. 8, where the experimental results for diagonal edge filtering are demonstrated. Since the thresholding circuitry was not implemented in the present chip, only the convolution results are shown. The 128 parallel outputs from the test chip were multiplexed for observation using external multiplexers mounted on a milled printed circuit board. The vertical stripes observed in the result are due to the resistance variation in the external interconnects poorly produced on the milled printed circuit board. It was experimentally confirmed that the chip operates at 1000 frames/sec. However, the operation is limited by the integration time of the PD's, and typical motion images are processed at about 200 frames/sec. The power dissipation in the PE's was 25 mW and that in the PD array was 40 mW. Figure 7: Chip photomicrograph (131×131 PD array with two PE blocks at the periphery). Table 1: Chip specifications. Process technology: 0.35-µm CMOS, 2-poly, 3-metal; die size: 9.8 mm × 9.8 mm; voltage supply: 3.3 V; operating frequency: 50 MHz; power dissipation: 25 mW (PE array); PE operation: 1000 frames/sec; typical frame rate: 200 frames/sec (limited by PD integration time). Figure 8: Experimental setup (a), and measurement results of the diagonal edge filtering convolution (b). 5 Conclusions An analog edge-filtering processor has been developed based on two key technologies: the “only-nearest-neighbor interconnects” architecture and “cyclic line access and row-parallel processing”.
As a result, the convolution operation involving second nearest-neighbor pixel data for an N×M-pixel image can be performed in only M (or N) steps. The edge filtering operation for 128×128-pixel images at 200 frames/sec. has been experimentally demonstrated. The chip meets the requirements of low-power and real-time-response applications. 6 Acknowledgments The VLSI chip in this study was fabricated in the chip fabrication program of the VLSI Design and Education Center (VDEC), the University of Tokyo, in collaboration with Rohm Corporation and Toppan Printing Corporation. The work is partially supported by the Ministry of Education, Science, Sports, and Culture under a Grant-in-Aid for Scientific Research (No. 14205043). References [1] D. H. Hubel and T. N. Wiesel, “Receptive fields of single neurons in the cat's striate cortex,” Journal of Physiology, vol. 148, pp. 574-591, 1959. [2] M. Yagi and T. Shibata, “An image representation algorithm compatible with neural-associative-processor-based hardware recognition systems,” IEEE Trans. Neural Networks, vol. 14(5), pp. 1144-1161, 2003. [3] J. C. Gealow and C. G. Sodini, “A pixel-parallel processor using logic pitch-matched to dynamic memory,” IEEE J. Solid-State Circuits, vol. 34, pp. 831-839, 1999. [4] K. Ito, M. Ogawa and T. Shibata, “A variable-kernel flash-convolution image filtering processor,” Dig. Tech. Papers of Int. Solid-State Circuits Conf., pp. 470-471, 2003. [5] L. D. McIlrath, “A CCD/CMOS focal plane array edge detection processor implementing the multiscale veto algorithm,” IEEE J. Solid-State Circuits, vol. 31(9), pp. 1239-1247, 1996. [6] R. Etienne-Cummings, Z. K. Kalayjian and D. Cai, “A programmable focal plane MIMD image processor chip,” IEEE J. Solid-State Circuits, vol. 36(1), pp. 64-73, 2001. [7] T. Taguchi, M. Ogawa and T. Shibata, “An Analog Image Processing LSI Employing Scanning Line Parallel Processing,” Proc. 29th European Solid-State Circuits Conference (ESSCIRC 2003), pp. 65-68, 2003. [8] Y. Nakashita, Y.
Mita and T. Shibata, “An Analog Edge-Filtering Processor Employing Only-Nearest-Neighbor Interconnects,” Ext. Abstracts of the International Conference on Solid State Devices and Materials (SSDM '04), pp. 356-357, 2004. [9] T. Shibata and T. Ohmi, “A Functional MOS Transistor Featuring Gate-Level Weighted Sum and Threshold Operations,” IEEE Trans. Electron Devices, vol. 39(6), pp. 1444-1455, 1992.
|
2005
|
84
|
2,904
|
The Curse of Highly Variable Functions for Local Kernel Machines Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux Dept. IRO, Université de Montréal P.O. Box 6128, Downtown Branch, Montreal, H3C 3J7, Qc, Canada {bengioy,delallea,lerouxni}@iro.umontreal.ca Abstract We present a series of theoretical arguments supporting the claim that a large class of modern learning algorithms that rely solely on the smoothness prior, with similarity between examples expressed with a local kernel, are sensitive to the curse of dimensionality, or more precisely to the variability of the target. Our discussion covers supervised, semi-supervised and unsupervised learning algorithms. These algorithms are found to be local in the sense that crucial properties of the learned function at x depend mostly on the neighbors of x in the training set. This makes them sensitive to the curse of dimensionality, well studied for classical non-parametric statistical learning. We show in the case of the Gaussian kernel that when the function to be learned has many variations, these algorithms require a number of training examples proportional to the number of variations, which could be large even though there may exist short descriptions of the target function, i.e. their Kolmogorov complexity may be low. This suggests that there exist non-local learning algorithms that at least have the potential to learn about such structured but apparently complex functions (because locally they have many variations), while not using very specific prior domain knowledge. 1 Introduction A very large fraction of the recent work in statistical machine learning has been focused on non-parametric learning algorithms which rely solely, explicitly or implicitly, on the smoothness prior, which says that we prefer as solution functions f such that when x ≈ y, f(x) ≈ f(y).
Additional prior knowledge is expressed by choosing the space of the data and the particular notion of similarity between examples (typically expressed as a kernel function). This class of learning algorithms therefore includes most of the kernel machine algorithms (Schölkopf, Burges and Smola, 1999), such as Support Vector Machines (SVMs) (Boser, Guyon and Vapnik, 1992; Cortes and Vapnik, 1995) or Gaussian processes (Williams and Rasmussen, 1996), but also unsupervised learning algorithms that attempt to capture the manifold structure of the data, such as Locally Linear Embedding (Roweis and Saul, 2000), Isomap (Tenenbaum, de Silva and Langford, 2000), kernel PCA (Schölkopf, Smola and Müller, 1998), Laplacian Eigenmaps (Belkin and Niyogi, 2003), Manifold Charting (Brand, 2003), and spectral clustering algorithms (see (Weiss, 1999) for a review). More recently, there has also been much interest in non-parametric semi-supervised learning algorithms, such as (Zhu, Ghahramani and Lafferty, 2003; Zhou et al., 2004; Belkin, Matveeva and Niyogi, 2004; Delalleau, Bengio and Le Roux, 2005), which also fall in this category, and share many ideas with manifold learning algorithms. Since this is a very large class of algorithms and it is attracting so much attention, it is worthwhile to investigate its limitations, and this is the main goal of this paper. Since these methods share many characteristics with classical non-parametric statistical learning algorithms (such as the k-nearest neighbors and the Parzen windows regression and density estimation algorithms (Duda and Hart, 1973)), which have been shown to suffer from the so-called curse of dimensionality, it is logical to investigate the following question: to what extent do these modern kernel methods suffer from a similar problem?
In this paper, we focus on algorithms in which the learned function is expressed in terms of a linear combination of kernel functions applied on the training examples: f(x) = b + Σ_{i=1}^{n} α_i K_D(x, x_i) (1), where optionally a bias term b is added, D = {z_1, …, z_n} are training examples (z_i = x_i for unsupervised learning, z_i = (x_i, y_i) for supervised learning, and y_i can take a special “missing” value for semi-supervised learning). The α_i's are scalars chosen by the learning algorithm using D, and K_D(·, ·) is the kernel function, a symmetric function (sometimes expected to be positive definite), which may be chosen by taking into account all the x_i's. A typical kernel function is the Gaussian kernel, K_σ(u, v) = exp(−∥u − v∥²/σ²) (2), with the width σ controlling how local the kernel is. See (Bengio et al., 2004) to see that LLE, Isomap, Laplacian eigenmaps and other spectral manifold learning algorithms such as spectral clustering can be generalized to be written as in eq. 1 for a test point x. One obtains consistency of classical non-parametric estimators by appropriately varying the hyper-parameter that controls the locality of the estimator as n increases. Basically, the kernel should be allowed to become more and more local, so that statistical bias goes to zero, but the “effective number of examples” involved in the estimator at x (equal to k for the k-nearest neighbor estimator) should increase as n increases, so that statistical variance is also driven to 0. For a wide class of kernel regression estimators, the unconditional variance and squared bias can be shown to be written as follows (Härdle et al., 2004): expected error = C_1/(n σ^d) + C_2 σ⁴, with C_1 and C_2 not depending on n nor on the dimension d. Hence an optimal bandwidth is chosen proportional to n^{−1/(4+d)}, and the resulting generalization error (not counting the noise) converges as n^{−4/(4+d)}, which becomes very slow for large d.
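The bias-variance trade-off above is easy to check numerically: minimizing C_1/(nσ^d) + C_2σ⁴ over σ reproduces the stated optimal bandwidth n^{−1/(4+d)} and the n^{−4/(4+d)} error rate. A sketch (C1 = C2 = 1 are illustrative constants; only the exponents matter):

```python
# Minimize the kernel-regression error bound  err(s) = C1/(n * s**d) + C2 * s**4
# over the bandwidth s.  C1 = C2 = 1 are illustrative; only the exponents
# matter for the scaling laws quoted in the text.
def optimal_bandwidth(n, d, C1=1.0, C2=1.0):
    # Setting d(err)/ds = -d*C1/(n*s**(d+1)) + 4*C2*s**3 = 0 gives
    # s*^(d+4) = d*C1/(4*C2*n), i.e. s* proportional to n**(-1/(4+d)).
    return (d * C1 / (4.0 * C2 * n)) ** (1.0 / (d + 4))

def error_at(s, n, d, C1=1.0, C2=1.0):
    return C1 / (n * s ** d) + C2 * s ** 4

d = 10
e1 = error_at(optimal_bandwidth(10 ** 4, d), 10 ** 4, d)
e2 = error_at(optimal_bandwidth(10 ** 5, d), 10 ** 5, d)
# err(n) scales as n**(-4/(4+d)): for d = 10, a tenfold increase in n only
# shrinks the optimal error by the factor 10**(4/14), roughly 1.93.
ratio = e1 / e2
```

Both terms of the bound scale as n^{−4/(4+d)} at the optimum, which is why the ratio comes out exactly as 10^{4/14} here.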
Consider for example the increase in the number of examples required to get the same level of error in 1 dimension versus d dimensions. If n_1 is the number of examples required to get a level of error e in 1 dimension, then to get the same level of error in d dimensions requires on the order of n_1^{(4+d)/5} examples, i.e. the required number of examples is exponential in d. For the k-nearest neighbor classifier, a similar result is obtained (Snapp and Venkatesh, 1998): expected error = E_∞ + Σ_{j=2}^{∞} c_j n^{−j/d}, where E_∞ is the asymptotic error, d is the dimension and n the number of examples. Note however that, if the data distribution is concentrated on a lower-dimensional manifold, it is the manifold dimension that matters. Indeed, for data on a smooth lower-dimensional manifold, the only dimension that, say, a k-nearest neighbor classifier sees is the dimension of the manifold, since it only uses the Euclidean distances between the near neighbors, and if they lie on such a manifold then the local Euclidean distances approach the local geodesic distances on the manifold (Tenenbaum, de Silva and Langford, 2000). 2 Minimum Number of Bases Required In this section we present results showing that the number of required bases (hence of training examples) of a kernel machine with Gaussian kernel may grow linearly with the “variations” of the target function that must be captured in order to achieve a given error level. 2.1 Result for Supervised Learning The following theorem informs us about the number of sign changes that a Gaussian kernel machine can achieve when it has k bases (i.e. k support vectors, or at least k training examples). Theorem 2.1 (Theorem 2 of (Schmitt, 2002)). Let f : ℝ → ℝ be computed by a Gaussian kernel machine (eq. 1) with k bases (non-zero α_i's). Then f has at most 2k zeros. We would like to say something about kernel machines in ℝ^d, and we can do this simply by considering a straight line in ℝ^d and the number of sign changes that the solution function f can achieve along that line.
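Theorem 2.1 can be probed empirically: evaluate a one-dimensional Gaussian kernel machine with k centers on a fine grid and count its sign changes. The grid count is a lower bound on the number of zeros, so it can never exceed 2k. A sketch (centers, weights, width and bias are arbitrary illustrative choices):

```python
import math
import random

def gaussian_machine(t, alphas, centers, sigma, b=0.0):
    # One-dimensional instance of eq. 1 with the Gaussian kernel of eq. 2:
    # f(t) = b + sum_i alpha_i * exp(-(t - t_i)**2 / sigma**2)
    return b + sum(a * math.exp(-((t - c) ** 2) / sigma ** 2)
                   for a, c in zip(alphas, centers))

def count_sign_changes(f, lo, hi, steps=20000):
    """Sign changes of f on a uniform grid: a lower bound on its zeros."""
    changes, prev = 0, f(lo)
    for i in range(1, steps + 1):
        cur = f(lo + (hi - lo) * i / steps)
        if prev * cur < 0:
            changes += 1
        prev = cur
    return changes

random.seed(0)
k = 6
centers = sorted(random.uniform(-5, 5) for _ in range(k))
alphas = [random.choice([-1, 1]) * random.uniform(0.5, 2.0) for _ in range(k)]
f = lambda t: gaussian_machine(t, alphas, centers, sigma=0.7, b=-0.05)
changes = count_sign_changes(f, -8.0, 8.0)  # never exceeds 2k (Theorem 2.1)
```

Repeating the experiment with random weights and centers never produces more than 2k sign changes, consistent with the bound.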
Corollary 2.2. Suppose that the learning problem is such that, in order to achieve a given error level for samples from a distribution P with a Gaussian kernel machine (eq. 1), f must change sign at least 2k times along some straight line (i.e., in the case of a classifier, the decision surface must be crossed at least 2k times by that straight line). Then the kernel machine must have at least k bases (non-zero α_i's). Proof. Let the straight line be parameterized by x(t) = u + tw, with t ∈ ℝ and ∥w∥ = 1 without loss of generality. Define g : ℝ → ℝ by g(t) = f(u + tw). If f is a Gaussian kernel classifier with k′ bases, then g can be written g(t) = b + Σ_{i=1}^{k′} β_i exp(−(t − t_i)²/(2σ²)), where u + t_i w is the projection of x_i on the line D_{u,w} = {u + tw, t ∈ ℝ}, and β_i ≠ 0. The number of bases of g is k′′ ≤ k′, as there may exist x_i ≠ x_j such that t_i = t_j. Since g must change sign at least 2k times, thanks to Theorem 2.1 we can conclude that g has at least k bases, i.e. k ≤ k′′ ≤ k′. The above theorem tells us that if we are trying to represent a function that locally varies a lot (in the sense that its sign along a straight line changes many times), then we need many training examples to do so with a Gaussian kernel machine. Note that it says nothing about the dimensionality of the space, but we might expect to have to learn functions that vary more when the data is high-dimensional. The next theorem confirms this suspicion in the special case of the d-bits parity function: parity(b_1, …, b_d) = 1 if Σ_{i=1}^{d} b_i is even, and −1 otherwise, for (b_1, …, b_d) ∈ {0, 1}^d. We will show that learning this apparently simple function with Gaussians centered on points in {0, 1}^d is difficult, in the sense that it requires a number of Gaussians exponential in d (for a fixed Gaussian width). Note that our Corollary 2.2 does not apply to the d-bits parity function, so it represents another type of local variation (not along a line). However, we are also able to prove a strong result about that case.
We will use the following notations: X_d = {0, 1}^d = {x_1, x_2, …, x_{2^d}}, H⁰_d = {(b_1, …, b_d) ∈ X_d | b_d = 0} (3), and H¹_d = {(b_1, …, b_d) ∈ X_d | b_d = 1} (4). We say that a decision function f : ℝ^d → ℝ solves the parity problem if sign(f(x_i)) = parity(x_i) for all i in {1, …, 2^d}. Lemma 2.3. Let f(x) = Σ_{i=1}^{2^d} α_i K_σ(x_i, x) be a linear combination of Gaussians with the same width σ centered on points x_i ∈ X_d. If f solves the parity problem, then α_i parity(x_i) > 0 for all i. Proof. We prove this lemma by induction on d. If d = 1 there are only 2 points. Obviously one Gaussian is not enough to classify x_1 and x_2 correctly, so both α_1 and α_2 are non-zero, and α_1 α_2 < 0 (otherwise f is of constant sign). Without loss of generality, assume parity(x_1) = 1 and parity(x_2) = −1. Then f(x_1) > 0 > f(x_2), which implies α_1(1 − K_σ(x_1, x_2)) > α_2(1 − K_σ(x_1, x_2)) and α_1 > α_2 since K_σ(x_1, x_2) < 1. Thus α_1 > 0 and α_2 < 0, i.e. α_i parity(x_i) > 0 for i ∈ {1, 2}. Suppose now Lemma 2.3 is true for d = d′ − 1, and consider the case d = d′. We denote by x⁰_i the points in H⁰_d and by α⁰_i their coefficients in the expansion of f (see eq. 3 for the definition of H⁰_d). For x⁰_i ∈ H⁰_d, we denote by x¹_i ∈ H¹_d its projection on H¹_d (obtained by setting its last bit to 1), whose coefficient in f is α¹_i. For any x ∈ H⁰_d and x¹_j ∈ H¹_d we have: K_σ(x¹_j, x) = exp(−∥x¹_j − x∥²/(2σ²)) = exp(−1/(2σ²)) exp(−∥x⁰_j − x∥²/(2σ²)) = γ K_σ(x⁰_j, x), where γ = exp(−1/(2σ²)) ∈ (0, 1). Thus f(x) for x ∈ H⁰_d can be written f(x) = Σ_{x⁰_i ∈ H⁰_d} α⁰_i K_σ(x⁰_i, x) + Σ_{x¹_j ∈ H¹_d} α¹_j γ K_σ(x⁰_j, x) = Σ_{x⁰_i ∈ H⁰_d} (α⁰_i + γ α¹_i) K_σ(x⁰_i, x). Since H⁰_d is isomorphic to X_{d−1}, the restriction of f to H⁰_d implicitly defines a function over X_{d−1} that solves the parity problem (because the last bit in H⁰_d is 0, the parity is not modified). Using our induction hypothesis, we have that for all x⁰_i ∈ H⁰_d: (α⁰_i + γ α¹_i) parity(x⁰_i) > 0. (5) A similar reasoning can be made if we switch the roles of H⁰_d and H¹_d.
One has to be careful that the parity is modified between H¹_d and its mapping to X_{d−1} (because the last bit in H¹_d is 1). Thus we obtain that the restriction of (−f) to H¹_d defines a function over X_{d−1} that solves the parity problem, and the induction hypothesis tells us that for all x¹_j ∈ H¹_d: (−(α¹_j + γ α⁰_j)) (−parity(x¹_j)) > 0, (6) and the two negative signs cancel out. Now consider any x⁰_i ∈ H⁰_d and its projection x¹_i ∈ H¹_d. Without loss of generality, assume parity(x⁰_i) = 1 (and thus parity(x¹_i) = −1). Using eq. 5 and 6 we obtain: α⁰_i + γ α¹_i > 0 and α¹_i + γ α⁰_i < 0. It is obvious that for these two inequalities to be simultaneously verified, we need α⁰_i and α¹_i to be non-zero and of opposite sign. Moreover, because γ ∈ (0, 1), α⁰_i + γ α¹_i > 0 > α¹_i + γ α⁰_i ⇒ α⁰_i > α¹_i, which implies α⁰_i > 0 and α¹_i < 0, i.e. α⁰_i parity(x⁰_i) > 0 and α¹_i parity(x¹_i) > 0. Since this is true for all x⁰_i in H⁰_d, we have proved Lemma 2.3. Theorem 2.4. Let f(x) = b + Σ_{i=1}^{2^d} α_i K_σ(x_i, x) be an affine combination of Gaussians with the same width σ centered on points x_i ∈ X_d. If f solves the parity problem, then there are at least 2^{d−1} non-zero coefficients α_i. Proof. We begin with two preliminary results. First, given any x_i ∈ X_d, the number of points in X_d that differ from x_i by exactly k bits is the binomial coefficient C(d, k). Thus, Σ_{x_j ∈ X_d} K_σ(x_i, x_j) = Σ_{k=0}^{d} C(d, k) exp(−k/(2σ²)) = c_σ. (7) Second, it is possible to find a linear combination (i.e. without bias) of Gaussians g such that g(x_i) = f(x_i) for all x_i ∈ X_d. Indeed, let g(x) = f(x) − b + Σ_{x_j ∈ X_d} β_j K_σ(x_j, x). (8) g verifies g(x_i) = f(x_i) iff Σ_{x_j ∈ X_d} β_j K_σ(x_j, x_i) = b, i.e. the vector β satisfies the linear system M_σ β = b·1, where M_σ is the kernel matrix whose element (i, j) is K_σ(x_i, x_j) and 1 is a vector of ones. It is well known that M_σ is invertible as long as the x_i are all different, which is the case here (Micchelli, 1986). Thus β = b M_σ^{−1} 1 is the only solution to the system. We now proceed to the proof of the theorem.
By contradiction, suppose f solves the parity problem with fewer than 2^{d−1} non-zero coefficients α_i. Then there exist two points x_s and x_t in X_d such that α_s = α_t = 0 and parity(x_s) = 1 = −parity(x_t). Consider the function g defined as in eq. 8 with β = b M_σ^{−1} 1. Since g(x_i) = f(x_i) for all x_i ∈ X_d, g solves the parity problem with a linear combination of Gaussians centered on points in X_d. Thus, applying Lemma 2.3, we have in particular that β_s parity(x_s) > 0 and β_t parity(x_t) > 0 (because α_s = α_t = 0), so that β_s β_t < 0. But, because of eq. 7, M_σ 1 = c_σ 1, which means 1 is an eigenvector of M_σ with eigenvalue c_σ > 0. Consequently, 1 is also an eigenvector of M_σ^{−1} with eigenvalue c_σ^{−1} > 0, and β = b M_σ^{−1} 1 = b c_σ^{−1} 1, which is in contradiction with β_s β_t < 0: f must therefore have at least 2^{d−1} non-zero coefficients. The bound in Theorem 2.4 is tight, since it is possible to solve the parity problem with exactly 2^{d−1} Gaussians and a bias, for instance by using a negative bias and putting a positive weight on each example satisfying parity(x_i) = 1. When trained to learn the parity function, an SVM may learn a function that looks like the opposite of the parity on test points (while still performing optimally on training points), but this is an artefact of the specific geometry of the problem, and only occurs when the training set size is appropriate compared to |X_d| = 2^d (see (Bengio, Delalleau and Le Roux, 2005) for details). Note that if the centers of the Gaussians are no longer restricted to be points in X_d, it is possible to solve the parity problem with only d + 1 Gaussians and no bias (Bengio, Delalleau and Le Roux, 2005). One may argue that parity is a simple discrete toy problem of little interest. But even if we have to restrict the analysis to discrete samples in {0, 1}^d for mathematical reasons, the parity function can be extended to a smooth function on the [0, 1]^d hypercube depending only on the continuous sum b_1 + … + b_d.
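The tightness construction mentioned above is easy to verify directly: one positively weighted Gaussian on each of the 2^{d−1} even-parity points, plus a negative bias, reproduces the parity sign pattern on all of {0,1}^d. A sketch for d = 4 (the width σ = 0.5, the unit weights, and the bias −0.8 are my illustrative choices, not values from the paper):

```python
import math
from itertools import product

def parity(bits):
    return 1 if sum(bits) % 2 == 0 else -1

def k_gauss(u, v, sigma):
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-d2 / (2.0 * sigma ** 2))

d, sigma, bias = 4, 0.5, -0.8          # sigma and bias are illustrative choices
points = list(product([0, 1], repeat=d))
centers = [p for p in points if parity(p) == 1]  # the 2**(d-1) even-parity points

def f(x):
    # Negative bias plus a unit positive weight on every even-parity point.
    return bias + sum(k_gauss(c, x, sigma) for c in centers)

# The sign of f matches parity on every corner of the hypercube.
ok = all((1 if f(p) > 0 else -1) == parity(p) for p in points)
```

With this σ, an even-parity point sees its own Gaussian (value 1) dominate, while an odd-parity point only collects the d small contributions from its unit-distance neighbors, so any bias strictly between those two levels separates the classes.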
Theorem 2.4 is thus a basis to argue that the number of Gaussians needed to learn a function with many variations in a continuous space may scale linearly with these variations, and thus possibly exponentially in the dimension.

2.2 Results for Semi-Supervised Learning

In this section we focus on algorithms of the type described in recent papers (Zhu, Ghahramani and Lafferty, 2003; Zhou et al., 2004; Belkin, Matveeva and Niyogi, 2004; Delalleau, Bengio and Le Roux, 2005), which are graph-based non-parametric semi-supervised learning algorithms. Note that transductive SVMs, which are another class of semi-supervised algorithms, are already subject to the limitations of corollary 2.2. The graph-based algorithms we consider here can be seen as minimizing the following cost function, as shown in (Delalleau, Bengio and Le Roux, 2005):

C(\hat{Y}) = \|\hat{Y}_l - Y_l\|^2 + \mu \hat{Y}^\top L \hat{Y} + \mu\epsilon \|\hat{Y}\|^2 \quad (9)

with Ŷ = (ŷ_1, . . . , ŷ_n) the estimated labels on both labeled and unlabeled data, and L the (un-normalized) graph Laplacian derived from a similarity function W between points such that W_{ij} = W(x_i, x_j) corresponds to the weights of the edges in the graph. Here, Ŷ_l = (ŷ_1, . . . , ŷ_l) is the vector of estimated labels on the l labeled examples, whose known labels are given by Y_l = (y_1, . . . , y_l), and one may constrain Ŷ_l = Y_l as in (Zhu, Ghahramani and Lafferty, 2003) by letting µ → 0. We define a region with constant label as a connected subset of the graph where all nodes x_i have the same estimated label (sign of ŷ_i), and such that no other node can be added while keeping these properties.

Proposition 2.5. After running a label propagation algorithm minimizing the cost of eq. 9, the number of regions with constant estimated label is less than (or equal to) the number of labeled examples.

Proof. By contradiction, if this proposition is false, then there exists a region with constant estimated label that does not contain any labeled example.
Without loss of generality, consider the case of a positive constant label, with x_{l+1}, . . . , x_{l+q} the q samples in this region. The part of the cost of eq. 9 depending on their labels is

C(\hat{y}_{l+1}, \ldots, \hat{y}_{l+q}) = \frac{\mu}{2} \sum_{i,j=l+1}^{l+q} W_{ij} (\hat{y}_i - \hat{y}_j)^2 + \mu \sum_{i=l+1}^{l+q} \sum_{j \notin \{l+1, \ldots, l+q\}} W_{ij} (\hat{y}_i - \hat{y}_j)^2 + \mu\epsilon \sum_{i=l+1}^{l+q} \hat{y}_i^2.

The second term is strictly positive, and because the region we consider is maximal (by definition), all samples x_j outside of the region such that W_{ij} > 0 verify ŷ_j < 0 (for x_i a sample in the region). Since all ŷ_i are strictly positive for i ∈ {l + 1, . . . , l + q}, this second term can be strictly decreased by setting all ŷ_i to 0 for i ∈ {l + 1, . . . , l + q}. This also sets the first and third terms to zero (i.e. their minimum), showing that the set of labels ŷ_i is not optimal, which conflicts with their definition as labels minimizing C.

This means that if the class distributions are such that there are many distinct regions with constant labels (either separated by low-density regions or by regions with samples from the other class), we will need at least as many labeled samples as there are such regions (assuming we are using a sparse local kernel such as the k-nearest-neighbor kernel, or a thresholded Gaussian kernel). But this number could grow exponentially with the dimension of the manifold(s) on which the data lie, for instance in the case of a labeling function varying highly along each dimension, even if the label variations are "simple" in a non-local sense, e.g. if they alternate in a regular fashion. When the kernel is not sparse (e.g. a Gaussian kernel), obtaining such a result is less obvious. However, there often exists a sparse approximation of the kernel. Thus we conjecture that the same kind of result holds for dense weight matrices, if the weighting function is local in the sense that it is close to zero when applied to a pair of examples far from each other.
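Because the cost in eq. 9 is quadratic in Ŷ, its minimizer has a closed form: setting the gradient to zero gives (S + µL + µεI)Ŷ = SY, with S the diagonal 0/1 matrix selecting the labeled examples (a standard derivation, not spelled out in the text). A small Python/NumPy sketch on a hypothetical 6-node chain graph with one positive and one negative label, illustrating proposition 2.5:

```python
import numpy as np

n, mu, eps = 6, 1.0, 1e-3
W = np.zeros((n, n))                 # chain graph 0-1-2-3-4-5, unit edge weights
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W       # un-normalized graph Laplacian

S = np.zeros((n, n))                 # diagonal selector of the labeled examples
Y = np.zeros(n)
S[0, 0] = S[n - 1, n - 1] = 1.0
Y[0], Y[n - 1] = 1.0, -1.0           # one positive and one negative label

# minimizer of ||Yhat_l - Y_l||^2 + mu Yhat' L Yhat + mu*eps ||Yhat||^2
Yhat = np.linalg.solve(S + mu * L + mu * eps * np.eye(n), S @ Y)
```

With one labeled point per class, the propagated signs form two constant-label regions (positive near node 0, negative near node 5), matching the bound of proposition 2.5.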
3 Extensions and Conclusions In (Bengio, Delalleau and Le Roux, 2005) we present additional results that apply to unsupervised learning algorithms such as non-parametric manifold learning algorithms (Roweis and Saul, 2000; Tenenbaum, de Silva and Langford, 2000; Sch¨olkopf, Smola and M¨uller, 1998; Belkin and Niyogi, 2003). We find that when the underlying manifold varies a lot in the sense of having high curvature in many places, then a large number of examples is required. Note that the tangent plane is defined by the derivatives of the kernel machine function f, for such algorithms. The core result is that the manifold tangent plane at x is mostly defined by the near neighbors of x in the training set (more precisely it is constrained to be in the span of the vectors x −xi, with xi a neighbor of x). Hence one needs to cover the manifold with small enough linear patches with at least d + 1 examples per patch (where d is the dimension of the manifold). In the same paper, we present a conjecture that generalizes the results presented here for Gaussian kernel classifiers to a larger class of local kernels, using the same notion of locality of the derivative summarized above for manifold learning algorithms. In that case the derivative of f represents the normal of the decision surface, and we find that at x it mostly depends on the neighbors of x in the training set. It could be argued that if a function has many local variations (hence is not very smooth), then it is not learnable unless having strong prior knowledge at hand. However, this is not true. For example consider functions that have low Kolmogorov complexity, i.e. can be described by a short string in some language. The only prior we need in order to quickly learn such functions (in terms of number of examples needed) is that functions that are simple to express in that language (e.g. a programming language) are preferred. 
For example, the functions g(x) = sin(x) or g(x) = parity(x) would be easy to learn using the C programming language to define the prior, even though the number of variations of g(x) can be chosen to be arbitrarily large (hence also the number of required training examples when using only the smoothness prior), while keeping the Kolmogorov complexity constant. We do not propose to necessarily focus on the Kolmogorov complexity to design new learning algorithms, but we use this example to illustrate that it is possible to learn apparently complex functions (because they vary a lot), as long as one uses a “non-local” learning algorithm, corresponding to a broad prior, not solely relying on the smoothness prior. Of course, if additional domain knowledge about the task is available, it should be used, but without abandoning research on learning algorithms that can address a wider scope of problems. We hope that this paper will stimulate more research into such learning algorithms, since we expect local learning algorithms (that only rely on the smoothness prior) will be insufficient to make significant progress on complex problems such as those raised by research on Artificial Intelligence. Acknowledgments The authors would like to thank the following funding organizations for support: NSERC, MITACS, and the Canada Research Chairs. The authors are also grateful for the feedback and stimulating exchanges that helped shape this paper, with Yann Le Cun and L´eon Bottou, as well as for the anonymous reviewers’ helpful comments. References Belkin, M., Matveeva, I., and Niyogi, P. (2004). Regularization and semi-supervised learning on large graphs. In Shawe-Taylor, J. and Singer, Y., editors, COLT’2004. Springer. Belkin, M. and Niyogi, P. (2003). Using manifold structure for partially labeled classification. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems 15, Cambridge, MA. MIT Press. 
Bengio, Y., Delalleau, O., and Le Roux, N. (2005). The curse of dimensionality for local kernel machines. Technical Report 1258, D´epartement d’informatique et recherche op´erationnelle, Universit´e de Montr´eal. Bengio, Y., Delalleau, O., Le Roux, N., Paiement, J.-F., Vincent, P., and Ouimet, M. (2004). Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16(10):2197–2219. Boser, B., Guyon, I., and Vapnik, V. (1992). A training algorithm for optimal margin classifiers. In Fifth Annual Workshop on Computational Learning Theory, pages 144– 152, Pittsburgh. Brand, M. (2003). Charting a manifold. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems 15. MIT Press. Cortes, C. and Vapnik, V. (1995). Support vector networks. Machine Learning, 20:273– 297. Delalleau, O., Bengio, Y., and Le Roux, N. (2005). Efficient non-parametric function induction in semi-supervised learning. In Cowell, R. and Ghahramani, Z., editors, Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, Jan 6-8, 2005, Savannah Hotel, Barbados, pages 96–103. Society for Artificial Intelligence and Statistics. Duda, R. and Hart, P. (1973). Pattern Classification and Scene Analysis. Wiley, New York. H¨ardle, W., M¨uller, M., Sperlich, S., and Werwatz, A. (2004). Nonparametric and Semiparametric Models. Springer, http://www.xplore-stat.de/ebooks/ebooks.html. Micchelli, C. A. (1986). Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constructive Approximation, 2:11–22. Roweis, S. and Saul, L. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326. Schmitt, M. (2002). Descartes’ rule of signs for radial basis function neural networks. Neural Computation, 14(12):2997–3011. Sch¨olkopf, B., Burges, C. J. C., and Smola, A. J. (1999). Advances in Kernel Methods — Support Vector Learning. 
MIT Press, Cambridge, MA. Sch¨olkopf, B., Smola, A., and M¨uller, K.-R. (1998). Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319. Snapp, R. R. and Venkatesh, S. S. (1998). Asymptotic derivation of the finite-sample risk of the k nearest neighbor classifier. Technical Report UVM-CS-1998-0101, Department of Computer Science, University of Vermont. Tenenbaum, J., de Silva, V., and Langford, J. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323. Weiss, Y. (1999). Segmentation using eigenvectors: a unifying view. In Proceedings IEEE International Conference on Computer Vision, pages 975–982. Williams, C. and Rasmussen, C. (1996). Gaussian processes for regression. In Touretzky, D., Mozer, M., and Hasselmo, M., editors, Advances in Neural Information Processing Systems 8, pages 514–520. MIT Press, Cambridge, MA. Zhou, D., Bousquet, O., Navin Lal, T., Weston, J., and Sch¨olkopf, B. (2004). Learning with local and global consistency. In Thrun, S., Saul, L., and Sch¨olkopf, B., editors, Advances in Neural Information Processing Systems 16, Cambridge, MA. MIT Press. Zhu, X., Ghahramani, Z., and Lafferty, J. (2003). Semi-supervised learning using Gaussian fields and harmonic functions. In ICML’2003.
| 2005 | 85 | 2,905 |
Fast Gaussian Process Regression using KD-Trees Yirong Shen Electrical Engineering Dept. Stanford University Stanford, CA 94305 Andrew Y. Ng Computer Science Dept. Stanford University Stanford, CA 94305 Matthias Seeger Computer Science Div. UC Berkeley Berkeley, CA 94720 Abstract The computation required for Gaussian process regression with n training examples is about O(n3) during training and O(n) for each prediction. This makes Gaussian process regression too slow for large datasets. In this paper, we present a fast approximation method, based on kd-trees, that significantly reduces both the prediction and the training times of Gaussian process regression. 1 Introduction We consider (regression) estimation of a function x 7→u(x) from noisy observations. If the data-generating process is not well understood, simple parametric learning algorithms, for example ones from the generalized linear model (GLM) family, may be hard to apply because of the difficulty of choosing good features. In contrast, the nonparametric Gaussian process (GP) model [19] offers a flexible and powerful alternative. However, a major drawback of GP models is that the computational cost of learning is about O(n3), and the cost of making a single prediction is O(n), where n is the number of training examples. This high computational complexity severely limits its scalability to large problems, and we believe has proved a significant barrier to the wider adoption of the GP model. In this paper, we address the scaling issue by recognizing that learning and predictions with a GP regression (GPR) model can be implemented using the matrix-vector multiplication (MVM) primitive z 7→Kz. Here, K ∈Rn,n is the kernel matrix, and z ∈Rn is an arbitrary vector. For the wide class of so-called isotropic kernels, MVM can be approximated efficiently by arranging the dataset in a tree-type multiresolution data structure such as kd-trees [13], ball trees [11], or cover trees [1]. 
This approximation can sometimes be made orders of magnitude faster than the direct computation, without sacrificing much in terms of accuracy. Further, the storage requirement for the tree is O(n), while direct storage of the kernel matrix would require O(n²) space. We demonstrate the efficiency of the tree approach on several large datasets. In the sequel, for the sake of simplicity we will focus on kd-trees (even though it is known that kd-trees do not scale well to high dimensional data). However, it is also completely straightforward to apply the ideas in this paper to other tree-type data structures, for example ball trees and cover trees, which typically scale significantly better to high dimensional data.

2 The Gaussian Process Regression Model

Suppose that we observe some data D = {(x_i, y_i) | i = 1, . . . , n}, x_i ∈ X, y_i ∈ R, sampled independently and identically distributed (i.i.d.) from some unknown distribution. Our goal is to predict the response y∗ on future test points x∗ with small mean-squared error under the data distribution. Our model consists of a latent (unobserved) function x ↦ u so that y_i = u_i + ε_i, where u_i = u(x_i), and the ε_i are independent Gaussian noise variables with zero mean and variance σ² > 0. Following the Bayesian paradigm, we place a prior distribution P(u(·)) on the function u(·) and use the posterior distribution P(u(·)|D) ∝ N(y|u, σ²I) P(u(·)) in order to predict y∗ on new points x∗. Here, y = [y_1, . . . , y_n]^T and u = [u_1, . . . , u_n]^T are vectors in R^n, and N(·|µ, Σ) is the density of a Gaussian with mean µ and covariance Σ. For a GPR model, the prior distribution is a (zero-mean) Gaussian process defined in terms of a positive definite kernel (or covariance) function K : X² → R. For the purposes of this paper, a GP can be thought of as a mapping from arbitrary finite subsets {x̃_i} ⊂ X of points, to corresponding zero-mean Gaussian distributions with covariance matrix K̃ = (K(x̃_i, x̃_j))_{i,j}.
(This notation indicates that K̃ is a matrix whose (i, j) element is K(x̃_i, x̃_j).) In this paper, we focus on the problem of speeding up GPR under the assumption that the kernel is monotonic isotropic. A kernel function K(x, x′) is called isotropic if it depends only on the Euclidean distance r = ‖x − x′‖_2 between the points, and it is monotonic isotropic if it can be written as a monotonic function of r.

3 Fast GPR predictions

Since u(x_1), u(x_2), . . . , u(x_n) and u(x∗) are jointly Gaussian, it is easy to see that the predictive (posterior) distribution P(u∗|D), u∗ = u(x∗), is given by

P(u_* \mid D) = \mathcal{N}\left(u_* \mid k_*^T M^{-1} y,\; K(x_*, x_*) - k_*^T M^{-1} k_*\right), \quad (1)

where k∗ = [K(x∗, x_1), . . . , K(x∗, x_n)]^T ∈ R^n, and M = K + σ²I, K = (K(x_i, x_j))_{i,j}. Therefore, if p = M^{-1} y, the optimal prediction under the model is û∗ = k∗^T p, and the predictive variance (of P(u∗|D)) can be used to quantify our uncertainty in the prediction. Details can be found in [19]. ([16] also provides a tutorial on GPs.) Once p is determined, making a prediction requires that we compute

k_*^T p = \sum_{i=1}^{n} K(x_*, x_i)\, p_i = \sum_{i=1}^{n} w_i p_i \quad (2)

which is O(n) since it requires scanning through the entire training set and computing K(x∗, x_i) for each x_i in the training set. When the training set is very large, this becomes prohibitively slow. In such situations, it is desirable to use a fast approximation instead of the exact direct implementation.

3.1 Weighted Sum Approximation

The computations in Equation 2 can be thought of as a weighted sum, where w_i = K(x∗, x_i) is the weight on the i-th summand p_i. We observe that if the dataset is divided into groups where all data points in a group have similar weights, then it is possible to compute a fast approximation to the above weighted sum. For example, let G be a set of data points that all have weights near some value w. The contribution to the weighted sum by points in G is

\sum_{i: x_i \in G} w_i p_i = \sum_{i: x_i \in G} w\, p_i + \sum_{i: x_i \in G} (w_i - w)\, p_i = w \sum_{i: x_i \in G} p_i + \sum_{i: x_i \in G} \epsilon_i p_i

where ε_i = w_i − w.
Assuming that \sum_{i: x_i \in G} p_i is known in advance, w \sum_{i: x_i \in G} p_i can then be computed in constant time and used as an approximation to \sum_{i: x_i \in G} w_i p_i if \sum_{i: x_i \in G} \epsilon_i p_i is small. We note that for a continuous isotropic kernel function, the weights w_i = K(x∗, x_i) and w_j = K(x∗, x_j) will be similar if x_i and x_j are close to each other. In addition, if the kernel function monotonically decreases to zero with increasing ‖x_i − x_j‖, then points that are far away from the query point x∗ will all have weights near zero.

[Figure 1: Example of bounding rectangles for nodes in the first three levels of a kd-tree.]

Given a new query, we would like to automatically group points together that have similar weights. But the weights are dependent on the query point and hence the best grouping of the data will also be dependent on the query point. Thus, the problem we now face is, given a query point, how to quickly divide the dataset into groups such that data points in the same group have similar weights. Our solution to this problem takes inspiration and ideas from [9], and uses an enhanced kd-tree data structure.

3.2 The kd-tree algorithm

A kd-tree [13] is a binary tree that recursively partitions a set of data points. Each node in the kd-tree contains a subset of the data, and records the bounding hyper-rectangle for this subset. The root node contains the entire dataset. Any node that contains more than 1 data point has two child nodes, and the data points contained by the parent node are split among the children by cutting the parent node's bounding hyper-rectangle in the middle of its widest dimension.¹ An example with inputs of dimension 2 is illustrated in Figure 1. For our algorithm, we will enhance the kd-tree with additional cached information at each node. At a node ND whose set of data points is X_ND, in addition to the bounding box we also store:

1. N_ND = |X_ND|: the number of data points contained by ND.
2.
S^Unweighted_ND = \sum_{x_i \in X_{ND}} p_i: the unweighted sum corresponding to the data contained by ND.

Now, let

S^{Weighted}_{ND} = \sum_{i: x_i \in X_{ND}} K(x_*, x_i)\, p_i \quad (3)

be the weighted sum corresponding to node ND. One way to calculate S^Weighted_ND is to simply have the 2 children of ND recursively compute S^Weighted_Left(ND) and S^Weighted_Right(ND) (where Left(ND) and Right(ND) are the 2 children of ND) and then sum the two results. This takes O(n) time—same as the direct computation—since all O(n) nodes need to be processed. However, if we only want an approximate result for the weighted sum, then we can cut off the recursion at nodes whose data points have nearly identical weights for the given query point. Since each node maintains a bounding box of the data points that it owns, we can easily bound the maximum weight variation of the data points owned by a node (as in [9]). The nearest and farthest points in the bounding box to the query point can be computed in O(input dimension) operations, and since the kernel function is isotropic monotonic, these points give us the maximum and minimum possible weights w_max and w_min of any data point in the bounding box. Now, whenever the difference between w_max and w_min is small, we can cut off the recursion and approximate the weighted sum in Equation 3 by w · S^Unweighted_ND, where w = ½(w_min + w_max). The speed and accuracy of the approximation is highly dependent on the cutoff criterion. Moore et al. used the following cutoff rule in [9]: w_max − w_min ≤ 2ε(W_SoFar + N_ND w_min). Here, W_SoFar is the weight accumulated so far in the computation, and W_SoFar + N_ND w_min serves as a lower bound on the total sum of weights involved in the regression.

¹There are numerous other possible kd-tree splitting criteria. Our criterion is the same as the one used in [9] and [5].
In our experiments, we found that although the above cutoff rule ensures the error incurred at any particular data point in ND is small, the total error incurred by all the data points in ND can still be high if N_ND is very large. In our experiments (not reported here), their method gave poor performance on the GPR task, in many cases incurring significant errors in the predictions (or, alternatively, running no faster than exact computation, if a sufficiently small ε is chosen to prevent the large accumulation of errors). Hence, we chose instead the following cutoff rule: N_ND(w_max − w_min) ≤ 2ε(W_SoFar + N_ND w_min), which also takes into account the total number of points contained in a node. From the formula above, we see that the decision of whether to cut off computation at a node depends on the value of W_SoFar (the total weight of all the points that have been added to the summation so far). Thus it is desirable to quickly accumulate weights at the beginning of the computation, so that more of the later recursions can be cut off. This can be accomplished by recursing first into the child node that is nearer to the query point when a node does not meet the cutoff criterion. (In contrast, [9] always visits the children in left-right order, which in our experiments also gave significantly worse performance than our version.)
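The difference between the two cutoff rules is easy to see on illustrative numbers (hypothetical, not from the paper's experiments): for a node with many points, the per-point rule of [9] can accept a cutoff whose worst-case total error is large, while the N_ND-scaled rule forces further recursion:

```python
# Hypothetical node statistics (illustrative only, not from the paper's data)
N, eps = 10000, 1e-3            # points in the node, cutoff parameter
w_min, w_max = 0.100, 0.102     # tiny per-point weight spread
W_so_far = 0.0                  # no weight accumulated yet

# Per-point rule of [9]: accepts the cutoff here
old_rule_cuts = (w_max - w_min) <= 2 * eps * (W_so_far + N * w_min)

# N_ND-scaled rule: rejects it and forces recursion
new_rule_cuts = N * (w_max - w_min) <= 2 * eps * (W_so_far + N * w_min)

# Worst-case total error of cutting off at this node (assuming p_i of order 1):
worst_case_total_error = N * 0.5 * (w_max - w_min)   # about 10 -- not small
```

The per-point error is at most ½(w_max − w_min) ≈ 0.001, but summed over 10000 points it need not be small, which is exactly the failure mode the N_ND-scaled rule guards against.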
Our overall algorithm is summarized below:

WeightedSum(x∗, ND, WSoFar, ε):
    compute w_max and w_min for the given query point x∗
    S^Weighted_ND = 0
    if N_ND(w_max − w_min) ≤ 2ε(WSoFar + N_ND w_min) then
        S^Weighted_ND = ½(w_min + w_max) · S^Unweighted_ND
        WSoFar = WSoFar + w_min N_ND
        return S^Weighted_ND
    else
        determine which child is nearer to the query point x∗
        S^Weighted_Nearer = WeightedSum(x∗, nearer child of ND, WSoFar, ε)
        S^Weighted_Farther = WeightedSum(x∗, farther child of ND, WSoFar, ε)
        S^Weighted_ND = S^Weighted_Nearer + S^Weighted_Farther
        return S^Weighted_ND

4 Fast Training

Training (or first-level inference) in the GPR model requires solving the positive definite linear system

M p = y, \quad M = K + \sigma^2 I \quad (4)

for the vector p, which in the previous section we assumed had already been pre-computed. Directly calculating p by inverting the matrix M costs about O(n³) in general. However, in practice there are many ways to quickly obtain approximate solutions to linear systems. Since the system matrix is symmetric positive definite, the conjugate gradient (CG) algorithm can be applied. CG is an iterative method which searches for p by maximizing the quadratic function q(z) = y^T z − ½ z^T M z. Briefly, CG ensures that z after iteration k is a maximizer of q over a (Krylov) subspace of dimension k. For details about CG and many other approximate linear solvers, see [15]. Thus, z "converges" to p (the unconstrained maximizer of q) after n steps, but intermediate z can be used as approximate solutions. The speed of convergence depends on the eigenstructure of M. In our case, M typically has only a few large eigenvalues, and most of the spectrum is close to the lower bound σ²; under these conditions CG is known to produce good approximations after only a few iterations. Crucially, the only operation on M performed in each iteration of CG is a matrix-vector multiplication (MVM) with M.
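The pseudocode above can be fleshed out into a short runnable sketch. The code below is an illustrative re-implementation (not the authors' code): it builds a kd-tree caching N_ND and S^Unweighted_ND, uses the N_ND-scaled cutoff rule of Section 3.2, and recurses into the nearer child first; the data, kernel width, and ε are made up for the demo:

```python
import math, random

class Node:
    def __init__(self, pts, vals):
        self.n = len(pts)
        self.s_unweighted = sum(vals)          # cached unweighted sum of the p_i
        dims = range(len(pts[0]))
        self.lo = [min(p[k] for p in pts) for k in dims]
        self.hi = [max(p[k] for p in pts) for k in dims]
        self.left = self.right = None
        if self.n > 1:                          # split the widest dimension
            k = max(dims, key=lambda k: self.hi[k] - self.lo[k])
            order = sorted(range(self.n), key=lambda i: pts[i][k])
            m = self.n // 2
            self.left = Node([pts[i] for i in order[:m]], [vals[i] for i in order[:m]])
            self.right = Node([pts[i] for i in order[m:]], [vals[i] for i in order[m:]])

def weight_bounds(nd, q, kernel):
    # distances to the nearest / farthest points of the node's bounding box
    d_near = math.sqrt(sum(max(l - x, 0.0, x - h) ** 2
                           for l, h, x in zip(nd.lo, nd.hi, q)))
    d_far = math.sqrt(sum(max(abs(x - l), abs(x - h)) ** 2
                          for l, h, x in zip(nd.lo, nd.hi, q)))
    return kernel(d_far), kernel(d_near)        # (w_min, w_max): kernel decreases in r

def weighted_sum(q, root, kernel, eps):
    w_so_far = [0.0]
    def rec(nd):
        w_min, w_max = weight_bounds(nd, q, kernel)
        if nd.left is None or nd.n * (w_max - w_min) <= 2 * eps * (w_so_far[0] + nd.n * w_min):
            w_so_far[0] += nd.n * w_min
            return 0.5 * (w_min + w_max) * nd.s_unweighted
        kids = sorted((nd.left, nd.right),      # visit the nearer child first
                      key=lambda c: weight_bounds(c, q, kernel)[1], reverse=True)
        return rec(kids[0]) + rec(kids[1])
    return rec(root)

random.seed(0)
pts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(500)]
vals = [random.uniform(0.5, 1.5) for _ in range(500)]
rbf = lambda r: math.exp(-r * r / 2.0)

tree = Node(pts, vals)
q = (0.3, -0.7)
exact = sum(rbf(math.dist(q, p)) * v for p, v in zip(pts, vals))
approx = weighted_sum(q, tree, rbf, eps=1e-3)
```

Lowering ε trades speed for accuracy; at ε = 0 the recursion only stops at leaves, where w_min = w_max, and reproduces the exact sum.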
Since M = K + σ²I, speeding up MVM with M is critically dependent on our ability to perform fast MVM with the kernel matrix K. We can apply the algorithm from Section 3 to perform fast MVM. Specifically, observe that the i-th row of K is given by k_i = [K(x_i, x_1), . . . , K(x_i, x_n)]^T. Thus, k_i has the same form as that of the vector k∗ used in the prediction step. Hence to compute the matrix-vector product Kv, we simply need to compute the inner products

k_i^T v = \sum_{j=1}^{n} K(x_i, x_j)\, v_j

for i = 1, . . . , n. Following exactly the method presented in Section 3, we can do this efficiently using a kd-tree, where here v now plays the role of p in Equation 2. Two additional optimizations are possible. First, in different iterations of conjugate gradient, we can use the same kd-tree structure to compute k_i^T v for different i and different v. Indeed, given a dataset, we need only ever find a single kd-tree structure for it, and the same kd-tree structure can then be used to make multiple predictions or multiple MVM operations. Further, given fixed v, to compute k_i^T v for different i = 1, . . . , n (to obtain the vector resulting from one MVM operation), we can also share the same pre-computed partial unweighted sums in the internal nodes of the tree. Only when v (or p) changes do we need to change the partial unweighted sums (discussed in Section 3.2) of v stored in the internal nodes (an O(n) operation).

5 Performance Evaluation

We evaluate our kd-tree implementation of GPR and an implementation that uses direct computation for the inner products. Our experiments were performed on the nine regression datasets in Table 1.²

²Data for the helicopter experiments come from an autonomous helicopter flight project [10], and the three tasks were to model three subdynamics of the helicopter, namely its yaw rate, forward velocity, and lateral velocity one timestep later as a function of the helicopter's current state.
The temperature and humidity experiments use data from a sensornet comprising a network of simple sensor motes [2], and the goal here is to predict the conditions at a mote from the measurements

Data set name          Input dimension   Training set size   Test set size
Helicopter yaw rate    3                 40000               4000
Helicopter x-velocity  2                 40000               4000
Helicopter y-velocity  2                 40000               4000
Mote 10 temperature    2                 20000               5000
Mote 47 temperature    3                 20000               5000
Mote 47 humidity       3                 20000               5000
Housing income         2                 18000               2000
Housing value          2                 18000               2000
Housing age            2                 18000               2000

Table 1: Datasets used in our experiments.

                        Exact cost   Tree cost   Speedup   Exact error   Tree error
Helicopter yaw rate     14.95        0.31        47.8      0.336         0.336
Helicopter x-velocity   12.37        0.41        30.3      0.594         0.595
Helicopter y-velocity   11.25        0.41        27.3      0.612         0.614
Mote 10 temperature     4.54         0.69        6.6       0.278         0.258
Mote 47 temperature     4.34         1.11        3.9       0.385         0.433
Mote 47 humidity        3.87         0.82        4.7       1.189         1.273
Housing income          2.75         0.76        3.6       0.478         0.478
Housing value           4.47         0.51        8.8       0.496         0.496
Housing age             3.21         1.15        2.8       0.787         0.785

Table 2: Prediction performance on 9 regression problems. Exact uses exact computation of Equation 2. Tree is the kd-tree based implementation described in Section 3.2. Cost is the computation time measured in milliseconds per prediction. The error reported is the mean absolute prediction error.

For all experiments, we used the Gaussian RBF kernel K(x, x′) = exp(−‖x − x′‖² / (2d²)), which is monotonic isotropic, with d and σ chosen to be reasonable values for each problem (via cross validation). The ε parameter used in the cutoff rule was set to 0.001 for all experiments.

5.1 Prediction performance

Our first set of experiments compare the prediction time of the kd-tree algorithm with exact computation, given a precomputed p. Our average prediction times are given in Table 2. These numbers include the cost of building the kd-tree (but remain small since the cost is then amortized over all the examples in the test set).
As we see, our algorithm runs 2.8–47.8 times faster than exact computation. Further, it incurs only a very small amount of additional error compared to the exact algorithm.

5.2 Learning performance

Our second set of experiments examine the running times for learning (i.e., solving the system of Equation 4) using our kd-tree algorithm for the MVM operation, compared to exact computation. For both approximate and exact MVM, conjugate gradient was used (with the same number of iterations).

of nearby motes. The housing experiments make use of data collected from the 1990 Census in California [12]. The median income of a block group is predicted from the median house value and average number of rooms per person; the median house value is predicted using median housing age and median income; the median housing age is predicted using median house value and average number of rooms per household.

                        Exact cost   Tree cost   Speedup   Exact error   Tree error
Helicopter yaw rate     22885        279         82.0      0.336         0.336
Helicopter x-velocity   23412        619         37.9      0.594         0.595
Helicopter y-velocity   14341        443         32.4      0.612         0.614
Mote 10 temperature     2071         253         8.2       0.278         0.258
Mote 47 temperature     2531         487         5.2       0.385         0.433
Mote 47 humidity        2121         398         5.3       1.189         1.273
Housing income          1922         581         3.3       0.478         0.478
Housing value           997          138         7.2       0.496         0.496
Housing age             1496         338         4.4       0.787         0.785

Table 3: Training time on the 9 regression problems. Cost is the computation time measured in seconds.

Here, we see that our algorithm performs 3.3–82 times faster than exact computation.³

6 Discussion

6.1 Related Work

Multiresolution tree data structures have been used to speed up the computation of a wide variety of machine learning algorithms [9, 5, 7, 14]. GP regression was introduced to the machine learning community by Rasmussen and Williams [19]. The use of CG for efficient first-level inference is described by Gibbs and MacKay [6].
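Both stages reduce to the MVM primitive, so a plain CG solver can be written against an arbitrary matvec closure and the kd-tree approximation swapped in without touching the solver. A self-contained sketch (Python/NumPy; the exact product stands in for the tree-based MVM, and the kernel width, noise level, and sizes are illustrative):

```python
import numpy as np

def conjugate_gradient(matvec, y, iters=100, tol=1e-10):
    # plain CG for M p = y, touching M only through the product z -> M z
    p = np.zeros_like(y)
    r = y - matvec(p)
    d = r.copy()
    rs = r @ r
    for _ in range(iters):
        Md = matvec(d)
        alpha = rs / (d @ Md)
        p += alpha * d
        r -= alpha * Md
        rs_new = r @ r
        if rs_new < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return p

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))
y = rng.standard_normal(30)

K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)  # RBF kernel
sigma2 = 0.1
M = K + sigma2 * np.eye(30)          # M = K + sigma^2 I

# training: p = M^{-1} y via CG; the lambda is where a tree-based MVM would go
p = conjugate_gradient(lambda z: M @ z, y)

# prediction at a test point: u_star = k_star^T p  (Equation 2)
x_star = np.array([0.2, -0.4])
k_star = np.exp(-((X - x_star) ** 2).sum(-1) / 2.0)
u_star = k_star @ p
```

Replacing the lambda with the kd-tree MVM of Section 3 would give the fast training procedure described in Section 4, with the same solver loop unchanged.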
The stability of Krylov subspace iterative solvers (such as CG) with approximate matrix-vector multiplication is discussed in [4]. Sparse approximations to GP inference provide a different way of overcoming the O(n³) scaling [18, 3, 8], by selecting a representative subset of D of size d ≪ n. Sparse methods can typically be trained in O(nd²) (including the active forward selection of the subset) and require O(d) prediction time only. In contrast, in our work here we make use of all of the data for prediction, achieving better scaling by exploiting cluster structure in the data through a kd-tree representation. More closely related to our work is [20], where the MVM primitive is also approximated using a special data structure for D. Their approach, called the improved fast Gauss transform (IFGT), partitions the space with a k-centers clustering of D and uses a Taylor expansion of the RBF kernel in order to cache repeated computations. The IFGT is limited to the RBF kernel, while our method can be used with all monotonic isotropic kernels. As a topic for future work, we believe it may be possible to apply IFGT's Taylor expansions at each node of the kd-tree's query-dependent multiresolution clustering, to obtain an algorithm that enjoys the best properties of both.

6.2 Isotropic Kernels

Recall that an isotropic kernel K(x, x′) can be written as a function of the Euclidean distance r = ‖x − x′‖. While the RBF kernel of the form exp(−r²) is the most frequently used isotropic kernel in machine learning, there are many other isotropic kernels to which our method here can be applied without many changes (since the kd-tree cutoff criterion depends on the pairwise Euclidean distances only). An interesting class of kernels is the Matérn model (see [17], Sect. 2.10)

K(r) \propto (\alpha r)^{\nu} K_{\nu}(\alpha r), \quad \alpha = 2\nu^{1/2},

where K_ν is the modified Bessel function of the second kind.
The parameter ν controls the roughness of functions sampled from the process, in that they are ⌊ν⌋ times mean-square differentiable. For ν = 1/2 we have the "random walk" Ornstein-Uhlenbeck kernel of the form e^{−αr}, and the RBF kernel is obtained in the limit ν → ∞. The RBF kernel forces u(·) to be very smooth, which can lead to bad predictions for training data with partly rough behaviour, and its uncritical usage is therefore discouraged in Geostatistics (where the use of GP models was pioneered). Here, other Matérn kernels are sometimes preferred. We believe that our kd-tree approach holds rich promise for speeding up GPR with other isotropic kernels such as the Matérn and Ornstein-Uhlenbeck kernels.

³The errors reported in this table are identical to Table 2, since for the kd-tree results we always both trained and made predictions using the fast approximate method. This gives a more reasonable test of the "end-to-end" use of kd-trees.

References

[1] Alina Beygelzimer, Sham Kakade, and John Langford. Cover trees for nearest neighbor. (Unpublished manuscript), 2005. [2] Phil Buonadonna, David Gay, Joseph M. Hellerstein, Wei Hong, and Samuel Madden. Task: Sensor network in a box. In Proceedings of European Workshop on Sensor Networks, 2005. [3] Lehel Csató and Manfred Opper. Sparse on-line Gaussian processes. Neural Computation, 14:641–668, 2002. [4] Nando de Freitas, Yang Wang, Maryam Mahdaviani, and Dustin Lang. Fast Krylov methods for n-body learning. In Advances in NIPS 18, 2006. [5] Kan Deng and Andrew Moore. Multiresolution instance-based learning. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, pages 1233–1239. Morgan Kaufmann, 1995. [6] Mark N. Gibbs. Bayesian Gaussian Processes for Regression and Classification. PhD thesis, University of Cambridge, 1997. [7] Alexander Gray and Andrew Moore. N-body problems in statistical learning. In Advances in NIPS 13, 2001. [8] N. D. Lawrence, M. Seeger, and R. Herbrich.
Fast sparse Gaussian process methods: The informative vector machine. In Advances in NIPS 15, pages 609–616, 2003. [9] Andrew Moore, Jeff Schneider, and Kan Deng. Efficient locally weighted polynomial regression predictions. In Proceedings of the Fourteenth International Conference on Machine Learning, pages 236–244. Morgan Kaufmann, 1997. [10] Andrew Y. Ng, Adam Coates, Mark Diel, Varun Ganapathi, Jamie Schulte, Ben Tse, Eric Berger, and Eric Liang. Inverted autonomous helicopter flight via reinforcement learning. In International Symposium on Experimental Robotics, 2004. [11] Stephen M. Omohundro. Five balltree construction algorithms. Technical Report TR-89-063, International Computer Science Institute, 1989. [12] R. Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics and Probability Letters, 33(3):291–297, May 5 1997. [13] F. P. Preparata and M. Shamos. Computational Geometry. Springer-Verlag, 1985. [14] Nathan Ratliff and J. Andrew Bagnell. Kernel conjugate gradient. Technical Report CMU-RI-TR-05-30, Robotics Institute, Carnegie Mellon University, June 2005. [15] Y. Saad. Iterative Methods for Sparse Linear Systems. International Thomson Publishing, 1st edition, 1996. [16] M. Seeger. Gaussian processes for machine learning. International Journal of Neural Systems, 14(2):69–106, 2004. [17] M. Stein. Interpolation of Spatial Data: Some Theory for Kriging. Springer, 1999. [18] Michael Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001. [19] C. Williams and C. Rasmussen. Gaussian processes for regression. In Advances in NIPS 8, 1996. [20] C. Yang, R. Duraiswami, and L. Davis. Efficient kernel machines using the improved fast Gauss transform. In Advances in NIPS 17, pages 1561–1568, 2005.
Using “epitomes” to model genetic diversity: Rational design of HIV vaccine cocktails Nebojsa Jojic, Vladimir Jojic, Brendan Frey, Chris Meek and David Heckerman Microsoft Research Abstract We introduce a new model of genetic diversity which summarizes a large input dataset into an epitome, a short sequence or a small set of short sequences of probability distributions capturing many overlapping subsequences from the dataset. The epitome as a representation has already been used in modeling real-valued signals, such as images and audio. The discrete sequence model we introduce in this paper targets applications in genetics, from multiple alignment to recombination and mutation inference. In our experiments, we concentrate on modeling the diversity of HIV, where the epitome emerges as a natural model for producing relatively small vaccines covering a large number of immune system targets known as epitopes. Our experiments show that the epitome includes more epitopes than other vaccine designs of similar length, including cocktails of consensus strains, phylogenetic tree centers, and observed strains. We also discuss epitome designs that take into account uncertainty about T-cell cross-reactivity and epitope presentation. In our experiments, we find that vaccine optimization is fairly robust to these uncertainties. 1 Introduction Within and across instances of a certain class of a natural signal, such as a facial image, a bird song recording, or a certain type of a gene, we find many repeating fragments. The repeating fragments can vary slightly and can have arbitrary (and usually unknown) sizes. For instance, in cropped images of human faces, a small patch capturing an eye appears in an image twice (with a symmetry transformation applied), and across different facial images many times, as humans have a limited number of eye types. Another repeating structure across facial images is the nose, which occupies a larger patch.
In mammalian DNA sequences, we find repeating regulatory elements within a single sequence, and repeating larger structures (genes, or gene fragments) across species. Instead of defining size, variability and typical relative locations of repeating fragments manually, in an application-driven way, the 'epitomic analysis' [5] is an unsupervised approach to estimating repeating fragment models, and simultaneously aligning the data to them. This is achieved by considering data in terms of randomly selected overlapping fragments, or patches, of various sizes and mapping them onto an 'epitome,' a learned structure which is considerably larger than any of the fragments, and yet much smaller than the total size of the dataset. We first introduced this model for image analysis [5], and it has since been used for video and audio analysis [2, 6], as well. This paper introduces a new form of the epitome as a sequence of multinomial distributions (Fig. 1), and describes its applications to HIV diversity modeling and rational vaccine design. We show that the vaccines optimized using our algorithms are likely to have broader predicted coverage of immune targets in HIV than the previous rational designs. Figure 1: The epitome (e) learned from data synthesized from the generating profile sequence (Section 5). A color coding in the epitome and data sequences is used to show the mapping between epitome and data positions. A white color indicates that the letter was likely generated from the garbage component of the epitome. The distribution p(T) shows which 9mers from the epitome were more likely to generate patches of the data. 2 Sequence epitome The central part of Fig. 1 illustrates a small set of amino acid sequences X = {xij} of size MN (with i indexing a sequence, and j indexing a letter within a sequence, and M = max i, N = max j).
The sequences share patterns (although sometimes with discrepancies in isolated amino acids), but one sequence may be similar to other sequences in different regions. The sequences are generated synthetically by combining the pieces of the profile sequence given in the first line of the figure, with occasional insertions of random sequence fragments, as discussed in Section 5. Sequence variability in this synthetic example is slightly higher than that found in the NEF protein of the human immunodeficiency virus (HIV) [7], while the envelope proteins of the same virus exhibit more variability. Examples of high genetic diversity can also be found in higher-level organisms, for example in the regions coding for the immune system's pattern recognition molecules. The last row in the figure illustrates an epitome optimized to represent the variability in the sequences above. In general, the epitome is a smaller array E = {emn} of size Me × Ne, where MeNe ≪ MN. In the figure, Me = 1. An epitome can be parameterized in different ways, but in the figure, each epitome element emn is a multinomial distribution with the probability of each letter represented by its height. The epitome's summarization quality is defined by a simple generative model which considers the data X in terms of shorter subsequences, XS. A subsequence XS is defined as an ordered subset of letters from X taken from positions listed in the ordered index set S. For instance, the set S = {(4, 8), (4, 9), (4, 10), (4, 11)} points to a contiguous patch of letters in the fourth sequence XS = RQKK. Similarly, set S = {(6, 2), (6, 3), (6, 4), (6, 5), (6, 6)} points to the patch XS = LDRQK in the sixth sequence. A number of such patches¹ of various lengths can be taken randomly (and with overlap).
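To make the index-set notation concrete, here is a minimal sketch (ours, not from the paper) of extracting a patch XS from a set of sequences, using the 1-based (sequence, position) pairs of the text; the toy sequences below are hypothetical:

```python
def extract_patch(X, S):
    """Return the patch X_S: letters of X at the ordered (i, j) index pairs in S.

    X is a list of sequences (strings); indices are 1-based, as in the text.
    """
    return "".join(X[i - 1][j - 1] for (i, j) in S)


# Hypothetical toy data: the second sequence's first three letters form the patch.
X = ["ACDE", "LDRQK"]
patch = extract_patch(X, [(2, 1), (2, 2), (2, 3)])  # -> "LDR"
```

With the sequences of Fig. 1, the index set S = {(4, 8), ..., (4, 11)} would pick out the patch XS = RQKK in the same way.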
The quality of the epitome is then defined as the total likelihood of these patches under the generative model which generates each patch from a set of distributions ET, where T is an ordered set of indices into the epitome. (In the figure, the epitome is defined on a circle, so that the index progression continues from Ne back to 1; this reduces local-minima problems in the EM algorithm for epitome learning, as discussed in Sections 4 and 5.) For each data patch, the mapping T is considered a hidden variable, and the generative process is assumed to consist of the following two steps: • Sample a patch ET from E according to p(T). To illustrate p(T) in Fig. 1, we consider only the set of all 9-long contiguous patches. For such patches, which are sometimes called nine-mers, we can index different sets T by their first elements and plot p(T) as a curve with the domain {1, ..., Ne − 8}. • Generate a patch XS from ET according to p(X_S | E_T) = ∏_{k=1}^{|T|} e_{T(k)}(X_S(k)), with T(k) and S(k) denoting the k-th element in the epitome and data index sets. Each execution of these two steps can, in principle, generate any pattern. The probability (likelihood) of generating a particular pattern indicated by S is p(X_S) = Σ_T p(X_S | E_T) p(T). (1) Given the epitome, we can perform inference in this model and compute the posterior distribution over mappings T for a particular patch. For instance, for XS = RQKK, the most probable mapping is T = {(1, 4), (1, 5), (1, 6), (1, 7)}. In Section 4, we discuss algorithms for estimating the epitome distributions. Our illustration points to possible applications of epitomes to multiple sequence alignment, and therefore requires a short discussion on similarity to other biological sequence models [3]. ¹In principle, noncontiguous patches can be taken as well, if the application so requires.
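Assuming contiguous patches and a circular epitome (as in the figure), the patch likelihood of Eq. (1) can be sketched as follows; the array shapes and the integer letter encoding are our own illustrative choices:

```python
import numpy as np


def patch_likelihood(patch, E, pT):
    """Eq. (1): p(X_S) = sum_T p(T) * prod_k e_{T(k)}(X_S(k)).

    patch: length-k array of letter indices; E: (Ne, A) array of multinomials
    (rows sum to 1); pT: (Ne,) prior over contiguous patch start positions.
    """
    Ne = E.shape[0]
    k = len(patch)
    total = 0.0
    for t in range(Ne):
        idx = (t + np.arange(k)) % Ne          # circular index progression
        total += pT[t] * np.prod(E[idx, patch])
    return total
```

For a uniform epitome over an alphabet of size A, every patch of length k has likelihood (1/A)^k, which is a convenient sanity check.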
While the epitome is a fully probabilistic model and thus defines a precise cost function for optimization, as is the case with HMM-based models or dynamic-programming solutions to sequence alignment, the main novelty in our approach is the consideration of both the data and the model parameters in terms of overlapping patches. This leads to the alignment of different parts of the sequences to the joint representation without explicit constraints on contiguity of the mappings or temporal models used in HMMs. Also, as we discuss in the next section, our goal is diversity modeling, and not multiple alignment. The epitome's robustness to the length, position and variability of repeating sequence fragments allows us to bypass both the task of optimal global alignment, and the problem of defining the notion of global alignment. In addition, consideration of overlapping patches in a biological sequence can be viewed as modeling independent binding processes, making the patch independence assumption of our generative model biologically relevant. We illustrate these properties of the epitome on the problem of HIV diversity modeling and rational vaccine design. 3 HIV evolution and rational vaccine design Recent work on the rational design of HIV vaccines has turned to cocktail approaches with the intention of protecting a person against many possible variants of the HIV virus. One of the potential difficulties with cocktail design is vaccine size. Vaccines with a large number of nucleotides or amino acids are expensive to manufacture and more difficult to deliver. In this section, we will show that epitome modeling can overcome this limitation by providing a means for generating smaller vaccines representing a wide diversity of HIV in an immunologically relevant way. We focus on the problem of constructing an optimal cellular vaccine in terms of its coverage of MHC-I epitopes, short contiguous patterns of 8-11 amino acids in HIV proteins [8].
Major histocompatibility complex (MHC) molecules are responsible for presentation of short segments of internal proteins, called "epitopes," on the surface of a cell. These peptides (protein segments) can then be observed from outside the cell by killer T-cells, which normally react only to foreign peptides, instructing the cell to self-destruct. The killer cells and their offspring have the opportunity to bind to multiple infected cells, and so their first binding to a particular foreign epitope is used to accelerate an immune reaction to other infected cells exposing the same epitope. Such responses are called memory responses and can persist for a long time after the infection has been cleared, providing longer-term immunity to the disease. The goal of vaccine design is to create artificial means to produce such immunological memory of a particular virus without the danger of developing the disease. In the case of a less variable virus, the vaccination may be possible by delivering a foreign protein similar to the viral protein into a patient's cells, triggering the immune response. However, HIV is capable of assuming many different forms, and immunization against a single strain is largely expected to be insufficient. In fact, without appropriate optimization, the number of different proteins needed to cover the viral diversity would be too large for the known vaccine delivery mechanisms. It is well known that epitopes within and across the strains in a population overlap [7]. The epitome model naturally exploits this overlap to construct a vaccine that can prime the immune system to attack as many potential epitopes as possible. For instance, if the sequences in Fig. 1 were HIV fragments from different strains of the virus, then the epitome would contain many potential epitopes of lengths 8-11 from these sequences.
Furthermore, the context of the captured epitopes in the epitome is similar to the context in the epitomized sequences, which increases the chances of equivalent presentation of the epitome and data epitopes. MHC molecules are encoded within the most diverse region of the human genome. This gives our species a diversity advantage in numerous clashes with viruses. Each individual has a slightly different set of MHC molecules which bind to different motifs in the proteins expressed and cleaved in the cell. Due to the limitation in MHC binding, each person's cells are capable of presenting only a small number of epitopes from the invading virus, but an entire human population attacks a diverse set of epitopes. The MHC molecule selects the protein fragments for presentation through a binding process which is loosely motif-specific. There are several other processes that precede or follow the MHC binding, and the combination of all of these processes can be characterized either by the concentration of presented epitopes, or by the combination of the binding energies involved in these processes². Some of these processes can be influenced by the context of the epitope (short amino acid fragments in the regions on either side of the epitope). Another issue to be considered in HIV evolution and vaccine design is T-cell cross-reactivity: The killer cells primed with one epitope may be capable of binding to other related epitopes, and therefore a small set of priming epitopes may induce a broader immunity. As in the case of MHC binding, the likelihood of priming a T-cell, as well as cross-reaction with a different epitope, can be linked to the binding energies. The epitome model maps directly to these immunity variables. If the epitome content is to be delivered to a cell in the vaccination phase, then each patch ET indexed by an index set T corresponds either to an epitope or to a longer contiguous patch (e.g.
12 amino acids or more) containing both an epitope and its context that influences presentation. The prior p(T) reflects the probability of presentation of the epitome fragments, and should reflect the processes involved in presentation, including MHC binding. The presented epitome fragments ET in different patients' cells may prime T-cells capable of cross-reacting with some of the epitopes XS presented by cells infected by one of the known strains in the dataset X. The cross-reaction distribution corresponds to the epitome distribution p(XS|ET). Vaccination is successful if the vaccine primes the immune system to attack targets found in the known circulating strains. A natural criterion to optimize is the similarity between the distribution over the epitopes learned by the immune systems of patients vaccinated with the epitome (taking into account the cross-reactivity) and the distribution over the epitopes from circulating strains. Therefore, the vaccine quality directly depends on the likelihood of the designated epitopes p(XS) under the epitome. To see this, consider directly optimizing the KL divergence between the distribution pd(XS) over epitopes found in the data and the distribution over the targets for which the T-cells are primed according to p(XS). This KL divergence differs from the negative of the log likelihood of all the data patches weighted by pd(XS), log p({X_S}_d) = Σ_S p_d(X_S) log Σ_T p(X_S | E_T) p(T), (2) only by a constant (the entropy of pd(XS)). The distribution pd(XS) can serve as the indicator of epitopes and be equal to either zero or a constant for all patches, and then the above weighted likelihood is equivalent to the total likelihood of selected patches. ²The probabilities of physical events are often modeled as having an exponential relationship with the energy changes. This
distribution can also reflect the probability of presentation of epitopes XS, or the uncertainty of the experiment or the prediction algorithm used to predict which parts of the circulating strains correspond to MHC epitopes. While the epitome can serve as a diversity model and be used to construct evolutionary models and peptides for experimental epitope discovery, it can also serve as an actual immunogen (the pattern containing the immunologically important message to the cell) in a vaccine. The most general version of the epitome as a sequence of multinomial distributions could be relevant for sequence classification, recombination modeling, and design of peptides for binding assays. In some of these applications, the distribution p(XS|ET) may have a semantics different than cross-reactivity, and could for instance represent mutations dependent on the immune type of the host, or the subtype of the virus. On the other hand, when the epitome is used for immunogen design, then cross-reactivity p(XS|ET) can be conveniently captured by constraining each distribution emn to have probabilities for the twenty amino acids from the set {ϵ/19, 1 − ϵ}. The mode of the epitome can then be used as a deterministic vaccine immunogen³, and the probability of cross-reaction will then directly depend on the number of letters in XS that are different from the mode of ET. While the epitome model components are mapped here to the elements of the interaction between HIV and the immune system of the host, other applications in biology would probably be based on a different semantics for the epitome components. We would expect that the epitome would map to biological sequence analysis problems more naturally than to image and audio modeling tasks, where the issue of the partition function arises. The epitome as a generative model over-generates: generated patches overlap, and so each data element is generated multiple times.
In the image applications, we have avoided this problem through constraints on the posterior distributions, while the traditional approach would be to deal with the partition function (perhaps through sampling). However, the strains of a virus are observed by the immune system through overlapping patches, independently sampled from the viral proteins by biological processes. This fits the epitome as a vaccination model. More generally, the epitome is compatible with the evolutionary forces that act independently on overlapping patches of a biological sequence. 4 Epitome learning Since epitomes can have multiple applications, we provide a general discussion of optimization of all parameters of the epitome, although in some applications, some of the parameters may be known a priori. As a unified optimization criterion we use the free energy [9] of the model (2), F({X_S}_d | E) = Σ_S p_d(X_S) Σ_T q(T|S) log [ q(T|S) / (p(X_S | E_T) p(T)) ], (3) where q(T|S) is a variational distribution, and −log p({X_S}_d | E) = min_q F({X_S}_d | E). (4) The model can be learned by iteratively reducing F, varying in each iteration either q or the model parameters. When modeling biological sequences, the free energy may be associated with real physical events, such as molecular binding processes, where log probabilities correspond to molecular binding energies. Setting to zero the derivatives of F with respect to the q distributions, the distribution p(T), and the distributions em(ℓ) for all positions m, we obtain the EM algorithm [5]: • For each XS, compute the posterior distribution over patches q(T|S): q(T|S) ← p(X_S | E_T) p(T) / Σ_{T′} p(X_S | E_{T′}) p(T′). (5) • Using these q distributions, update the profile sequence: e_m(ℓ) ← [Σ_S p_d(X_S) Σ_k Σ_{T: T(k)=m} q(T|S) [X_S(k) = ℓ]] / [Σ_S p_d(X_S) Σ_k Σ_{T: T(k)=m} q(T|S)], (6) where [·] is the indicator function ([true] = 1; [false] = 0). ³To our knowledge, there is no effective way of delivering the epitome as a distribution over proteins or fragments into the cell.
If desired, also update p(T): p(T) ← Σ_S p_d(X_S) q(T|S) / Σ_S p_d(X_S). (7) The E step assigns a responsibility for S to each possible epitome patch. The M step re-estimates the epitome multinomials using these responsibilities. As mentioned, this step can re-estimate the usage probabilities of patches in the epitome, or this distribution can be kept constant. It is often useful to construct the index sets T such that they wrap around from one end to another. Such circular topologies can deter the EM algorithm from settling in a poor local maximum of the log likelihood. It is also sometimes useful to include a garbage component (a component that generates patches containing random letters) in the model. In general, the EM algorithm is prone to problems of local maxima. For example, if we allowed the epitome to be longer, then some of the sites with two equally likely letters could be split into two separate regions of the epitome (and in some applications, such as vaccine optimization, this is preferred, as the epitomes need to become deterministic). Epitomes situated at different local maxima, however, often define similar probability distributions p({X_S}|E), and can be used for various inference tasks such as sequence recognition/classification, noise removal, and context-dependent mutation prediction. Of course, there are optimization algorithms other than EM that can learn a profile sequence by minimizing the free energy, E = arg min_E min_q F({X_S}_d|E). In some situations, such as vaccine design, it is desirable to produce deterministic epitomes (containing point-mass probability distributions). Such profile sequences can be obtained by annealing the parameter ϵ that controls the amount of probability allowed to be distributed to the letters different from the most likely letter ℓ̂_m = arg max_ℓ e_m(ℓ): E = lim_{ϵ→0} arg min_E min_q F({X_S}_d|E). (8)
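A minimal sketch of one EM iteration (Eqs. 5-7) for equally long contiguous patches on a circular epitome; restricting to fixed-length patches, omitting the garbage component, and the particular vectorization are our own simplifications:

```python
import numpy as np


def em_step(patches, E, pT, w=None):
    """One EM iteration (Eqs. 5-7) for contiguous k-mers on a circular epitome.

    patches: (S, k) int array of letter indices; E: (Ne, A) multinomials;
    pT: (Ne,) prior over start positions; w: optional patch weights p_d(X_S).
    """
    S, k = patches.shape
    Ne, A = E.shape
    w = np.ones(S) if w is None else w
    # E step (Eq. 5): q(T|S) proportional to p(X_S|E_T) p(T)
    q = np.empty((S, Ne))
    for t in range(Ne):
        idx = (t + np.arange(k)) % Ne          # circular topology
        q[:, t] = pT[t] * np.prod(E[idx[None, :], patches], axis=1)
    q /= q.sum(axis=1, keepdims=True)
    # M step (Eq. 6): re-estimate the multinomials e_m
    num = np.zeros((Ne, A))
    den = np.zeros(Ne)
    for t in range(Ne):
        idx = (t + np.arange(k)) % Ne
        for kk in range(k):
            np.add.at(num[idx[kk]], patches[:, kk], w * q[:, t])
            den[idx[kk]] += np.sum(w * q[:, t])
    E_new = num / den[:, None]
    # Eq. 7: update the prior over patch usage
    pT_new = (w[:, None] * q).sum(axis=0) / w.sum()
    return E_new, pT_new
```

Iterating `em_step` monotonically reduces the free energy of Eq. (3); each returned row of E and the returned pT remain properly normalized distributions.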
Finally, in cases when the probability mass is uniformly spread over the letters other than the modes of the epitome distributions, i.e., e_mn(ℓ) ∈ {ϵ/19, 1 − ϵ}, the myopic optimization is a faster way of creating epitomes of high fragment (epitope) coverage than EM with multiple initializations. The myopic optimization consists of iteratively increasing the length of the epitome by appending a patch (possibly with overlap) from the data which maximally reduces the free energy. The process stops once the desired length is achieved (rather than when the entire set of patches is included, as in the superstring problem). 5 Experiments To illustrate the EM algorithm for epitome learning, we created the synthetic data shown (in part) in Figure 1. The data, eighty sequences in all, were synthesized from the generating profile sequence of length fifty shown on the top line of the figure. In particular, each data sequence was created by extracting one to four (mean two) patches of length three to thirty (mean sixteen) from the generating sequence, sampling from these patches to produce corresponding patches of amino acids in the data sequence, and then filling in the gaps in the data sequence with amino acids sampled from a uniform distribution over amino acids. In addition, five percent of the sites in each data sequence were subsequently replaced with an amino acid sampled from a uniform distribution. The resulting data sequences ranged in length from 38 to 43, and on average 80% of the amino acids in each sequence come from the generating sequence. Thus, the synthesized data roughly simulate genetic diversity resulting from a combination of mutation, insertion, deletion, and recombination. We learned an epitome model using the EM algorithm applied to all 9mer patches from the data, equally weighted.
We used a two-component epitome mixture, where the first component is an (initially unknown) sequence of probability distributions, and the second component is a garbage component, useful for representing the random insertions and mutations. Each site in the first component was initialized to a distribution slightly (and randomly) perturbed from uniform. The length of this component was set to be slightly longer than the original generating sequence. In previous experiments, we have found that a longer length helps to prevent the EM algorithm from settling in a poor local maximum of log likelihood, and it is subsequently possible to cut out unnecessary parts which can be detected in the learned prior p(T). Also, we used an epitome with a circular topology. The first (non-garbage) component of the epitome learned after sixty iterations, shown in Figure 1, closely resembles the generating sequence even though the algorithm never saw this generating sequence during learning. (Roughly, the generating sequence starts near the end of the epitome with the patch "LIC" coded in red, and wraps around to the patch "EHQ" coded in yellow. The portion of the epitome between yellow and red is not responsible for many patches, as reflected in the distribution p(T).) The sixty iterations of EM are illustrated in the video available at www.research.microsoft.com/∼jojic/pEpitome.mpg. For each iteration, we show the first (non-garbage) component of the epitome E, the distribution p(T), and the first ten sequences in the dataset, color-coded according to the mode of q(T|S), as in Figure 1. The video illustrates how the EM algorithm simultaneously learns the epitome model and aligns the data sequences. When used for vaccine optimization, some epitome parameters can be preset based on biological knowledge.
In particular, in the experiments on 176 gag HIV proteins from the WA cohort [8] that we report here, we assume no cross-reactivity (i.e., we set ϵ = 0) and we consider two different possibilities for the patch data distribution pd(XS). The first parameter setting we consider is that pd(XS) is uniform over all ten-amino-acid blocks (10mers) found in the sequence data. The advantage of the uniform data distribution is that we only need sequence data for vaccine optimization, and not the epitope identities. The free energy criterion can be easily shown to be proportional (with a negative constant) to the coverage: the percentage of all 10mers from the data covered by the epitome, where a 10mer is considered covered if it can be found as a contiguous patch in the epitome's mode. Another advantage of this approach is that it cannot miss epitopes due to errors in prediction algorithms or experimental epitope discovery, as long as sufficient coverage can be guaranteed for the given vaccine length. The second setting of the parameters pd(XS) we consider is based on the SYFPEITHI database [10] of known epitopes. We trained pd(XS) on this data using a decision tree model to represent the probability that an observed 10mer contains a presentable epitope. The advantage of this approach is that we can potentially focus our modeling power only on immunologically important variability, as long as the known epitope dataset is sufficient to capture properly the epitope distribution for at least the most frequent MHC-I molecules. Thus, for a given epitome length, we may obtain more potent vaccines than using the first parameter setting.
Since ϵ = 0, the resulting optimization reduces to optimizing the expected epitope coverage, i.e., the sum of the probabilities pd(XS) of all covered patches. For both epitome settings, we epitomized the 176 gag proteins in the dataset, using the myopic algorithm, and compared the expected epitope coverage of our vaccine candidates with those of other designs, including cocktails of tree centers, consensus, and actual strains (Fig. 2). Phylogenies were constructed using neighbor joining, as is used in Phylip [4]. Clusters were generated using a mixture model of independent multinomials [1]. Observed sequences in the sequence cocktails were chosen at random. Both epitome models yield better coverage and expected epitope coverage than the other designs for any fixed length. Results are similar for the pol, nef, and env proteins. An interesting finding to note is that the epitome optimized for coverage (using the uniform distribution pd(XS)) provides essentially equally good expected coverage as the epitome directly optimized for the expected coverage. This is less surprising than it may seem: both true and predicted epitopes overlap in the sequence data, and so epitomizing all 10mers leads to similar epitomes as optimizing for coverage of the select few, but frequently overlapping, epitopes. This is a direct consequence of the epitome representation, which was found appealing in previous applications for the same robustness to the number and sizes of the overlapping patches. It also indicates the possibility that an effective vaccine can be optimized without precise knowledge of all HIV epitopes. Figure 2: Expected coverage for 176 Perth gag proteins using candidate sequences of length ten. [Plot: Length (aa) vs. Expected coverage (%); curves: optimized for expected coverage, optimized for coverage, epitome, consensus cocktail, COT cocktail, strain cocktail.] For comparison, we show expected coverage for the epitome optimized to cover all 10mers.
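The coverage criterion (the fraction of data 10mers appearing contiguously in the epitome's mode) is easy to state in code; this sketch, including the toy strings in its test, is our own illustration rather than the paper's implementation:

```python
def coverage(epitome_mode, sequences, k=10):
    """Fraction of distinct k-mers in the data found as contiguous substrings
    of the deterministic epitome mode (a plain string)."""
    kmers = {s[i:i + k] for s in sequences for i in range(len(s) - k + 1)}
    return sum(m in epitome_mode for m in kmers) / len(kmers)
```

With ϵ = 0 and a uniform pd(XS), the free energy is proportional (with a negative constant) to this quantity, so the myopic algorithm can greedily append the data patch that most increases it.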
6 Conclusions We have introduced the epitome as a new model of genetic diversity, especially well suited to highly variable biological sequences. We show that our model can be used to optimize HIV vaccines with larger predicted coverage of MHC-I epitopes than other constructs of similar length, and so the epitome can be used to create vaccines that cover a large fraction of HIV diversity. We also show that epitome optimization leads to good vaccines even when all subsequences of length 10 are considered epitopes. This suggests that vaccines could be optimized directly from sequence data, which are technologically much easier to obtain than epitope data. Our analysis of cross-reactivity provided similar empirical evidence of epitome robustness to cross-reactivity assumptions (see www.research.microsoft.com/∼jojic/HIVepitome.html for the full set of results). References and Notes [1] P. Cheeseman and J. Stutz. Bayesian classification (AutoClass): Theory and results. In Advances in Knowledge Discovery and Data Mining, U. Fayyad, G. Piatesky-Shapiro, P. Smyth, and R. Uthurusamy, eds. AAAI Press, 1995. [2] V. Cheung, B. Frey, and N. Jojic. Video epitome. CVPR 2005. [3] R. Durbin et al. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, 1998. [4] J. Felsenstein. Phylip (phylogeny inference package) version 3.6, 2004. [5] N. Jojic, B. Frey, and A. Kannan. Epitomic analysis of appearance and shape. In Proceedings of the Ninth International Conference on Computer Vision, Nice, 2003. Video available at http://www.robots.ox.ac.uk/~awf/iccv03videos/. [6] A. Kapoor and S. Basu. The audio epitome: A new representation for modeling and classifying auditory phenomena. ICASSP 2004. [7] B. T. M. Korber, C. Brander, B. F. Haynes, R. Koup, C. Kuiken, J. P. Moore, B. D. Walker, and D. I. Watkins. HIV Molecular Immunology. Los Alamos National Laboratory, Theoretical Biology and Biophysics, Los Alamos, NM, 2002. [8] C.
Moore, M. John, I. James, F. Christiansen, C. Witt, and S. Mallal. Evidence of HIV-1 adaptation to HLA-restricted immune responses at a population level. Science, 296:1439–1443, 2002. [9] R. Neal and G. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, M. Jordan, ed. MIT Press, 1999. [10] H. Rammensee, J. Bachmann, N. P. Emmerich, O. A. Bachor, and S. Stevanovic. SYFPEITHI: database for MHC ligands and peptide motifs. Immunogenetics, 50(3-4):213–219, Nov 1999.
Learning Cue-Invariant Visual Responses

Jarmo Hurri
HIIT Basic Research Unit, University of Helsinki
P.O.Box 68, FIN-00014 University of Helsinki, Finland

Abstract

Multiple visual cues are used by the visual system to analyze a scene; achromatic cues include luminance, texture, contrast and motion. Single-cell recordings have shown that the mammalian visual cortex contains neurons that respond similarly to scene structure (e.g., orientation of a boundary), regardless of the cue type conveying this information. This paper shows that cue-invariant response properties of simple- and complex-type cells can be learned from natural image data in an unsupervised manner. In order to do this, we also extend a previous conceptual model of cue invariance so that it can be applied to model simple- and complex-cell responses. Our results relate cue-invariant response properties to natural image statistics, thereby showing how the statistical modeling approach can be used to model processing beyond the elemental response properties of visual neurons. This work also demonstrates how to learn, from natural image data, more sophisticated feature detectors than those based on changes in mean luminance, thereby paving the way for new data-driven approaches to image processing and computer vision.

1 Introduction

When segmenting a visual scene, the brain utilizes a variety of visual cues. Spatiotemporal variations in the mean luminance level – which are also called first-order cues – are computationally the simplest of these; the name 'first-order' comes from the idea that a single linear filtering operation can detect these cues. Other types of visual cues include contrast, texture and motion; in general, cues related to variations in characteristics other than mean luminance are called higher-order (also called non-Fourier) cues; the analysis of these is thought to involve more than one level of processing/filtering.
Single-cell recordings have shown that the mammalian visual cortex contains neurons that are selective to both first- and higher-order cues. For example, a neuron may exhibit similar selectivity to the orientation of a boundary, regardless of whether the boundary is a result of spatial changes in mean luminance or contrast [1]. Monkey cortical areas V1 and V2, and cat cortical areas 17 and 18, contain both simple- (orientation-, frequency- and phase-selective) and complex-type (orientation- and frequency-selective, phase-invariant) cells that exhibit such cue-invariant response properties [2, 1, 3, 4, 5]. Previous research has been unable to pinpoint the connectivity that gives rise to cue-invariant responses. Recent computational modeling of the visual system has produced fundamental results relating stimulus statistics to first-order response properties of simple and complex cells (see, e.g., [6, 7, 8, 9]). The contribution of this paper is to introduce a similar, natural image statistics-based framework for cue-invariant responses of both simple and complex cells. In order to achieve this, we also extend the two-stream model of cue-invariant responses (Figure 1A) to account for cue-invariant responses at both simple- and complex-cell levels.

Figure 1: (A) The two-stream model [1], with a linear stream (on the right) and a nonlinear stream (on the left). The linear stream responds to first-order cues, while the nonlinear stream responds to higher-order cues. In the nonlinear stream, the stimulus (image) is first filtered with multiple high-frequency filters, whose outputs are transformed nonlinearly (rectified), and subsequently used as inputs for a second-stage filter. Cue-invariant responses are obtained when the outputs of these two streams are integrated. (B) Our model of cue-invariant responses. The model consists of simple cells, complex cells and a feedback path leading from a population of high-frequency first-order complex cells to low-frequency cue-invariant simple cells. In a cue-invariant simple cell, the feedback is filtered with a filter that has similar spatial characteristics as the feedforward filter of the cell. The output of a cue-invariant simple cell is given by the sum of the linearly filtered input and the filtered feedback. Note that while our model results in cue-invariant response properties, it is not a model of cue integration, because in the sum the two paths can cancel out. However, this simplification does not affect our results, that is, learning, since the summed output is not used in learning (see Section 3), or measurements, which excite only one of the paths significantly and do not consider integration effects (see Figures 3 and 4). In this instance of the model, the high-frequency cells prefer horizontal stimuli, while the low-frequency cue-invariant cells prefer vertical stimuli; in other instances, this relationship can be different. For actual filters used in an implementation of this model, see Figure 2. Lowercase letters a–g refer to the corresponding subfigures in Figure 2.

The rest of this paper is organized as follows. In Section 2 we describe our version of the two-stream model of cue-invariant responses, which is based on feedback from complex cells to simple cells. In Section 3 we formulate an unsupervised learning rule for learning these feedback connections.
We apply our learning rule to natural image data, and show that this results in the emergence of connections that give rise to cue-invariant responses at both simple- and complex-cell levels. We end this paper with conclusions in Section 4.

2 A model of cue-invariant responses

The most prominent model of cue-invariant responses introduced in previous research is the two-stream model (see, e.g., [1]), depicted in Figure 1A. In this research we have extended this model so that it can be applied directly to model the cue-invariant responses of simple and complex cells. Our model, shown in Figure 1B, employs standard linear-filter models of simple cells and energy models of complex cells [10], and a feedback path from the complex-cell level to the simple-cell level. This feedback path introduces a second, nonlinear input stream to cue-invariant cells, and gives rise to cue-invariant responses in these cells. To avoid confusion between the two types of filters – one type operating on the input image and the other on the feedback – we will use the term 'feedforward filter' for the former and the term 'feedback filter' for the latter. Figure 2 shows the feedforward and feedback filters of a concrete instance (implementation) of our model. Gabor functions [10] are used to model simple-cell feedforward filters. Figure 3 illustrates the design of higher-order gratings, and shows how the complex-cell lattice of the model transforms higher-order cues into feedback activity patterns that resemble corresponding first-order cues. A quantitative evaluation of the model is given in Figure 4. These measurements show that our model possesses the fundamental cue-invariant response properties: in our model, a cue-invariant neuron has similar selectivity to the orientation, frequency and phase of a grating stimulus, regardless of cue type (see figure caption for details). We now proceed to show how the feedback filters of our model (Figures 2g and h) can be learned from natural image data.

Figure 2: The filters used in an implementation of our model. The reader is referred to Figure 1B for the correspondence between subfigures (a)–(h) and the schematic model of Figure 1B. (a) The feedforward filter (Gabor function [10]) of a high-frequency first-order simple cell; the filter has size 19 × 19 pixels, which is the size of the image data in our experiments. (b) The feedforward filter of another first-order simple cell. This feedforward filter is otherwise similar to the one in (a), except that there is a phase difference of π/2 between the two; together, the feedforward filters in (a) and (b) are used to implement an energy model of a complex cell. (c) A lattice of size 7 × 7 of high-frequency filters of the type shown in (a); these filters are otherwise identical, except that their spatial locations vary. (d) A lattice of filters of the type shown in (b). Together, the lattices shown in (c) and (d) are used to implement a 7 × 7 lattice of energy-model complex cells with different spatial positions; the output of this lattice is the feedback relayed to the low-frequency cue-invariant cells. (e, f) Feedforward filters of low-frequency simple cells. (g) A feedback filter of size 7 × 7 for the simple cell whose feedforward filter is shown in (e); in order to avoid confusion between feedforward filters and feedback filters, the latter are visualized as lattices of slightly rounded rectangles. (h) A feedback filter for the simple cell whose feedforward filter is shown in (f). The feedback filters in (g) and (h) have been obtained by applying the learning algorithm introduced in this paper (see Section 3 for details).
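To make the model components concrete, the energy model of a complex cell (a quadrature pair of Gabor filters) and the summed output s(n) + b(n) of a cue-invariant simple cell can be sketched as follows. This is a minimal NumPy illustration; the function names, the exact Gabor parameterization and the filter sizes are our own illustrative choices, not the author's code.

```python
import numpy as np

def gabor(size, freq, theta, phase, sigma):
    """Gabor function: a sinusoidal grating under a Gaussian envelope [10]."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    u = x * np.cos(theta) + y * np.sin(theta)  # coordinate along the grating
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * u + phase)

def complex_cell_energy(image, freq, theta, sigma):
    """Energy model of a complex cell: squared responses of a quadrature
    pair of Gabor filters (phase difference pi/2), summed."""
    size = image.shape[0]
    g_even = gabor(size, freq, theta, 0.0, sigma)
    g_odd = gabor(size, freq, theta, np.pi / 2, sigma)
    return (image * g_even).sum() ** 2 + (image * g_odd).sum() ** 2

def cue_invariant_simple_cell(image, ff_filter, feedback, fb_filter):
    """Output of a cue-invariant simple cell: linearly filtered input s(n)
    plus filtered feedback b(n)."""
    s = (image * ff_filter).sum()  # feedforward response
    b = (feedback * fb_filter).sum()  # filtered complex-cell feedback
    return s + b
```

The energy response is phase-invariant but still orientation- and frequency-selective, which is exactly the complex-cell behaviour the model requires.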
3 Learning feedback connections in an unsupervised manner

3.1 The objective function and the learning algorithm

In this section we introduce an unsupervised algorithm for learning feedback connection weights from complex cells to simple cells. When this learning algorithm is applied to natural image data, the resulting feedback filters are those shown in Figures 2g and h – as was shown in Figure 4, these feedback filters give rise to cue-invariant response properties.

Figure 3: The design of grating stimuli with different cues, and the feedback activity for these gratings. [Figure layout: one row per cue type – luminance (panels A–C, stimulus B = A), texture (panels D–H, stimulus G = DE + (1 − D)F), contrast (panels I–L, stimulus K = IJ) – with columns showing the sinusoidal constituents, the stimulus, and the feedback activity.] Design of grating stimuli: Each row illustrates how, for a particular cue, a grating stimulus is composed of sinusoidal constituents; the equation of each stimulus (B, G, K) as a function of the constituents is shown under the stimulus. Note that the orientation, frequency and phase of each grating is determined by the first sinusoidal constituent (A, D, I); here these parameters are the same for all stimuli. Here (E) and (F) are two different textures, and (I) is called the envelope and (J) the carrier of a contrast-defined stimulus. Feedback activity: The rightmost column shows the feedback activity – that is, the response of the complex-cell lattice (see Figures 2c and d) – for the three types of stimuli. (C) There is no response to the luminance stimulus, since the orientation and frequency of the stimulus are different from those of the high-frequency feedforward filters. (H, L) For other cue types, the lattice detects the locations of energy of the vertical high-frequency constituent (E, J), thereby resulting in feedback activity that has a spatial pattern similar to a corresponding luminance pattern (A). Thus, the complex-cell lattice transforms higher-order cues into activity patterns that resemble first-order cues, and these can subsequently produce a strong response in a feedback filter (compare (H) and (L) with the feedback filter in Figure 2g). For a quantitative evaluation of the model with these stimuli, see Figure 4.

The intuitive idea behind the learning algorithm is the following: in natural images, higher-order cues tend to coincide with first-order cues. For example, when two different textures are adjacent, there is often also a luminance border between them; two examples of this phenomenon are shown in Figure 5. Therefore, cue-invariant response properties could be a result of learning in which large responses in the feedforward channel (first-order responses) have become associated with large responses in the feedback channel (higher-order responses). Previous research has demonstrated the importance of such energy dependencies in modeling the visual system (see, e.g., [11, 9, 12, 13, 14]). To turn this idea into equations, let us introduce some notation. Let vector $c(n) = [c_1(n)\ c_2(n)\ \cdots\ c_K(n)]^T$ denote the responses of a set of K first-order high-frequency complex cells for the input image with index n. In our case the number of these complex cells is K = 7 × 7 = 49 (see Figures 2c and d), so the dimension of this vector is 49. This vectorization can be done in a standard manner [15] by scanning values from the 2D lattice column-wise into a vector; when the learned feedback filter is visualized, the filter is "unvectorized" with a reverse procedure.
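The feedback computation – a 7 × 7 lattice of energy-model complex cells whose responses are scanned column-wise into the 49-vector c(n) – can be sketched as follows. This is an illustrative NumPy sketch; the array shapes and names are our assumptions about one reasonable implementation, not the author's code.

```python
import numpy as np

def lattice_feedback(image, even_filters, odd_filters):
    """Feedback vector c(n): responses of a lattice of energy-model
    complex cells, scanned column-wise into a vector (49-dim for 7 x 7)."""
    K1, K2 = even_filters.shape[:2]
    C = np.empty((K1, K2))
    for i in range(K1):
        for j in range(K2):
            e = (image * even_filters[i, j]).sum()  # even-phase quadrature response
            o = (image * odd_filters[i, j]).sum()   # odd-phase quadrature response
            C[i, j] = e**2 + o**2                   # complex-cell energy
    return C.flatten(order='F')  # column-wise scan, as described in the text
```

Visualizing a learned feedback filter then amounts to the reverse procedure, `w.reshape(7, 7, order='F')`.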
Let s(n) denote the response of a single low-frequency simple cell for the input image with index n. In our learning algorithm all the feedforward filters are fixed and only a feedback filter is learned; this means that c(n) and s(n) can be computed for all n (all images) prior to applying the learning algorithm. Let us denote the K-dimensional feedback filter with w; this filter is learned by our algorithm. Let $b(n) = w^T c(n)$, that is, b(n) is the signal obtained when the feedback activity from the complex-cell lattice is filtered with the feedback filter; the overall activity of a cue-invariant simple cell is then s(n) + b(n). Our objective function measures the correlation of energies of the feedforward response s(n) and the feedback response b(n):

$$f(w) = E[s^2(n) b^2(n)] = w^T E[s^2(n) c(n) c(n)^T] w = w^T M w, \qquad (1)$$

where $M = E[s^2(n) c(n) c(n)^T]$ is a positive-semidefinite matrix that can be computed from samples prior to learning.

Figure 4: Our model fulfills the fundamental properties of cue-invariant responses. [Panels A–M show tuning curves as a function of cue orientation (A–C), cue frequency (D–F), cue phase (G–I), carrier orientation (J–K) and carrier frequency (L–M), for a cue-invariant simple cell (with feedback), a cue-invariant complex cell (with feedback), and a standard simple cell (without feedback).] The plots show tuning curves for a cue-invariant simple cell – corresponding to the filters of Figures 2e and g – and complex cell of our new model (two leftmost columns), and a standard simple-cell model without feedback processing (rightmost column). Solid lines show responses to luminance-defined gratings (Figure 3B), dotted lines show responses to texture-defined gratings (Figure 3G), and dashed lines show responses to contrast-defined gratings (Figure 3K). (A–I) In our model, a neuron has similar selectivity to the orientation, frequency and phase of a grating stimulus, regardless of cue type; in contrast, a standard simple-cell model, without the feedback path, is only selective to the parameters of a luminance-defined grating. The preferred frequency is lower for higher-order gratings than for first-order gratings; similar observations have been made in single-cell recordings [4]. (J–M) In our model, the neurons are also selective to the orientation and frequency of the carrier (Figure 3J) of a contrast-defined grating (Figure 3K), thus conforming with single-cell recordings [1]. Note that these measurements were made with the feedback filters learned by our unsupervised algorithm (see Section 3); thus, these measurements confirm that learning results in cue-invariant response properties.

Figure 5: Two examples of coinciding first- and higher-order boundary cues. Image in (A) contains a near-vertical luminance boundary across the image; the boundary in (B) is near-horizontal. In both (A) and (B), texture is different on different sides of the luminance border. (For image source, see [8].)

Figure 6: (A–D, F–I) Feedback filters (top row) learned from natural image data by using our unsupervised learning algorithm; the bottom row shows the corresponding feedforward filters. For a quantitative evaluation of the cue-invariant response properties resulting from the learned filters (A) and (B), see Figure 4. (E, J) The result of a control experiment, in which Gaussian white noise was used as input data; (J) shows the feedforward filter used in this control experiment.
To keep the output of the feedback filter b(n) bounded, we enforce a unit energy constraint on b(n), leading to the constraint

$$h(w) = E[b^2(n)] = w^T E[c(n) c(n)^T] w = w^T C w = 1, \qquad (2)$$

where $C = E[c(n) c(n)^T]$ is also positive-semidefinite and can be computed prior to learning. The problem of maximizing objective (1) under constraint (2) is a well-known quadratic optimization problem with a norm constraint, the solution of which is given by an eigenvalue-eigenvector problem (see below). However, in order to handle the case where C is not invertible – which will be the case below in our experiments – and to attenuate the noise in the data, we first use a technique called dimensionality reduction (see, e.g., [15]). Let $C = EDE^T$ be the eigenvalue decomposition of C; in the decomposition, the eigenvectors corresponding to the r smallest eigenvalues (subspaces with smallest energy; the exact value for r is given in Section 3.2) have been dropped out, so E is a $K \times (K-r)$ matrix of $K-r$ eigenvectors and D is a $(K-r) \times (K-r)$ diagonal matrix containing the largest eigenvalues. Now let $v = D^{1/2} E^T w$. A one-to-one correspondence between v and w can be formed by using the pseudoinverse solution $w = E D^{-1/2} v$. Now let $z(n) = D^{-1/2} E^T c(n)$. Using these definitions of v and z(n), it is straightforward to show that the objective and constraint become $f(v) = v^T E[s^2(n) z(n) z(n)^T] v$ and $h(v) = \|v\|^2 = 1$. The global maximum $v_{opt}$ is the eigenvector of $E[s^2(n) z(n) z(n)^T]$ that corresponds to the largest eigenvalue. In practice, learning from sampled data s(n) and c(n) proceeds as follows. First the eigenvalue decomposition of C is computed. Then the transformed data set z(n) is computed, and $v_{opt}$ is calculated from the eigenvalue-eigenvector problem. Finally, the optimal filter $w_{opt}$ is obtained from the pseudoinverse relationship. In learning from sampled data, all expectations are replaced with sample averages.
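The whole procedure – sample averages for C and E[s²(n)z(n)z(n)ᵀ], dimensionality reduction, the eigenvector computation, and the pseudoinverse mapping back to w – can be sketched in a few lines of NumPy. This is an illustrative sketch under the assumptions stated in the comments, not the author's implementation.

```python
import numpy as np

def learn_feedback_filter(s, c, r):
    """Learn the K-dimensional feedback filter w (Section 3.1).

    s: (N,) feedforward simple-cell responses s(n)
    c: (N, K) complex-cell lattice responses c(n)
    r: number of smallest-eigenvalue directions to drop
    """
    N, K = c.shape
    C = (c.T @ c) / N                      # C = E[c(n) c(n)^T], sample average
    evals, evecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    E = evecs[:, r:]                       # drop the r smallest eigendirections
    d = evals[r:]                          # diagonal of D (largest K - r eigenvalues)
    z = c @ E / np.sqrt(d)                 # z(n) = D^(-1/2) E^T c(n), one row per n
    M_z = (z * (s**2)[:, None]).T @ z / N  # E[s^2(n) z(n) z(n)^T]
    _, V = np.linalg.eigh(M_z)
    v_opt = V[:, -1]                       # eigenvector of the largest eigenvalue
    return E @ (v_opt / np.sqrt(d))        # w = E D^(-1/2) v_opt
```

Because v_opt has unit norm, the returned filter satisfies the unit-energy constraint wᵀCw = 1 by construction.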
3.2 Experiments

The algorithm described above was applied to natural image data, which was sampled from a set of over 4,000 natural images [8]. The size of the sampled image patches was 19 × 19 pixels, and the number of samples was 250,000. The local mean (DC component) was removed from each image sample. Simple-cell feedforward responses s(n) were computed using the filter shown in Figure 2e, and the set of high-frequency complex-cell lattice activities c(n) was computed using the filters shown in Figures 2c and d. A form of contrast gain control [16], which can be used to compensate for the large variation in contrast in natural images, was also applied to the natural image data: prior to filtering a natural image sample with a feedforward filter, the energy of the image was normalized inside the Gaussian modulation window of the Gabor function [10] of the feedforward filter. This preprocessing tends to weaken contrast borders, implying that in our experiments, learning higher-order responses is mostly based on texture boundaries that coincide with luminance boundaries. It should be noted, however, that in spite of this preprocessing step, the resulting feedback filters produce cue-invariant responses to both texture- and contrast-defined cues (see Figure 4). In order to make the components of c(n) have zero mean, and to focus on the structure of feedback activity patterns instead of overall constant activation, the local mean (DC component) was removed from each c(n). To attenuate the noise in the data, the dimensionality of c(n) was reduced to 16 (see Section 3.1); this retains 85% of the original signal energy. The algorithm described in Section 3.1 was then applied to this data. The resulting feedback filter is shown in Figure 6A (see also Figure 2g).
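The per-patch preprocessing described above – DC removal followed by energy normalization inside the Gaussian modulation window of the feedforward Gabor filter – can be sketched as follows. The exact normalization used in the experiments is not fully specified in the text, so the formula below is an assumption-laden illustration.

```python
import numpy as np

def preprocess_patch(patch, gauss_window):
    """Remove the local mean (DC component), then normalize the patch
    energy inside the Gaussian window of the feedforward Gabor filter
    (a form of contrast gain control; the exact formula is assumed)."""
    p = patch - patch.mean()              # remove DC component
    energy = np.sum(gauss_window * p**2)  # energy inside the window
    if energy > 0:
        p = p / np.sqrt(energy)           # unit windowed energy
    return p
```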
Data sampling, preprocessing and the learning algorithm were then repeated, but this time using the feedforward filter shown in Figure 2f; the feedback filter obtained from this run is shown in Figure 6B (see also Figure 2h). The measurements in Figure 4 show that these feedback filters result in cue-invariant response properties at both simple- and complex-cell levels. Thus, our unsupervised algorithm learns cue-invariant response properties from natural image data. The results shown in Figures 6C and D were obtained with feedforward filters whose orientation was different from vertical, demonstrating that the observed phenomenon applies to other orientations also (in these experiments, the orientation of the high-frequency filters was orthogonal to that of the low-frequency feedforward filter). To make sure that the results shown in Figures 6A–D are not a side effect of the preprocessing or the structure of our model, but truly reflect the statistical properties of natural image data, we ran a control experiment by repeating our first experiment, but using Gaussian white noise as input data (instead of natural image data). All other steps, including preprocessing and dimensionality reduction, were the same as in the original experiment. The result is shown in Figure 6E; as can be seen, the resulting filter lacks any spatial structure. This verifies that our original results do reflect the statistics of natural image data.

4 Conclusions

This paper has shown that cue-invariant response properties can be learned from natural image data in an unsupervised manner. The results were based on a model in which there is a feedback path from complex cells to simple cells, and an unsupervised algorithm which maximizes the correlation of the energies of the feedforward and filtered feedback signals. The intuitive idea behind the algorithm is that in natural visual stimuli, higher-order cues tend to coincide with first-order cues.
Simulations were performed to validate that the learned feedback filters give rise to cue-invariant response properties. Our results are important for three reasons. First, for the first time it has been shown that cue-invariant response properties of simple and complex cells emerge from the statistical properties of natural images. Second, our results suggest that cue invariance can result from feedback from complex cells to simple cells; no feedback from higher cortical areas would thus be needed. Third, our research demonstrates how higher-order feature detectors can be learned from natural data in an unsupervised manner; this is an important step towards general-purpose data-driven approaches to image processing and computer vision. Acknowledgments The author thanks Aapo Hyvärinen and Patrik Hoyer for their valuable comments. This research was supported by the Academy of Finland (project #205742). References [1] I. Mareschal and C. Baker, Jr. A cortical locus for the processing of contrast-defined contours. Nature Neuroscience 1(2):150–154, 1998. [2] Y.-X. Zhou and C. Baker, Jr. A processing stream in mammalian visual cortex neurons for non-Fourier responses. Science 261(5117):98–101, 1993. [3] A. G. Leventhal, Y. Wang, M. T. Schmolesky, and Y. Zhou. Neural correlates of boundary perception. Visual Neuroscience 15(6):1107–1118, 1998. [4] I. Mareschal and C. Baker, Jr. Temporal and spatial response to second-order stimuli in cat area 18. Journal of Neurophysiology 80(6):2811–2823, 1998. [5] J. A. Bourne, R. Tweedale, and M. G. P. Rosa. Physiological responses of New World monkey V1 neurons to stimuli defined by coherent motion. Cerebral Cortex 12(11):1132–1145, 2002. [6] B. A. Olshausen and D. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381(6583):607–609, 1996. [7] A. Bell and T. J. Sejnowski. The independent components of natural scenes are edge filters.
Vision Research 37(23):3327–3338, 1997. [8] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London B 265(1394):359–366, 1998. [9] A. Hyvärinen and P. O. Hoyer. A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research 41(18):2413–2423, 2001. [10] P. Dayan and L. F. Abbott. Theoretical Neuroscience. The MIT Press, 2001. [11] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience 4(8):819–825, 2001. [12] J. Hurri and A. Hyvärinen. Simple-cell-like receptive fields maximize temporal coherence in natural video. Neural Computation 15(3):663–691, 2003. [13] J. Hurri and A. Hyvärinen. Temporal and spatiotemporal coherence in simple-cell responses: a generative model of natural image sequences. Network: Computation in Neural Systems 14(3):527–551, 2003. [14] Y. Karklin and M. S. Lewicki. Higher-order structure of natural images. Network: Computation in Neural Systems 14(3):483–499, 2003. [15] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley & Sons, 2001. [16] D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience 9(2):181–197, 1992.
Learning Topology with the Generative Gaussian Graph and the EM Algorithm

Michaël Aupetit
CEA - DASE
BP 12 - 91680 Bruyères-le-Châtel, France
aupetit@dase.bruyeres.cea.fr

Abstract

Given a set of points and a set of prototypes representing them, how to create a graph of the prototypes whose topology accounts for that of the points? This problem had not yet been explored in the framework of statistical learning theory. In this work, we propose a generative model based on the Delaunay graph of the prototypes and the Expectation-Maximization algorithm to learn the parameters. This work is a first step towards the construction of a topological model of a set of points grounded on statistics.

1 Introduction

1.1 Topology what for?

Given a set of points in a high-dimensional Euclidean space, we intend to extract the topology of the manifolds from which they are drawn. There are several reasons for this, among which: increasing our knowledge about this set of points by measuring its topological features (connectedness, intrinsic dimension, Betti numbers (number of voids, holes, tunnels...)) in the context of exploratory data analysis [1]; allowing to compare two sets of points wrt their topological characteristics or to find clusters as connected components in the context of pattern recognition [2]; or finding shortest paths along manifolds in the context of robotics [3]. There are two families of approaches which deal with "topology": on one hand, the "topology preserving" approaches based on nonlinear projection of the data in lower dimensional spaces with a constrained topology to allow visualization [4, 5, 6, 7, 8]; on the other hand, the "topology modelling" approaches based on the construction of a structure whose topology is not constrained a priori, so it is expected to better account for that of the data [9, 10, 11] at the expense of the visualisability.
Much work has been done on the former problem, also called "manifold learning", from Generative Topographic Mapping [4] to Multi-Dimensional Scaling and its variants [5, 6], Principal Curves [7] and so on. In all these approaches, the intrinsic dimension of the model is fixed a priori, which eases the visualization but arbitrarily forces the topology of the model. And when the dimension is not fixed, as in the mixture of Principal Component Analyzers [8], the connectedness is lost. The latter problem, which we deal with, had never been explored from the statistical learning perspective. Its aim is not to project and visualize a high-dimensional set of points, but to extract the topological information from it directly in the high-dimensional space, so that the model must be freed as much as possible from any a priori topological constraint.

1.2 Learning topology: a state of the art

As we may learn a complicated function by combining simple basis functions, we shall learn a complicated manifold¹ by combining simple basis manifolds. A simplicial complex² is such a model based on the combination of simplices, each with its own dimension (a 1-simplex is a line segment, a 2-simplex is a triangle... a k-simplex is the convex hull of a set of k + 1 points). In a simplicial complex, the simplices are exclusively connected by their vertices or their faces. Such a structure is appealing because it is possible to extract from it topological information like Betti numbers, connectedness and intrinsic dimension [10]. A particular simplicial complex is the Delaunay complex, defined as the set of simplices such that the Voronoï cells³ of the vertices are adjacent, assuming general position for the vertices. The Delaunay graph is made of the vertices and edges of the Delaunay complex [12].
All the previous work about topology modelling is grounded on the result of Edelsbrunner and Shah [13], which proves that, given a manifold $M \subset \mathbb{R}^D$ and a set of $N_0$ vector prototypes $w \in (\mathbb{R}^D)^{N_0}$ near M, there exists a simplicial subcomplex of the Delaunay complex of w which has the same topology as M under what we call the "ES-conditions". In the present work, the manifold M is known only through a finite set of M data points $v \in M^M$. Martinetz and Schulten proposed to build a graph of the prototypes with an algorithm called "Competitive Hebbian Learning" (CHL) [11] to tackle this problem. Their approach has been extended to simplicial complexes by De Silva and Carlsson with the definition of "weak witnesses" [10]. In both cases, the ES-conditions about M are weakened so they can be verified by a finite sample v of M, so that the graph or the simplicial complex built over w is proved to have the same topology as M if v is a sufficiently dense sampling of M. The CHL consists of connecting two prototypes in w if they are the first and the second closest neighbors to a point of v (closeness wrt the Euclidean norm). Each point of v leads to an edge, and is called a "weak witness" of the connected prototypes [10]. The topology representing graph obtained is a subgraph of the Delaunay graph. The region of $\mathbb{R}^D$ in which any data point would connect the same prototypes is the "region of influence" (ROI) of this edge (see Figure 2 d-f). This principle is extended to create k-simplices connecting k + 1 prototypes, which are part of the Delaunay simplicial complex of w [10]. Therefore, the model obtained is based on regions of influence: a simplex exists in the model if there is at least one datum in its ROI. Hence, the capacity of this model to correctly represent the topology of a set of points strongly depends on the shape and location of the ROI wrt the points, and on the presence of noise in the data.
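The CHL rule described above admits a very short sketch: for each data point, connect its two nearest prototypes. This is illustrative NumPy code; the function name is ours.

```python
import numpy as np

def competitive_hebbian_learning(data, prototypes):
    """CHL: for each data point, connect its first and second closest
    prototypes (Euclidean norm). Returns a set of edges (i, j), i < j."""
    edges = set()
    for v in data:
        d = np.sum((prototypes - v) ** 2, axis=1)  # squared distances
        i, j = np.argsort(d)[:2]                   # two nearest prototypes
        edges.add((min(i, j), max(i, j)))
    return edges
```

Each data point acts as a "weak witness" for exactly one edge, so the resulting graph is a subgraph of the Delaunay graph of the prototypes.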
Moreover, as far as N0 > 2, it cannot exist an isolated prototype allowing to represent an isolated bump in the data distribution, because any datum of this bump will have two closest prototypes to connect to each other. An aging process has been proposed by Martinetz and Schulten to filter out the noise, which works roughly such that edges with fewer data than a threshold in there ROI are pruned from the graph. This looks like a filter based on the probability density of the data distribution, but no statistical criterion is proposed to tune the parameters. Moreover the area of the ROI may be intractable in high dimension and is not trivially related to the 1For simplicity, we call ”manifold” what can be actually a set of manifolds connected or not to each other with possibly various intrinsic dimensions. 2The terms ”simplex” or ”graph” denote both the abstract object and its geometrical realization. 3Given a set of points w in RD, Vi = {v ∈RD|(v −wi)2 ≤(v −wj)2, ∀j} defines the Vorono¨ı cell associated to wi ∈w. corresponding line segment, so measuring the frequency over such a region is not relevant to define a useful probability density. At last, the line segment associated to an edge of the graph is not part of the model: data are not projected on it, data drawn from such a line segment may not give rise to the corresponding edge, and the line segment may not intersect at all its associated ROI. In other words, the model is not self-consistent, that is the geometrical realization of the graph is not always a good model of its own topology whatever the density of the sampling. We proposed to define Vorono¨ı cells of line segments as ROI for the edges and defined a criterion to cut edges with a lower density of data projecting on their middle than on their borders [9]. This solves some of the CHL limits but it still remains one important problem common to both approaches: they rely on the visual control of their quality, i.e. 
no criterion allows one to assess the quality of the model, especially in dimensions greater than 3.

1.3 Emerging topology from a statistical generative model

For all the above reasons, we propose another way of modelling topology. The idea is to construct a "good" statistical generative model of the data, taking the noise into account, and to assume that its topology is therefore a "good" model of the topology of the manifold which generated the data. The only constraint we impose on this generative model is that its topology must be as "flexible" as possible and must be "extractible". "Flexible", to avoid as far as possible any a priori constraint on the topology, so that any topology can be modelled. "Extractible", to get a "white box" model from which the topological characteristics are computationally tractable. We therefore propose to define a "generative simplicial complex". However, this work being preliminary, we expose here the simpler case of defining a "generative graph" (a simplicial complex made only of vertices and edges) and tuning its parameters. This allows us to demonstrate the feasibility of the approach and to foresee future difficulties when it is extended to simplicial complexes. It works as follows. Given a set of prototypes located over the data distribution using e.g. Vector Quantization [14], the Delaunay graph (DG) of the prototypes is constructed [15]. Then, each edge and each vertex of the graph becomes the basis of a generative model, so that the graph generates a mixture of gaussian density functions. Maximizing the likelihood of the data wrt the model, using Expectation-Maximization, tunes the weights of this mixture and leads to the emergence of the expected topology representing graph through the edges with non-negligible weights that remain after the optimization process. We first present the framework and the algorithm we use in section 2.
Then we test it on artificial data in section 3, before the discussion and conclusion in section 4.

2 A Generative Gaussian Graph to learn topology

2.1 The Generative Gaussian Graph

In this work, M is the support of the probability density function (pdf) p from which the data v are drawn. In fact, it is not the topology of M which is of interest, but the topology of the manifolds Mprin, called "principal manifolds" of the distribution p (in reference to the definition of Tibshirani [7]), which can be viewed as the manifold M without the noise. We assume the data have been generated by some set of points and segments constituting the set of manifolds Mprin, corrupted with additive spherical gaussian noise with mean 0 and unknown variance σ²_noise. We then define a gaussian mixture model to account for the observed data, based on gaussian kernels that we call "gaussian-points" and on what we call "gaussian-segments", forming a "Generative Gaussian Graph" (GGG). The value at point v_j ∈ v of a normalized gaussian-point centered on a prototype w_i ∈ w with variance σ² is defined as:

g_0(v_j, w_i, \sigma) = (2\pi\sigma^2)^{-D/2} \exp\Big(-\frac{(v_j - w_i)^2}{2\sigma^2}\Big)

A normalized gaussian-segment is defined as the sum of an infinite number of gaussian-points evenly spread on a line segment, i.e. the integral of a gaussian-point along the segment. The value at point v_j of the gaussian-segment [w_{a_i} w_{b_i}] associated to the i-th edge {a_i, b_i} in DG, with variance σ², is:

g_1(v_j, \{w_{a_i}, w_{b_i}\}, \sigma)
  = \frac{1}{L_{a_i b_i}} \int_{w_{a_i}}^{w_{b_i}} \frac{\exp\big(-\frac{(v_j - w)^2}{2\sigma^2}\big)}{(2\pi\sigma^2)^{D/2}}\, dw
  = \frac{\exp\big(-\frac{(v_j - q_i^j)^2}{2\sigma^2}\big)}{(2\pi\sigma^2)^{(D-1)/2}}
    \cdot \frac{\mathrm{erf}\big(\frac{Q_{a_i b_i}^j}{\sigma\sqrt{2}}\big) - \mathrm{erf}\big(\frac{Q_{a_i b_i}^j - L_{a_i b_i}}{\sigma\sqrt{2}}\big)}{2 L_{a_i b_i}}   (1)

where L_{a_i b_i} = ‖w_{b_i} − w_{a_i}‖, Q_{a_i b_i}^j = ⟨v_j − w_{a_i} | w_{b_i} − w_{a_i}⟩ / L_{a_i b_i}, and q_i^j = w_{a_i} + (w_{b_i} − w_{a_i}) Q_{a_i b_i}^j / L_{a_i b_i} is the orthogonal projection of v_j on the straight line passing through w_{a_i} and w_{b_i}. In the case where w_{a_i} = w_{b_i}, we set g_1(v_j, {w_{a_i}, w_{b_i}}, σ) = g_0(v_j, w_{a_i}, σ).
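To make the definitions concrete, here is a direct numerical transcription of g0 and of the closed form (1) for g1 (an illustrative sketch; the function names are ours, not the paper's). In 1D the orthogonal projection q equals v itself, and both densities integrate to 1:

```python
import numpy as np
from math import erf, sqrt

def gaussian_point(v, w, sigma):
    """g0: isotropic D-dimensional gaussian of variance sigma^2 centered on w."""
    D = len(v)
    return (2 * np.pi * sigma**2) ** (-D / 2) * np.exp(-np.sum((v - w) ** 2) / (2 * sigma**2))

def gaussian_segment(v, wa, wb, sigma):
    """g1: uniform average of gaussian-points along the segment [wa, wb], eq. (1)."""
    L = np.linalg.norm(wb - wa)
    if L == 0:  # degenerate segment reduces to a gaussian-point
        return gaussian_point(v, wa, sigma)
    D = len(v)
    Q = float(np.dot(v - wa, wb - wa)) / L   # scalar projection on the segment axis
    q = wa + (wb - wa) * Q / L               # orthogonal projection of v on the line
    # factor orthogonal to the segment ...
    perp = (2 * np.pi * sigma**2) ** (-(D - 1) / 2) * np.exp(-np.sum((v - q) ** 2) / (2 * sigma**2))
    # ... times the gaussian mass integrated along the segment
    along = (erf(Q / (sigma * sqrt(2))) - erf((Q - L) / (sigma * sqrt(2)))) / (2 * L)
    return perp * along
```

A quick numerical integration in 1D confirms that both functions are probability densities, as claimed in the text.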
The left factor of the product accounts for the gaussian noise orthogonal to the line segment, and the right factor for the gaussian noise integrated along the line segment. The functions g0 and g1 are positive, and we can prove that ∫_{R^D} g_0(v, w_i, σ) dv = 1 and ∫_{R^D} g_1(v, {w_a, w_b}, σ) dv = 1, so they are both probability density functions. A gaussian-point is associated to each prototype in w and a gaussian-segment to each edge in DG. The gaussian mixture is obtained as a weighted sum of the N0 gaussian-points and N1 gaussian-segments, such that the weights π are non-negative and sum to 1:

p(v_j | \pi, w, \sigma, DG) = \sum_{k=0}^{1} \sum_{i=1}^{N_k} \pi_i^k\, g_k(v_j, s_i^k, \sigma)   (2)

with Σ_{k=0}^{1} Σ_{i=1}^{N_k} π_i^k = 1 and π_i^k ≥ 0 for all i, k, where s_i^0 = w_i and s_i^1 = {w_{a_i}, w_{b_i}} such that {a_i, b_i} is the i-th edge in DG. The weight π_i^0 (resp. π_i^1) is the probability that a datum v was drawn from the gaussian-point associated to w_i (resp. the gaussian-segment associated to the i-th edge of DG).

2.2 Measure of quality

The function p(v_j | π, w, σ, DG) is the probability density at v_j given the parameters of the model. We measure the likelihood P of the data v wrt the parameters of the GGG model:

P = P(\pi, w, \sigma, DG) = \prod_{j=1}^{M} p(v_j | \pi, w, \sigma, DG)   (3)

2.3 The Expectation-Maximization algorithm

In order to maximize the likelihood P, or equivalently to minimize the negative log-likelihood L = −log(P), wrt π and σ, we use the Expectation-Maximization algorithm. We refer to [2] (pages 59-73) and [16] for further details. The minimization of the negative log-likelihood consists of tmax iterative steps updating π and σ, which ensure the decrease of L.
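The weight update π_i^new = (1/M) Σ_j P(i | v_j) used by this EM scheme is the standard mixture-weight update for fixed components. A simplified stand-in with scalar components (our own sketch, not the paper's implementation) shows how negligible weights emerge:

```python
import numpy as np

def em_mixture_weights(component_pdfs, data, n_iter=100):
    """Maximize the likelihood wrt the weights pi only, components held fixed."""
    # G[j, i] = g_i(v_j): component densities evaluated at each datum
    G = np.array([[pdf(v) for pdf in component_pdfs] for v in data])
    K = G.shape[1]
    pi = np.full(K, 1.0 / K)  # equiprobable initialization
    for _ in range(n_iter):
        post = G * pi                            # pi_i g_i(v_j), unnormalized
        post /= post.sum(axis=1, keepdims=True)  # posteriors P(i | v_j)
        pi = post.mean(axis=0)                   # pi_i^new = (1/M) sum_j P(i | v_j)
    return pi
```

Components whose final weight is negligible would then be pruned, as the GGG does for edges with π ≤ ϵ.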
The updating rules take into account the constraints of positivity and summation to unity of the parameters:

\pi_i^{k\,[new]} = \frac{1}{M} \sum_{j=1}^{M} P(k, i | v_j)

\sigma^{2\,[new]} = \frac{1}{DM} \sum_{j=1}^{M} \Big[ \sum_{i=1}^{N_0} P(0, i | v_j)(v_j - w_i)^2 + \sum_{i=1}^{N_1} P(1, i | v_j)\, \frac{(2\pi\sigma^2)^{-D/2} \exp\big(-\frac{(v_j - q_i^j)^2}{2\sigma^2}\big) \big( I_1 [ (v_j - q_i^j)^2 + \sigma^2 ] + I_2 \big)}{L_{a_i b_i} \cdot g_1(v_j, \{w_{a_i}, w_{b_i}\}, \sigma)} \Big]   (4)

where

I_1 = \sigma \sqrt{\pi/2}\, \Big( \mathrm{erf}\big(\tfrac{Q_{a_i b_i}^j}{\sigma\sqrt{2}}\big) - \mathrm{erf}\big(\tfrac{Q_{a_i b_i}^j - L_{a_i b_i}}{\sigma\sqrt{2}}\big) \Big)
I_2 = \sigma^2 \Big( (Q_{a_i b_i}^j - L_{a_i b_i}) \exp\big(-\tfrac{(Q_{a_i b_i}^j - L_{a_i b_i})^2}{2\sigma^2}\big) - Q_{a_i b_i}^j \exp\big(-\tfrac{(Q_{a_i b_i}^j)^2}{2\sigma^2}\big) \Big)   (5)

and P(k, i | v_j) = π_i^k g_k(v_j, s_i^k, σ) / p(v_j | π, w, σ, DG) is the posterior probability that the datum v_j was generated by the component associated to (k, i).

2.4 Emerging topology by maximizing the likelihood

Finally, to get the topology representing graph from the generative model, the core idea is to prune from the initial DG the edges whose probability of having generated the data is negligible (at most ϵ). The complete algorithm is the following:

1. Initialize the location of the prototypes w using vector quantization [14].
2. Construct the Delaunay graph DG of the prototypes.
3. Initialize the weights π to 1/(N0 + N1) to give equal probability to each vertex and edge.
4. Given w and DG, use the updating rules (4) to find σ²* and π* maximizing the likelihood P.
5. Prune the edges {a_i b_i} of DG associated to the gaussian-segments with probability π_i^1 ≤ ϵ, where π_i^1 ∈ π*.

The topology representing graph emerges from the edges with probabilities π* > ϵ. It is the graph which best models the topology of the data in the sense of maximum likelihood wrt π, σ, ϵ, and the set of prototypes w and their Delaunay graph.

3 Experiments

In these experiments, given a set of points and a set of prototypes located by vector quantization [14], we want to verify the ability of the GGG to learn the topology under various noise conditions. The principle of the GGG is shown in Figure 1. In Figure 2, we compare the GGG to a CHL for which we filter out edges with a number of hits lower than a threshold T.
The data and prototypes are the same for both algorithms. We set T* such that the graph obtained matches the expected solution as closely as possible visually. We optimize σ and π using (4) for tmax = 100 steps, with ϵ = 0.001. Conditions and conclusions of the experiments are given in the captions.

Figure 1: Principle of the Generative Gaussian Graph. (a) Data drawn from an oblique segment, a horizontal one, and an isolated point, with respective densities {0.25; 0.5; 0.25}. The prototypes are located at the extreme points of the segments and at the isolated point. They are connected with edges from the Delaunay graph. (b) The corresponding initial Generative Gaussian Graph. (c) The optimal GGG obtained after optimization of the likelihood wrt σ and π. (d) The edges of the optimal GGG associated to non-negligible probabilities model the topology of the data.

4 Discussion

We propose that the problem of learning the topology of a set of points can be posed as a statistical learning problem: we assume that the topology of a statistical generative model of a set of points is an estimator of the topology of the principal manifold of this set. From this assumption, we define a topologically flexible statistical generative mixture model, the Generative Gaussian Graph, from which the topology can be extracted. The final topology representing graph emerges from the edges with non-negligible probability. We propose to use the Delaunay graph as the initial graph, assuming it is rich enough to contain as a subgraph a good topological model of the data. The use of the likelihood criterion makes cross-validation possible, to select the best generative model and hence the best topological model in terms of generalization capacity. The GGG avoids the limits of the CHL for modelling topology.
In particular, it takes the noise into account and can model isolated bumps. Moreover, the likelihood of the data wrt the GGG is maximized during learning, allowing one to measure the quality of the model even when no visualization is possible. For some particular data distributions where all the data lie on the Delaunay line segments, no maximum of the likelihood exists. This case is not a problem, because σ = 0 then effectively defines a good solution (no noise in a data set drawn from a graph). If only some of the data lie exactly on the line segments, a maximum of the likelihood still exists, because σ² defines the variance of all the generative gaussian-points and gaussian-segments at the same time, so it cannot vanish to 0. The computing time complexity of the GGG is O(D(N0 + N1)M tmax), plus the time O(D N0³) [15] needed to build the Delaunay graph, which dominates the overall worst-case time complexity. The Competitive Hebbian Learning runs in time O(D N0 M). As the CHL in general builds more edges than needed to model the topology, it would be interesting to use the Delaunay subgraph obtained with the CHL as a starting point for the GGG model. The Generative Gaussian Graph can be viewed as a generalization of gaussian mixtures to points and segments: a gaussian mixture is a GGG with no edges. The GGG provides at the same time an estimation of the data density more accurate than a gaussian mixture based on the same set of prototypes and the same noise isovariance hypothesis (because it adds gaussian-segments to the pool of gaussian-points), and, intrinsically, an explicit model of the topology of the data set which provides most of the topological information at once. In contrast, other generative models do not provide any insight about the topology of the data, except the Generative Topographic Map (GTM) [4], the revisited Principal Manifolds [7], or the mixture of Probabilistic Principal Component Analysers (PPCA) [8].
Figure 2: Learning the topology of a data set: 600 data points drawn from a spiral and an isolated point, corrupted with additive gaussian noise of mean 0 and variance σ²_noise, for σ_noise = 0.05, 0.15, and 0.2 (one column per noise level). Prototypes are located by vector quantization [14]. Panels: (a-c) GGG with σ* = 0.06, 0.17, 0.21 respectively; (d-f) CHL with T = 0; (g-i) CHL with T* = 60, 65, 58 respectively. (a-c) The edges of the GGG with weights greater than ϵ recover the topology of the principal manifolds, except for large noise variance (c), where a triangle is created at the center of the spiral. σ* over-estimates σ_noise because the model is piecewise linear while the true manifolds are non-linear. (d-f) The CHL without threshold (T = 0) is unable to recover the true topology of the data even for small σ_noise; in particular, the isolated bump cannot be recovered. The grey cells correspond to the ROIs of the edges (darker cells contain more data), showing that these cells are not intuitively related to the edges they are associated to (e.g. they may have very tiny areas (e), and may partly (d) or never (f) contain the corresponding line segment). (g-i) The CHL with a threshold T recovers the topology of the data only for small noise variance (g) (notice T1 < T2 ⇒ DG_CHL(T2) ⊆ DG_CHL(T1)). Moreover, setting T requires visual control and is not associated with the optimum of any energy function, which prevents its use in higher-dimensional spaces.

However, in the two former cases, the intrinsic dimension of the model is fixed a priori and
not learned from the data, while in the latter the local intrinsic dimension is learned but the connectedness between the local models is not. One obvious way to extend this work is to consider a simplicial complex in place of the graph, so as to make the full topological information extractible. Other interesting questions arise about the curse of dimensionality, the selection of the number of prototypes and of the threshold ϵ, the theoretical grounding of the connection between the likelihood and some topological measure of accuracy, the possibility of devising a "universal topology estimator", and the way to deal with data sets with multi-scale structures or background noise. This preliminary work is an attempt to bridge the gap between Statistical Learning Theory [17] and Computational Topology [18][19]. We hope it will cross-fertilize and open new perspectives in both fields.

References

[1] M. Aupetit and T. Catz. High-dimensional labeled data analysis with topology representing graphs. Neurocomputing, Elsevier, 63:139-169, 2005.
[2] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford Univ. Press, New York, 1995.
[3] M. Zeller, R. Sharma, and K. Schulten. Topology representing network for sensor-based robot motion planning. World Congress on Neural Networks, INNS Press, pages 100-103, 1996.
[4] C. M. Bishop, M. Svensén, and C. K. I. Williams. GTM: the generative topographic mapping. Neural Computation, MIT Press, 10(1):215-234, 1998.
[5] V. de Silva and J. B. Tenenbaum. Global versus local methods for nonlinear dimensionality reduction. In S. Becker, S. Thrun, K. Obermayer (Eds) Advances in Neural Information Processing Systems, MIT Press, Cambridge, MA, 15:705-712, 2003.
[6] J. A. Lee, A. Lendasse, and M. Verleysen. Curvilinear distance analysis versus isomap. Europ. Symp. on Art. Neural Networks, Bruges (Belgium), d-side eds., pages 185-192, 2002.
[7] R. Tibshirani. Principal curves revisited. Statistics and Computing, (2):183-190, 1992.
[8] M. E. Tipping and C. M. Bishop. Mixtures of probabilistic principal component analysers. Neural Computation, 11(2):443-482, 1999.
[9] M. Aupetit. Robust topology representing networks. European Symp. on Artificial Neural Networks, Bruges (Belgium), d-side eds., pages 45-50, 2003.
[10] V. de Silva and G. Carlsson. Topological estimation using witness complexes. In M. Alexa and S. Rusinkiewicz (Eds) Eurographics Symposium on Point-Based Graphics, ETH, Zürich, Switzerland, June 2-4, 2004.
[11] T. M. Martinetz and K. J. Schulten. Topology representing networks. Neural Networks, Elsevier London, 7:507-522, 1994.
[12] A. Okabe, B. Boots, and K. Sugihara. Spatial tessellations: concepts and applications of Voronoï diagrams. John Wiley, Chichester, 1992.
[13] H. Edelsbrunner and N. R. Shah. Triangulating topological spaces. International Journal on Computational Geometry and Applications, 7:365-378, 1997.
[14] T. M. Martinetz, S. G. Berkovitch, and K. J. Schulten. "Neural-gas" network for vector quantization and its application to time-series prediction. IEEE Trans. on NN, 4(4):558-569, 1993.
[15] E. Agrell. A method for examining vector quantizer structures. Proceedings of IEEE International Symposium on Information Theory, San Antonio, TX, page 394, 1993.
[16] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38, 1977.
[17] V. N. Vapnik. Statistical Learning Theory. John Wiley, 1998.
[18] T. Dey, H. Edelsbrunner, and S. Guha. Computational topology. In B. Chazelle, J. Goodman and R. Pollack, editors, Advances in Discrete and Computational Geometry. American Math. Society, Princeton, NJ, 1999.
[19] V. Robins, J. Abernethy, N. Rooney, and E. Bradley. Topology and intelligent data analysis. IDA-03 (International Symposium on Intelligent Data Analysis), Berlin, 2003.
Identifying Distributed Object Representations in Human Extrastriate Visual Cortex

Rory Sayres, Department of Neuroscience, Stanford University, Stanford, CA 94305, sayres@stanford.edu
David Ress, Department of Neuroscience, Brown University, Providence, RI 02912, ress@brown.edu
Kalanit Grill-Spector, Departments of Neuroscience and Psychology, Stanford University, Stanford, CA 94305, kalanit@psych.stanford.edu

Abstract

The category of visual stimuli has been reliably decoded from patterns of neural activity in extrastriate visual cortex [1]. It has yet to be seen whether object identity can be inferred from this activity. We present fMRI data measuring responses in human extrastriate cortex to a set of 12 distinct object images. We use a simple winner-take-all classifier, using half the data from each recording session as a training set, to evaluate encoding of object identity across fMRI voxels. Since this approach is sensitive to the inclusion of noisy voxels, we describe two methods for identifying subsets of voxels in the data which optimally distinguish object identity. One method characterizes the reliability of each voxel within subsets of the data, while another estimates the mutual information of each voxel with the stimulus set. We find that both metrics can identify subsets of the data which reliably encode object identity, even when noisy measurements are artificially added to the data. The mutual information metric is less efficient at this task, likely due to constraints in fMRI data.

1 Introduction

Humans and other primates can perform fast and efficient object recognition. This ability is mediated within a large extent of occipital and temporal cortex, sometimes referred to as the ventral processing stream [10]. This cortex has been examined using electrophysiological recordings, optical imaging techniques, and a variety of neuroimaging techniques including functional magnetic resonance imaging (fMRI) [refs].
With fMRI, these regions can be reliably identified by their strong preferential response to intact objects over other visual stimuli [9,10]. The functional organization of object-selective cortex is unclear. A number of regions have been identified within this cortex which preferentially respond to particular categories of images [refs]; it has been proposed that these regions are specialized for processing visual information about those categories [refs]. A recent study by Haxby and colleagues [1] found that the category identity of different stimuli could be decoded from fMRI response patterns, using a simple classifier in which half of each data set was used as a training set and half as a test set. These results were interpreted as evidence for a distributed representation of objects across ventral cortex, in which both positive and negative responses contribute information about object identity. It is not clear, however, to what extent information about objects is processed at the category level, and to what extent it reflects individual object identity or features within objects [1,8]. The study in [1] is one of a growing number of recent attempts to decode stimulus identity by examining fMRI response patterns across cortex [1-4]. fMRI data has particular advantages and disadvantages for this approach. Among its advantages is the ability to make many measurements across a large extent of cortex in awake, behaving humans. Its disadvantages include temporal and spatial resolution constraints, which limit the number of trials that may be collected, restrict the ability to examine trial-by-trial variation, and potentially limit the localization of small neuronal populations. A further potential disadvantage arises from the little-understood functional organization of object-selective cortical regions.
Because it is not clear which parts of this cortex are involved in representing different objects and which are not, analyses may include fMRI image locations (voxels) that are not involved in object representation. The present study addresses a number of these questions by examining the response patterns across object-selective cortex to a set of 12 individual object images, using high-resolution fMRI. We sought to address the following experimental questions: (1) Can individual object identity be decoded from fMRI responses in object-selective cortex? (2) How can one identify those subsets of fMRI voxels which reliably encode stimulus identity, among a large set of potentially unrelated voxels? We adopt an approach similar to that described in [1], subdividing each data set into training and test subsets, and evaluating the efficiency of a set of voxels in discriminating object identity among the 12 possible images with a simple winner-take-all classifier. We then describe two metrics with which to identify sets of voxels that reliably discriminate different objects. The first metric estimates the replicability of each voxel's responses to each stimulus between the training and test data. The second metric estimates the mutual information each voxel has with the stimulus set.

2 Experimental design and data collection

Our experimental design is summarized in Figure 1. We chose a stimulus set of 12 line drawings of different object stimuli, shown in Figure 1a. These objects can be readily categorized as faces, animals, or vehicles; these categories have been previously identified as producing distinct patterns of blood-oxygenation-level-dependent (BOLD) response in object-selective cortex [10]. This allows us to compare category and object identity as potential explanatory factors for BOLD response patterns. Further, the use of black-and-white line drawings reduces the number of stimulus features which differentiate the stimuli, such as spatial frequency bands.
A typical trial is illustrated in Figure 1b. We presented one of the 12 object images to the subject within the foveal 5 degrees of visual field for 2 sec, then masked the image with a scrambled version of a random image for 10 sec. These scrambled images are known to produce minimal response in our regions of interest [11], and serve as a baseline condition for these experiments. Each scan contained one trial per image, presented in randomized order. We ran 10-15 event-related scans for each scanning session. This allowed us to collect full hemodynamic responses to each image, which in the BOLD signal lag several seconds behind stimulus onset. In this way we were able to analyze trial-by-trial variations in response to different images, without the analytic and design restrictions involved in analyzing fMRI data with more closely-spaced trials [5]. This feature was essential for computing the mutual information of a voxel with the stimulus set.

Figure 1: Experimental Design. (a) The 12 object stimuli used. (b) Example of a typical trial. (c) Depiction of the imaged region during one session. The image is an axial slice from a T1-weighted anatomical image for one subject. The blue region shows the region imaged at high resolution. The white outlines show gray matter within the imaged area.

We obtained high-resolution fMRI images at 3 Tesla using a spiral-out protocol. We used a custom-built receive-only surface coil. This coil was small and flexible, with a 7.5 cm diameter, and could be placed on a subject's skull directly over the region to be imaged. Because of the restricted field of view of this coil, we imaged only right-hemisphere cortex in these experiments. We imaged 4 subjects (1 female), each of whom participated in multiple recording sessions. For each recording session, we imaged 12 oblique slices, with voxel dimensions of 1 x 1 x 1 mm and a frame period of 2 seconds.
(More typical fMRI resolutions are around 3 x 3 x 3 mm to 3 x 3 x 6 mm, at least 27 times lower in resolution.) A typical imaging prescription, superimposed over a high-resolution T1-weighted anatomical image, is shown in Figure 1c. Functional data from these experiments are illustrated in Figure 2. Within each session, we identified object-selective voxels by applying a general linear model to the time series data, estimating the amplitude of the BOLD response to different images [5]. We then computed contrast maps representing T tests of the response to different images against the baseline scrambled condition. An example of voxels localized in this way is illustrated in Figure 2a, superimposed over mean T1-weighted anatomical images for two slices. Our criterion for defining object-selective voxels was that a voxel needed to respond to at least one of the 12 stimulus images relative to baseline at a significance level of p ≤ 0.001. Each data set contained between 600 and 2500 object-selective voxels. The design of our surface coil, combined with its proximity to the imaged cortex, allowed us to observe significant event-related responses within single voxels. Figure 2b shows peri-stimulus time courses to each image from four sample voxels. These responses are summarized by subtracting the mean BOLD response during the baseline period from the mean response after stimulus onset, as illustrated in Figure 2c. In this way we can summarize a data set as a matrix A of response amplitudes, where Ai,j represents the response of the jth voxel to the ith image. These responses are statistically significant (T test, p < 0.001) for many stimuli, yet the voxels are heterogeneous: different voxels respond to different stimuli. This response diversity prompts the question of which sets of responses, if any, are informative of image identity.
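The amplitude summary described above (mean post-stimulus signal minus mean baseline signal, yielding a matrix A with Ai,j the response of voxel j to image i) can be sketched as follows; the array layout and names are our assumptions, not taken from the paper:

```python
import numpy as np

def response_amplitude_matrix(timecourses, n_baseline_frames, n_post_frames):
    """timecourses: peri-stimulus BOLD signal, shape (n_images, n_voxels, n_frames),
    with the first n_baseline_frames preceding stimulus onset.
    Returns A of shape (n_images, n_voxels): post-stimulus mean minus baseline mean."""
    baseline = timecourses[:, :, :n_baseline_frames].mean(axis=2)
    post = timecourses[:, :, n_baseline_frames:n_baseline_frames + n_post_frames].mean(axis=2)
    return post - baseline
```

Each row of A then summarizes the response pattern across voxels to one image, ready for the classifier described next.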
Figure 2: Experimental Data. (a) T1-weighted anatomical images from a sample session, with object-selective voxels indicated in orange. (b) Mean peristimulus time courses from 4 object-selective voxels in the lower slice of (a) (locations indicated by the arrow), for each image. Dotted lines indicate trial onset; dark bars at bottom indicate stimulus presentation duration. Scale bars indicate 10 seconds duration and 10 percent BOLD signal change relative to baseline. (c) Mean response amplitudes from the voxels depicted in (b), represented as a set of column vectors for each voxel. Color indicates mean amplitude during the post-stimulus period relative to the pre-stimulus period. (The 12 stimuli are labeled face1-face4, bull, donkey, buffalo, ferret, dragster, truck, bus, and boxster.)

3 Winner-take-all classifier

Given a set of response amplitudes across object-selective voxels, how can we characterize the discriminability of responses to different stimuli? This question can be answered by constructing a classifier, which takes a set of responses to an unknown stimulus and compares it to a training set of responses to known stimuli. This general approach has been successfully applied to fMRI responses in early visual cortex [3-4], object-selective cortex [1], and across multiple cortical regions [2]. For our classifier, we adopt the approach used in [1], with a few refinements. As in the previous study, we subdivide each data set into a training set and a test set, with the training set comprising odd-numbered runs and the test set even-numbered runs. (Since each run contains one trial per image, this is equivalent to using odd- and even-numbered trials.) We construct a training matrix, Atraining, in which each row represents the response across voxels to a different image in the training data set. We construct a second matrix, Atest, which contains the responses to different images during the test set.
These matrices are illustrated for one data set in Figure 3a. Each row of Atest is considered to be the response to an unknown stimulus, and is compared to each of the rows in Atraining. The overall performance of the classifier is evaluated by its success rate at classifying test responses based on their correlation with training responses.

Figure 3: Illustration of the winner-take-all classifier for two sample sessions. (a) Response amplitudes for all object-selective voxels for the training (top) and test (bottom) data sets, for one recording session. (b) Classifier results for the same session as in (a). Left: correlation matrix between the training and test sets. Right: results of the winner-take-all algorithm. The red square in each row represents the image from the test set that produced the highest correlation with the training set, and is the "guess" of the classifier. The percent correct is evaluated as the number of guesses that lie along the diagonal (the same image in the training and test sets produces the highest correlation). (c) Results for a second session, in the same format as (b).

We evaluate classifier performance with a winner-take-all criterion, which is more conservative than the criterion in [1]. First, a correlation matrix R is constructed containing correlation coefficients for each pairwise comparison of rows in Atraining and Atest (shown on the left in Figures 3b and 3c for two data sets). The element Ri,j represents the correlation coefficient between row i of Atest and row j of Atraining.
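The correlation-based winner-take-all evaluation can be sketched directly from the matrices described above (illustrative code with hypothetical names; row order is assumed to match between the two matrices):

```python
import numpy as np

def winner_take_all_accuracy(A_train, A_test):
    """Rows index images, columns index voxels. Each test row is classified as
    the training row with which it correlates most; a guess is correct when it
    falls on the diagonal of the test-by-train correlation matrix R."""
    n = len(A_test)
    # np.corrcoef stacks rows of both inputs; the off-diagonal block is the
    # cross-correlation: R[i, j] = corr(test row i, train row j)
    R = np.corrcoef(A_test, A_train)[:n, n:]
    guesses = R.argmax(axis=1)
    return float((guesses == np.arange(n)).mean())
```

With this criterion, chance performance is 1 in the number of images (1/12 in the experiments above) rather than 1/2 for pairwise comparisons.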
Then, for each row of the correlation matrix, the classifier "guesses" the identity of the test stimulus by selecting the element with the highest coefficient (shown on the right in Figures 3b and 3c). Correct guesses lie along the diagonal of this matrix, Ri,i. The previously-used method evaluated classifier performance by successively pairing the correct stimulus with each incorrect stimulus from the training set [1]. With that criterion, responses from the test set which do not correlate maximally with the same stimulus in the training set might still lead to high classifier performance. For instance, if an element Ri,i is larger than all but one coefficient in row i, pairwise comparisons would count 10 out of 11 comparisons correct, or 91%, while the winner-take-all criterion would consider this 0%. This conservative criterion reduces chance performance from 1/2 to 1/12, and ensures that high classifier performance reflects a high level of discriminability between different stimuli, providing a stringent test for decoding.

4 Identifying voxels which distinguish objects

When we examined response patterns across all object-selective voxels, we observed high levels of classifier performance for some recording sessions, as shown for Session A in Figure 3. Many sessions, however, were more similar to Session B: limited success at decoding object identity when using all voxels.
[Figure 3 panels (not reproduced): training and test amplitude matrices and correlation matrices for Session A (100% correct) and Session B (42% correct).]

For both cases, a relevant question is the extent to which information is contained within a subset of the selected voxels. The distributed representation implied by Session A may be driven by only a few informative voxels; conversely, excessively noisy or unrelated activity from other voxels may have degraded classifier performance on Session B. This is of particular concern given that the functional organization of this cortex is not well understood. In addition to using such classifiers to test the hypothesis that a pre-defined region of interest can discriminate stimuli, it would be highly useful to use the classifier to identify cortical regions which represent a stimulus. To identify subsets of the data which reliably represent different stimuli, we search among the set of object-selective voxels using two metrics to rank voxels: (1) the reliability of each voxel between the training and test data subsets; and (2) the mutual information of each voxel with the stimulus set.

4.1 Voxel reliability metric

The voxel reliability metric is computed for each voxel by taking the vectors of 12 response amplitudes to each stimulus in the training and test sets, and calculating their correlation coefficient.
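The reliability metric just defined is, per voxel, the correlation between its training and test response vectors; a minimal sketch (our own names and array layout):

```python
import numpy as np

def voxel_reliability(A_train, A_test):
    """Per-voxel correlation of the response-amplitude vectors (one entry per
    image) between training and test sets. A_train, A_test: (n_images, n_voxels)."""
    return np.array([np.corrcoef(A_train[:, j], A_test[:, j])[0, 1]
                     for j in range(A_train.shape[1])])
```

Voxels would then be ranked in descending order of this score before selecting subsets for the classifier.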
Voxels with high reliability will have high values for the diagonal elements of the correlation matrix R, but this places no constraints on the off-diagonal comparisons. For instance, persistently active and nonspecific voxels (such as might be expected from draining veins or sinuses) would have high voxel reliability, but also high correlation for all pairwise comparisons between stimuli in test and training sets, so high reliability alone does not guarantee high classifier performance.

4.2 Mutual information metric

The mutual information for a voxel is computed as the difference between the overall entropy of the voxel and the “noise entropy”, the sum over all stimuli of the entropy of the voxel given each stimulus [6]:

I_m = H − H_noise = −∑_r P(r) log₂ P(r) + ∑_{s,r} P(s) P(r|s) log₂ P(r|s)   (1)

In this formula, P(r) represents the probability of observing a response level r and P(r|s) represents the probability of observing response r given stimulus s. Computing these probabilities presents a difficulty for fMRI data, since an accurate estimate requires many trials. Given the hemodynamic lag of 9–16 s inherent to measuring the BOLD signal, and the limitations of keeping a human observer in an MRI scanner before motion artifacts or attentional drifts confound the signals, it is difficult to obtain many trials over which to evaluate the response probabilities. There are two possible solutions to this: finding ways of obtaining large numbers of trials, e.g., through co-registering data across many sessions; and reducing the number of possible response bins for the data. While the first option is an area of active pursuit for us, we will focus here on the second approach. Given the low number of trials per image, we reduce the number of possible response levels to only two bins, 0 and 1. This allows for a wider range of possible values for P(r) and P(r|s) at the expense of ignoring potential information contained in varying response levels.
Given these two bins, the next question is how to threshold responses to decide whether a given voxel responded significantly (r=1) or not (r=0) on a given trial. Since we do not have an a priori hypothesis about the value of this threshold, we choose it separately for each voxel, such that it maximizes the mutual information of that voxel. This approach has been used previously to reduce free parameters while developing artificial recognition models [7].

Figure 4: Comparison of metrics for identifying reliable subsets of voxels in data sets. (a) Performance on the winner-take-all classifier of different-sized subsets of one data set (“Session B” in Figure 3), sorted by voxel reliability (gray, solid) and mutual information (red, dashed) metrics. (b) Performance of the two metrics across 12 data sets. Each curve represents the mean (thick line) ± standard error of the mean across data sets. (c) Performance on the data set from (a) when reverse-sorting voxels by each metric. Dotted black line indicates chance performance.

After ranking each voxel with the two metrics, we evaluated how well these voxels found reliable object representations. To do this, we sorted the voxels in descending order according to each metric; selected progressively larger subsets of voxels, starting with the 10 highest-ranked voxels and proceeding to the full set of voxels; and evaluated performance on the classifier for each subset. Results of these analyses are summarized in Figure 4. Figure 4a shows performance curves for the two sortings on data from the “Session B” data set illustrated in Figure 3. While performance using all voxels is at 42% correct, removing voxels quickly brings performance to 100% under the reliability criterion. The mutual information metric also converges to 100%, albeit slightly more slowly. Also note that for very small subset sizes performance decreases again: correct discrimination requires information distributed across a set of voxels.
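The binarized mutual information of Eq. (1), together with the per-voxel threshold search, can be sketched as follows (our own illustrative implementation; names and the candidate-threshold grid are assumptions):

```python
import numpy as np

def mutual_information_bits(responses, stimuli):
    """I_m = H - H_noise (Eq. 1) for discrete response and stimulus labels."""
    responses, stimuli = np.asarray(responses), np.asarray(stimuli)

    def entropy(labels):
        p = np.unique(labels, return_counts=True)[1] / len(labels)
        return float(-np.sum(p * np.log2(p)))

    h_noise = sum(np.mean(stimuli == s) * entropy(responses[stimuli == s])
                  for s in np.unique(stimuli))
    return entropy(responses) - h_noise

def best_threshold(amplitudes, stimuli):
    """Binarize at the threshold that maximizes the voxel's mutual information."""
    amplitudes = np.asarray(amplitudes)
    candidates = np.unique(amplitudes)
    return max(candidates,
               key=lambda t: mutual_information_bits(amplitudes >= t, stimuli))
```

For a voxel whose responses perfectly separate two equally likely stimulus groups, the binarized mutual information reaches its ceiling of 1 bit.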
Finally, we repeated our analyses across 12 data sets collected from 4 subjects. Figure 4b shows the mean performance across sessions for the two metrics. These curves are normalized by the proportion of total available voxels for each data set. Overall, the voxel reliability metric was significantly better at identifying subsets of voxels which could discriminate object identity, although both metrics performed significantly better than the 1/12 chance level at the classifier task, and both produced pronounced improvements in performance for smaller subsets compared to using the entire data sets. Note that simply removing voxels does not guarantee better classifier performance. If the voxels are sorted in reverse order, starting with, e.g., the lowest values of voxel reliability or mutual information, subsets containing half the voxels are consistently at or below chance performance (Figure 4c).

5 Summary and conclusions

Developing and training classifiers to identify cognitive states based on fMRI data is a growing and promising approach for neuroscience [1-4]. One drawback to these methods, however, is that they often require prior knowledge of which voxels are involved in specifying a cognitive state, and which are not. Given the poorly understood functional organization of the majority of cortex, an important goal is to develop methods to search across cortex for regions which represent such states. The results described here represent one step in this direction. Our voxel-ranking metrics successfully identified subsets of object-selective voxels which discriminate object identity. This demonstrates the feasibility of adapting classifier methods to search across cortical regions.
However, these methods can be refined considerably. The most important improvement is providing a larger set of trials from which to compute response probabilities. This is currently being pursued by combining data sets from multiple recording sessions in a reference volume. Given more extensive data, the set of possible response bins can be increased from the current binary set, which should improve the performance of our mutual information metric. Our results also have several implications for object recognition. We found a high ability to discriminate between individual images in our data sets. Moreover, this discrimination could be performed with sets of voxels of widely varying sizes. For some sessions, perfect discrimination could be achieved using all object-selective voxels, which number in the thousands (Figure 3a, 3b); for many others, perfect discrimination was possible using subsets as small as a few dozen voxels. This has implications for the distributed nature of object representation in extrastriate cortex. However, it raises the question of identifying redundant information within these representations. The distributed representations may reflect functionally distinct areas which are processing different aspects of each stimulus, as in earlier visual cortex. Mutual information approaches have succeeded at identifying redundant coding of information in other sensory areas [11], and can be tested on the known functional subdivisions in early visual cortex. In this way, we can use intuitions generated by ideal observers of the data, such as the classifier described here, and apply them to understanding how the brain processes this information.

Acknowledgments

We would like to thank Gal Chechik and Brian Wandell for input on analysis techniques. This work was supported by NEI National Research Service Award 5F31EY015937-02 to RAS, and a research grant 2005-05-111-RES from the Whitehall Foundation to KGS.
References

[1] Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, and Pietrini P. (2001) Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293:2425-30.
[2] Wang X, Hutchinson R, and Mitchell TM (2004) Training fMRI classifiers to distinguish cognitive states across multiple subjects. In S. Thrun, L. Saul and B. Schölkopf (eds.), Advances in Neural Information Processing Systems 16. Cambridge, MA: MIT Press.
[3] Kamitani Y and Tong F. (2005) Decoding the visual and subjective contents of the human brain. Nat Neurosci. 8:679-85.
[4] Haynes JD and Rees G. (2005) Predicting the orientation of invisible stimuli from activity in human primary visual cortex. Nat Neurosci. 8:686-691.
[5] Burock MA and Dale AM. (2000) Estimation and detection of event-related fMRI signals with temporally correlated noise: a statistically efficient and unbiased approach. Human Brain Mapping 11:249-260.
[6] Abbott L and Dayan P (2001) Theoretical Neuroscience. Cambridge, MA: MIT Press.
[7] Ullman S, Vidal-Naquet M, and Sali E. (2002) Visual features of intermediate complexity and their use in classification. Nat Neurosci. 5(7):682-7.
[8] Tsunoda K, Yamane Y, Nishizaki M, and Tanifuji M. (2001) Complex objects are represented in macaque inferotemporal cortex by the combination of feature columns. Nat Neurosci. 4:832-8.
[9] Grill-Spector K, Kushnir T, Hendler T, and Malach R. (2000) The dynamics of object-selective activation correlate with recognition performance in humans. Nat Neurosci. 3:837-43.
[10] Malach R, Reppas JB, Benson RR, Kwong KK, Jiang H, Kennedy WA, Ledden PJ, Brady TJ, Rosen BR, and Tootell RB. (1995) Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc Natl Acad Sci U S A 92:8135-8139.
[11] Chechik G, Globerson A, Anderson MJ, Young ED, Nelken I, and Tishby N. (2001) Group redundancy measures reveal redundancy reduction along the auditory pathway.
Advances in Neural Information Processing Systems 14. Cambridge, MA: MIT Press.
Optimizing spatio-temporal filters for improving Brain-Computer Interfacing

Guido Dornhege1, Benjamin Blankertz1, Matthias Krauledat1,3, Florian Losch2, Gabriel Curio2 and Klaus-Robert Müller1,3
1Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany
2Campus Benjamin Franklin, Charité University Medicine Berlin, Hindenburgdamm 30, 12203 Berlin, Germany
3University of Potsdam, August-Bebel-Str. 89, 14482 Potsdam, Germany
{dornhege,blanker,kraulem,klaus}@first.fhg.de, {florian-philip.losch,gabriel.curio}@charite.de

Abstract

Brain-Computer Interface (BCI) systems create a novel communication channel from the brain to an output device by bypassing conventional motor output pathways of nerves and muscles. They could therefore provide a new communication and control option for paralyzed patients. Modern BCI technology is essentially based on techniques for the classification of single-trial brain signals. Here we present a novel technique that allows the simultaneous optimization of a spatial and a spectral filter, enhancing the discriminability of multi-channel EEG single-trials. The evaluation of 60 experiments involving 22 different subjects demonstrates the superiority of the proposed algorithm. Apart from the enhanced classification, the spatial and/or spectral filters determined by the algorithm can also be used for further analysis of the data, e.g., for source localization of the respective brain rhythms.

1 Introduction

Brain-Computer Interface (BCI) research aims at the development of a system that allows direct control of, e.g., a computer application or a neuroprosthesis, solely by human intentions as reflected in suitable brain signals, cf. [1, 2, 3, 4, 5, 6, 7, 8, 9]. We will be focussing on noninvasive, electroencephalogram (EEG) based BCI systems.
Such devices can be used as tools of communication for the disabled or for healthy subjects who might be interested in exploring a new path of man-machine interfacing, say when playing BCI-operated computer games. The classical approach to establishing EEG-based control is to set up a system that is controlled by a specific EEG feature which is known to be susceptible to conditioning, and to let the subjects learn voluntary control of that feature. In contrast, the Berlin Brain-Computer Interface (BBCI) uses well-established motor competences in its control paradigms and a machine learning approach to extract subject-specific discriminability patterns from high-dimensional features. This approach has the advantage that the long subject training needed in the operant conditioning approach is replaced by a short calibration measurement (20 minutes) and machine training (1 minute). The machine adapts to the specific characteristics of the brain signals of each subject, accounting for the high inter-subject variability. With respect to the topographic patterns of brain rhythm modulations, the Common Spatial Patterns (CSP) algorithm (see [10]) has proven very useful for extracting subject-specific, discriminative spatial filters. On the other hand, the frequency band on which the CSP algorithm operates is either selected manually or unspecifically set to a broad-band filter, cf. [10, 5]. Obviously, a simultaneous optimization of a frequency filter together with the spatial filter is highly desirable. Recently, the CSSP algorithm was presented in [11], in which very simple frequency filters (with one delay tap) for each channel are optimized together with the spatial filters. Although the results showed an improvement of CSSP over CSP, the flexibility of the frequency filters is very limited. Here we present a method that allows the simultaneous optimization of an arbitrary FIR filter within the CSP analysis.
The proposed algorithm outperforms CSP and CSSP on average, and in cases where a separation of the discriminative rhythm from dominating non-discriminative rhythms is important, a considerable increase in classification accuracy can be achieved.

2 Experimental Setup

In this paper we investigate data from 60 EEG experiments with 22 different subjects. All experiments included so-called training sessions which are used to train subject-specific classifiers. Many experiments also included feedback sessions in which the subject could steer a cursor or play a computer game like brain-pong by BCI control. Data from feedback sessions are not used in this a-posteriori study since they depend on an intricate interaction of the subject with the original classification algorithm. In the experimental sessions used for the present study, labeled trials of brain signals were recorded in the following way: the subjects were sitting in a comfortable chair with arms lying relaxed on the armrests. Every 4.5–6 seconds one of 3 different visual stimuli indicated for 3–3.5 seconds which mental task the subject should accomplish during that period. The investigated mental tasks were imagined movements of the left hand (l), the right hand (r), and one foot (f). Brain activity was recorded from the scalp with multi-channel EEG amplifiers using 32, 64, or 128 channels. Besides the EEG channels, we recorded the electromyogram (EMG) from both forearms and the leg as well as horizontal and vertical electrooculogram (EOG) from the eyes. The EMG and EOG channels were used exclusively to make sure that the subjects performed no real limb or eye movements correlated with the mental tasks that could directly (artifacts) or indirectly (afferent signals from muscles and joint receptors) be reflected in the EEG channels and thus be detected by the classifier, which operates on the EEG signals only. Between 120 and 200 trials for each class were recorded.
In this study we investigate only binary classifications, but the results can be expected to safely transfer to the multi-class case.

3 Neurophysiological Background

According to the well-established model called the homunculus, first described by [12], for each part of the human body there exists a corresponding region in the motor and somatosensory areas of the neocortex. The ‘mapping’ from the body to the respective brain areas largely preserves topography, i.e., neighboring parts of the body are mostly represented in neighboring parts of the cortex. While the region of the feet is located at the center of the vertex, the left hand is represented lateralized on the right hemisphere and the right hand on the left hemisphere. Brain activity during rest and wakefulness is describable by different rhythms located over different brain areas. These rhythms reflect functional states of different neuronal cortical networks and can be used for brain-computer interfacing. These rhythms are blocked by movements, independent of their active, passive or reflexive origin. Blocking effects are visible bilaterally but pronounced contralaterally in the cortical area that corresponds to the moved limb. This attenuation of brain rhythms is termed event-related desynchronization (ERD), see [13].

Figure 1: The plot shows the spectra for one subject during left hand (light line) and foot (dark line) motor imagery between 5 and 25 Hz at scalp positions Pz, Cz and C4. In both central channels two peaks, one at 8 Hz and one at 12 Hz, are visible. Below each channel the r²-value which measures discriminability is added. It indicates that the second peak contains more discriminative information.

Over sensorimotor cortex a so-called idle or µ-rhythm can be measured in the scalp EEG. The most common frequency band of the µ-rhythm is about 10 Hz (precentral α- or µ-rhythm, [14]).
Jasper and Penfield ([12]) described a strictly local so-called beta-rhythm at about 20 Hz over human motor cortex in electrocorticographic recordings. In scalp EEG recordings one finds the µ-rhythm over motor areas mixed with and superimposed by 20 Hz activity. In this context the µ-rhythm is sometimes interpreted as a subharmonic of faster cortical activity. The brain rhythms described above are of cortical origin, but the role of a thalamo-cortical pacemaker has been discussed since the first description of the EEG by Berger ([15]) and is still a point of debate. Lopes da Silva ([16]) showed that cortico-cortical coherence is much larger than thalamo-cortical coherence. However, since the focal ERD in the motor and/or sensory cortex can be observed even when a subject is only imagining a movement or sensation in the specific limb, this feature can well be used for BCI control. The discrimination of the imagination of movements of left hand vs. right hand vs. foot is based on the topography of the attenuation of the µ and/or β rhythm. There are two problems when using ERD features for BCI control: (1) The strength of the sensorimotor idle rhythms as measured by scalp EEG is known to vary strongly between subjects. This introduces a high inter-subject variability in the accuracy with which an ERD-based BCI system works. There is another feature, independent of the ERD, reflecting imagined or intended movements: the movement-related potentials (MRP), denoting a negative DC shift of the EEG signals in the respective cortical regions. See [17, 18] for an investigation of how this feature can be exploited for BCI use and combined with the ERD feature. This combination strategy was able to greatly enhance classification performance in offline studies. In this paper we focus only on improving the ERD-based classification, but all the improvements presented here can also be used in the combined algorithm.
(2) The precentral µ-rhythm is often superimposed by the much stronger posterior α-rhythm, which is the idle rhythm of the visual system. It is best articulated with eyes closed, but also present in awake and attentive subjects, see Fig. 1 at channel Pz. Due to volume conduction the posterior α-rhythm interferes with the precentral µ-rhythm in the EEG channels over motor cortex. Hence a µ-power-based classifier is susceptible to modulations of the posterior α-rhythm that occur due to fatigue, change in attentional focus while performing tasks, or changing demands of visual processing. When the two rhythms have different spectral peaks, as in Fig. 1 at channels Cz and C4, a suitable frequency filter can help to weaken the interference. The optimization of such a filter integrated in the CSP algorithm is addressed in this paper.

4 Spatial Filter – the CSP Algorithm

The common spatial pattern (CSP) algorithm ([19]) is very useful in calculating spatial filters for detecting ERD effects ([20]) and for ERD-based BCIs, see [10], and has been extended to multi-class problems in [21]. Given two distributions in a high-dimensional space, the (supervised) CSP algorithm finds directions (i.e., spatial filters) that maximize variance for one class and at the same time minimize variance for the other class. After having band-pass filtered the EEG signals to the rhythms of interest, high variance reflects a strong rhythm and low variance a weak (or attenuated) rhythm. Let us take the example of discriminating left hand vs. right hand imagery. According to Sec. 3, the spatial filter that focuses on the area of the left hand is characterized by a strong motor rhythm during imagination of right hand movements (left hand is in idle state), and by an attenuated motor rhythm during left hand imagination. This criterion is exactly what the CSP algorithm optimizes: maximizing variance for the class of right hand trials and at the same time minimizing variance for left hand trials.
Furthermore the CSP algorithm calculates the dual filter that will focus on the area of the right hand (and it will even calculate several filters for both optimizations by considering orthogonal subspaces). The CSP algorithm is trained on labeled data, i.e., we have a set of trials s_i, i = 1, 2, ..., where each trial consists of several channels (as rows) and time points (as columns). A spatial filter w ∈ R^{#channels} projects these trials to the one-channel signal ŝ_i(w) = w⊤s_i. The idea of CSP is to find a spatial filter w such that the projected signal has high variance for one class and low variance for the other. In other words, we maximize the variance for one class while the sum of the variances of both classes remains constant, which is expressed by the following optimization problem:

max_w ∑_{i: trial in class 1} var(ŝ_i(w)),  s.t.  ∑_i var(ŝ_i(w)) = 1,   (1)

where var(·) is the variance of the vector. An analogous formulation can be stated for the second class. Using the definition of the variance we simplify the problem to

max_w w⊤Σ₁w,  s.t.  w⊤(Σ₁ + Σ₂)w = 1,   (2)

where Σ_y is the covariance matrix of the trial-concatenated matrix of dimension [channels × concatenated time points] belonging to the respective class y ∈ {1, 2}. Formulating the dual problem, we find that the problem can be solved by calculating a matrix Q and a diagonal matrix D with elements in [0, 1] such that

QΣ₁Q⊤ = D  and  QΣ₂Q⊤ = I − D   (3)

and by choosing the highest and lowest eigenvalues. Equation (3) can be accomplished in the following way. First we whiten the matrix Σ₁ + Σ₂, i.e., determine a matrix P such that P(Σ₁ + Σ₂)P⊤ = I, which is possible due to the positive definiteness of Σ₁ + Σ₂. Then define Σ̂_y = PΣ_yP⊤ and calculate an orthogonal matrix R and a diagonal matrix D by the spectral theorem such that Σ̂₁ = RDR⊤. Therefore Σ̂₂ = R(I − D)R⊤, since Σ̂₁ + Σ̂₂ = I, and Q := R⊤P satisfies (3).
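The whitening-plus-rotation procedure just described can be written in a few lines of NumPy. The sketch below is illustrative (our names, and it averages per-trial covariances rather than covariances of concatenated trials):

```python
import numpy as np

def csp_filters(trials_1, trials_2, n_pairs=3):
    """Compute CSP spatial filters for two classes of trials.

    trials_y: lists of (n_channels, n_samples) arrays.
    Returns a (2 * n_pairs, n_channels) array: the first n_pairs rows
    maximize class-1 variance (d_j near 1), the last n_pairs rows
    maximize class-2 variance (d_j near 0).
    """
    S1 = np.mean([np.cov(t) for t in trials_1], axis=0)
    S2 = np.mean([np.cov(t) for t in trials_2], axis=0)
    # Whitening step: P (S1 + S2) P^T = I.
    lam, U = np.linalg.eigh(S1 + S2)
    P = (U / np.sqrt(lam)).T
    # Rotation step: diagonalize P S1 P^T; eigenvalues d_j lie in [0, 1].
    d, R = np.linalg.eigh(P @ S1 @ P.T)   # ascending order
    W = R.T @ P                            # rows are spatial filters
    return np.vstack([W[::-1][:n_pairs], W[:n_pairs]])
```

A filter from the first block should yield high variance on class-1 trials and low variance on class-2 trials, and vice versa for the second block.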
The projection given by the j-th row of matrix R has a relative variance of d_j (the j-th element of D) for trials of class 1 and relative variance 1 − d_j for trials of class 2. If d_j is near 1, the filter given by the j-th row of R maximizes variance for class 1 and, since 1 − d_j is near 0, minimizes variance for class 2. Typically one would retain some projections corresponding to the highest eigenvalues d_j, i.e., CSPs for class 1, and some corresponding to the lowest eigenvalues, i.e., CSPs for class 2.

5 Spectral Filter

As discussed in Sec. 3, the content of discriminative information in different frequency bands is highly subject-dependent. For example, the subject whose spectra are visualized in Fig. 1 shows a highly discriminative peak at 12 Hz, whereas the peak at 8 Hz does not show good discrimination. Since the lower frequency peak is stronger, a better classification performance can be expected if we reduce the influence of the lower frequency peak for this subject. However, for other subjects the situation looks different, i.e., the classification might fail if we exclude this information. Thus it is desirable to optimize a spectral filter for better discriminability. Here are two approaches to this task.

CSSP. In [11] the following was suggested: given s_i, the signal s_i^τ is defined to be the signal s_i delayed by τ time points. In CSSP the usual CSP approach is applied to the concatenation of s_i and s_i^τ in the channel dimension, i.e., the delayed signals are treated as new channels. By this concatenation step the ability to neglect or emphasize specific frequency bands can be achieved; this ability strongly depends on the choice of τ, which can be made by some validation approach on the training set. More complex frequency filters can be found by concatenating more delayed EEG signals with several delays.
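The concatenation step of CSSP is a simple delay embedding; a minimal sketch (illustrative code, names ours):

```python
import numpy as np

def delay_embed(trial, tau):
    """Stack a tau-sample delayed copy of the signal as additional channels.

    trial: (n_channels, n_samples). Returns (2 * n_channels, n_samples - tau),
    trimmed so that the original and delayed channels are aligned in time.
    Applying CSP to this representation yields the CSSP spatio-spectral filters.
    """
    return np.vstack([trial[:, tau:], trial[:, :trial.shape[1] - tau]])
```

τ itself is chosen by validation on the training set, as described above; more delays can be stacked the same way at the cost of a larger covariance estimation problem.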
In [11] it was concluded that in typical BCI situations, where only small training sets are available, the choice of only one delay tap is most effective: the increased flexibility of a frequency filter with more delay taps does not pay off against the increased complexity of the optimization problem.

CSSSP. The idea of our new CSSSP algorithm is to learn a complete global spatio-temporal filter in the spirit of CSP and CSSP. A digital frequency filter consists of two sequences a and b of lengths n_a and n_b such that the signal x is filtered to y by

a(1)y(t) = b(1)x(t) + b(2)x(t−1) + ... + b(n_b)x(t−n_b+1) − a(2)y(t−1) − ... − a(n_a)y(t−n_a+1).

Here we restrict ourselves to FIR (finite impulse response) filters by defining n_a = 1 and a = 1. Furthermore we define b(1) = 1 and fix the length of b to some T with T > 1. By this restriction we give up some flexibility of the frequency filter, but it allows us to find a suitable solution in the following way: we are looking for a real-valued sequence b_{1,...,T} with b(1) = 1 such that the trials

s_{i,b} = s_i + ∑_{τ=2,...,T} b_τ s_i^{τ−1}   (4)

can be classified better in some way. Using equation (1) we have to solve the problem

max_{w,b,b(1)=1} ∑_{i: trial in class 1} var(ŝ_{i,b}(w)),  s.t.  ∑_i var(ŝ_{i,b}(w)) = 1,   (5)

which can be simplified to

max_{b,b(1)=1} max_w w⊤( ∑_{τ=0,...,T−1} ( ∑_{j=1,...,T−τ} b(j)b(j+τ) ) Σ₁^τ ) w,
s.t. w⊤( ∑_{τ=0,...,T−1} ( ∑_{j=1,...,T−τ} b(j)b(j+τ) ) (Σ₁^τ + Σ₂^τ) ) w = 1,   (6)

where Σ_y^τ = E(s_i(s_i^τ)⊤ + s_i^τ s_i⊤ | i: trial in class y), namely the symmetrized correlation between the signal and the signal delayed by τ time points. Since for each b we can calculate the optimal w by the usual CSP techniques (see equations (2) and (3)), a (T−1)-dimensional problem remains (b(1) = 1), which we can solve with standard line-search optimization techniques if T is not too large. Consequently we get for each class a frequency band filter and a pattern (or, similar to CSP, more than one pattern by choosing the next eigenvectors).
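For a fixed filter b, the bracketed terms in Eq. (6) can be assembled from lagged correlation matrices, after which the inner maximization over w is an ordinary CSP problem. A sketch (our illustrative code, not the paper's implementation; as an assumption we take Σ⁰ to be the plain covariance so that b = (1, 0, ..., 0) exactly recovers standard CSP):

```python
import numpy as np

def lagged_cov(trials, tau):
    """Sigma^tau: symmetrized correlation of the signal with its tau-delayed
    copy, averaged over trials; tau = 0 gives the ordinary covariance."""
    mats = []
    for s in trials:
        n = s.shape[1]
        if tau == 0:
            mats.append(s @ s.T / n)
        else:
            C = s[:, tau:] @ s[:, :n - tau].T / (n - tau)
            mats.append(C + C.T)
    return np.mean(mats, axis=0)

def effective_cov(trials, b):
    """Class covariance of the b-filtered trials, expanded over lags as in
    Eq. (6): sum_tau (sum_j b(j) b(j+tau)) Sigma^tau, with b[0] = 1."""
    T = len(b)
    S = np.zeros_like(lagged_cov(trials, 0))
    for tau in range(T):
        coef = sum(b[j] * b[j + tau] for j in range(T - tau))
        S = S + coef * lagged_cov(trials, tau)
    return S
```

Plugging `effective_cov(trials_1, b)` and `effective_cov(trials_2, b)` into the CSP eigenproblem of Eqs. (2)–(3) gives the optimal w for that b; the outer search over the T − 1 free entries of b is then done by line search, as in the text.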
However, with increasing T the complexity of the frequency filter has to be controlled in order to avoid overfitting. This control is achieved by introducing a regularization term in the following way:

max_{b,b(1)=1} max_w w⊤( ∑_{τ=0,...,T−1} ( ∑_{j=1,...,T−τ} b(j)b(j+τ) ) Σ₁^τ ) w − C/T ||b||₁,
s.t. w⊤( ∑_{τ=0,...,T−1} ( ∑_{j=1,...,T−τ} b(j)b(j+τ) ) (Σ₁^τ + Σ₂^τ) ) w = 1.   (7)

Here C is a non-negative regularization constant, which has to be chosen, e.g., by cross-validation. Since a sparse solution for b is desired, we use the 1-norm in this formulation. With higher C we get sparser solutions for b until at one point the usual CSP approach remains, i.e., b(1) = 1, b(m) = 0 for m > 1. We call this approach the Common Sparse Spectral Spatial Pattern (CSSSP) algorithm.

Figure 2: The plot on the left shows one learned frequency filter (magnitude in dB over 5–25 Hz) for the subject whose spectra were shown in Fig. 1. The plot on the right shows the resulting spectra at channels Cz and C4 after applying this frequency filter. By this technique the classification error could be reduced from 12.9% to 4.3%.

6 Feature Extraction, Classification and Validation

6.1 Feature Extraction

After choosing all channels except the EOG and EMG channels and a few of the outermost channels of the cap, we apply a causal band-pass filter from 7–30 Hz to the data, which encompasses both the µ- and the β-rhythm. For classification we extract the interval 500–3500 ms after the presented visual stimulus. To these trials we apply the original CSP algorithm ([10], see Sec. 4), the extended CSSP algorithm ([11]), and the proposed CSSSP algorithm (see Sec. 5). For CSSP we choose the best τ by leave-one-out cross-validation on the training set. For CSSSP we present the results for different regularization constants C with fixed T = 16. Here we use 3 patterns per class, which leads to a 6-dimensional output signal.
As a measure of the amplitude in the specified frequency band, we calculate the logarithm of the variances of the spatio-temporally filtered output signals as feature vectors.

6.2 Classification and Validation

The presented preprocessing reduces the dimensionality of the feature vectors to six. Since we have 120 to 200 samples per class for each data set, there is no need for regularization when using linear classifiers. When testing non-linear classification methods on these features, we could not observe any statistically significant gain for the given experimental setup when compared to Linear Discriminant Analysis (LDA) (see also [22, 6, 23]). Therefore we choose LDA for classification. For validation purposes the (chronologically) first half of the data is used as training and the second half as test data.

7 Results

Fig. 2 shows one chosen frequency filter for the subject whose spectra are shown in Fig. 1, and the spectrum remaining after applying this filter. As expected, the filter detects that there is high discrimination in frequencies around 12 Hz, but only low discrimination in the frequency band around 8 Hz. Since the lower frequency peak is very predominant for this subject without having high discrimination power, a filter is learned which drastically decreases the amplitude in this band, whereas full power at 12 Hz is retained.

Figure 3: Each plot shows the validation error of one algorithm against another: in row 1, CSP (y-axis) vs. CSSSP (x-axis); in row 2, CSSP (y-axis) vs. CSSSP (x-axis). Across the columns the regularization parameter of CSSSP is varied over 0.1, 0.5, 1 and 5. In each plot a cross above the diagonal marks a dataset where CSSSP outperforms the other algorithm.
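The feature extraction and classification described in Sec. 6 amount to log band-power features followed by Fisher's discriminant. An illustrative sketch (names ours; no regularization, matching the statement that none is needed at these sample sizes):

```python
import numpy as np

def log_variance_features(trials, W):
    """Log-variance (log band power) of each spatially filtered output channel."""
    return np.array([np.log(np.var(W @ t, axis=1)) for t in trials])

def fit_lda(X1, X2):
    """Fisher LDA on two feature matrices (rows = trials).

    Returns (w, b) such that a trial x is assigned to class 1 iff w @ x + b > 0.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (len(X1) - 1) * np.cov(X1.T) + (len(X2) - 1) * np.cov(X2.T)
    w = np.linalg.solve(Sw, m1 - m2)
    b = -w @ (m1 + m2) / 2.0
    return w, b
```

With 6-dimensional features and 120–200 trials per class, the pooled within-class scatter Sw is well conditioned, which is why a plain linear solve suffices here.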
Applied to all datasets and all pairwise class combinations of the datasets, we get the results shown in Fig. 3. Only the results of those datasets are displayed whose classification accuracy exceeds 70% for at least one classifier. First of all, it is obvious that a small choice of the regularization constant is problematic, since the algorithm tends to overfit. For high values CSSSP tends towards the CSP performance, since the use of frequency filters is penalized too strongly. In between there is a range where CSSSP is better than CSP, and for some datasets the gain by CSSSP is huge. Compared to CSSP the situation is similar, namely that CSSSP outperforms CSSP in many cases and on average, but there are also a few cases where CSSP is better. An open issue is the choice of the parameter C. If we choose it constant at 1 for all datasets, the figure shows that CSSSP will typically outperform CSP. Compared to CSSP both cases appear, namely that CSSP is better than CSSSP and vice versa. A more refined way is to choose C individually for each dataset. One way to accomplish this is to perform cross-validations for a set of possible values of C and to select the C with minimum cross-validation error. We have done this, for example, for the dataset whose spectra are shown in Fig. 1. Here the value C = 0.3 is chosen on the training set. The classification error of CSSSP with this C is 4.3%, whereas CSP has 12.9% and CSSP 8.6% classification error.

8 Concluding discussion

In past BCI research the CSP algorithm has proven to be very successful in determining spatial filters which extract discriminative brain rhythms. However, performance can suffer when a non-discriminative brain rhythm with an overlapping frequency range interferes. The presented CSSSP algorithm successfully solves such problematic situations by optimizing a spectral filter simultaneously with the spatial filters.
The trade-off between flexibility of the estimated frequency filter and the danger of overfitting is accounted for by a sparsity constraint which is weighted by a regularization constant. The successfulness of the proposed algorithm when compared to the original CSP and to the CSSP algorithm was demonstrated on a corpus of 60 EEG data sets recorded from 22 different subjects. Acknowledgments We thank S. Lemm for helpful discussions. The studies were supported by BMBF-grants FKZ 01IBB02A and FKZ 01IBB02B, by the Deutsche Forschungsgemeinschaft (DFG), FOR 375/B1 and by the PASCAL Network of Excellence (EU # 506778). References [1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain-computer interfaces for communication and control”, Clin. Neurophysiol., 113: 767–791, 2002. [2] E. A. Curran and M. J. Stokes, “Learning to control brain activity: A review of the production and control of EEG components for driving brain-computer interface (BCI) systems”, Brain Cogn., 51: 326–336, 2003. [3] A. Kübler, B. Kotchoubey, J. Kaiser, J. Wolpaw, and N. Birbaumer, “Brain-Computer Communication: Unlocking the Locked In”, Psychol. Bull., 127(3): 358–375, 2001. [4] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, “A spelling device for the paralysed”, Nature, 398: 297–298, 1999. [5] G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, R. Ramoser, A. Schlögl, B. Obermaier, and M. Pregenzer, “Current Trends in Graz Brain-computer Interface (BCI)”, IEEE Trans. Rehab. Eng., 8(2): 216–219, 2000. [6] B. Blankertz, G. Curio, and K.-R. Müller, “Classifying Single Trial EEG: Towards Brain Computer Interfacing”, in: T. G. Diettrich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Inf. Proc. Systems (NIPS 01), vol. 14, 157–164, 2002. [7] L. Trejo, K. Wheeler, C. Jorgensen, R. Rosipal, S. Clanton, B. Matthews, A. Hibbs, R. Matthews, and M. 
Krupka, “Multimodal Neuroelectric Interface Development”, IEEE Trans. Neural Sys. Rehab. Eng., (11): 199–204, 2003. [8] L. Parra, C. Alvino, A. C. Tang, B. A. Pearlmutter, N. Yeung, A. Osman, and P. Sajda, “Linear spatial integration for single trial detection in encephalography”, NeuroImage, 7(1): 223–230, 2002. [9] W. D. Penny, S. J. Roberts, E. A. Curran, and M. J. Stokes, “EEG-Based Communication: A Pattern Recognition Approach”, IEEE Trans. Rehab. Eng., 8(2): 214–215, 2000. [10] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, “Optimal spatial filtering of single trial EEG during imagined hand movement”, IEEE Trans. Rehab. Eng., 8(4): 441–446, 2000. [11] S. Lemm, B. Blankertz, G. Curio, and K.-R. Müller, “Spatio-Spectral Filters for Improved Classification of Single Trial EEG”, IEEE Trans. Biomed. Eng., 52(9): 1541–1548, 2005. [12] H. Jasper and W. Penfield, “Electrocorticograms in man: Effects of voluntary movement upon the electrical activity of the precentral gyrus”, Arch. Psychiat. Nervenkr., 183: 163–174, 1949. [13] G. Pfurtscheller and F. H. L. da Silva, “Event-related EEG/MEG synchronization and desynchronization: basic principles”, Clin. Neurophysiol., 110(11): 1842–1857, 1999. [14] H. Jasper and H. Andrews, “Normal differentiation of occipital and precentral regions in man”, Arch. Neurol. Psychiat. (Chicago), 39: 96–115, 1938. [15] H. Berger, “Über das Elektroenkephalogramm des Menschen”, Arch. Psychiat. Nervenkr., 99(6): 555–574, 1933. [16] F. H. da Silva, T. H. van Lierop, C. F. Schrijer, and W. S. van Leeuwen, “Organization of thalamic and cortical alpha rhythm: Spectra and coherences”, Electroencephalogr. Clin. Neurophysiol., 35: 627–640, 1973. [17] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, “Combining Features for BCI”, in: S. Becker, S. Thrun, and K. Obermayer, eds., Advances in Neural Inf. Proc. Systems (NIPS 02), vol. 15, 1115–1122, 2003. [18] G. Dornhege, B. Blankertz, G. Curio, and K.-R. 
Müller, “Increase Information Transfer Rates in BCI by CSP Extension to Multi-class”, in: S. Thrun, L. Saul, and B. Schölkopf, eds., Advances in Neural Information Processing Systems, vol. 16, 733–740, MIT Press, Cambridge, MA, 2004. [19] K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, San Diego, 2nd edn., 1990. [20] Z. J. Koles and A. C. K. Soong, “EEG source localization: implementing the spatio-temporal decomposition approach”, Electroencephalogr. Clin. Neurophysiol., 107: 343–352, 1998. [21] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, “Boosting bit rates in non-invasive EEG singletrial classifications by feature combination and multi-class paradigms”, IEEE Trans. Biomed. Eng., 51(6): 993–1002, 2004. [22] K.-R. Müller, C. W. Anderson, and G. E. Birch, “Linear and Non-Linear Methods for Brain-Computer Interfaces”, IEEE Trans. Neural Sys. Rehab. Eng., 11(2): 165–169, 2003. [23] B. Blankertz, G. Dornhege, C. Schäfer, R. Krepki, J. Kohlmorgen, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio, “Boosting Bit Rates and Error Detection for the Classification of Fast-Paced Motor Commands Based on Single-Trial EEG Analysis”, IEEE Trans. Neural Sys. Rehab. Eng., 11(2): 127–131, 2003.
| 2005 | 90 | 2,911 |
Unbiased Estimator of Shape Parameter for Spiking Irregularities under Changing Environments Keiji Miura Kyoto University JST PRESTO Masato Okada University of Tokyo JST PRESTO RIKEN BSI Shun-ichi Amari RIKEN BSI Abstract We considered a gamma distribution of interspike intervals as a statistical model for neuronal spike generation. The model parameters consist of a time-dependent firing rate and a shape parameter that characterizes spiking irregularities of individual neurons. Because the environment changes with time, observed data are generated from the time-dependent firing rate, which is an unknown function. A statistical model with an unknown function is called a semiparametric model, which is one of the unsolved problems in statistics and is generally very difficult to solve. We used a novel method of estimating functions in information geometry to estimate the shape parameter without estimating the unknown function. We analytically obtained an optimal estimating function for the shape parameter independent of the functional form of the firing rate. This estimation is efficient without Fisher information loss and better than maximum likelihood estimation. 1 Introduction The firing patterns of cortical neurons look very noisy [1]. Consequently, probabilistic models are necessary to describe these patterns [2, 3, 4]. For example, Baker and Lemon showed that the firing patterns recorded from motor areas can be explained using a continuous-time rate-modulated gamma process [5]. Their model had a rate parameter, ξ, and a shape parameter, κ, that was related to spiking irregularity. ξ was assumed to be a function of time because it depended largely on the behavior of the monkey. κ was assumed to be unique to individual neurons and constant over time. The assumption that κ is unique to individual neurons is also supported by other studies [6, 7, 8]. However, this indirect support is not conclusive.
Therefore, we need to accurately estimate κ to make the assumption more reliable. If the assumption is correct, neurons may be identified by κ estimated from the spiking patterns, and κ may provide useful information about the function of a neuron. In other words, it may be possible to classify neurons according to functional firing patterns rather than static anatomical properties. Thus, it is very important to accurately estimate κ in the field of neuroscience. In reality, however, it is very difficult to estimate all the parameters in the model from the observed spike data. The reason for this is that the unknown function for the time-dependent firing rate, ξ(t), has infinite degrees of freedom. This kind of estimation problem is called the semiparametric model [9] and is one of the unsolved problems in statistics. Are there any ingenious methods of estimating κ accurately to overcome this difficulty? Ikeda pointed out that the problem we need to consider is the semiparametric model [10]. However, the problem remains unsolved. There is a method called estimating functions [11, 12] for semiparametric problems, and a general theory has been developed [13, 14, 15] from the viewpoint of information geometry [16, 17, 18]. However, the method of estimating functions cannot be applied to our problem in its original form. In this paper, we consider the semiparametric model suggested by Ikeda instead of the continuous-time rate-modulated gamma process. In this discrete-time rate-modulated model, the firing rate varies for each interspike interval. This model is a mixture model and can represent various types of interspike interval distributions by adjusting its weight function. The model can be analyzed by using the method of estimating functions for semiparametric models. Various attempts have been made to solve semiparametric models.
Neyman and Scott pointed out that the maximum likelihood method does not generally provide a consistent estimator when the number of parameters and observations are the same [19]. In fact, we show that maximum likelihood estimation for our problem is biased. Ritov and Bickel considered asymptotic attainability of the information bound purely mathematically [20, 21]. However, their results were not practical for application to our problem. Amari and Kawanabe showed a practical method of estimating finite parameters of interest without estimating an unknown function [15]. This is the method of estimating functions. If this method can be applied, κ can be estimated consistently independent of the functional form of a firing rate. In this paper, we show that the model we consider here is of the “exponential form” defined by Amari and Kawanabe [15]. However, an asymptotically unbiased estimating function does not exist unless multiple observations are given for each firing rate, ξ. We show that if multiple observations are given, the method of estimating functions can be applied. In that case, the estimating function of κ can be obtained analytically, and κ can be estimated consistently independent of the functional form of a firing rate. In general, estimation using estimating functions is not efficient. However, for our problem, this method yielded an optimal estimator in the sense of Fisher information [15]. That is, we obtained an efficient estimator. 2 Simple case We considered the following statistical model of interspike intervals proposed by Ikeda [10]. Interspike intervals are generated by a gamma distribution whose mean firing rate changes over time. The mean firing rate ξ at each observation is determined randomly according to an unknown probability distribution, k(ξ). The model is described as

p(T; κ, k(ξ)) = ∫ q(T; ξ, κ) k(ξ) dξ, (1)

where

q(T; ξ, κ) = (ξκ)^κ / Γ(κ) · T^(κ−1) e^(−ξκT) = e^(ξ(−κT) + (κ−1) log T − (−κ log(ξκ) + log Γ(κ))) ≡ e^(ξ s(T,κ) + r(T,κ) − ψ(κ,ξ)). (2)

Here, T denotes an interspike interval. We defined s, r, and ψ as

s(T, κ) = −κT, (3)
r(T, κ) = (κ − 1) log T, (4)
ψ(κ, ξ) = −κ log(ξκ) + log Γ(κ) (5)

to demonstrate that the model is of the exponential form defined by Amari and Kawanabe [15]. Note that this type of model is called a semiparametric model because it has both an unknown finite parameter, κ, and an unknown function, k(ξ). In this mixture model, {ξ(1), ξ(2), . . .} is an unknown sequence where ξ is independently and identically distributed according to a probability density function k(ξ). Then, the l-th observation T(l) is distributed according to q(T(l); ξ(l), κ). In effect, T is independently and identically distributed according to p(T; κ, k(ξ)). An estimating function is a function of κ whose zero-crossing provides an estimate of κ, analogous to the derivative with respect to κ of the log-likelihood function. Note that the zero-crossings of the derivatives of the log-likelihood function with respect to the parameters provide a maximum likelihood estimator. Let us calculate the estimating function following Amari and Kawanabe [15] to estimate κ without estimating k(ξ). They showed that for the exponential form of mixture distributions, the estimating function, uI, is given by the projection of the score function, u = ∂κ log p, as

uI(T, κ) = u − E[u|s] = (∂κs − E[∂κs|s]) · Eξ[ξ|s] + ∂κr − E[∂κr|s] = ∂κr − E[∂κr|s], (6)

where

Eξ[ξ|s] = ∫ ξ k(ξ) exp(ξs − ψ) dξ / ∫ k(ξ) exp(ξs − ψ) dξ. (7)

The relation

E[∂κs|s] = s/κ = −T = ∂κs (8)

holds because the number of random variables, T, and s are the same. For the same reason,

E[∂κr|s] = log T = ∂κr. (9)

Then,

uI = 0. (10)

This means that the set of estimating functions is an empty set. Therefore, we proved that no asymptotically unbiased estimating function of κ exists for the model. Two or more random variables may be needed. Let us consider the multivariate model described as

p(T1, . . . , Tn; κ, k(ξ1, . . . , ξn)) = ∫ ∏_{i=1}^{n} q(Ti; ξi, κ) k(ξ1, . . . , ξn) dξ.
(11)

Here, the number of random variables and s are also the same, and uI becomes an empty set. This result can be understood intuitively as follows. When the mean, µ, and variance, σ, of a normal distribution are estimated from a single observation, x, they are estimated as µ = x and σ = 0. Similarly, ξ and κ of a gamma distribution, q(T; ξ, κ), are estimated from a single observation, T, as ξ = 1/T and κ = ∞, corresponding to zero variance. Two or more observations are required to estimate κ. For the semiparametric model considered in this section, only one observation is given for each ξ. Two or more observations are needed for each ξ. 3 Cases with multiple observations for each ξ Next we consider the case where m observations are given for each ξ(l), which may be distributed according to k(ξ). Here, a consistent estimator of κ exists. Let {T} = {T1, . . . , Tm} be the m observations, which are generated from the same distribution specified by ξ and κ. We have N such observations {T(l)}, l = 1, . . . , N, with a common κ and different ξ(l). Thus, {T(l)1, . . . , T(l)m} are generated from the same firing rate ξ(l). Let us take one {T}. The probability model can be written as

p({T}; κ, k(ξ)) = ∫ ∏_{i=1}^{m} q(Ti; ξ, κ) k(ξ) dξ, (12)

where

∏_{i=1}^{m} q(Ti; ξ, κ) = ∏_{i=1}^{m} (ξκ)^κ / Γ(κ) · Ti^(κ−1) e^(−ξκTi) = e^(ξ(−κ Σ_{i=1}^{m} Ti) + (κ−1) Σ_{i=1}^{m} log Ti − (−mκ log(ξκ) + m log Γ(κ))) ≡ e^(ξ · s({T},κ) + r({T},κ) − ψ(κ,ξ)). (13)

We defined s, r, and ψ as

s({T}, κ) = −κ Σ_{i=1}^{m} Ti, (14)
r({T}, κ) = (κ − 1) Σ_{i=1}^{m} log Ti, (15)
ψ(κ, ξ) = −mκ log(ξκ) + m log Γ(κ). (16)

Then, the estimating function is given by

uI({T}, κ) = u − E[u|s] = (∂κs − E[∂κs|s]) · Eξ[ξ|s] + ∂κr − E[∂κr|s] = ∂κr − E[∂κr|s] = Σ_{i=1}^{m} log Ti − m E[log T1 | s], (17)

where we used

E[∂κs|s] = s/κ = ∂κs. (18)

To calculate the conditional expectation of log T1, let us use Bayes’s Theorem:

p(T|s) = p(T, s) / p(s). (19)

By transforming the random variables (T1, T2, T3, . . . , Tm) into (s, T2, T3, . . . , Tm), we have

p(s) = ∫ ∏_i q(Ti; ξ, κ) δ(s + κ Σ_{i=1}^{m} Ti) k(ξ) dξ dT = [∏_{i=1}^{m−1} B(iκ, κ)] (−s)^(mκ−1) / Γ(κ)^m ∫ ξ^(mκ) e^(sξ) k(ξ) dξ, (20)

where the beta function is defined as

B(x, y) = Γ(x)Γ(y) / Γ(x + y) = (x − 1)!(y − 1)! / (x + y − 1)!. (21)

Similarly, we have

E[log T1 | s] = [∫ log(T1) ∏_{i=1}^{m} q(Ti) δ(s + κ Σ_{i=1}^{m} Ti) k(ξ) dξ dT] / p(s) = log(−s/κ) − φ(mκ) + φ(κ), (22)

where the digamma function is defined as

φ(κ) = Γ′(κ) / Γ(κ). (23)

Note that E[log T1 | s] does not depend on the unknown function, k(ξ). Thus, we have

uI({T}, κ) = Σ_{i=1}^{m} log Ti − m log(Σ_{i=1}^{m} Ti) + m φ(mκ) − m φ(κ). (24)

The form of uI can be understood as follows. If we scale T as t = ξT, we have E[t] = 1. Then, we can show that uI does not depend on ξ, because

log T − E[log T | s] = log t − E[log t | s]. (25)

This implies that we can estimate κ without estimating ξ. The method of estimating functions only works for gamma distributions. It crucially depends on the fact that the estimating function is invariant under scaling of T. κ can be estimated consistently from N independent observations, {T(l)} = {T(l)1, . . . , T(l)m}, l = 1, . . . , N, as the value of κ that solves

Σ_{l=1}^{N} uI({T(l)}, κ̂) = 0. (26)

In fact, the expectation of uI is 0 independent of k(ξ):

E[uI] = ∫ (∫ ∏_{i=1}^{m} q(Ti; ξ, κ) uI dT) k(ξ) dξ = ∫ (∫ Eq[uI|s] p(s) ds) k(ξ) dξ = ∫ Eq[log t − E[log t|s] | s] p(s) ds = 0, (27)

where Eq denotes the expectation with respect to ∏_{i=1}^{m} q(ti; 1, κ). uI yields an efficient estimating function [15, 21]. An efficient estimator is one whose variance attains the Cramer-Rao lower bound asymptotically. Thus, there is no estimator of κ whose mean-square estimation error is smaller than that given by uI. As uI does not depend on k(ξ), it is the optimal estimating function whatever k(ξ) is, or whatever the sequence ξ(1), . . . , ξ(N) is.
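Equations (24) and (26) translate directly into a numerical estimator of κ. A stdlib-only sketch (the central-difference digamma via math.lgamma and the bisection routine are our implementation choices, not from the paper):

```python
import math
import random

def digamma(x, h=1e-5):
    # phi(x) = Gamma'(x)/Gamma(x), eq. (23); central difference of log-gamma.
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def u_I(groups, kappa):
    # Sum of eq. (24) over the N groups, as in eq. (26). Each group {T} shares
    # one (unknown) firing rate xi, which never enters the formula.
    total = 0.0
    for T in groups:
        m = len(T)
        total += (sum(math.log(t) for t in T) - m * math.log(sum(T))
                  + m * digamma(m * kappa) - m * digamma(kappa))
    return total

def estimate_kappa(groups, lo=1e-2, hi=100.0, iters=100):
    # u_I is decreasing in kappa (positive for small kappa, negative for
    # large), so its unique zero-crossing can be found by bisection.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if u_I(groups, mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Demo: true kappa = 4, m = 2 observations per group, and a firing rate xi
# that varies from group to group (gammavariate takes shape, scale).
random.seed(1)
groups = [[random.gammavariate(4.0, 1.0 / (xi * 4.0)) for _ in range(2)]
          for xi in (random.uniform(0.5, 5.0) for _ in range(3000))]
kappa_hat = estimate_kappa(groups)
```

Because the estimating function is scale-invariant (eq. (25)), the per-group rates never need to be estimated; kappa_hat lands near the true value of 4 regardless of how the rates were drawn.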
Figure 1: Biases of κ̂ for maximum likelihood estimation and the proposed method for m = 2 (x-axis: number of observations; y-axis: κ̂). The dotted line represents the true value, κ = 4. The maximum likelihood estimate is biased even when an infinite number of observations are given, while the estimating function is asymptotically unbiased.

The maximum likelihood estimation for this problem is given by

uMLE = Σ_{i=1}^{m} log Ti + m log ξ̂ + m log κ − m φ(κ), (28)

where

1/ξ̂ = (1/m) Σ_{i=1}^{m} Ti. (29)

uMLE is similar to uI but differs in its constant term. As a result, the maximum likelihood estimator κ̂ is biased (Figure 1). So far, we have assumed that the firing rates for the m observations are the same. Instead, let us consider a case where the firing rates have some relation. For example, consider the case where Eq[t1] = 2 Eq[t2]. The model can be written as

p(t1, t2; κ, k(ξ)) = ∫ q(t1; ξ, κ) q(t2; 2ξ, κ) k(ξ) dξ. (30)

This model can be derived from Eq. (12) by rescaling as T1 = t1 and T2 = 2t2. Note that q(2T; ξ, κ) = q(T; 2ξ, κ) because T always appears as ξT in q(T; ξ, κ). Thus, Eq. (12) includes various kinds of models. 4 General case Let us consider a general case where the firing rate changes stepwise. That is, {ξ1, . . . , ξn} is distributed according to k({ξ}) = k(ξ1, . . . , ξn) and m_a observations are given for each ξ_a. The model can be written as

p({T}; κ, k({ξ})) = ∫ ∏_{a=1}^{n} [∏_{i=1}^{m_a} q(T^(a)_i; ξ_a, κ)] k({ξ}) dξ1 dξ2 . . . dξn, (31)

where

∏_{a=1}^{n} ∏_{i=1}^{m_a} q(T^(a)_i; ξ_a, κ) = exp(Σ_{a=1}^{n} ξ_a (−κ Σ_{i=1}^{m_a} T^(a)_i) + (κ − 1) Σ_{a=1}^{n} Σ_{i=1}^{m_a} log T^(a)_i + Σ_{a=1}^{n} m_a κ log(ξ_a) + Σ_{a=1}^{n} m_a κ log κ − Σ_{a=1}^{n} m_a log Γ(κ)).
(32)

We defined s_a, r, and ψ as

s_a({T^(a)}, κ) = −κ Σ_{i=1}^{m_a} T^(a)_i, (33)
r({T}, κ) = (κ − 1) Σ_{a=1}^{n} Σ_{i=1}^{m_a} log T^(a)_i, (34)
ψ(κ, {ξ}) = −Σ_{a=1}^{n} m_a κ log(ξ_a) − Σ_{a=1}^{n} m_a κ log κ + Σ_{a=1}^{n} m_a log Γ(κ). (35)

Then,

uI({T}, κ) = u − E[u|s] = (∂κs − E[∂κs|s]) · E[ξ|s] + ∂κr − E[∂κr|s] = ∂κr − E[∂κr|s] = Σ_{a=1}^{n} { Σ_{i=1}^{m_a} log T^(a)_i − m_a log(Σ_{i=1}^{m_a} T^(a)_i) + m_a φ(m_a κ) − m_a φ(κ) }. (36)

Thus, κ is estimated with equal weight for every observation. Note that the conditional expectations can be calculated independently for each set of random variables. uI yields an efficient estimating function. As this does not depend on k({ξ}), uI is the optimal estimating function at any k({ξ}). There is no information loss. Note that k({ξ}) can include correlations among the ξ_a’s. Nevertheless, the result is very similar to that of the previous section. 5 Summary and discussion We estimated the shape parameter, κ, of the semiparametric model suggested by Ikeda without estimating the firing rate, ξ. The maximum likelihood estimator is not consistent for this problem because the number of nuisance parameters, ξ, increases with the number of observations, T. We showed that Ikeda’s model is of the exponential form defined by Amari and Kawanabe [15] and can be analyzed by the method of estimating functions for semiparametric models. We found that an estimating function does not exist unless multiple observations are given for each firing rate, ξ. If multiple observations are given, the method of estimating functions can be applied. In that case, the estimating function of κ can be obtained analytically, and κ can be estimated consistently independent of the functional form of the firing rate, k(ξ). In general, the estimating function is not efficient. However, this method provided an optimal estimator in the sense of Fisher information for our problem. That is, we obtained an efficient estimator. Acknowledgments We are grateful to K. Ikeda for his helpful discussions.
This work was supported in part by grants from the Japan Society for the Promotion of Science (Nos. 14084212 and 16500093). References [1] G. R. Holt, W. R. Softky, C. Koch, and R. J. Douglas, Comparison of discharge variability in vitro and in vivo in cat visual cortex neurons, J. Neurophysiol., Vol. 75, pp. 1806-14, 1996. [2] H. C. Tuckwell, Introduction to theoretical neurobiology: volume 2, nonlinear and stochastic theories, Cambridge University Press, Cambridge, 1988. [3] Y. Sakai, S. Funahashi, and S. Shinomoto, Temporally correlated inputs to leaky integrateand-fire models can reproduce spiking statistics of cortical neurons, Neural Netw., Vol. 12, pp. 1181-1190, 1999. [4] D. R. Cox and P. A. W. Lewis, The statistical analysis of series of events, Methuen, London, 1966. [5] S. N. Baker and R. N. Lemon, Precise spatiotemporal repeating patterns in monkey primary and supplementary motor areas occur at chance levels, J. Neurophysiol., Vol. 84, pp. 1770-80, 2000. [6] S. Shinomoto, K. Shima, and J. Tanji, Differences in spiking patterns among cortical neurons, Neural Comput.,Vol. 15, pp. 2823-42, 2003. [7] S. Shinomoto, Y. Miyazaki, H. Tamura, and I. Fujita, Regional and laminar differences in in vivo firing patterns of primate cortical neurons, J. Neurophysiol., in press. [8] S. Shinomoto, K. Miura, and S. Koyama, A measure of local variation of inter-spike intervals, Biosystems, Vol. 79, pp. 67-72, 2005. [9] J. Pfanzagl, Estimation in semiparametric models, Springer-Verlag, Berlin, 1990. [10] K. Ikeda, Information geometry of interspike intervals in spiking neurons, Neural Comput., in press. [11] V. P. Godambe, An optimum property of regular maximum likelihood estimation, Ann. Math. Statist., Vol. 31, pp. 1208-1211, 1960. [12] V. P. Godambe (ed.), Estimating functions, Oxford University Press, New York, 1991. [13] S. Amari, Dual connections on the Hilbert bundles of statistical models, In C. T. J. Dodson (ed.), Geometrization of statistical theory, pp. 
123-152, University of Lancaster Department of Mathematics, Lancaster, 1987. [14] S. Amari and M. Kumon, Estimation in the presence of infinitely many nuisance parameters geometry of estimating functions, Ann. Statist., Vol. 16, pp. 1044-1068, 1988. [15] S. Amari and M. Kawanabe, Information geometry of estimating functions in semi-parametric statistical models, Bernoulli, Vol. 3, pp. 29-54, 1997. [16] H. Nagaoka and S. Amari, Differential geometry of smooth families of probability distributions, Technical Report 82-7, University of Tokyo, 1982. [17] S. Amari and H. Nagaoka, Methods of information geometry, American Mathematical Society, Providence, RI, 2001. [18] S. Amari, Information geometry on hierarchy of probability distributions, IEEE Transactions on Information Theory, Vol. 47, pp. 1701-1711, 2001. [19] J. Neyman and E. L. Scott, Consistent estimates based on partially consistent observations, Econometrica, Vol. 32, pp. 1-32, 1948. [20] Y. Ritov and P. J. Bickel, Achieving information bounds in non and semiparametric models, Ann. Statist., Vol. 18, pp. 925-938, 1990. [21] P. J. Bickel, C. A. J. Klaassen, Y. Ritov, and J. A. Wellner, Efficient and adaptive estimation for semiparametric models, Johns Hopkins University Press, Baltimore, MD, 1993.
| 2005 | 91 | 2,912 |
Coarse sample complexity bounds for active learning Sanjoy Dasgupta UC San Diego dasgupta@cs.ucsd.edu Abstract We characterize the sample complexity of active learning problems in terms of a parameter which takes into account the distribution over the input space, the specific target hypothesis, and the desired accuracy. 1 Introduction The goal of active learning is to learn a classifier in a setting where data comes unlabeled, and any labels must be explicitly requested and paid for. The hope is that an accurate classifier can be found by buying just a few labels. So far the most encouraging theoretical results in this field are [7, 6], which show that if the hypothesis class is that of homogeneous (i.e. through the origin) linear separators, and the data is distributed uniformly over the unit sphere in Rd, and the labels correspond perfectly to one of the hypotheses (i.e. the separable case) then at most O(d log d/ǫ) labels are needed to learn a classifier with error less than ǫ. This is exponentially smaller than the usual Ω(d/ǫ) sample complexity of learning linear classifiers in a supervised setting. However, generalizing this result is non-trivial. For instance, if the hypothesis class is expanded to include non-homogeneous linear separators, then even in just two dimensions, under the same benign input distribution, we will see that there are some target hypotheses for which active learning does not help much, for which Ω(1/ǫ) labels are needed. In fact, in this example the label complexity of active learning depends heavily on the specific target hypothesis, and ranges from O(log 1/ǫ) to Ω(1/ǫ). In this paper, we consider arbitrary hypothesis classes H of VC dimension d < ∞, and learning problems which are separable. We characterize the sample complexity of active learning in terms of a parameter which takes into account: (1) the distribution P over the input space X; (2) the specific target hypothesis h∗∈H; and (3) the desired accuracy ǫ. 
Specifically, we notice that the distribution P induces a natural topology on H, and we define a splitting index ρ which captures the relevant local geometry of H in the vicinity of h∗, at scale ǫ. We show that this quantity fairly tightly describes the sample complexity of active learning: any active learning scheme requires Ω(1/ρ) labels, and there is a generic active learner which always uses at most Õ(d/ρ) labels (the Õ(·) notation hides factors polylogarithmic in d, 1/ǫ, 1/δ, and 1/τ). This ρ is always at least ǫ; if it is ǫ we just get the usual sample complexity of supervised learning. But sometimes ρ is a constant, and in such instances active learning gives an exponential improvement in the number of labels needed. We look at various hypothesis classes and derive splitting indices for target hypotheses at different levels of accuracy. For homogeneous linear separators and the uniform input distribution, we easily find ρ to be a constant, perhaps the most direct proof yet of the efficacy of active learning in this case. Most proofs have been omitted for want of space; the full details, along with more examples, can be found at [5]. 2 Sample complexity bounds 2.1 Motivating examples Linear separators in R1 Our first example is taken from [3, 4]. Suppose the data lie on the real line, and the classifiers are simple thresholding functions, H = {hw : w ∈ R}, where hw(x) = 1 if x ≥ w and hw(x) = 0 if x < w. VC theory tells us that if the underlying distribution P is separable (can be classified perfectly by some hypothesis in H), then in order to achieve an error rate less than ǫ, it is enough to draw m = O(1/ǫ) random labeled examples from P, and to return any classifier consistent with them. But suppose we instead draw m unlabeled samples from P. If we lay these points down on the line, their hidden labels are a sequence of 0’s followed by a sequence of 1’s, and the goal is to discover the point w at which the transition occurs.
This can be done with a binary search which asks for just log m = O(log 1/ǫ) labels. Thus, in this case active learning gives an exponential improvement in the number of labels needed. Can we always achieve a label complexity proportional to log 1/ǫ rather than 1/ǫ? A natural next step is to consider linear separators in two dimensions. Linear separators in R2 Let H be the hypothesis class of linear separators in R2, and suppose the input distribution P is some density supported on the perimeter of the unit circle. It turns out that the positive results of the one-dimensional case do not generalize: there are some target hypotheses in H for which Ω(1/ǫ) labels are needed to find a classifier with error rate less than ǫ, no matter what active learning scheme is used. To see this, consider the following possible target hypotheses (Figure 1, left): h0, for which all points are positive; and hi (1 ≤ i ≤ 1/ǫ), for which all points are positive except for a small slice Bi of probability mass ǫ. The slices Bi are explicitly chosen to be disjoint, with the result that Ω(1/ǫ) labels are needed to distinguish between these hypotheses. For instance, suppose nature chooses a target hypothesis at random from among the hi, 1 ≤ i ≤ 1/ǫ. Then, to identify this target with probability at least 1/2, it is necessary to query points in at least (about) half the Bi’s. Thus for these particular target hypotheses, active learning offers no improvement in sample complexity. What about other target hypotheses in H, for instance those in which the positive and negative regions are most evenly balanced? Consider the following active learning scheme: Figure 1: Left: The data lie on the circumference of a circle. Each Bi is an arc of probability mass ǫ. Right: The same distribution P, lifted to 3-d, and with trace amounts of another distribution P′ mixed in. 1. Draw a pool of O(1/ǫ) unlabeled points. 2.
From this pool, choose query points at random until at least one positive and one negative point have been found. (If all points have been queried, then halt.) 3. Apply binary search to find the two boundaries between positive and negative on the perimeter of the circle. For any h ∈ H, define i(h) = min{positive mass of h, negative mass of h}. It is not hard to see that when the target hypothesis is h, step (2) asks for O(1/i(h)) labels (with probability at least 9/10, say) and step (3) asks for O(log 1/ǫ) labels. Thus even within this simple hypothesis class, the label complexity of active learning can run anywhere from O(log 1/ǫ) to Ω(1/ǫ), depending on the specific target hypothesis. Linear separators in R3 In our two previous examples, the amount of unlabeled data needed was O(1/ǫ), exactly the usual sample complexity of supervised learning. We next turn to a case in which it is helpful to have significantly more unlabeled data than this. Consider the distribution of the previous 2-d example: for concreteness, fix P to be uniform over the unit circle in R2. Now lift it into three dimensions by adding to each point x = (x1, x2) a third coordinate x3 = 1. Let H consist of homogeneous linear separators in R3. Clearly the bad cases of the previous example persist. Suppose, now, that a trace amount τ of a second distribution P′ is mixed in with P (Figure 1, right), where P′ is uniform on the circle {x1^2 + x2^2 = 1, x3 = 0}. The “bad” linear separators in H cut off just a small portion of P but nonetheless divide P′ perfectly in half. This permits a three-stage algorithm: (1) using binary search on points from P′, approximately identify the two places at which the target hypothesis h∗ cuts P′; (2) use this to identify a positive and negative point of P (look at the midpoints of the positive and negative intervals in P′); (3) do binary search on points from P. Steps (1) and (3) each use just O(log 1/ǫ) labels.
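The binary search underlying the one-dimensional example and step (3) above can be made concrete. A stdlib-only sketch (the pool size and tolerance below are illustrative choices of ours):

```python
import random

def active_learn_threshold(pool, oracle):
    # Sorted by x, the hidden labels are 0s followed by 1s (h_w(x) = 1 iff
    # x >= w), so binary search for the first positive point needs only
    # O(log m) label queries instead of labeling the whole pool.
    xs = sorted(pool)
    lo, hi = 0, len(xs)  # first index with label 1 lies in [lo, hi]
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(xs[mid]) == 1:
            hi = mid
        else:
            lo = mid + 1
    w_hat = xs[lo] if lo < len(xs) else None  # None: every point is negative
    return w_hat, queries

random.seed(0)
w_true = 0.3
pool = [random.random() for _ in range(1000)]
w_hat, queries = active_learn_threshold(pool, lambda x: 1 if x >= w_true else 0)
```

With a pool of 1000 uniform points, this asks for at most 10 labels, while passive learning would pay for O(1/ǫ) labels to reach comparable accuracy.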
This O(log 1/ǫ) label complexity is made possible by the presence of P′ and is only achievable if the amount of unlabeled data is Ω(1/τ), which could potentially be enormous. With less unlabeled data, the usual Ω(1/ǫ) label complexity applies. Figure 2: (a) x is a cut through H; (b) splitting edges. 2.2 Basic definitions The sample complexity of supervised learning is commonly expressed as a function of the error rate ǫ and the underlying distribution P. For active learning, the previous three examples demonstrate that it is also important to take into account the target hypothesis and the amount of unlabeled data. The main goal of this paper is to present one particular formalism by which this can be accomplished. Let X be an instance space with underlying distribution P. Let H be the hypothesis class, a set of functions from X to {0, 1} whose VC dimension is d < ∞. We are operating in a non-Bayesian setting, so we are not given a measure (prior) on the space H. In the absence of a measure, there is no natural notion of the “volume” of the current version space. However, the distribution P does induce a natural distance function on H, a pseudometric: d(h, h′) = P{x : h(x) ≠ h′(x)}. We can likewise define the notion of a neighborhood: B(h, r) = {h′ ∈ H : d(h, h′) ≤ r}. We will be dealing with a separable learning scenario, in which all labels correspond perfectly to some concept h∗ ∈ H, and the goal is to find h ∈ H such that d(h∗, h) ≤ ǫ. To do this, it is sufficient to whittle down the version space to the point where it has diameter at most ǫ, and to then return any of the remaining hypotheses. Likewise, if the diameter of the current version space is more than ǫ, then any hypothesis chosen from it will have error more than ǫ/2 with respect to the worst-case target. Thus, in a non-Bayesian setting, active learning is about reducing the diameter of the version space.
If our current version space is S ⊂ H, how can we quantify the amount by which a point x ∈ X reduces its diameter? Let H_x^+ denote the classifiers that assign x a value of 1, H_x^+ = {h ∈ H : h(x) = 1}, and let H_x^− be the remainder, which assign it a value of 0. We can think of x as a cut through hypothesis space; see Figure 2(a). In this example, x is clearly helpful, but it doesn't reduce the diameter of S. And we cannot say that it reduces the average distance between hypotheses, since again there is no measure on H. What x seems to be doing is to reduce the diameter in a certain "direction". Is there some notion in arbitrary metric spaces which captures this intuition?

Consider any finite Q ⊂ H × H. We will think of an element (h, h′) ∈ Q as an edge between vertices h and h′. For us, each such edge will represent a pair of hypotheses which need to be distinguished from one another: that is, they are relatively far apart, so there is no way to achieve our target accuracy if both of them remain in the version space. We would hope that for any finite set of edges Q, there are queries that will remove a substantial fraction of them. To this end, a point x ∈ X is said to ρ-split Q if its label is guaranteed to reduce the number of edges by a fraction ρ > 0, that is, if

  max{|Q ∩ (H_x^+ × H_x^+)|, |Q ∩ (H_x^− × H_x^−)|} ≤ (1 − ρ)|Q|.

For instance, in Figure 2(b), the edges are 3/5-split by x. If our target accuracy is ε, we only really care about edges of length more than ε. So define Q_ε = {(h, h′) ∈ Q : d(h, h′) > ε}. Finally, we say that a subset of hypotheses S ⊂ H is (ρ, ε, τ)-splittable if for all finite edge-sets Q ⊂ S × S,

  P{x : x ρ-splits Q_ε} ≥ τ.

Paraphrasing, at least a τ fraction of the distribution P is useful for splitting S.² This τ gives a sense of how many unlabeled samples are needed. If τ is minuscule, then there are good points to query, but these will emerge only in an enormous pool of unlabeled data.
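The splitting condition is easy to check mechanically for a finite edge set: a point x ρ-splits Q exactly when neither of its two possible answers can leave more than a (1 − ρ) fraction of the edges intact. A sketch, with hypotheses represented as prediction functions (the threshold class and variable names are illustrative):

```python
def rho_split(x, Q, rho):
    """True if querying x is guaranteed to remove a rho-fraction of the edges in Q.
    An edge (h, h') survives the answer '1' only if both endpoints predict 1 on x,
    and survives the answer '0' only if both predict 0."""
    survive_pos = sum(1 for (h, hp) in Q if h(x) == 1 and hp(x) == 1)
    survive_neg = sum(1 for (h, hp) in Q if h(x) == 0 and hp(x) == 0)
    return max(survive_pos, survive_neg) <= (1 - rho) * len(Q)

# Thresholds h_w(x) = 1(x >= w); each edge pairs two far-apart thresholds.
h = lambda w: (lambda x: int(x >= w))
Q = [(h(0.1), h(0.9)), (h(0.2), h(0.8)), (h(0.3), h(0.7)),
     (h(0.4), h(0.6)), (h(0.45), h(0.55))]
print(rho_split(0.5, Q, 1.0))   # x = 0.5 separates every pair: True
print(rho_split(0.05, Q, 0.2))  # every edge survives the answer '0': False
```

The point x = 0.5 lies between the two endpoints of every edge, so either answer eliminates all of them, while x = 0.05 lies below all thresholds and can eliminate nothing on a negative answer.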
It will soon transpire that the parameters ρ, τ play roughly the following roles:

  # labels needed ∝ 1/ρ,  # of unlabeled points needed ∝ 1/τ.

A first step towards understanding them is to establish a trivial lower bound on ρ.

Lemma 1 Pick any 0 < α, ε < 1, and any set S. Then S is ((1 − α)ε, ε, αε)-splittable.

Proof. Pick any finite edge-set Q ⊂ S × S. Let Z denote the number of edges of Q_ε cut by a point x chosen at random from P. Since the edges have length at least ε, this x has at least an ε chance of cutting any of them, whereby EZ ≥ ε|Q_ε|. Now,

  ε|Q_ε| ≤ EZ ≤ P(Z ≥ (1 − α)ε|Q_ε|) · |Q_ε| + (1 − α)ε|Q_ε|,

which after rearrangement becomes P(Z ≥ (1 − α)ε|Q_ε|) ≥ αε, as claimed.

Thus, ρ is always Ω(ε); but of course, we hope for a much larger value. We will now see that the splitting index roughly characterizes the sample complexity of active learning.

2.3 Lower bound

We start by showing that if some region of the hypothesis space has a low splitting index, then it must contain hypotheses which are not conducive to active learning.

Theorem 2 Fix a hypothesis space H and distribution P. Suppose that for some ρ, ε < 1 and τ < 1/2, S ⊂ H is not (ρ, ε, τ)-splittable. Then any active learner which achieves an accuracy of ε on all target hypotheses in S, with confidence > 3/4 (over the random sampling of data), either needs ≥ 1/τ unlabeled samples or ≥ 1/ρ labels.

Proof. Let Q_ε be the set of edges of length > ε which defies splittability, with vertices V = {h : (h, h′) ∈ Q_ε for some h′ ∈ H}. We'll show that in order to distinguish between hypotheses in V, either 1/τ unlabeled samples or 1/ρ queries are needed. So pick fewer than 1/τ unlabeled samples. With probability at least (1 − τ)^{1/τ} ≥ 1/4, none of these points ρ-splits Q_ε; put differently, each of these potential queries has a bad outcome (+ or −) in which at most ρ|Q_ε| edges are eliminated. In this case there must be a target hypothesis in V for which at least 1/ρ labels are required.
In our examples, we will apply this lower bound through the following simple corollary.

  Let S_0 be an ε_0-cover of H
  for t = 1, 2, ..., T = lg(2/ε):
      S_t = split(S_{t−1}, 1/2^t)
  return any h ∈ S_T

  function split(S, ∆):
      Let Q_0 = {(h, h′) ∈ S × S : d(h, h′) > ∆}
      Repeat for t = 0, 1, 2, ...:
          Draw m unlabeled points x_{t1}, ..., x_{tm}
          Query the x_{ti} which maximally splits Q_t
          Let Q_{t+1} be the remaining edges
      until Q_{t+1} = ∅
      return remaining hypotheses in S

Figure 3: A generic active learner.

Corollary 3 Suppose that in some neighborhood B(h_0, ∆), there are hypotheses h_1, ..., h_N such that: (1) d(h_0, h_i) > ε for all i; and (2) the "disagree sets" {x : h_0(x) ≠ h_i(x)} are disjoint for different i. Then for any τ and any ρ > 1/N, the set B(h_0, ∆) is not (ρ, ε, τ)-splittable. Any active learning scheme which achieves an accuracy of ε on all of B(h_0, ∆) must use at least N labels for some of the target hypotheses, no matter how much unlabeled data is available.

In this case, the distance metric on h_0, h_1, ..., h_N can accurately be depicted as a star with h_0 at the center and with spokes leading to each h_i. Each query only cuts off one spoke, so N queries are needed.

2.4 Upper bound

We now show a loosely matching upper bound on sample complexity, via an algorithm (Figure 3) which repeatedly halves the diameter of the remaining version space. For some ε_0 less than half the target error rate ε, it starts with an ε_0-cover of H: a set of hypotheses S_0 ⊂ H such that any h ∈ H is within distance ε_0 of S_0. It is well known that it is possible to find such an S_0 of size ≤ 2(2e/ε_0 · ln(2e/ε_0))^d [9] (Theorem 5). The ε_0-cover serves as a surrogate for the hypothesis class – for instance, the final hypothesis is chosen from it.

² Whenever an edge of length l ≥ ε can be constructed in S, then by taking Q to consist solely of this edge, we see that τ ≤ l. Thus we typically expect τ to be at most about ε, although of course it might be a good deal smaller than this.
The algorithm is hopelessly intractable and is meant only to demonstrate the following upper bound.

Theorem 4 Let the target hypothesis be some h* ∈ H. Pick any target accuracy ε > 0 and confidence level δ > 0. Suppose B(h*, 4∆) is (ρ, ∆, τ)-splittable for all ∆ ≥ ε/2. Then there is an appropriate choice of ε_0 and m for which, with probability at least 1 − δ, the algorithm will draw Õ((1/ε) + (d/ρτ)) unlabeled points, make Õ(d/ρ) queries, and return a hypothesis with error at most ε.

This theorem makes it possible to derive label complexity bounds which are fine-tuned to the specific target hypothesis. At the same time, it is extremely loose in that no attempt has been made to optimize logarithmic factors.

3 Examples

3.1 Simple boundaries on the line

Returning to our first example, let X = R and H = {h_w : w ∈ R}, where each h_w is a threshold function h_w(x) = 1(x ≥ w). Suppose P is the underlying distribution on X; for simplicity we'll assume it's a density, although the discussion can easily be generalized. The distance measure P induces on H is

  d(h_w, h_{w′}) = P{x : h_w(x) ≠ h_{w′}(x)} = P{x : w ≤ x < w′} = P[w, w′)

(assuming w′ ≥ w). Pick any accuracy ε > 0 and consider any finite set of edges Q = {(h_{w_i}, h_{w′_i}) : i = 1, ..., n}, where without loss of generality the w_i are in nondecreasing order, and where each edge has length greater than ε: P[w_i, w′_i) > ε. Pick w so that P[w_{n/2}, w) = ε. It is easy to see that any x ∈ [w_{n/2}, w) must eliminate at least half the edges in Q. Therefore, H is (ρ = 1/2, ε, ε)-splittable for any ε > 0. This echoes the simple fact that active-learning H is just a binary search.

3.2 Intervals on the line

The next case we consider is almost identical to our earlier example of 2-d linear separators (and the results carry over to that example, within constant factors). The hypotheses correspond to intervals on the real line: X = R and H = {h_{a,b} : a, b ∈ R}, where h_{a,b}(x) = 1(a ≤ x ≤ b). Once again assume P is a density.
The distance measure it induces is

  d(h_{a,b}, h_{a′,b′}) = P{x : x ∈ [a, b] ∪ [a′, b′], x ∉ [a, b] ∩ [a′, b′]} = P([a, b] ∆ [a′, b′]),

where S ∆ T denotes the symmetric difference (S ∪ T) \ (S ∩ T). Even in this very simple class, some hypotheses are much easier to active-learn than others.

Hypotheses not amenable to active learning. Divide the real line into 1/ε disjoint intervals, each with probability mass ε, and let {h_i : i = 1, ..., 1/ε} denote the hypotheses taking value 1 on the corresponding intervals. Let h_0 be the everywhere-zero concept. Then these h_i satisfy the conditions of Corollary 3; their star-shaped configuration forces a ρ-value of ε, and active learning doesn't help at all in choosing amongst them.

Hypotheses amenable to active learning. The bad hypotheses are the ones whose intervals have small probability mass. We'll now see that larger concepts are not so bad; in particular, for any h whose interval has mass > 4ε, B(h, 4ε) is (ρ = Ω(1), ε, Ω(ε))-splittable.

Pick any ε > 0 and any h_{a,b} such that P[a, b] = r > 4ε. Consider a set of edges Q whose endpoints are in B(h_{a,b}, 4ε) and which all have length > ε. Any concept in B(h_{a,b}, 4ε) (more precisely, its interval) must lie within an outer box extending a mass of 4ε beyond [a, b] on each side, and must contain an inner box lying a mass of 4ε inside [a, b] on each side (this inner box might be empty).

[Figure: nested inner and outer boxes around the interval [a, b] of mass r; all lengths denote probability masses, with margins of 4ε.]

Any edge (h_{a′,b′}, h_{a″,b″}) ∈ Q has length > ε, so [a′, b′] ∆ [a″, b″] (either a single interval or a union of two intervals) has total mass > ε and lies between the inner and outer boxes. Now pick x at random from the distribution P restricted to the space between the two boxes. This space has mass at most 16ε and at least 4ε, of which at least ε is occupied by [a′, b′] ∆ [a″, b″]. Therefore x separates h_{a′,b′} from h_{a″,b″} with probability ≥ 1/16. Now let's look at all of Q. The expected number of edges split by our x is at least |Q|/16, and therefore the probability that more than |Q|/32 edges are split is at least 1/32.
So P{x : x (1/32)-splits Q} ≥ 4ε/32 = ε/8. To summarize, for any hypothesis h_{a,b}, let i(h_{a,b}) = P[a, b] denote the probability mass of its interval. Then for any h ∈ H and any ε < i(h)/4, the set B(h, 4ε) is (1/32, ε, ε/8)-splittable. In short, once the version space is whittled down to B(h, i(h)/4), efficient active learning is possible. And the initial phase of getting to B(h, i(h)/4) can be managed by random sampling, using Õ(1/i(h)) labels: not too bad when i(h) is large.

3.3 Linear separators under the uniform distribution

The most encouraging positive result for active learning to date has been for learning homogeneous (through the origin) linear separators with data drawn uniformly from the surface of the unit sphere in R^d. The splitting indices for this case [5] bring this out immediately:

Theorem 5 For any h ∈ H and any ε ≤ 1/(32π²√d), B(h, 4ε) is (1/8, ε, Ω(ε/√d))-splittable.

4 Related work and open problems

There has been a lot of work on a related model in which the points to be queried are synthetically constructed, rather than chosen from unlabeled data [1]. The expanded role of P in our model makes it substantially different, although a few intuitions do carry over – for instance, Corollary 3 generalizes the notion of teaching dimension [8]. We have already discussed [7, 4, 6]. One other technique which seems useful for active learning is to look at the unlabeled data and then place bets on certain target hypotheses, for instance the ones with large margin. This insight – nicely formulated in [2, 10] – is not specific to active learning and is orthogonal to the search issues considered in this paper.

In all the positive examples in this paper, a random data point which intersects the version space has a good chance of Ω(1)-splitting it. This permits a naive active learning strategy, also suggested in [3]: just pick a random point whose label you are not yet sure of.
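For the threshold class of Section 3.1, this naive strategy is easy to sketch. The version space is an interval of thresholds, and a point is "unsure" when it lies strictly inside it; the following is a sketch under the assumption of a noiseless oracle (all names are illustrative):

```python
import random

def naive_active_learner(pool, oracle, n_queries):
    """For thresholds h_w(x) = 1(x >= w): keep querying a random pool point whose
    label the remaining version space does not determine. Returns the final
    uncertainty interval (lo, hi]; every threshold consistent with the queried
    labels lies in it."""
    lo = min(pool) - 1.0   # largest point known (or assumed) to be labeled 0
    hi = max(pool) + 1.0   # smallest point known (or assumed) to be labeled 1
    for _ in range(n_queries):
        unsure = [x for x in pool if lo < x < hi]   # points the version space disagrees on
        if not unsure:
            break
        x = random.choice(unsure)
        if oracle(x):      # x labeled 1 => target threshold is at most x
            hi = min(hi, x)
        else:              # x labeled 0 => target threshold is above x
            lo = max(lo, x)
    return lo, hi

random.seed(0)
pool = [random.random() for _ in range(2000)]
w_star = 0.37
lo, hi = naive_active_learner(pool, lambda x: x >= w_star, n_queries=30)
print(lo < w_star <= hi, hi - lo)  # interval shrinks rapidly around the target
```

Each query lands at a roughly uniform position inside the current uncertainty interval, so the interval's mass shrinks geometrically in expectation, echoing the binary-search behavior, though without binary search's worst-case guarantee.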
On what kinds of problems will this work, and what are prototypical cases where more intelligent querying is needed?

Acknowledgements. I'm grateful to Yoav Freund for introducing me to this field; to Peter Bartlett, John Langford, Adam Kalai and Claire Monteleoni for helpful discussions; and to the anonymous NIPS reviewers for their detailed and perceptive comments.

References
[1] D. Angluin. Queries revisited. ALT, 2001.
[2] M.-F. Balcan and A. Blum. A PAC-style model for learning from labeled and unlabeled data. Eighteenth Annual Conference on Learning Theory, 2005.
[3] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.
[4] S. Dasgupta. Analysis of a greedy active learning strategy. NIPS, 2004.
[5] S. Dasgupta. Full version of this paper at www.cs.ucsd.edu/~dasgupta/papers/sample.ps.
[6] S. Dasgupta, A. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. Eighteenth Annual Conference on Learning Theory, 2005.
[7] Y. Freund, S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning Journal, 28:133–168, 1997.
[8] S. Goldman and M. Kearns. On the complexity of teaching. Journal of Computer and System Sciences, 50(1):20–31, 1995.
[9] D. Haussler. Decision-theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100(1):78–150, 1992.
[10] J. Shawe-Taylor, P. Bartlett, R. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44(5):1926–1940, 1998.
A PAC-Bayes approach to the Set Covering Machine

François Laviolette, Mario Marchand
IFT-GLO, Université Laval
Sainte-Foy (QC) Canada, G1K-7P4
given name.surname@ift.ulaval.ca

Mohak Shah
SITE, University of Ottawa
Ottawa, Ont. Canada, K1N-6N5
mshah@site.uottawa.ca

Abstract

We design a new learning algorithm for the Set Covering Machine from a PAC-Bayes perspective and propose a PAC-Bayes risk bound which is minimized for classifiers achieving a non-trivial margin-sparsity trade-off.

1 Introduction

Learning algorithms try to produce classifiers with small prediction error by trying to optimize some function that can be computed from a training set of examples and a classifier. We currently do not know exactly what function should be optimized, but several forms have been proposed. At one end of the spectrum, we have the set covering machine (SCM), proposed by Marchand and Shawe-Taylor (2002), that tries to find the sparsest classifier making few training errors. At the other end, we have the support vector machine (SVM), proposed by Boser et al. (1992), that tries to find the maximum soft-margin separating hyperplane on the training data. Since both of these learning machines can produce classifiers having small prediction error, we have recently investigated (Laviolette et al., 2005) whether better classifiers could be found by learning algorithms that try to optimize a non-trivial function that depends on both the sparsity of a classifier and the magnitude of its separating margin. Our main result was a general data-compression risk bound that applies to any algorithm producing classifiers represented by two complementary sources of information: a subset of the training set, called the compression set, and a message string of additional information. In addition, we proposed a new algorithm for the SCM where the information string was used to encode radius values for data-dependent balls and, consequently, the location of the decision surface of the classifier.
Since a small message string is sufficient when large regions of equally good radius values exist for balls, the data-compression risk bound applied to this version of the SCM exhibits, indirectly, a non-trivial margin-sparsity trade-off. However, this version of the SCM currently suffers from the fact that the radius values used in the final classifier depend on an a priori chosen distance scale R. In this paper, we use a new PAC-Bayes approach that applies to the sample-compression setting and present a new learning algorithm for the SCM that does not suffer from this scaling problem. Moreover, we propose a risk bound that depends more explicitly on the margin and which is also minimized by classifiers achieving a non-trivial margin-sparsity trade-off.

2 Definitions

We consider binary classification problems where the input space X consists of an arbitrary subset of R^n and the output space Y = {0, 1}. An example z := (x, y) is an input-output pair where x ∈ X and y ∈ Y. In the probably approximately correct (PAC) setting, we assume that each example z is generated independently according to the same (but unknown) distribution D. The (true) risk R(f) of a classifier f : X → Y is defined to be the probability that f misclassifies z on a random draw according to D:

  R(f) := Pr_{(x,y)∼D}(f(x) ≠ y) = E_{(x,y)∼D} I(f(x) ≠ y),

where I(a) = 1 if predicate a is true and 0 otherwise. Given a training set S = (z_1, ..., z_m) of m examples, the task of a learning algorithm is to construct a classifier with the smallest possible risk without any information about D. To achieve this goal, the learner can compute the empirical risk R_S(f) of any given classifier f according to:

  R_S(f) := (1/m) Σ_{i=1}^m I(f(x_i) ≠ y_i) = E_{(x,y)∼S} I(f(x) ≠ y).

We focus on learning algorithms that construct a conjunction (or disjunction) of features called data-dependent balls from a training set. Each data-dependent ball is defined by a center and a radius value.
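The empirical risk R_S(f) above is just the fraction of training examples a classifier gets wrong, which is one line of code. A minimal sketch (the toy training set and threshold classifiers are illustrative):

```python
def empirical_risk(f, S):
    """R_S(f) = (1/m) * sum_i I(f(x_i) != y_i) over a training set S = [(x, y), ...]."""
    return sum(f(x) != y for (x, y) in S) / len(S)

S = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
f = lambda x: int(x >= 0.5)   # consistent with S
g = lambda x: int(x >= 0.7)   # misclassifies the example at 0.6
print(empirical_risk(f, S))   # → 0.0
print(empirical_risk(g, S))   # → 0.25
```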
The center is an input example x_i chosen from the training set S. For any test example x, the output of a ball h of radius ρ, centered on example x_i, is given by

  h_{i,ρ}(x) := y_i if d(x, x_i) ≤ ρ, and ȳ_i otherwise,

where ȳ_i denotes the boolean complement of y_i and d(x, x_i) denotes the distance between the two points. Note that any metric can be used for the distance here.

To specify a conjunction of balls we first need to list all the examples that participate as centers for the balls in the conjunction. For this purpose, we use a vector i := (i_1, ..., i_|i|) of indices i_j ∈ {1, ..., m} such that i_1 < i_2 < ... < i_|i|, where |i| is the number of indices present in i (and thus the number of balls in the conjunction). To complete the specification of a conjunction of balls, we need a vector ρ = (ρ_{i_1}, ρ_{i_2}, ..., ρ_{i_|i|}) of radius values, where i_j ∈ {1, ..., m} for j ∈ {1, ..., |i|}. On any input example x, the output C_{i,ρ}(x) of a conjunction of balls is given by:

  C_{i,ρ}(x) := 1 if h_{j,ρ_j}(x) = 1 for all j ∈ i, and 0 if there exists j ∈ i with h_{j,ρ_j}(x) = 0.

Finally, any algorithm that builds a conjunction can be used to build a disjunction just by exchanging the roles of the positive and negative labelled examples. Due to lack of space, we describe here only the case of a conjunction.

3 A PAC-Bayes Risk Bound

The PAC-Bayes approach, initiated by McAllester (1999a), aims at providing PAC guarantees to "Bayesian" learning algorithms. These algorithms are specified in terms of a prior distribution P over a space of classifiers that characterizes our prior belief about good classifiers (before the observation of the data) and a posterior distribution Q (over the same space of classifiers) that takes into account the additional information provided by the training data. A remarkable result that came out of this line of research, known as the "PAC-Bayes theorem", provides a tight upper bound on the risk of a stochastic classifier called the Gibbs classifier.
Given an input example x, the label G_Q(x) assigned to x by the Gibbs classifier is defined by the following process. We first choose a classifier h according to the posterior distribution Q and then use h to assign the label h(x) to x. The PAC-Bayes theorem was first proposed by McAllester (1999b) and later improved by others (see Langford (2005) for a survey). However, for all these versions of the PAC-Bayes theorem, the prior P must be defined without reference to the training data. Consequently, these theorems cannot be applied to the sample-compression setting where classifiers are partly described by a subset of the training data (as in the case of the SCM).

In the sample-compression setting, each classifier is described by a subset S_i of the training data, called the compression set, and a message string σ that represents the additional information needed to obtain a classifier. In other words, in this setting, there exists a reconstruction function R that outputs a classifier R(σ, S_i) when given an arbitrary compression set S_i and a message string σ. Given a training set S, the compression set S_i ⊆ S is defined by a vector of indices i := (i_1, ..., i_|i|) that points to individual examples in S. For the case of a conjunction of balls, each j ∈ i will point to a training example that is used as a ball center, and the message string σ will be the vector ρ of radius values (defined above) that are used for the balls. Hence, given S_i and ρ, the classifier obtained from R(ρ, S_i) is just the conjunction C_{i,ρ} defined previously.¹

Recently, Laviolette and Marchand (2005) have extended the PAC-Bayes theorem to the sample-compression setting. Their proposed risk bound depends on a data-independent prior P and a data-dependent posterior Q that are both defined on I × M, where I denotes the set of the 2^m possible index vectors i and M denotes, in our case, the set of possible radius vectors ρ.
The posterior Q is used by a stochastic classifier, called the sample-compressed Gibbs classifier G_Q, defined as follows. Given a training set S and a new (testing) input example x, a sample-compressed Gibbs classifier G_Q chooses (i, ρ) randomly according to Q to obtain a classifier R(ρ, S_i), which is then used to determine the class label of x. In this paper we focus on the case where, given any training set S, the learner returns a Gibbs classifier defined with a posterior distribution Q having all its weight on a single vector i. Hence, a single compression set S_i will be used for the final classifier. However, the radius ρ_i for each i ∈ i will be chosen stochastically according to the posterior Q. Hence we consider posteriors Q such that Q(i′, ρ) = I(i = i′) Q_i(ρ), where i is the vector of indices chosen by the learner. Hence, given a training set S, the true risk R(G_{Q_i}) of G_{Q_i} and its empirical risk R_S(G_{Q_i}) are defined by

  R(G_{Q_i}) := E_{ρ∼Q_i} R(R(ρ, S_i));  R_S(G_{Q_i}) := E_{ρ∼Q_i} R_{S_ī}(R(ρ, S_i)),

where ī denotes the set of indices not present in i. Thus, i ∩ ī = ∅ and i ∪ ī = (1, ..., m).

In contrast with the posterior Q, the prior P assigns a non-zero weight to several vectors i. Let P_I(i) denote the prior probability P assigned to vector i, and let P_i(ρ) denote the probability density function associated with prior P given i. The risk bound depends on the Kullback-Leibler divergence KL(Q‖P) between the posterior Q and the prior P which, in our case, gives

  KL(Q_i‖P) = E_{ρ∼Q_i} ln[ Q_i(ρ) / (P_I(i) P_i(ρ)) ].

For these classes of posteriors Q and priors P, the PAC-Bayes theorem of Laviolette and Marchand (2005) reduces to the following simpler version.

¹ We assume that the examples in S_i are ordered as in S, so that the kth radius value in ρ is assigned to the kth example in S_i.
Theorem 1 (Laviolette and Marchand (2005)) Given all our previous definitions, for any prior P and for any δ ∈ (0, 1],

  Pr_{S∼D^m}( ∀Q_i : kl(R_S(G_{Q_i}) ‖ R(G_{Q_i})) ≤ (1/(m − |i|)) [ KL(Q_i‖P) + ln((m + 1)/δ) ] ) ≥ 1 − δ,

where kl(q‖p) := q ln(q/p) + (1 − q) ln((1 − q)/(1 − p)).

To obtain a bound for R(G_{Q_i}) we need to specify Q_i(ρ), P_I(i), and P_i(ρ). Since all vectors i having the same size |i| are, a priori, equally "good", we choose

  P_I(i) = (1 / C(m, |i|)) · p(|i|),

where C(m, |i|) denotes the binomial coefficient "m choose |i|", for any p(·) such that Σ_{d=0}^m p(d) = 1. We could choose p(d) = 1/(m + 1) for d ∈ {0, 1, ..., m} if we have complete ignorance about the size |i| of the final classifier. But since the risk bound will deteriorate for large |i|, it is generally preferable to choose, for p(d), a slowly decreasing function of d. For the specification of P_i(ρ), we assume that each radius value, in some predefined interval [0, R], is equally likely to be chosen for each ρ_i such that i ∈ i. Here R is some "large" distance specified a priori. For Q_i(ρ), a margin interval [a_i, b_i] ⊆ [0, R] of equally good radius values is chosen by the learner for each i ∈ i. Hence, we choose

  P_i(ρ) = ∏_{i∈i} 1/R = (1/R)^{|i|};  Q_i(ρ) = ∏_{i∈i} 1/(b_i − a_i).

Therefore, the Gibbs classifier returned by the learner will draw each radius ρ_i uniformly in [a_i, b_i]. A deterministic classifier is then specified by fixing each radius value ρ_i ∈ [a_i, b_i]. It is tempting at this point to choose ρ_i = (a_i + b_i)/2 for all i ∈ i (i.e., the middle of each interval). However, we will see shortly that the PAC-Bayes theorem offers a better guarantee for another type of deterministic classifier. Consequently, with these choices for Q_i(ρ), P_I(i), and P_i(ρ), the KL divergence between Q_i and P is given by

  KL(Q_i‖P) = ln C(m, |i|) + ln(1/p(|i|)) + Σ_{i∈i} ln(R/(b_i − a_i)).

Notice that the KL divergence is small for small values of |i| (whenever p(|i|) is not too small) and for large margin values (b_i − a_i). Hence, the KL divergence term in Theorem 1 favors both sparsity (small |i|) and large margins.
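This KL divergence has a closed form that can be evaluated directly, and its two-sided behavior (growing with the number of balls |i|, shrinking as the margins b_i − a_i widen) is easy to see numerically. A sketch, where defaulting to the uniform p(d) = 1/(m + 1) is an illustrative choice:

```python
from math import comb, log

def kl_divergence(m, intervals, R, p=None):
    """KL(Q_i || P) = ln C(m, |i|) + ln(1/p(|i|)) + sum_i ln(R / (b_i - a_i)).
    `intervals` lists the margin intervals [a_i, b_i]; p defaults to uniform."""
    d = len(intervals)
    if p is None:
        p = lambda k: 1.0 / (m + 1)   # complete ignorance about |i|
    return (log(comb(m, d)) + log(1.0 / p(d))
            + sum(log(R / (b - a)) for (a, b) in intervals))

# Wider margins reduce the divergence; more balls would increase it.
narrow = kl_divergence(m=100, intervals=[(0.4, 0.5)], R=1.0)
wide = kl_divergence(m=100, intervals=[(0.2, 0.7)], R=1.0)
print(wide < narrow)  # → True
```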
Hence, in practice, the minimum might occur for some G_{Q_i} that sacrifices sparsity whenever larger margins can be found. Since the posterior Q is identified by i and by the intervals [a_i, b_i] for i ∈ i, we will now refer to the Gibbs classifier G_{Q_i} by G_ab^i, where a and b are the vectors formed from the a_i's and b_i's respectively. To obtain a risk bound for G_ab^i, we need to find a closed-form expression for R_S(G_ab^i). For this task, let U[a, b] denote the uniform distribution over [a, b] and let σ_{a,b}^i(x) be the probability that a ball with center x_i assigns to x the class label y_i when its radius ρ is drawn according to U[a, b]:

  σ_{a,b}^i(x) := Pr_{ρ∼U[a,b]}(h_{i,ρ}(x) = y_i)
    = 1                        if d(x, x_i) ≤ a,
    = (b − d(x, x_i))/(b − a)  if a ≤ d(x, x_i) ≤ b,
    = 0                        if d(x, x_i) ≥ b.

Therefore,

  ζ_{a,b}^i(x) := Pr_{ρ∼U[a,b]}(h_{i,ρ}(x) = 1) = σ_{a,b}^i(x) if y_i = 1, and 1 − σ_{a,b}^i(x) if y_i = 0.

Now let G_ab^i(x) denote the probability that C_{i,ρ}(x) = 1 when each ρ_i ∈ ρ is drawn according to U[a_i, b_i]. We then have

  G_ab^i(x) = ∏_{i∈i} ζ_{a_i,b_i}^i(x).

Consequently, the risk R_{(x,y)}(G_ab^i) on a single example (x, y) is given by G_ab^i(x) if y = 0 and by 1 − G_ab^i(x) otherwise. Therefore

  R_{(x,y)}(G_ab^i) = y(1 − G_ab^i(x)) + (1 − y) G_ab^i(x) = (1 − 2y)(G_ab^i(x) − y).

Hence, the empirical risk R_S(G_ab^i) of the Gibbs classifier G_ab^i is given by

  R_S(G_ab^i) = (1/(m − |i|)) Σ_{j∈ī} (1 − 2y_j)(G_ab^i(x_j) − y_j).

From this expression we see that R_S(G_ab^i) is small when G_ab^i(x_j) → y_j for all j ∈ ī. Training points where G_ab^i(x_j) ≈ 1/2 should therefore be avoided. The PAC-Bayes theorem below provides a risk bound for the Gibbs classifier G_ab^i. Since the Bayes classifier B_ab^i just performs a majority vote under the same posterior distribution as the one used by G_ab^i, we have that B_ab^i(x) = 1 iff G_ab^i(x) > 1/2. From the above definitions, note that the decision surface of the Bayes classifier, given by G_ab^i(x) = 1/2, differs from the decision surface of the classifier C_{i,ρ} when ρ_i = (a_i + b_i)/2 for all i ∈ i.
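The piecewise-linear probability σ, the per-ball vote ζ, and their product G_ab^i(x) translate directly into code. A sketch using scalar inputs with |·| as the (arbitrary) metric, which is an illustrative choice, as are all names:

```python
def sigma(dist, a, b):
    """Pr over rho ~ U[a, b] that the ball assigns x its center's label:
    1 inside radius a, 0 beyond radius b, linear in between."""
    if dist <= a:
        return 1.0
    if dist >= b:
        return 0.0
    return (b - dist) / (b - a)

def zeta(x, center, y_center, a, b, metric=lambda u, v: abs(u - v)):
    """Pr that the ball votes 1: sigma for a positive center, 1 - sigma otherwise."""
    s = sigma(metric(x, center), a, b)
    return s if y_center == 1 else 1.0 - s

def gibbs_output(x, balls):
    """Pr that the stochastic conjunction outputs 1: product of per-ball zetas.
    `balls` is a list of (center, label, a, b) tuples."""
    p = 1.0
    for (c, y, a, b) in balls:
        p *= zeta(x, c, y, a, b)
    return p

balls = [(0.0, 1, 1.0, 2.0), (5.0, 0, 1.0, 2.0)]
print(gibbs_output(0.5, balls))  # → 1.0  (inside the positive ball, far from the negative one)
print(gibbs_output(1.5, balls))  # → 0.5  (in the positive ball's margin region)
```

The corresponding Bayes classifier would simply threshold this product at 1/2.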
In fact there does not exist any classifier C_{i,ρ} that has the same decision surface as the Bayes classifier B_ab^i. From the relation between B_ab^i and G_ab^i, it also follows that R_{(x,y)}(B_ab^i) ≤ 2R_{(x,y)}(G_ab^i) for any (x, y). Consequently, R(B_ab^i) ≤ 2R(G_ab^i). Hence, we have the following theorem.

Theorem 2 Given all our previous definitions, for any δ ∈ (0, 1], for any p satisfying Σ_{d=0}^m p(d) = 1, and for any fixed distance value R, we have:

  Pr_{S∼D^m}( ∀i, a, b : R(G_ab^i) ≤ sup{ ε : kl(R_S(G_ab^i) ‖ ε) ≤ (1/(m − |i|)) [ ln C(m, |i|) + ln(1/p(|i|)) + Σ_{i∈i} ln(R/(b_i − a_i)) + ln((m + 1)/δ) ] } ) ≥ 1 − δ.

Furthermore: R(B_ab^i) ≤ 2R(G_ab^i) for all i, a, b.

Recall that the KL divergence is small for small values of |i| (whenever p(|i|) is not too small) and for large margin values (b_i − a_i). Furthermore, the Gibbs empirical risk R_S(G_ab^i) is small when the training points are located far away from the Bayes decision surface G_ab^i(x) = 1/2 (with G_ab^i(x_j) → y_j for all j ∈ ī). Consequently, the Gibbs classifier with the smallest guarantee of risk should perform a non-trivial margin-sparsity trade-off.

4 A Soft Greedy Learning Algorithm

Theorem 2 suggests that the learner should try to find the Bayes classifier B_ab^i that uses a small number of balls (i.e., a small |i|), each with a large separating margin (b_i − a_i), while keeping the empirical Gibbs risk R_S(G_ab^i) at a low value. To achieve this goal, we have adapted the greedy algorithm for the set covering machine (SCM) proposed by Marchand and Shawe-Taylor (2002). It consists of choosing the (Boolean-valued) feature i with the largest utility U_i, defined as

  U_i = |N_i| − p|P_i|,

where N_i is the set of negative examples covered (classified as 0) by feature i, P_i is the set of positive examples misclassified by this feature, and p is a learning parameter that assigns a penalty p to each misclassified positive example.
Once the feature with the largest U_i is found, we remove N_i and P_i from the training set S and then repeat (on the remaining examples) until either no more negative examples are present or a maximum number of features has been reached. In our case, however, we need to keep the Gibbs risk on S low instead of the risk of a deterministic classifier. Since the Gibbs risk is a "soft measure" that uses the piecewise-linear functions σ_{a,b}^i instead of "hard" indicator functions, we need a "softer" version of the utility function U_i. Indeed, a negative example that falls in the linear region of a σ_{a,b}^i is in fact partly covered. Following this observation, let k be the vector of indices of the examples that we have used as ball centers so far in the construction of the classifier. Let us first define the covering value C(G_ab^k) of G_ab^k as the "amount" of negative examples assigned to class 0 by G_ab^k:

  C(G_ab^k) := Σ_{j∈k̄} (1 − y_j)[1 − G_ab^k(x_j)].

We also define the positive-side error E(G_ab^k) of G_ab^k as the "amount" of positive examples assigned to class 0:

  E(G_ab^k) := Σ_{j∈k̄} y_j [1 − G_ab^k(x_j)].

We now want to add another ball, centered on an example with index i, to obtain a new vector k′ containing this new index in addition to those present in k. Hence, we now introduce the covering contribution of ball i (centered on x_i) as

  C_ab^k(i) := C(G_a′b′^{k′}) − C(G_ab^k)
            = (1 − y_i)[1 − ζ_{a_i,b_i}^i(x_i) G_ab^k(x_i)] + Σ_{j∈k̄′} (1 − y_j)[1 − ζ_{a_i,b_i}^i(x_j)] G_ab^k(x_j),

and the positive-side error contribution of ball i as

  E_ab^k(i) := E(G_a′b′^{k′}) − E(G_ab^k)
            = y_i [1 − ζ_{a_i,b_i}^i(x_i) G_ab^k(x_i)] + Σ_{j∈k̄′} y_j [1 − ζ_{a_i,b_i}^i(x_j)] G_ab^k(x_j).

Typically, the covering contribution of ball i should increase its "utility" and its positive-side error should decrease it. Hence, we define the utility U_ab^k(i) of adding ball i to G_ab^k as

  U_ab^k(i) := C_ab^k(i) − p E_ab^k(i),

where the parameter p represents the penalty of misclassifying a positive example.
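The covering and positive-side-error contributions can equivalently be obtained as differences of the covering value and error before and after a candidate ball is added, which keeps a sketch short. A minimal illustration (the toy data, the G functions, and all names are hypothetical, not the paper's implementation):

```python
def covering_and_error(G, data):
    """C(G) = sum over negatives of (1 - G(x)); E(G) = sum over positives of (1 - G(x)),
    where G(x) is the probability the current conjunction outputs 1."""
    C = sum(1.0 - G(x) for (x, y) in data if y == 0)
    E = sum(1.0 - G(x) for (x, y) in data if y == 1)
    return C, E

def utility_of_adding(G_old, G_new, data, p):
    """U = (covering contribution) - p * (positive-side error contribution)."""
    C0, E0 = covering_and_error(G_old, data)
    C1, E1 = covering_and_error(G_new, data)
    return (C1 - C0) - p * (E1 - E0)

# Toy case: a candidate ball that nearly covers the one negative example
# without touching the one positive example.
data = [(0.0, 1), (1.0, 0)]
G_old = lambda x: 1.0                        # empty conjunction outputs 1 everywhere
G_new = lambda x: 1.0 if x < 0.5 else 0.25   # after adding the candidate ball
print(utility_of_adding(G_old, G_new, data, p=2.0))  # → 0.75
```

The soft greedy step then scores every candidate ball this way and keeps the one with the largest utility.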
For a fixed value of p, the "soft greedy" algorithm simply consists of adding, to the current Gibbs classifier, a ball with maximum added utility until either the maximum number of possible features (balls) has been reached or all the negative examples have been (totally) covered. It is understood that, during this soft greedy algorithm, we can remove an example (x_j, y_j) from S whenever it is totally covered. This occurs whenever G_ab^k(x_j) = 0.

The term Σ_{i∈i} ln(R/(b_i − a_i)), present in the risk bound of Theorem 2, favors "soft balls" having large margins b_i − a_i. Hence, we introduce a margin parameter γ ≥ 0 that we use as follows. At each greedy step, we first search among balls having b_i − a_i = γ. Once such a ball, of center x_i, having maximum utility has been found, we try to increase its utility further by searching among all possible values of a_i and b_i > a_i while keeping its center x_i fixed². Both p and γ will be chosen by cross-validation on the training set.

We conclude this section with an analysis of the running time of this soft greedy learning algorithm for fixed p and γ. For each potential ball center, we first sort the m − 1 other examples with respect to their distances from the center in O(m log m) time. Then, for this center x_i, the set of a_i values that we examine are those specified by the distances (from x_i) of the m − 1 sorted examples³. Since the examples are sorted, it takes time in O(km) to compute the covering contributions and the positive-side error for all the m − 1 values of a_i. Here k is the largest number of examples falling into the margin. We always use small enough γ values to have k ∈ O(log m) since, otherwise, the results are terrible. It therefore takes time in O(m log m) to compute the utility values of all the m − 1 different balls of a given center. This gives a time in O(m² log m) to compute the utilities for all the possible m centers.
Once a ball with the largest utility value has been chosen, we then try to increase its utility further by searching among O(m²) pair values for (a_i, b_i). We then remove the examples covered by this ball and repeat the algorithm on the remaining examples. It is well known that greedy algorithms of this kind have the following guarantee: if there exist r balls that cover all the m examples, the greedy algorithm will find at most r ln(m) balls. Since we almost always have r ∈ O(1), the running time of the whole algorithm will almost always be in O(m² log²(m)).

5 Empirical Results on Natural Data

We have compared the new PAC-Bayes learning algorithm (called here SCM-PB) with the old algorithm (called here SCM). Both of these algorithms were also compared with the SVM equipped with an RBF kernel of variance σ² and a soft margin parameter C. Each SCM algorithm used the L2 metric since this is the metric present in the argument of the RBF kernel. However, in contrast with Laviolette et al. (2005), each SCM was constrained to use only balls having centers of the same class (negative for conjunctions and positive for disjunctions).

² The possible values for a_i and b_i are defined by the location of the training points.
³ Recall that for each value of a_i, the value of b_i is set to a_i + γ at this stage.

Table 1: SVM and SCM results on UCI data sets.

                          SVM results            SCM        SCM-PB
Name      train  test   C    σ²    SVs  errs   b   errs   b    γ    errs
breastw    343   340    1    5     38   15     1   12     4    .08  10
bupa       170   175    2    .17   169  66     5   62     6    .1   67
credit     353   300    100  2     282  51     3   58     11   .09  55
glass      107   107    10   .17   51   29     5   22     16   .04  19
heart      150   147    1    .17   64   26     1   23     1    0    28
haberman   144   150    2    1     81   39     1   39     1    .2   38
USvotes    235   200    1    25    53   13     10  27     18   .14  12

Each algorithm was tested on the UCI data sets of Table 1. Each data set was randomly split in two parts. About half of the examples were used for training and the remaining examples were used for testing.
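The learning parameters of all three algorithms are chosen by cross validation on the training set. A generic sketch of such a 5-fold selection protocol is shown below; the train_fn/err_fn hooks are hypothetical stand-ins for the actual learners, illustrated with a trivial 1-D threshold classifier.

```python
import numpy as np

def five_fold_cv_select(train_fn, err_fn, X, y, params, seed=0):
    # Return the parameter setting with the smallest 5-fold CV error,
    # using one fixed random fold split for every candidate setting.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 5)
    best_err, best_prm = None, None
    for prm in params:
        err = 0
        for k in range(5):
            val = folds[k]
            trn = np.concatenate([folds[j] for j in range(5) if j != k])
            model = train_fn(X[trn], y[trn], prm)
            err += err_fn(model, X[val], y[val])
        if best_err is None or err < best_err:
            best_err, best_prm = err, prm
    return best_prm

# Toy use: pick the best threshold for a 1-D threshold classifier.
X = np.arange(20, dtype=float).reshape(-1, 1)
y = (X[:, 0] >= 10).astype(int)
best = five_fold_cv_select(lambda Xt, yt, t: t,
                           lambda t, Xv, yv: int(np.sum((Xv[:, 0] >= t) != yv)),
                           X, y, params=[0.0, 10.0, 15.0])
print(best)  # 10.0
```

The winning setting is then used to retrain on the whole training half before a single evaluation on the held-out test half.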
These numbers of examples are given in the "train" and "test" columns of Table 1. The learning parameters of all algorithms were determined from the training set only. The parameters C and σ² for the SVM were determined by the 5-fold cross validation (CV) method performed on the training set. The parameters that gave the smallest 5-fold CV error were then used to train the SVM on the whole training set, and the resulting classifier was then run on the testing set. Exactly the same method (with the same 5-fold split) was used to determine the learning parameters of both SCM and SCM-PB.

The SVM results are reported in Table 1, where the "SVs" column refers to the number of support vectors present in the final classifier and the "errs" column refers to the number of classification errors obtained on the testing set. The same notation is used for all the SCM results reported in Table 1. In addition, the "b" and "γ" columns refer, respectively, to the number of balls and the margin parameter (divided by the average distance between the positive and the negative examples). The results reported for SCM-PB refer to the Bayes classifier only; the results for the Gibbs classifier are similar. We observe that, except for bupa and heart, the generalization error of SCM-PB was always smaller than that of SCM. However, the only significant difference occurs on USvotes. We also observe that SCM-PB generally sacrifices sparsity (compared to SCM) to obtain some margin γ > 0.

References

B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152. ACM Press, 1992.

John Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6:273-306, 2005.

François Laviolette and Mario Marchand. PAC-Bayes risk bounds for sample-compressed Gibbs classifiers.
Proceedings of the 22nd International Conference on Machine Learning (ICML 2005), pages 481-488, 2005.

François Laviolette, Mario Marchand, and Mohak Shah. Margin-sparsity trade-off for the set covering machine. Proceedings of the 16th European Conference on Machine Learning (ECML 2005); Lecture Notes in Artificial Intelligence, 3720:206-217, 2005.

Mario Marchand and John Shawe-Taylor. The set covering machine. Journal of Machine Learning Research, 3:723-746, 2002.

David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37:355-363, 1999a.

David A. McAllester. PAC-Bayesian model averaging. In COLT, pages 164-170, 1999b.
|
2005
|
93
|
2,914
|
Logic and MRF Circuitry for Labeling Occluding and Thinline Visual Contours

Eric Saund
Palo Alto Research Center
3333 Coyote Hill Rd.
Palo Alto, CA 94304
saund@parc.com

Abstract

This paper presents representation and logic for labeling contrast edges and ridges in visual scenes in terms of both surface occlusion (border ownership) and thinline objects. In natural scenes, thinline objects include sticks and wires, while in human graphical communication thinlines include connectors, dividers, and other abstract devices. Our analysis is directed at both natural and graphical domains. The basic problem is to formulate the logic of the interactions among local image events, specifically contrast edges, ridges, junctions, and alignment relations, such as to encode the natural constraints among these events in visual scenes. In a sparse heterogeneous Markov Random Field framework, we define a set of interpretation nodes and energy/potential functions among them. The minimum energy configuration found by Loopy Belief Propagation is shown to correspond to preferred human interpretation across a wide range of prototypical examples, including important illusory contour figures such as the Kanizsa Triangle, as well as more difficult examples. In practical terms, the approach delivers correct interpretations of inherently ambiguous hand-drawn box-and-connector diagrams at low computational cost.

1 Introduction

A great deal of attention has been paid to the curious phenomenon of illusory contours in visual scenes [5]. The most famous example is the Kanizsa Triangle (Figure 1). Although a number of explanations have been proposed, computational accounts have converged on the understanding that illusory contours are an outcome of the more general problem of labeling scene contours in terms of causal events such as surface overlap.
Illusory contours are the visual system's way of expressing belief in an occlusion relation between two surfaces having the same lightness and therefore lacking a visible contrast edge. The phenomena are interesting in their revelation of interactions among multiple factors comprising the visual system's prior assumptions about what constitutes likely interpretations of ambiguous input.

Figure 1: a. Original Kanizsa Triangle. b. Solid surface version. c. Human preferred interpretation. d, e. Other valid interpretations.

Several computational models for this process have generated interpretations of Kanizsa-like figures corresponding to human perception. Williams[9] formulated an integer-linear optimization problem with hard constraints originating from the topology of contours and junctions, and soft constraints representing figural biases for non-accidental interpretations and figural closure. Heitger and von der Heydt[2] implemented a series of nonlinear filtering operations that enacted interactions among line terminations and junctions to infer modal completions corresponding to illusory contours. Geiger[1] used a dense Markov Random Field to represent surface depths explicitly and propagated local evidence through a diffusion process. Saund[6] enumerated possible generic and non-generic interpretations of T- and L-junctions to set up an optimization problem solved by deterministic annealing. Liu and Wang[4] set up a network of contours traversing the boundaries of segmented regions, which interact to propagate local information through an iterative updating scheme.

This paper expands this body of previous work in the following ways:

• The computational model is expressed in terms of a sparse heterogeneous Markov Random Field whose solution is accessible to fast techniques such as Loopy Belief Propagation.

• We introduce interpretations of thinlines in addition to solid surfaces, adding a significant layer of richness and complexity.
• The model infers occlusion relations of surfaces depicted by line drawings of their borders, as well as solid graphics depictions.

• We devise MRF energy functions that implement circuitry for sophisticated logical constraints of the domain.

The result is a formulation that is both fast and effective at correctly interpreting a greater range of psychophysical and near-practical contour configuration examples than has heretofore been demonstrated. The model exposes aspects of fundamental ambiguity to be resolved by the incorporation of additional constraints and domain-specific knowledge.

2 Interpretation Nodes and Relations

2.1 Visible Contours and Contour Ends

Early vision studies commonly distinguish several models for visible contour creation and measurement, including contrast edges, lines or ridges, ramps, color and texture edges, etc. Let us idealize to consider only contrast edges and ridges (also known as "bars"), measured at a single scale. We include in our domain of interest human-generated graphical figures.

Figure 2: a. Sample image region. b. Spatial relation categories characterizing links in the MRF among Contour End nodes: Corner, Near Alignment, Far Alignment, Lateral. c. Resulting MRF including nodes of type Visible Contour, Contour End, Corner Tie, and Corner Tie Mediator.

Contrast edges arise from distinct regions or surfaces, while ridges may represent either a boundary between regions or else a "thinline", i.e., a physical or graphical object whose shape is essentially defined by a one-dimensional path at our scale of measurement. Examples of thinlines in photographic imagery include twigs, sidewalk cracks, and telephone wires, while in graphical images thinlines include separators, connectors, and arrow shafts. Figure 7e shows a hand-drawn sketch in which some lines (measured as ridges) are intended to define boxes and therefore represent region boundaries, while others are connectors between boxes.
We take the contour interpretation problem to include the analysis of this type of scene in addition to classical illusory contour figures. For any input data, we may construct a Markov Random Field consisting of four types of nodes derived from measured contrast edge and ridge contours. An interpretation is an assignment of states to nodes. Local potentials and the potential matrices associated with pairwise links between nodes encode constraints and biases among interpretation states based on the spatial relations among the visible contours. Figure 2 illustrates the MRF node types and links for a simple example input image, as explained below.

Let us assume that contours defining region boundaries are assigned an occlusion direction, equivalent to relative surface depth and hence boundary ownership. Figure 3 shows the possible mappings between visible image contours measured as contrast edges or ridges, and their interpretation in terms of direction of surface overlap or else thinline object. Contrast edges always correspond to surface occlusion, while ridges may represent either a surface boundary or a thinline object. Correspondingly, the simplest MRF node type is the Visible Contour node, which has state dimension 3 corresponding to two possible overlap directions and one thinline interpretation.

Figure 3: Permissible mappings between visible edge and ridge contours and interpretations. Wedges indicate direction of surface overlap: white (FG) surface occludes shaded (BG) surface.

Most of the interesting evidence and interaction occurs at terminations and junctions of visible contours. Contour End nodes are given the job of explaining why a smooth visible edge or ridge contour has terminated visibility, and hence they will encode the bulk of the modal (illusory) and amodal (occluded) completion information of a computed interpretation. Smooth visible contours may terminate in four ways: 1.
The surface boundary contour or thinline object changes direction (turns a corner). 2. The contour becomes modal because the background surface lacks a visible edge with the foreground surface. 3. The contour becomes amodal because it becomes occluded by another surface. 4. The contour simply terminates, when a surface overlap meets the end of a fold, or when a thin object or graphic stops.

Contour Ends therefore have 3 x 4 = 12 interpretation states as shown in Figure 4.

Figure 4: Contour End nodes have state dimension 12 indicating contour overlap type/direction (overlap or thinline) and one of four explanations for termination of the visible contour.

Every Visible Contour node is linked to its two corresponding Contour End nodes through energy matrices (or equivalently, potential matrices, using potential ψ = exp(−E)) representing simple compatibility among overlap direction/thinline interpretation states. Additional links in the network are created based on spatial relations among Contour Ends, as described next.

Figure 5: a. Corner Tie nodes have state dimension 6 indicating the causal relationship between the Contour End nodes they link. b. Energy matrix linking the Left Contour End of a pair of corner-relation Contour Ends to their Corner Tie. X indicates high energy prohibiting the state combination. EA refers to a low penalty for Accidental Coincidence of the Contour Ends. EDC refers to a (typically low) penalty for two Contour Ends failing to meet the ideal geometrical constraints of meeting at a corner. The subscripts refer to necessary Near-Alignment Relations on the Contour Ends. The energy matrix linking the Right Contour End to the Corner Tie swaps the 5th and 6th columns.

2.2 Contour Ends Relation Links

Let us consider five classes of pairwise geometric relations among observed contour ends: Corner, Near-Alignment, Far-Alignment, Lateral, and Unrelated.
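The 3 x 4 state space of a Contour End node can be enumerated directly. In this sketch the two overlap directions and the termination labels are our own shorthand for the states described above:

```python
from itertools import product

# Each Contour End pairs a contour interpretation (two overlap
# directions or thinline) with one of the four termination reasons.
contour_types = ["overlap_left", "overlap_right", "thinline"]
terminations = ["corner", "modal", "amodal", "stop"]

contour_end_states = list(product(contour_types, terminations))
print(len(contour_end_states))  # 12
```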
Mathematical expressions forming the bases for these relations may be engineered as measures of distance and smooth continuation, such as those used by Saund [6]. The Corner relation depends only on proximity; Near-Alignment depends on proximity and alignment; Far-Alignment omits the proximity requirement. Within this framework, a further refinement distinguishes ridge Contour Ends from those arising from contrast edges. Namely, ridge ends are permitted to form Lateral relation links, which correspond to potential modal contours. Contrast edge Contour Ends are excluded from this link type because they terminate at junctions which distribute modal and amodal completion roles to their participating Contour Ends. Contour End nodes from ridge contours may participate in Far-Alignment links, but their local energies are set to preclude them from taking states representing modal completions. In this way the present model fixes the topology of related ends in the process of setting up the Markov Graph. An important problem for future research is to formulate the Markov Graph to include all plausible Contour End pairings and have the actual pairings sort themselves out at solution time.

Biases about preferred and less-preferred interpretations are represented through the terms in the energy matrices linking related Contour Ends. In accordance with prior work, we bias energy terms associated with curved Visible Contours and junctions of Contour Ends in favor of convex object interpretations. Space limitations preclude presenting the energy matrices in detail, but we discuss the main novel and significant considerations.

The simplest case is pairs of Contour Ends sharing a Near-Alignment or Far-Alignment relation. These energy matrices are constructed to trade off priors regarding accidental alignment versus amodal or modal invisible contour completion interpretations.
Figure 6: The Corner Tie Mediator node restricts border ownership of occluding contours to physically consistent interpretations. The energy matrix shown in e links the Corner Tie Mediator to the Left Corner Tie of a pair sharing a Contour End. X indicates high energy. The energy matrix for the link to the Right Corner Tie swaps the second and third columns.

For Contour End pairs that are relatively near and well aligned, energy terms corresponding to causally unrelated interpretations (CE states 0,1,2) are large, while terms corresponding to amodal completion with compatible overlap/thinline property (CE states 6,7,8) are small. Actual energy values for the matrices are assigned by straightforward formulas derived from the Proximity and Smooth Continuation terms mentioned above. Per Kanizsa, modal completion interpretations (CE states 3,4,5) are somewhat more expensive than amodal interpretations, by a constant factor. Energy terms shift their relative weights in favor of causally unrelated interpretations (CE corner states 0,1,2) as the Contour Ends become more distant and less aligned.

Contour Ends sharing a Corner relation can be related in one of three ways: they can be causally unrelated and unordered in depth; they can represent a turning of a surface boundary or thinline object; or they can represent overlap of one contour above the other. In order to exploit the geometry of Contour Ends as local evidence, these alternatives must be articulated and entered into the MRF node graph. To do this we therefore introduce a third type of node, the Corner Tie node, possessing six states as illustrated in Figure 5a. The energy matrix relating Contour End nodes and Corner Tie nodes is shown in Figure 5b. It contains low energy terms representing the Corner Tie's belief that the Contour End termination is due to direction change (turning a corner).
It also contains low energy terms representing the condition of one Contour End's owning surface overlapping the other contour, i.e., the relative depth relation between these contours in the scene.

2.3 Constraints on Overlaps and Thinlines at Junctions

Physical considerations impose hard constraints on the interpretations of End Pairs meeting at a junction. Consider the T-junction in Figure 6a. One preferred interpretation for a T-junction is occlusion (6b). A less-preferred but possible interpretation is a change of direction (corner) by one surface, with accidental alignment by another contour (6c). What is impossible is for a surface boundary to bifurcate and "belong" to both sides of the T (6d). This type of constraint cannot be enforced by the purely pairwise Corner Tie node. We therefore introduce a fourth node type, the Corner Tie Mediator. This node governs the number of Corner Ties that any Contour End can claim to form a direction change (corner turn) relation with. The energy matrix for the Corner Tie Mediator node is shown in Figure 6e: multiple Corner Ties in the overlap direction-turn states (CT states 1 & 2) are excluded (solid arrows). But note that the matrix contains a low energy term (dashed arrow) for the formation of multiple direction-turn Corner Ties provided they are in the Thinline state (CT state 3); branching of thinline objects is physically permissible.

3 Experiments and Conclusion

Loopy Belief Propagation under the Max-Product algorithm seeks the MAP configuration, which is equivalent to the minimum-energy assignment of states [8]. We have not encountered a failure of LBP to converge, and it is quite rare to encounter a lower-energy assignment of states than the algorithm delivers starting from an initial uniform distribution over states. However, multiple stable fixed points can exist.
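Max-product BP over potentials ψ = exp(−E) is equivalent to min-sum message passing over the energies themselves. The sketch below is a generic, hypothetical min-sum implementation on a small pairwise graph, not the paper's specific circuitry; on the two-node example it recovers the exact minimum-energy assignment.

```python
import numpy as np

def min_sum_lbp(unary, edges, pair_E, n_iter=100):
    # unary: {node: energy vector}; edges: list of (i, j) pairs;
    # pair_E: {(i, j): energy matrix indexed [s_i, s_j]}.
    # Min-sum messages (max-product in the energy domain).
    msgs = {}
    for i, j in edges:
        msgs[(i, j)] = np.zeros(len(unary[j]))
        msgs[(j, i)] = np.zeros(len(unary[i]))
    for _ in range(n_iter):
        new = {}
        for (i, j) in msgs:
            E = pair_E[(i, j)] if (i, j) in pair_E else pair_E[(j, i)].T
            # Unary energy plus all incoming messages except the one from j.
            h = unary[i] + sum(msgs[(k, l)] for (k, l) in msgs
                               if l == i and k != j)
            m = np.min(E + h[:, None], axis=0)
            new[(i, j)] = m - m.min()   # normalize for numerical stability
        msgs = new
    beliefs = {i: unary[i] + sum(msgs[(k, l)] for (k, l) in msgs if l == i)
               for i in unary}
    return {i: int(np.argmin(b)) for i, b in beliefs.items()}

# Two nodes, two states each; pairwise Potts energy favoring agreement.
unary = {0: np.array([0.0, 2.0]), 1: np.array([0.5, 0.0])}
pair_E = {(0, 1): np.array([[0.0, 1.0], [1.0, 0.0]])}
print(min_sum_lbp(unary, [(0, 1)], pair_E))  # {0: 0, 1: 0}
```

On tree-structured graphs this converges to the exact MAP; on loopy graphs it yields the approximate fixed points discussed above.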
For some ambiguous figures, such as Figure 7e, in which qualitatively different interpretations have similar energies, one may clamp one or more nodes to alternative states, leading to LBP solutions which persist once the clamping is removed. This invites the exploration of N-best configuration solution techniques [10].

Figure 7 demonstrates MAP assignments corresponding to preferred human interpretations of the classic Kanizsa illusory contour figure and others containing both aligning L-junction and ridge termination evidence for modal contours, amodal completions, and thinline objects. Note that the MRF correctly predicts that outline drawings of surface boundaries do not induce illusory contours. Figure 7g borrows from experiments by Szummer and Cowans[7] toward a practical application in line drawing interpretation, in which closed boxes define regions while connectors remain interpreted as thinline objects. For this scene containing 369 nodes and 417 links, the entire process of forming the MRF and performing 100 iterations of LBP takes less than a second. The major pressures operating in these situations are a figural bias toward interpreting closed paths as convex regions, and a preference to interpret ridge contours participating in T- and X-junctions as thinline objects.

We have shown how explicit consideration of ridge features and thinline interpretations brings new complexity to the logic of sorting out depth relations in visual scenes. This investigation suggests that a sparse heterogeneous Markov Random Field approach may provide a suitable basis for such models.

References

[1] Geiger, D., Kumaran, K., & Parida, L. (1996) Visual organization for figure/ground separation. In Proc. IEEE CVPR, pp. 155-160.

[2] Heitger, F., & von der Heydt, R. (1993) A Computational Model of Neural Contour Processing: Figure-Ground Segregation and Illusory Contours. Proc. ICCV '93.

[3] Kanizsa, G. (1979) Organization in Vision, Praeger, New York.

[4] Liu, X., & Wang, D.
(2000) Perceptual Organization Based on Temporal Dynamics. In S.A. Solla, T.K. Leen, & K.-R. Müller (eds.), Advances in Neural Information Processing Systems 12, pp. 38-44. MIT Press.

[5] Petry, S., & Meyer, G. (eds.) (1987) The Perception of Illusory Contours, Springer-Verlag, New York.

[6] Saund, E. (1999) Perceptual Organization of Occluding Contours of Opaque Surfaces. CVIU, Vol. 76, No. 1, pp. 70-82.

[7] Szummer, M., & Cowans, P. (2004) Incorporating Context and User Feedback in Pen-Based Interfaces. AAAI TR FS-04-06 (Papers from the 2004 AAAI Fall Symposium).

[8] Weiss, Y., & Freeman, W.T. (2001) On the optimality of solutions of the max-product belief propagation algorithm in arbitrary graphs. IEEE Trans. Inf. Theory 47:2, pp. 723-735.

[9] Williams, L. (1990) Perceptual Organization of Occluding Contours. Proc. ICCV '90, pp. 639-649.

[10] Yanover, C., & Weiss, Y. (2003) Finding the M Most Probable Configurations Using Loopy Belief Propagation. In S. Thrun, L. Saul, & B. Schölkopf (eds.), Advances in Neural Information Processing Systems 16, MIT Press.
|
2005
|
94
|
2,915
|
Non-iterative Estimation with Perturbed Gaussian Markov Processes

Yunsong Huang    B. Keith Jenkins
Signal and Image Processing Institute
Department of Electrical Engineering-Systems
University of Southern California
Los Angeles, CA 90089-2564
{yunsongh,jenkins}@sipi.usc.edu

Abstract

We develop an approach for estimation with Gaussian Markov processes that imposes a smoothness prior while allowing for discontinuities. Instead of propagating information laterally between neighboring nodes in a graph, we study the posterior distribution of the hidden nodes as a whole—how it is perturbed by invoking discontinuities, or weakening the edges, in the graph. We show that the resulting computation amounts to feed-forward fan-in operations reminiscent of V1 neurons. Moreover, using suitable matrix preconditioners, the incurred matrix inverse and determinant can be approximated, without iteration, in the same computational style. Simulation results illustrate the merits of this approach.

1 Introduction

Two issues, (i) efficient representation and (ii) efficient inference, are of central importance in the area of statistical modeling of vision problems. For generative models, often the ease of generation and the ease of inference are two conflicting features. Factor Analysis [1] and its variants, for example, model the input as a linear superposition of basis functions. While the generation, or synthesis, of the input is immediate, the inference part is usually not. One may apply a set of filters, e.g., Gabor filters, to the input image. In so doing, however, the statistical modeling is only deferred, and further steps, either implicit or explicit, are needed to capture the 'code' carried by those filter responses. By characterizing mutual dependencies among adjacent nodes, Markov Random Fields (MRF) [2] and graphical models [3] are other powerful ways of modeling the input, which, when continuous, is often conveniently assumed to be Gaussian.
In vision applications, it is suitable to employ smoothness priors admitting discontinuities [4]. Examples include weak membranes and plates [5], formulated in the context of variational energy minimization. Typically, the inference for MRF or graphical models would incur lateral propagation of information between neighboring units [6]. This is appealing in the sense that it consists of only simple, local operations carried out in parallel. However, the resulting latency could undermine the plausibility that such algorithms are employed in human early vision inference tasks [7].

In this paper we take the weak membrane and plate as instances of Gaussian processes (GP). We show that the effect of marking each discontinuity (hereafter termed "bond-breaking") is to perturb the inverse of the covariance matrix of the hidden nodes x by a matrix of rank 1. When multiple bonds are broken, the computation of the posterior mean and covariance of x would involve the inversion of a matrix which typically has a large condition number, implying very slow convergence of straightforward iterative approaches. We show that there exists a family of preconditioners that can bring the condition number close to 1, thereby greatly speeding up the iteration—to the extent that a single step suffices in practice. Therefore, the predominant computation employed in our approach is non-iterative, of fan-in and fan-out style. We also devise ways to learn the parameters regarding state and observation noise non-iteratively. Finally, we report experimental results of applying the proposed algorithm to image denoising.

2 Perturbing a Gaussian Markov Process (GMP)

Consider a spatially invariant GMP defined on a torus, x ~ N(0, Q_0), whose energy, defined as x^T Q_0^{-1} x, is the sum of the energies of all edges¹ in the graph, due to the Markovian property. In what follows, we perturb the potential matrix Q_0^{-1} by reducing the coupling energy of certain bonds².
¹ Henceforth called bonds, as "edge" will refer to intensity discontinuity in an image.
² The bond energy remains positive. This ensures the positive definiteness of the potential matrix.

This relieves the smoothness constraint on the nodes connected via those bonds. Suppose the energy reduction of a bond connecting nodes i and j (whose state vectors are x_i and x_j, respectively) can be expressed as (x_i^T f_i + x_j^T f_j)². This becomes (x^T f)² if f is constructed to be a vector of the same size as x, with the only non-zero entries f_i and f_j corresponding to nodes i and j. This manipulation can be identified with a rank-1 perturbation of Q_0^{-1}, as Q_1^{-1} ← Q_0^{-1} − f f^T, which is equivalent to

x^T Q_1^{-1} x ← x^T Q_0^{-1} x − (x^T f)², for all x.

We call this an elementary perturbation of Q_0^{-1}, and f an elementary perturbation vector associated with the particular bond. When L such perturbations have taken place (cf. Fig. 1), we form the L perturbation vectors into a matrix F_1 = [f^1, ..., f^L], and then the collective perturbations yield

Q_1^{-1} = Q_0^{-1} − F_1 F_1^T,    (1)

and thus

Q_1 = Q_0 + Q_0 F_1 (I − F_1^T Q_0 F_1)^{-1} F_1^T Q_0,    (2)

which follows from the Sherman-Morrison-Woodbury Formula (SMWF).

2.1 Perturbing a membrane and a plate

In a membrane model [5], x_i is scalar and the energy of the bond connecting x_i and x_j is (x_i − x_j)²/q, where q is a parameter denoting the variance of the state noise. Upon perturbation, this energy is reduced to η²(x_i − x_j)²/q, where 0 < η ≪ 1 ensures positivity of the energy. Then the energy reduction is (1 − η²)(x_i − x_j)²/q, from which we can identify f_i = √((1 − η²)/q) and f_j = −f_i.

In the case of a plate [5], x_i = [u_i, u_hi, u_vi]^T, in which u_i represents the intensity, while u_hi and u_vi represent its gradient in the horizontal and vertical direction, respectively. We define the energy of a horizontal bond connecting nodes j and i as

E_0^(−,i) = (u_vi − u_vj)²/q + d^(−,i)T O^{-1} d^(−,i),  where  d^(−,i) = [u_i; u_hi] − [1 1; 0 1][u_j; u_hj]  and  O = q [1/3 1/2; 1/2 1],
the superscript (−, i) representing a horizontal bond to the left of node i. The first and second terms of E^(−,i) would correspond to (∂²u(h,v)/∂h∂v)²/q and (∂²u(h,v)/∂h²)²/q, respectively, if u(h,v) were a continuous function of h and v (cf. [5]). If E_0^(−,i) is reduced to E_1^(−,i) = [(u_vi − u_vj)² + (u_hi − u_hj)²]/q, i.e., coupling between nodes i and j exists only through their gradient values, one can show that the energy reduction is E_0^(−,i) − E_1^(−,i) = [u_i − u_j − (u_hi + u_hj)/2]² · 12/q. Taking the actual energy reduction to be (1 − η²)(E_0^(−,i) − E_1^(−,i)), we can identify f_i^(−,i) = √(12(1 − η²)/q) [1, −1/2, 0]^T and f_j^(−,i) = √(12(1 − η²)/q) [−1, −1/2, 0]^T, where 0 < η ≪ 1 ensures the positive definiteness of the resulting potential matrix.

A similar procedure can be applied to a vertical bond in the plate, producing a perturbation vector f^(|,i), whose components are zero everywhere except for f_i^(|,i) = √(12(1 − η²)/q) [1, 0, −1/2]^T and f_j^(|,i) = √(12(1 − η²)/q) [−1, 0, −1/2]^T, for which node j is the lower neighbor of node i. One can verify that x^T f = 0 when the plate assumes the shape of a linear slope, meaning that this perturbation produces no energy difference in such a case. (x^T f)² becomes significant when the perturbed, or broken, bond associated with f straddles a step discontinuity of the image. Such an f is thus related to edge detection.

2.2 Hidden state estimation

Standard formulae exist for the posterior covariance K and mean x̂ of x, given a noisy observation³ y = Cx + n, where n ~ N(0, rI):

x̂_α = K_α C^T y / r,  and  K_α = [Q_α^{-1} + C^T C / r]^{-1},    (3)

for either the unperturbed (α = 0) or perturbed (α = 1) process. Thus,

K_1 = [Q_0^{-1} + C^T C / r − F_1 F_1^T]^{-1}, following Eqs.
3 and 1 = [K−1 0 −F1F T 1 ]−1, = K0 + W1H−1 1 W T 1 , applying SMWF, (4) where H1 ≜ I −F T 1 K0F1, and W1 ≜K0F1 (5) ∴ˆx1 = K1CT y/r = K0CT y/r + W1H−1 1 W T 1 CT y/r = ˆx0 + ˆxc, (6) where ˆxc ≜ W1H−1 1 W T 1 CT y/r, = W1H−1 1 z1, where z1 = W T 1 CT y/r (7) On a digital computer, the above computation can be efficiently implemented in the Fourier domain, despite the huge size of Kα and Qα. For example, K1 equals K0—a circulant matrix—plus a rank-L perturbation (cf. Eq. 4). Since each column of W1 is a spatially shifted copy of a prototypical vector, arising from breaking either a horizontal or a vertical bond, convolution can be utilized in computing W T 1 CT y. The computation of H−1 1 is deferred to Section 3. On a neural substrate, however, the computation can be implemented by inner-products in parallel. For instance, z1r is the result of inner-products between the input y and the feed-forward fan-in weights CW, coded by the dendrites of identical neurons, each situated at a broken bond. Let v1 = H−1 1 z1 be the responses of another layer of neurons. Then Cˆxc = CWv1 amounts to the back-projection of layer v1 to the input plane with fan-out weights identical to the fan-in counterpart. We can also apply the above procedure incrementally4, i.e., apply F1 and then F2, both consisting of a set of perturbation vectors. Quantities resulting from the α’th perturba3The observation matrix C = I for a membrane, and C = I ⊗[1, 0, 0] for a plate. 4Latency considerations, however, preclude the practicability of fully incremental computation. Figure 1: A portion of MRF. Solid and broken lines denote intact and broken bonds, respectively. Open circles denote hidden nodes xi and filled circles denote observed nodes yi. (a) (b) 10 15 20 25 −0.01 −0.005 0 0.005 0.01 (c) Weight value Figure 2: The resulting receptive field of the edge detector produced by breaking the shaded bond shown in Fig. 1. 
The central vertical dashed line in (a) and (b) marks the location of the vertical streak of bonds shown as broken in Fig. 1. In (a), those bonds are not actually broken; in (b), they are. In (c), a central horizontal slice of (a) is plotted as a solid curve and the counterpart of (b) as a dashed curve. ˆx0 ˆxc y, ˆx1 Figure 3: Estimation of x given input y. ˆx0: by unperturbed rod; ˆx1: coinciding perfectly with y, is obtained by a rod whose two bonds at the step edges of y are broken; ˆxc: correction term, engendered by the perturbed rod. tion step can be obtained from those of the (α −1)’th step, simply by replacing the subscript/superscript ‘1’ and ‘0’ with α and α −1, respectively, in Eqs. 1 to 6. In particular, W2 = K1F2 = K0F2 | {z } g W2 + W1H−1 1 W T 1 F2 | {z } δW2 , (8) where f W2 refers to the weights due to F2 in the absence of perturbation F1, which, when indeed existent, would exert a contextual effect on F2, thereby contributing to the term δW2. Figure 2 illustrates this effect on one perturbation vector (termed ‘edge detector’) in a membrane model, wherein ‘receptive field’ refers to f W2 and W2 in the case of panel (a) and (b), respectively. Evidently, the receptive field of W2 across the contextual boundary is pinched off. Figure 3 shows the estimation of x, cf. Eq. 6 and 7, using a 1D plate, i.e., rod. We stress that once the relevant edges are detected, ˆxc is computed almost instantly, without the need of iterative refinement via lateral propagation. This could be related to the brightness filling-in signal[8]. 2.3 Parameter estimation As edge inference/detection is outside the scope of this paper, we limit our attention to finding optimal values for the parameters r and q. Although the EM algorithm is possible for that purpose, we strive for a non-iterative alternative. To that end, we reparameterize r and q into r and ϱ = q/r. Given a possibly perturbed model Mα, in which x ∼N(0, Qα), we have y ∼N(0, Sα), where Sα = rI + CQαCT . 
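As a concrete illustration of Eqs. 3–7, the following sketch (assuming small dense matrices rather than the circulant/convolutional structure the paper exploits; all variable names are ours) checks that the SMWF-based correction $\hat{x}_0 + \hat{x}_c$ reproduces the directly computed perturbed posterior mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, r = 8, 2, 0.5

# Unperturbed precision Q0^{-1}: a random symmetric positive-definite matrix.
A = rng.standard_normal((n, n))
Q0_inv = A @ A.T + n * np.eye(n)
C = np.eye(n)                              # membrane-style observation matrix
F1 = 0.1 * rng.standard_normal((n, L))     # L perturbation (bond-breaking) vectors

K0 = np.linalg.inv(Q0_inv + C.T @ C / r)   # Eq. 3, alpha = 0
y = rng.standard_normal(n)
x0 = K0 @ C.T @ y / r                      # unperturbed posterior mean

# SMWF route (Eqs. 4-7): K1 = K0 + W1 H1^{-1} W1^T.
W1 = K0 @ F1                               # Eq. 5
H1 = np.eye(L) - F1.T @ K0 @ F1            # Eq. 5
z1 = W1.T @ C.T @ y / r                    # Eq. 7
xc = W1 @ np.linalg.solve(H1, z1)          # correction term
x1_smwf = x0 + xc                          # Eq. 6

# Direct route: invert the perturbed precision explicitly.
K1 = np.linalg.inv(Q0_inv - F1 @ F1.T + C.T @ C / r)
x1_direct = K1 @ C.T @ y / r

assert np.allclose(x1_smwf, x1_direct)
```

Note that the rank-$L$ solve against $H_1$ replaces a full $n \times n$ inversion, which is the source of the speed-up the paper describes.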
Note that $\widetilde{S}_\alpha \triangleq S_\alpha/r$ does not depend on $r$ when $\varrho$ is fixed, since $Q_\alpha \propto q \propto r \Rightarrow S_\alpha \propto r$. Next, we aim to maximize the log-probability of $y$, which is a vector of $N$ components (or pixels):

$\tilde{J}_\alpha \triangleq \ln p(y|M_\alpha) = -(N\ln(2\pi) + \ln|S_\alpha| + y^T S_\alpha^{-1} y)/2 = -(N\ln(2\pi) + N\ln r + \ln|\widetilde{S}_\alpha| + (y^T \widetilde{S}_\alpha^{-1} y)/r)/2$

Setting $\partial\tilde{J}_\alpha/\partial r = 0 \Rightarrow \hat{r} = E_\alpha/N$, where $E_\alpha \triangleq y^T \widetilde{S}_\alpha^{-1} y$   (9)

Define $J \triangleq N\ln E_\alpha + \ln|\widetilde{S}_\alpha| = \text{const.} - 2\tilde{J}_\alpha|_{\hat{r}}$   (10)

$J$ is a function of $\varrho$ only, and we locate the $\hat{\varrho}$ that minimizes $J$ as follows. Prompted by the fact that $\varrho$ governs the spatial scale of the process [5] and scale channels exist in the primate visual system, we compute $J(\varrho)$ for a preselected set of $\varrho$, corresponding to spatial scales half an octave apart, and then fit the resulting $J$'s with a cubic polynomial, whose location of minimum suggests $\hat{\varrho}$. We use this approach in Section 4. Computing $J$ in Eq. 10 requires two identities, which are included here without proof (the second can be proven using the SMWF and its associated determinant identity):

$E_\alpha = y^T(y - C\hat{x}_\alpha)$ (cf. Appendix A of [5]), and $|S_0|/|S_\alpha| = |B_\alpha|/|H_\alpha|$, where $H_\alpha = I - F_\alpha^T K_0 F_\alpha$ and $B_\alpha \triangleq I - F_\alpha^T Q_0 F_\alpha$   (11)

That is, $E_\alpha$ can be readily obtained once $\hat{x}_\alpha$ has been estimated, and $|\widetilde{S}_\alpha| = |\widetilde{S}_0||H_\alpha|/|B_\alpha|$, in which $|\widetilde{S}_0|$ can be calculated in the spectral domain, since $S_0$ is circulant. The computation of $|H_\alpha|$ and $|B_\alpha|$ is dealt with in the next section.

3 Matrix Preconditioning

Some of the foregoing computation necessitates the matrix determinant and matrix inverse, e.g., $H^{-1}z_1$ (cf. Eq. 7). Because $H$ is typically poorly conditioned, plain iterative means of evaluating $H^{-1}z_\alpha$ would converge very slowly. Methods exist in the literature for finding a matrix $P$ ([9] and references therein) satisfying the following two criteria: (1) inverting $P$ is easy; (2) the condition number $\kappa(P^{-1}H)$ approaches 1. Ideally, $\kappa(P^{-1}H) = 1$ implies $P = H$. Here we summarize our findings regarding the best class of preconditioners when $H$ arises from some prototypical configurations of bond breaking.
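The scale-selection recipe around Eqs. 9–10 — evaluate $J(\varrho)$ on a coarse half-octave grid and take the minimizer of a cubic fit — can be sketched generically. The function `J` below is a synthetic stand-in for the paper's criterion (which would be computed from $E_\alpha$ and $|\widetilde{S}_\alpha|$); only the fitting step is illustrated:

```python
import numpy as np

# Hypothetical stand-in for the criterion J(rho) of Eq. 10, with a known
# minimum at rho = 0.3; the real J comes from E_alpha and log|S~_alpha|.
def J(rho):
    return (rho - 0.3) ** 2 + 4.0

# Candidate scales, half an octave apart (factor sqrt(2)), as in the paper.
rhos = 0.05 * np.sqrt(2.0) ** np.arange(8)
Jvals = J(rhos)

# Fit a cubic polynomial and take its minimizer over the scanned range.
coeffs = np.polyfit(rhos, Jvals, 3)
grid = np.linspace(rhos.min(), rhos.max(), 2001)
rho_hat = grid[np.argmin(np.polyval(coeffs, grid))]

assert abs(rho_hat - 0.3) < 0.01
```

A dense evaluation of the fitted cubic is used rather than an analytic root of its derivative, which is numerically safer when the cubic coefficient is near zero.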
We call the following procedure Approximate Diagonalization (AD).

(1) 'DFT'. When a streak of broken bonds forms a closed contour, with a consistent polarity convention (e.g., the excitatory region of the receptive field of the edge detector associated with each bond lies inside the enclosed region), $H$ and $B$ (cf. Eq. 11) are approximately circulant. Let $X$ be the unitary Fourier matrix of the same size as $H$; then $\breve{H} = X^\dagger H X$ is approximately diagonal. Let $\Lambda_H$ be diagonal with $\Lambda_{H,ij} = \delta_{ij}\breve{H}_{ii}$; then $\widetilde{H} = X\Lambda_H X^\dagger$ is a circulant matrix approximating $H$; $\prod_i \Lambda_{H,ii}$ approximates $|H|$; and $X\Lambda_H^{-1}X^\dagger$ approximates $H^{-1}$. In this way, a computation such as $H^{-1}z_1$ becomes $X\Lambda_H^{-1}X^\dagger z_1$, which amounts to simple fan-in and fan-out operations if we regard each column of $X$ as a fan-in weight vector. The quality of this preconditioner $\widetilde{H}$ can be evaluated by both the condition number $\kappa(\widetilde{H}^{-1}H)$ and the relative error between the inverse matrices:

$\epsilon \triangleq \|\widetilde{H}^{-1} - H^{-1}\|_F \,/\, \|H^{-1}\|_F$,   (12)

where $\|\cdot\|_F$ denotes the Frobenius norm. The same $X$ can approximately diagonalize $B$, and the product of the diagonal elements of the resulting matrix approximates $|B|$.

(2) 'DCST'. One end of the streak of broken bonds (the target contour) abuts another contour, and the other end is open (i.e., a line-end). Imagine a vibrational mode of the membrane/plate given the configuration of broken bonds. The vibrational contrast of the nodes across the broken bond at a line-end has to be small, since in the immediate vicinity there exist paths of intact bonds linking the two nodes. This suggests a Dirichlet boundary condition at the line-end. At the abutting end (i.e., a T-junction), however, the vibrational contrast can be large, since the nodes on different sides of the contour are practically decoupled. This suggests a von Neumann boundary condition. This analysis leads to using a transform (termed 'HSWA' in [10]) which we call 'DCST', denoting sine phase at the open end and cosine phase at the abutting end.
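A minimal check of the 'DFT' case: for an exactly circulant, symmetric positive-definite toy $H$ (the $H$ arising from broken bonds is only approximately circulant), the DFT of the first column gives the spectrum, from which $|H|$ and $H^{-1}z$ follow by fan-in/fan-out operations:

```python
import numpy as np

# A circulant, symmetric positive-definite toy H built from its first column.
L = 16
col = np.zeros(L)
col[0], col[1], col[-1] = 2.5, -1.0, -1.0
H = np.array([[col[(i - j) % L] for j in range(L)] for i in range(L)])

# Eigenvalues of a circulant matrix are the DFT of its first column;
# here they are real because H is symmetric (2.5 - 2 cos(2 pi k / L) > 0).
lam = np.fft.fft(col).real

# Spectral log-determinant and H^{-1} z via forward/inverse FFT.
logdet_fft = np.sum(np.log(lam))
z = np.random.default_rng(1).standard_normal(L)
Hinv_z = np.fft.ifft(np.fft.fft(z) / lam).real

sign, logdet = np.linalg.slogdet(H)
assert np.isclose(logdet_fft, logdet)
assert np.allclose(Hinv_z, np.linalg.solve(H, z))
```

For the exactly circulant case, $\kappa(\widetilde{H}^{-1}H) = 1$ and $\epsilon = 0$; with approximately circulant $H$ from real contours, the agreement degrades gracefully as reported in Fig. 5.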
The unitary transform matrix $X$ is given by:

$X_{i,j} = \frac{2}{\sqrt{2L+1}}\cos\!\left(\frac{\pi(i-1/2)(j-1/2)}{L+1/2}\right), \quad 1 \le i, j \le L$,

where $L$ is the number of broken bonds in the target contour.

(3) 'DST'. When the streak of broken bonds forms an open-ended contour, $H$ can be approximately diagonalized by the Sine Transform (cf. the intuitive rationale stated in case (2)), whose unitary transform matrix $X$ is given by:

$X_{i,j} = \sqrt{2/(L+1)}\,\sin(\pi i j/(L+1)), \quad 1 \le i, j \le L$.

For a 'clean' prototypical contour, the performance of such preconditioners is remarkable, typically producing $1 \le \kappa < 1.2$ and $\epsilon < 0.05$. When contours in the image are interconnected in a complex way, we first parse the image domain into non-overlapping enclosed regions, and then treat each region independently. A contour segment dividing two regions is shared between them, and thus contributes two copies, each belonging to one region [11].

4 Experiment

We test our approach on a real image (Fig. 4a), which is corrupted with three increasing levels of white Gaussian noise: SNR = 4.79 dB (Fig. 4b), 3.52 dB, and 2.34 dB. Our task is to estimate the original image, along with finding optimal $q$ and $r$. We used both membrane and plate models, and in each case we used both the 'direct' method, which directly computes $H^{-1}$ in Eq. 7 and $|H|/|B|$ required in Eq. 10, and the 'AD' method, as described in Section 3, which computes those quantities approximately. We first apply a Canny detector to generate an edge map (Fig. 4g) for each noisy image, which is then converted to broken bonds. The large number (over $10^4$) of broken bonds makes the direct method impractical. In order to obtain a 'direct' result, we partition the image domain into a 5 × 5 array of blocks (one such block is delineated by the inner square in Fig. 4g), and focus on each in turn by retaining edges not more than 10 pixels from the target block (this block's outer scope is delineated by the outer square in Fig. 4g).
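Both transform matrices above can be checked for orthogonality directly; this sketch builds them from the stated formulas (1-based indices as in the text):

```python
import numpy as np

def dst_matrix(L):
    """Sine transform: X_ij = sqrt(2/(L+1)) * sin(pi*i*j/(L+1)), 1 <= i, j <= L."""
    i, j = np.meshgrid(np.arange(1, L + 1), np.arange(1, L + 1), indexing="ij")
    return np.sqrt(2.0 / (L + 1)) * np.sin(np.pi * i * j / (L + 1))

def dcst_matrix(L):
    """'DCST': X_ij = 2/sqrt(2L+1) * cos(pi*(i-1/2)*(j-1/2)/(L+1/2))."""
    i, j = np.meshgrid(np.arange(1, L + 1), np.arange(1, L + 1), indexing="ij")
    return (2.0 / np.sqrt(2 * L + 1)
            * np.cos(np.pi * (i - 0.5) * (j - 0.5) / (L + 0.5)))

# Both families are orthogonal (hence unitary, being real) for any L.
for L in (1, 5, 32):
    for X in (dst_matrix(L), dcst_matrix(L)):
        assert np.allclose(X @ X.T, np.eye(L))
```

The DST kernel is the standard DST-I; the DCST kernel matches the half-sample-shifted cosine family of [10], which mixes the two boundary conditions as argued in case (2).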
When $\hat{x}$ is inferred given this partial edge map, only its pixels within the block are considered valid and are retained. We mosaic up $\hat{x}$ from all these blocks to obtain the complete inferred image. In 'AD', we parse the contours in each block and apply the different diagonalizers accordingly, as summarized in Section 3. The performance of the three types of AD is plotted in Fig. 5, from which it is evident that in the majority of cases $\kappa < 1.5$ and $\epsilon \le 10\%$. Figs. 4e and 4f illustrate the procedure for finding the optimal $q/r$ for a membrane and a plate, respectively, as explained in Section 2.3. Note how good the cubic polynomial fit is, and that the results of AD do not deviate much from those of the direct (rigorous) method. Figs. 4c and 4d show $\hat{x}$ by a perturbed and an intact membrane model, respectively. Notice that the edges, for instance around Lena's shoulder and her hat, in Fig. 4d are more smeared than those in Fig. 4c (cf. Fig. 3). Table 1 summarizes the values of the optimal $q/r$ and the Mean-Squared Error (MSE). Our results compare favorably with those listed in the last column of the table, which is excerpted from [12].

Figure 4: (a) Original image, (b) noisy image. Estimation by (c) a perturbed membrane, and (d) an intact membrane. The criterion function $J$ for varying $q/r$ for (e) a perturbed membrane, and (f) a perturbed plate, which shares the same legend as (e): direct, cubic fit, AD, cubic fit, extremum. (g) Canny edge map.

Figure 5: Histograms of the condition number $\kappa$ after preconditioning, and the relative error $\epsilon$ as defined in Eq. 12, illustrating the performance of the preconditioners (a) DFT, (b) DST, and (c) DCST on their respective datasets. Horizontal axes indicate the number of occurrences in each bin.
Table 1: Optimal $q/r$ and MSE.

```
              membrane model              plate model            Improved
SNR       direct        AD           direct        AD            Entropic [12]
(dB)    q/r    MSE   q/r    MSE    q/r    MSE   q/r    MSE       MSE
4.79   0.456    92  0.444    92   0.067   100  0.075    98       121
3.52   0.299   104  0.311   104   0.044   111  0.049   108       138
2.34   0.217   115  0.233   115   0.033   119  0.031   121       166
```

5 Conclusions

We have shown how estimation with perturbed Gaussian Markov processes—hidden state and parameter estimation—can be carried out in a non-iterative way. We have adopted a holistic viewpoint: instead of focusing on each individual hidden node, we have taken each process as an entity under scrutiny. This paradigm shift changes the way information is stored and represented—from a scenario where the global pattern of the process is embodied entirely by local couplings to one where fan-in and fan-out weights, in addition to local couplings, reflect patterns at larger scales. Although edge detection has not been treated in this paper, our formulation is capable of it, and our preliminary results are encouraging. It may be premature at this stage to translate the operations of our model to a neural substrate; we speculate nevertheless that our approach may have relevance to understanding biological visual systems.

Acknowledgments

This work was supported in part by the TRW Foundation, ARO (Grant Nos. DAAG55-981-0293 and DAAD19-99-1-0057), and DARPA (Grant No. DAAD19-0010356).

References

[1] Z. Ghahramani and M.J. Beal. Variational inference for Bayesian mixtures of factor analysers. In Advances in Neural Information Processing Systems, volume 12. MIT Press, 2000.
[2] S.Z. Li. Markov Random Field Modeling in Computer Vision. Springer-Verlag, 1995.
[3] M.I. Jordan, Z. Ghahramani, T.S. Jaakkola, and L.K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37:183–233, 1999.
[4] F.C. Jeng and J.W. Woods. Compound Gauss-Markov random fields for image estimation. IEEE Trans. on Signal Processing, 39(3):683–697, 1991.
[5] A.
Blake and A. Zisserman. Visual Reconstruction. MIT Press, 1987.
[6] J.S. Yedidia, W.T. Freeman, and Y. Weiss. Bethe free energy, Kikuchi approximations, and belief propagation algorithms. Technical Report TR2001-16, MERL, May 2001.
[7] S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381:520–522, 1996.
[8] L. Pessoa and P. De Weerd, editors. Filling-in: From Perceptual Completion to Cortical Reorganization. Oxford University Press, 2003.
[9] R. Chan, M. Ng, and C. Wong. Sine transform based preconditioners for symmetric Toeplitz systems. Linear Algebra and its Applications, 232:237–259, 1996.
[10] S.A. Martucci. Symmetric convolution and the discrete sine and cosine transforms. IEEE Trans. on Signal Processing, 42(5):1038–1051, May 1994.
[11] H. Zhou, H. Friedman, and R. von der Heydt. Coding of border ownership in monkey visual cortex. J. Neuroscience, 20(17):6594–6611, 2000.
[12] A. Ben Hamza, H. Krim, and G.B. Unal. Unifying probabilistic and variational estimation. IEEE Signal Processing Magazine, pages 37–47, September 2002.
2005 | 95
2,916
Products of "Edge-perts"

Peter Gehler
Max Planck Institute for Biological Cybernetics
Spemannstraße 38, 72076 Tübingen, Germany
pgehler@tuebingen.mpg.de

Max Welling
Department of Computer Science
University of California Irvine
welling@ics.uci.edu

Abstract

Images represent an important and abundant source of data. Understanding their statistical structure has important applications such as image compression and restoration. In this paper we propose a particular kind of probabilistic model, dubbed the "products of edge-perts" model, to describe the structure of wavelet transformed images. We develop a practical denoising algorithm based on a single edge-pert and show state-of-the-art denoising performance on benchmark images.

1 Introduction

Images, when represented as a collection of pixel values, exhibit a high degree of redundancy. Wavelet transforms, which capture most of the second order dependencies, form the basis of many successful image processing applications such as image compression (e.g. JPEG2000) or image restoration (e.g. wavelet coring). However, the higher order dependencies cannot be filtered out by these linear transforms. In particular, the absolute values of neighboring wavelet coefficients (but not their signs) are mutually dependent. This kind of dependency is caused by the presence of edges that induce clustering of wavelet activity. Our philosophy is that by modelling this clustering effect we can potentially improve the performance of some important image processing tasks. Our model builds on earlier work in the image processing literature. In particular, the PoEdges models that we discuss in this paper can be viewed as generalizations of the models proposed in [1] and [2]. The state-of-the-art in this area is the joint model discussed in [3] based on the "Gaussian scale mixture" model (GSM).
While the GSM falls in the category of directed graphical models and has a top-down structure, the PoEdges model is best classified as an (undirected) Markov random field model and follows bottom-up semantics. The main contributions of this paper are 1) a new model to describe the higher order statistical dependencies among wavelet coefficients (section 2), 2) an efficient estimation procedure to fit the parameters of a single edge-pert model and a new technique to estimate the wavelet coefficients that participate in each such (local) model (section 3.1), and 3) a new "iterated Wiener denoising algorithm" (section 3.2). In section 4 we report on a number of experiments comparing the performance of our algorithm with several methods in the literature, and with the GSM-based method in particular.

Figure 1: Estimated (Ia) and modelled (Ib, with W = [8.64, 8.63], α = 0.28) conditional distribution of a wavelet coefficient (center component) given its upper left neighbor. The statistics were collected from the vertical subband at the lowest level of a Haar filter wavelet decomposition of the "Lena" image. Note that the "bow-tie" dependencies are captured by the PoEdges model. (IIa) Bottom-up network interpretation of the "products of edge-perts" model. (IIb) Top-down generative Gaussian scale mixture model.

2 "Product of Edge-perts"

It has long been recognized in the image processing community that wavelet transforms form an excellent basis for the representation of images. Within the class of linear transforms, they represent a compromise between many conflicting but desirable properties of image representation such as multi-scale and multi-orientation representation, locality both in space and frequency, and orthogonality resulting in decorrelation.
A particularly suitable wavelet transform, which forms the basis of the best denoising algorithms today, is the over-complete steerable wavelet pyramid [4], freely downloadable from http://www.cns.nyu.edu/∼lcv/software.html. In our experiments we have confirmed that the best results were obtained using this wavelet pyramid. In the following we describe a model for the statistical dependencies between wavelet coefficients. This model was inspired by recent studies of these dependencies (see e.g. [1, 5]). It also represents a generalization of the bivariate Laplacian model proposed in [2]. The probability distribution of the "product of edge-pert" model (PoEdges) over the wavelet coefficients $z$ has the following form,

$P(z) = \frac{1}{Z}\exp\Big[-\sum_i \Big(\sum_j W_{ij}\,|\hat{a}_j^T z|^{\beta_j}\Big)^{\alpha_i}\Big], \quad \beta_j > 0,\ \alpha_i \in (0, 1],\ W_{ij} \ge 0$

where the normalization constant $Z$ depends on all the parameters in the model $\{W_{ij}, \hat{a}_j, \beta_j, \alpha_i\}$ and where $\hat{a}$ indicates a unit-length vector. In figure 2 we show the effect of changing some parameters for a single edge-pert model (i.e. set $i = 1$ in the equation above). The parameters $\{\beta_j\}$ control the shape of the contours: for $\beta = 2$ we have elliptical contours, for $\beta = 1$ the contours are straight lines, while for $\beta < 1$ the contours curve inwards. The parameters $\{\alpha_i\}$ control the rate at which the distribution decays, i.e. the distance between iso-probability contours. The unit vectors $\{\hat{a}_j\}$ determine the orientation of the basis vectors. If the $\{\hat{a}_j\}$ are axis-aligned (as in figure 2), the distribution is symmetric w.r.t. reflections of any subset of the $\{z_i\}$ in the origin, which implies that the wavelet coefficients are necessarily decorrelated (although higher order dependencies may still remain). Finally, the weights $\{W_{ij}\}$ model the scale (inverse variance) of the wavelet coefficients.
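The unnormalized PoEdges energy is easy to evaluate; a minimal sketch (parameter values are arbitrary illustrations, and the function name is ours) also checks the sign-flip symmetry for axis-aligned $\hat{a}_j$ and the reduction to a bivariate-Laplacian-style exponent for $\beta = 2$, $\alpha = 1/2$:

```python
import numpy as np

def poedges_energy(z, W, A, beta, alpha):
    """Energy E(z) = sum_i ( sum_j W_ij * |a_j^T z|^beta_j )^alpha_i,
    so that P(z) = exp(-E(z)) / Z.  A holds unit-length columns a_j."""
    proj = np.abs(A.T @ z) ** beta      # |a_j^T z|^beta_j, one entry per j
    u = W @ proj                        # u_i = sum_j W_ij |a_j^T z|^beta_j
    return np.sum(u ** alpha)

# A single edge-pert (i = 1) on two axis-aligned coefficients.
A = np.eye(2)
W = np.array([[1.0, 0.8]])
beta = np.array([2.0, 2.0])
alpha = np.array([0.5])
z = np.array([1.3, -0.4])

# Axis-aligned => symmetric under sign flips of any subset of coefficients.
assert np.isclose(poedges_energy(z, W, A, beta, alpha),
                  poedges_energy(z * np.array([-1.0, 1.0]), W, A, beta, alpha))

# beta = 2, alpha = 1/2 gives the sqrt(sum w_j z_j^2) exponent of Eq. 2 below.
assert np.isclose(poedges_energy(z, W, A, beta, alpha),
                  np.sqrt(W[0, 0] * z[0] ** 2 + W[0, 1] * z[1] ** 2))
```

The second assertion is the sense in which the bivariate model of [2] is a special case of PoEdges.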
We mention that it is possible to entertain a larger number of basis vectors than wavelet coefficients (a so-called "over-complete basis"), which seems appropriate for some of the empirical joint histograms shown in [1]. This model describes two important statistical properties which have been observed for wavelet coefficients: 1) its marginal distributions $p(z_i)$ are peaked and have heavy tails (high kurtosis), and 2) the conditional distributions $p(z_i|z_j)$ display "bow-tie" dependencies which are indicative of clustering of wavelet coefficients (neighboring wavelet coefficients are often active together). This phenomenon is shown in figure 1Ia,b.

Figure 2: Contour plots for a single edge-pert model with (a) $\beta_{1,2} = 0.5$, $\alpha = 0.5$, (b) $\beta_{1,2} = 1$, $\alpha = 0.5$, (c) $\beta_{1,2} = 2$, $\alpha = 0.5$, (d) $\beta_{1,2} = 2$, $\alpha = 0.3$. For all figures $W_1 = 1$ and $W_2 = 0.8$.

To better understand the qualitative behavior of our model we provide the following network interpretation (see figure 1IIa,b). Input to the model (i.e. the wavelet coefficients) undergoes the nonlinear transformation $z_i \to |z_i|^{\beta_i} \to u = W|z|^\beta \to u^\alpha$. The output of this network, $u^\alpha$, can be interpreted as a "penalty" for the input: the larger this penalty is, the more unlikely this input becomes under the probabilistic model. This process is most naturally understood [6] as enforcing constraints of the form $u = W|z|^\beta \approx 0$, by penalizing violations of these constraints with $u^\alpha$. What is the reason that the PoEdges model captures the clustering of wavelet activities? Consider a local model describing the statistical structure of a patch of wavelet coefficients, and recall that the weighted sum of these activities is penalized. At a fixed position the activities are typically very small across images.
However, when an edge happens to fall within the window of the model, most coefficients become active jointly. This "sparse" pattern of activity incurs less penalty than, for instance, the same amount¹ of activity distributed equally over all images, because of the concave shape of the penalty function, i.e. $(\text{act})^\alpha < (\tfrac{1}{2}\text{act})^\alpha + (\tfrac{1}{2}\text{act})^\alpha$, where "act" is the activity level and $\alpha < 1$.

2.1 Related Work

Early wavelet denoising techniques were based on the observation that the marginal distribution of a wavelet coefficient is highly kurtotic (peaked with heavy tails). It was found that the generalized Gaussian density represents a very good fit to the empirical histograms [1, 7],

$p(z) = \frac{\alpha w}{2\Gamma(1/\alpha)}\exp\left[-(w|z|)^\alpha\right], \quad \alpha > 0,\ w > 0.$   (1)

This has led to the successful wavelet coring and shrinkage methods. A bivariate generalization of that model, describing a wavelet coefficient $z_c$ and its "parent" $z_p$ at a higher level in the pyramid jointly, was proposed in [2]. The probability density,

$p(z_c, z_p) = \frac{w}{2\pi}\exp\left[-\sqrt{w(z_c^2 + z_p^2)}\right]$   (2)

is easily seen to be a special case of the PoEdges model proposed here. This model, unlike the univariate model, captures the bow-tie dependencies described above, resulting in a significant gain in denoising performance. "Gaussian scale mixtures" (GSM) have been proposed to model even larger neighborhoods of wavelet coefficients. In particular, very good denoising results have been obtained by including within-subband neighborhoods of size 3 × 3 in addition to the parent of a wavelet coefficient [3]. A GSM is defined in terms of a precision variable $u$, the square root of which multiplies a multivariate Gaussian variable: $z = \sqrt{u}\, y$, $y \sim N[0, \Sigma]$, resulting in the following expression for the distribution over the wavelet coefficients: $p(z) = \int du\, N_z[0, u\Sigma]\, p(u)$. Here, $p(u)$ is the prior distribution for the precision variable. Hence, the GSM represents an example of a generative model with top-down semantics.
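Sampling the GSM construction $z = \sqrt{u}\,y$ shows the heavy-tailed marginals directly. The exponential prior on $u$ below is our toy choice, not the paper's; any non-degenerate $p(u)$ makes $z$ leptokurtic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u = rng.exponential(1.0, size=n)        # scale variable drawn from p(u)
y = rng.standard_normal(n)              # y ~ N(0, 1)
z = np.sqrt(u) * y                      # GSM sample

def excess_kurtosis(x):
    """0 for a Gaussian; positive for peaked, heavy-tailed distributions."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

assert excess_kurtosis(y) < 0.5         # close to Gaussian
assert excess_kurtosis(z) > 1.0         # markedly heavy-tailed
```

For this prior the population excess kurtosis of $z$ is $3E[u^2]/E[u]^2 - 3 = 3$, matching property 1) of the wavelet statistics described above.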
¹We assume the total amount of variance in wavelet activity is fixed in this comparison.

This is in contrast to the PoEdges model, which is better interpreted as a bottom-up network with log-probability proportional to its output. This difference is contrasted in figure 1IIa,b.

3 Edge-pert Denoising

Based on the PoEdges model discussed in the previous sections we now introduce a simplified model that forms the basis for a practical denoising algorithm. Recent progress in the field has indicated that it is important to model the higher order dependencies which exist between wavelet coefficients [2, 3]. This can be realized through the estimation of a joint model on a small cluster of wavelet coefficients around each coefficient. Ideally, we would like to use the full PoEdges model, but training these models from data is cumbersome. Therefore, in order to keep computations tractable, we proceed with a simplified model,

$p(z) \propto \exp\Big[-\Big(\sum_j w_j\,(\hat{a}_j^T z)^2\Big)^{\alpha}\Big].$   (3)

Compared to the full PoEdges model we use only one edge-pert and we have set $\beta_j = 2\ \forall j$.

3.1 Model Estimation

Our next task is to estimate the parameters of this model efficiently. We will learn separate models for each wavelet coefficient jointly with a small neighborhood of dependent coefficients. Each such model is estimated in three steps: I) determine the coefficients that participate in each model, II) transform each model into a decorrelated domain (this implicitly estimates the $\{\hat{a}_j\}$), and III) estimate the remaining parameters $w, \alpha$ in the decorrelated domain using moment matching. Below we describe these steps in more detail. By $z_i$, $\tilde{z}_i$ we denote the clean and noisy wavelet coefficients respectively. By $y_i$, $\tilde{y}_i$ we denote the decorrelated clean and noisy wavelet coefficients, while $n_i$ denotes the Gaussian noise random variable in the wavelet domain, i.e. $\tilde{z}_i = z_i + n_i$.
Both due to the details of the wavelet decomposition and due to the properties of the noise itself, we assume the noise to be correlated and zero mean: $E[n_i] = 0$, $E[n_i n_j] = \Sigma_{ij}$. In this paper we further assume that we know the noise covariance in the image domain, from which one can easily compute the noise covariance in the wavelet domain; however, only minor changes are needed to estimate it from the noisy image itself.

Step I: We start with a 7 × 7 neighborhood from which we will adaptively select the best candidates to include in the model. In addition, we will always include the parent coefficient in the subband at a coarser scale if it exists (this is done by first up-sampling this band, see [3]). The coefficients that participate in a model are selected by estimating their dependencies relative to the center coefficient. Anticipating that (second order) correlations will be removed by sphering, we are only interested in higher order dependencies, in particular dependencies between the variances. The following cumulant is used to obtain these estimates,

$H_{cj} = E[\tilde{z}_c^2 \tilde{z}_j^2] - 2E[\tilde{z}_c \tilde{z}_j]^2 - E[\tilde{z}_c^2]\,E[\tilde{z}_j^2]$   (4)

where $c$ is the center coefficient which will be denoised. The necessary averages $E[\cdot]$ are computed by collecting samples within each subband, assuming that the statistics are location invariant. It can be shown that this cumulant is invariant under addition of possibly correlated Gaussian noise, i.e. its value is the same for $\{z_i\}$ and $\{\tilde{z}_i\}$. Effectively, we measure the (higher order) dependencies between squared wavelet coefficients after subtraction of all correlations. Finally, we select the participants of a model centered at coefficient $\tilde{z}_c$ by ranking the positive $H_{cj}$ and picking all those which satisfy $H_{cj} > 0.7 \times \max_{j \neq c} H_{cj}$.
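The noise-invariance of the cumulant in Eq. 4 can be checked numerically: for a common-scale (GSM-like) pair of coefficients, the sample cumulant is essentially unchanged after adding correlated Gaussian noise, and it vanishes for a purely Gaussian pair. The toy signal and names below are ours:

```python
import numpy as np

def H_cum(zc, zj):
    """Sample version of Eq. 4: E[zc^2 zj^2] - 2 E[zc zj]^2 - E[zc^2] E[zj^2]."""
    return (np.mean(zc ** 2 * zj ** 2)
            - 2 * np.mean(zc * zj) ** 2
            - np.mean(zc ** 2) * np.mean(zj ** 2))

rng = np.random.default_rng(0)
n = 300_000
u = rng.exponential(1.0, size=n)           # shared scale -> variance dependence
zc = np.sqrt(u) * rng.standard_normal(n)
zj = np.sqrt(u) * rng.standard_normal(n)   # uncorrelated with zc, yet dependent

# Correlated Gaussian noise with covariance [[0.5, 0.2], [0.2, 0.5]].
Sig = np.array([[0.5, 0.2], [0.2, 0.5]])
nc, nj = np.linalg.cholesky(Sig) @ rng.standard_normal((2, n))

H_clean = H_cum(zc, zj)                    # population value is 1.0 here
H_noisy = H_cum(zc + nc, zj + nj)
assert abs(H_clean - 1.0) < 0.5
assert abs(H_noisy - H_clean) < 0.5        # invariance, up to sampling error

# For jointly Gaussian coefficients the cumulant vanishes.
g1, g2 = rng.standard_normal((2, n))
assert abs(H_cum(g1, 0.6 * g1 + 0.8 * g2)) < 0.1
```

This is why step I can rank candidate neighbors on the noisy coefficients directly, without first removing the noise.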
Step II: For each model (with a varying number of participants) we estimate the covariance,

$C_{ij} = E[z_i z_j] = E[\tilde{z}_i \tilde{z}_j] - \Sigma_{ij}$   (5)

and correct it by setting all negative eigenvalues to zero in such a way that the sum of the eigenvalues is invariant (see [3]). Statistics are again collected by sampling within a subband. Then, we perform a linear transformation to a new basis in which $\Sigma = I$ and $C$ is diagonal. This can be accomplished by the following procedure,

$RR^T = \Sigma \;\Rightarrow\; U\Lambda U^T = R^{-1}CR^{-T} \;\Rightarrow\; \tilde{y} = (RU)^{-1}\tilde{z}.$   (6)

In this new space (which is different for every wavelet coefficient) we can now assume $\hat{a}_j = e_j$, the axis-aligned basis vector.

Step III: In the decorrelated space we estimate the single edge-pert model by moment matching. The moments of the edge-pert model in this space are easily computed using

$E\Big[\Big(\sum_{j=1}^{N_p} w_j y_j^2\Big)^{\ell}\Big] = \Gamma\Big(\frac{N_p + 2\ell}{2\alpha}\Big) \Big/ \Gamma\Big(\frac{N_p}{2\alpha}\Big)$   (7)

where $N_p$ is the number of participating coefficients in the model. We note that $E[\tilde{y}_i^2] = 1 + E[y_i^2]$. This leads to the following equation for $\alpha$:

$N_p^2\, \frac{\Gamma\big(\frac{N_p+4}{2\alpha}\big)\,\Gamma\big(\frac{N_p}{2\alpha}\big)}{\Gamma\big(\frac{N_p+2}{2\alpha}\big)^2} = \sum_{i=1}^{N_p} \frac{E[\tilde{y}_i^4] - 6E[\tilde{y}_i^2] + 3}{(E[\tilde{y}_i^2] - 1)^2} + \sum_{i \neq j}^{N_p} \frac{E[\tilde{y}_i^2 \tilde{y}_j^2] - E[\tilde{y}_i^2] - E[\tilde{y}_j^2] + 1}{(E[\tilde{y}_i^2] - 1)(E[\tilde{y}_j^2] - 1)}.$   (8)

Thus we can estimate $\alpha$ by a line search, and we approximate the second term on the right hand side by $N_p(N_p - 1)$ to simplify the calculations. By further noting that the model (Eq. 3) is symmetric w.r.t. permutations of the variables $u_j = w_j y_j^2$, we find

$w_j = \Gamma\Big(\frac{N_p+2}{2\alpha}\Big) \Big/ \Big[N_p\,(E[\tilde{y}_j^2] - 1)\,\Gamma\Big(\frac{N_p}{2\alpha}\Big)\Big].$   (9)

A common strategy in the wavelet literature is to estimate the averages $E[\cdot]$ by collecting samples in a local neighborhood around the coefficient under consideration. The advantage is that the estimates adapt to the local statistics in the image. We have adopted this strategy and used an 11 × 11 box around each coefficient to collect 121 samples in the decorrelated wavelet domain. Coefficients for which $E[\tilde{y}_i^2] < 1$ are set to zero and removed from consideration.
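The whitening/diagonalizing transform of Eq. 6 can be verified on random covariances ($\Sigma$ must be positive definite; $R$ is its Cholesky factor here, one valid choice of $RR^T = \Sigma$):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def random_spd(d):
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

Sigma = random_spd(d)       # noise covariance
Csig = random_spd(d)        # (eigenvalue-corrected) signal covariance, Eq. 5

R = np.linalg.cholesky(Sigma)                           # R R^T = Sigma
M = np.linalg.solve(R, np.linalg.solve(R, Csig).T).T    # R^{-1} C R^{-T}
lam, U = np.linalg.eigh(M)                              # U Lambda U^T = M
T = np.linalg.inv(R @ U)                                # y~ = T z~, Eq. 6

# In the new basis the noise covariance is I and the signal covariance diagonal.
assert np.allclose(T @ Sigma @ T.T, np.eye(d))
assert np.allclose(T @ Csig @ T.T, np.diag(lam))
```

With the noise whitened and the signal diagonalized, assuming axis-aligned $\hat{a}_j = e_j$ in this basis is exactly the simplification step III relies on.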
The estimation of $\alpha$ depends on the fourth moment and is thus very sensitive to outliers, which is a commonly known problem with the moment matching method. We encounter the same problem, so whenever we find no estimate of $\alpha$ in $[0, 1]$ using Eq. 8 we simply set it to 0.5.

3.2 The Iterated Wiener Filter

To infer a wavelet coefficient given its noisy observation in the decorrelated wavelet domain, we maximize the a posteriori probability of our joint model. This is equivalent to,

$z^* = \arg\max_z\ \big[\log p(\tilde{z}|z) + \log p(z)\big].$   (10)

When we assume Gaussian pixel noise, this translates into,

$z^* = \arg\min_z\ \frac{1}{2}(z - \tilde{z})^T K (z - \tilde{z}) + \Big(\sum_j w_j z_j^2\Big)^{\alpha}$   (11)

where $J$ is the (linear) wavelet transform $\tilde{z} = Jx$, $K = J^{\#T}\Sigma_n^{-1}J^{\#}$ with $J^{\#} = (J^TJ)^{-1}J^T$ the pseudo-inverse of $J$ (i.e. $J^{\#}J = I$), and $\Sigma_n$ the noise covariance matrix. In the decorrelated wavelet domain we simply set $K = I$. One can now construct an upper bound on this objective by using,

$f^{\alpha} \le \gamma f + (1 - \alpha)\Big(\frac{\gamma}{\alpha}\Big)^{\frac{\alpha}{\alpha-1}}, \quad \alpha < 1.$   (12)

Figure 3: Output PSNR as a function of input PSNR for various methods on the Lena (left) and Barbara (right) images. Legend values (output PSNR at the four tested input noise levels) — Lena: GSM 35.59, 33.89, 32.67, 31.68; EP 35.60, 33.89, 32.62, 31.64; BiV 35.35, 33.67, 32.40, 31.40; LiOr 34.96, 33.05, 31.72, 30.64; LM 34.31, 32.36, 31.01, 29.98. Barbara: GSM 34.03, 31.87, 30.31, 29.12; EP 34.40, 32.32, 30.86, 29.69; BiV 33.35, 31.31, 29.80, 28.61; LiOr 33.35, 31.10, 29.44, 28.23; LM 32.57, 30.19, 28.59, 27.42. GSM: Gaussian scale mixture (3 × 3+p) [3]; EP: edge-pert; BiV: bivariate adaptive shrinkage [2]; LiOr: results from [8]; LM: 5 × 5 LAWMAP results from [9]. Dashed lines indicate results copied from the literature, while solid lines indicate that the values were (re)produced on our computer.
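Eqs. 8–9 (with the $N_p(N_p-1)$ approximation for the cross term) reduce to a one-dimensional line search for $\alpha$; a sketch using log-gamma for numerical stability, with the fallback $\alpha = 0.5$ mentioned above. The sample moments come from synthetic decorrelated coefficients — our toy data, not the paper's:

```python
import math
import numpy as np

def lhs(alpha, Np):
    """Left-hand side of Eq. 8, evaluated via log-gamma to avoid overflow."""
    a = 2.0 * alpha
    return math.exp(2 * math.log(Np)
                    + math.lgamma((Np + 4) / a) + math.lgamma(Np / a)
                    - 2 * math.lgamma((Np + 2) / a))

def fit_alpha_w(y_noisy):
    """Moment matching for the single edge-pert in the decorrelated domain."""
    Np = y_noisy.shape[0]
    m2 = np.mean(y_noisy ** 2, axis=1)            # E[y~_i^2] per coefficient
    m4 = np.mean(y_noisy ** 4, axis=1)
    # Right-hand side of Eq. 8, cross term approximated by Np(Np-1).
    rhs = np.sum((m4 - 6 * m2 + 3) / (m2 - 1) ** 2) + Np * (Np - 1)
    grid = np.linspace(0.05, 1.0, 96)             # line search for alpha
    vals = np.array([lhs(a, Np) for a in grid])
    alpha = grid[np.argmin(np.abs(vals - rhs))]
    if not 0 < alpha <= 1:
        alpha = 0.5                               # fallback from the text
    m1 = math.exp(math.lgamma((Np + 2) / (2 * alpha))
                  - math.lgamma(Np / (2 * alpha)))
    w = m1 / (Np * (m2 - 1))                      # Eq. 9
    return alpha, w

rng = np.random.default_rng(0)
u = rng.exponential(1.0, size=20_000)
y = np.sqrt(u)[None, :] * rng.standard_normal((3, 20_000))  # clustered signal
y_noisy = y + rng.standard_normal(y.shape)                  # unit-variance noise
alpha, w = fit_alpha_w(y_noisy)
assert 0 < alpha <= 1 and np.all(w > 0)
```

The grid search stands in for whatever 1-D search the authors used; any bracketing method on $|{\rm lhs}(\alpha) - {\rm rhs}|$ would do.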
This bound is saturated for $\gamma = \alpha f^{\alpha-1}$, and hence we can construct the following iterative algorithm that is guaranteed to converge to a local minimum,

$z^{t+1} = \big(K + \mathrm{Diag}[2\gamma^t w]\big)^{-1} K\tilde{z} \;\Leftrightarrow\; \gamma^{t+1} = \alpha\Big(\sum_j w_j (z_j^{t+1})^2\Big)^{\alpha-1}.$   (13)

This algorithm has a natural interpretation as an "iterated Wiener filter" (IWF), since the first step (left hand side) is an ordinary Wiener filter while the second step (right hand side) adapts the variance of the filter. A summary of the complete algorithm is provided below.

Edge-pert Denoising Algorithm
1. Decompose the image into subbands.
2. For each subband (except the low-pass residual):
   2i. Determine the coefficients participating in the joint model using Eq. 4 (includes the parent).
   2ii. Compute the noise covariance $\Sigma$.
   2iii. Compute the signal covariance using Eq. 5.
3. For each coefficient in a subband:
   3i. Transform the coefficients into the decorrelated domain using Eq. 6.
   3ii. Estimate the parameters $\{\alpha, w_i\}$ on a local neighborhood using Eq. 8 and Eq. 9.
   3iii. Denoise all wavelet coefficients in the neighborhood using the IWF from section 3.2.
   3iv. Transform the denoised cluster back to the wavelet domain and retain the "center coefficient" only.
4. Reconstruct the denoised image by inverting the wavelet transform.

4 Experiments

Denoising experiments were run on the steerable wavelet pyramid with oriented highpass residual bands (FSpyr) using 8 orientations as described in [3]. Results are reported on six images: "Lena", "Barbara", "Boat", "Fingerprint", "House" and "Peppers", averaged over 5 experiments. In each experiment an image was artificially contaminated with independent Gaussian pixel noise of some predetermined variance and denoised using 20 iterations of the proposed algorithm. To reduce artifacts at the boundaries we used "reflective boundary extensions". The images were obtained from http://decsai.ugr.es/∼javier/denoise/index.html to ensure comparison on the same set of images.
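As a minimal sketch of the iterated Wiener filter of Eq. 13 (with $K = I$, as in the decorrelated domain, where the update is elementwise), the objective of Eq. 11 is non-increasing across iterations, as the bound-optimization argument guarantees:

```python
import numpy as np

def objective(z, z_noisy, w, alpha):
    """Eq. 11 with K = I."""
    return 0.5 * np.sum((z - z_noisy) ** 2) + np.sum(w * z ** 2) ** alpha

def iwf(z_noisy, w, alpha, iters=20):
    """Iterated Wiener filter, Eq. 13: alternate a Wiener step and a gamma update."""
    z = z_noisy.copy()
    obj = [objective(z, z_noisy, w, alpha)]
    for _ in range(iters):
        gamma = alpha * np.sum(w * z ** 2) ** (alpha - 1.0)
        z = z_noisy / (1.0 + 2.0 * gamma * w)   # (K + Diag[2 gamma w])^{-1} K z~
        obj.append(objective(z, z_noisy, w, alpha))
    return z, np.array(obj)

z_noisy = np.array([3.0, -0.2, 1.5, 0.05])
w = np.array([0.8, 1.2, 1.0, 0.9])
z_hat, obj = iwf(z_noisy, w, alpha=0.5)

assert np.all(np.diff(obj) <= 1e-12)                 # monotone descent
assert np.all(np.abs(z_hat) <= np.abs(z_noisy))      # shrinkage toward zero
```

Each Wiener step exactly minimizes the quadratic surrogate obtained from Eq. 12, and the $\gamma$ update re-tightens the bound at the new iterate, which is why descent is monotone.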
In table 1 we compare performance between the PoEdges and GSM based denoising algorithms on six test images and ten different noise levels. In figure 3 we compare results on the FSpyr against various methods published in the literature [3, 2, 9] on the images "Lena" and "Barbara".

Table 1: Comparison of image denoising results (output PSNR in dB) between PoEdges (EP) and its closest competitor (GSM), for noise levels σ. All results are averaged over 5 noise samples. The GSM results are copied from [3]. Details of the PoEdges algorithm are described in the main text. Note that PoEdges outperforms GSM at low noise levels while GSM performs better at high noise levels. Also, PoEdges performs best at all noise levels on the Barbara image, while GSM is superior on the Boat image.

```
σ                  1      2      5     10     15     20     25     50     75    100
Lena        EP   48.65  43.53  38.51  35.60  33.89  32.62  31.64  28.58  26.74  25.53
            GSM  48.46  43.23  38.49  35.61  33.90  32.66  31.69  28.61  26.84  25.64
Barbara     EP   48.70  43.59  38.06  34.40  32.32  30.86  29.69  26.12  24.12  22.90
            GSM  48.37  43.29  37.79  34.03  31.86  30.32  29.13  25.48  23.65  22.61
Boat        EP   48.46  43.09  37.05  33.49  31.58  30.28  29.24  26.27  24.64  23.56
            GSM  48.44  42.99  36.97  33.58  31.70  30.38  29.37  26.38  24.79  23.75
Fingerprint EP   48.44  43.02  36.66  32.35  30.02  28.42  27.31  24.15  22.45  21.28
            GSM  48.46  43.05  36.68  32.45  30.14  28.60  27.45  24.16  22.40  21.22
House       EP   49.06  44.32  39.00  35.54  33.67  32.37  31.33  28.15  26.12  24.84
            GSM  48.85  44.07  38.65  35.35  33.64  32.39  31.40  28.26  26.41  25.11
Peppers     EP   48.50  43.20  37.40  33.79  31.74  30.29  29.13  25.69  23.85  22.50
            GSM  48.38  43.00  37.31  33.77  31.74  30.31  29.21  25.90  24.00  22.66
```

These experiments lead to some interesting conclusions. In comparing PoEdges with GSM, the general trend seems to be that PoEdges performs better at lower noise levels while the reverse is true at higher noise levels. We observe that PoEdges gives significantly better results on the "Barbara" image than any other published method (by a large margin).
According to the findings of the authors of [3]2, this stems mainly from the fact that the parameters are estimated locally, which is particularly suited to this image. Increasing the estimation window in step 3ii of the algorithm caused the denoising results to drop to the GSM solution (not reported here). Comparing the quality of the restored images in detail (as in figure 3) we conclude that the GSM produces slightly sharper edges at the expense of more artifacts. Denoising a 512 × 512 pixel image on a Pentium 4 2.8GHz PC with our adaptive neighborhood selection model took 26 seconds for the QMF9 and 440 seconds for the FSpyr. We also compared GSM and EP using a separable orthonormal pyramid (QMF9). Using this simpler orthonormal decomposition we found that the EP model outperforms GSM in all experiments described above. However, the results are significantly inferior because the wavelet representation plays a prominent role in denoising performance. These results and our Matlab implementation of the algorithm are available online3.

5 Discussion

We have proposed a general "product of edge-perts" model to capture the dependency structure in wavelet coefficients. This was turned into a practical denoising algorithm by simplifying to a single edge-pert and choosing β_j = 2 ∀j. The parameters of this model can be adapted based on the noisy observation of the image. In comparison with the closest competitor (GSM [3]) we found superior performance at low noise levels, while the reverse is true for high noise levels. Also, the PoEdges model performs better than any competitor on the Barbara image, but consistently less well than GSM on the boat image. The GSM model aims at capturing the same statistical regularities as the PoEdges but using a very different modelling paradigm: where PoEdges is best interpreted as a bottom-up constraint satisfaction model, the GSM is a causal generative model with top-down semantics.
We have found that these two modelling paradigms exhibit different denoising accuracies on some types of images, implying an opportunity for further study and improvement. The model in Eqn. 3 can be extended in a number of ways. For example, we can lift the restriction β_j = 2, allow more basis vectors â_j than coefficients, or extend the neighborhood selection to subbands of different scales and/or orientations. More substantial performance gains are expected if we can extend the single edge-pert case to a multi edge-pert model. However, approximations in the estimation of these models will become necessary to keep the denoising algorithm practical. The adaptation of α relies on empirical estimates of the fourth moment and is therefore very sensitive to outliers. We are currently investigating more robust estimators to fit α. Further performance gains may still be expected through the development of new wavelet pyramids and through modelling of new dependency structures, such as the phenomenon of phase alignment at edges.

2 Personal communication
3 http://www.kyb.mpg.de/∼pgehler

Figure 4: Comparison between (c) GSM with 3 × 3+parent [3] (PSNR 29.13) and (d) the edge-pert denoiser with parameter settings as described in the text (PSNR 29.69) on the Barbara image (cropped to 150 × 150 to enhance artifacts). The noisy image (b) has PSNR 20.17. Although the results turn out very similar, the GSM seems to be slightly less blurry at the expense of introducing more artifacts.

Acknowledgments

We would like to thank the authors of [2] and [3] for making their code available online.

References
[1] J. Huang and D. Mumford. Statistics of natural images and models. In Proc. of the Conf. on Computer Vision and Pattern Recognition, pages 1541–1547, Ft. Collins, CO, USA, 1999.
[2] L. Sendur and I.W. Selesnick. Bivariate shrinkage with local variance estimation. IEEE Signal Processing Letters, 9(12):438–441, 2002.
[3] J. Portilla, V. Strela, M. Wainwright, and E. P.
Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. Image Processing, 12(11):1338–1351, 2003.
[4] E.P. Simoncelli and W.T. Freeman. A flexible architecture for multi-scale derivative computation. In IEEE Second Int'l Conf. on Image Processing, Washington, DC, 1995.
[5] E.P. Simoncelli. Modeling the joint statistics of images in the wavelet domain. In Proc. SPIE, 44th Annual Meeting, volume 3813, pages 188–195, Denver, 1999.
[6] G.E. Hinton and Y.W. Teh. Discovering multiple constraints that are frequently approximately satisfied. In Proc. of the Conf. on Uncertainty in Artificial Intelligence, pages 227–234, 2001.
[7] E.P. Simoncelli and E.H. Adelson. Noise removal via Bayesian wavelet coring. In 3rd IEEE Int'l Conf. on Image Processing, Lausanne, Switzerland, 1996.
[8] X. Li and M.T. Orchard. Spatially adaptive image denoising under over-complete expansion. In IEEE Int'l Conf. on Image Processing, Vancouver, BC, 2000.
[9] M. Kivanc, I. Kozintsev, K. Ramchandran, and P. Moulin. Low-complexity image denoising based on statistical modeling of wavelet coefficients. IEEE Signal Proc. Letters, 6:300–303, 1999.
| 2005 | 96 | 2,917 |
Ideal Observers for Detecting Motion: Correspondence Noise Hongjing Lu Department of Psychology, UCLA Los Angeles, CA 90095 hongjing@psych.ucla.edu Alan Yuille Department of Statistics, UCLA Los Angeles, CA 90095 yuille@stat.ucla.edu Abstract We derive a Bayesian Ideal Observer (BIO) for detecting motion and solving the correspondence problem. We obtain Barlow and Tripathy’s classic model as an approximation. Our psychophysical experiments show that the trends of human performance are similar to the Bayesian Ideal, but overall human performance is far worse. We investigate ways to degrade the Bayesian Ideal but show that even extreme degradations do not approach human performance. Instead we propose that humans perform motion tasks using generic, general purpose, models of motion. We perform more psychophysical experiments which are consistent with humans using a Slow-and-Smooth model and which rule out an alternative model using Slowness. 1 Introduction Ideal Observers give fundamental limits for performing visual tasks (somewhat similar to Shannon’s limits on information transfer). They give benchmarks against which to evaluate human performance. This enables us to determine objectively what visual tasks humans are good at, and may help point the way to underlying neuronal mechanisms. For a recent review, see [1]. In an influential paper, Barlow and Tripathy [2] tested the ability of human subjects to detect dots moving coherently in a background of random dots. They derived an “ideal observer” model using techniques from Signal Detection theory [3]. They showed that their model predicted the trends of the human performance as properties of the stimuli changed, but that humans performed far worse than their model. They argued that degrading their model, by lowering the spatial resolution, would give predictions closer to human performance. Barlow and Tripathy’s model has generated considerable interest, see [4,5,6,7]. 
We formulate this motion problem in terms of Bayesian Decision Theory and derive a Bayesian Ideal Observer (BIO) model. We describe why Barlow and Tripathy's (BT) model is not fully ideal, show that it can be obtained as an approximation to the BIO, and determine conditions under which it is a good approximation. We perform psychophysical experiments under a range of conditions and show that the trends of human subjects are more similar to those of the BIO. We investigate whether degrading the Bayesian Ideal enables us to reach human performance, and conclude that it does not (without implausibly large deformations). We comment that Barlow and Tripathy's degradation model is implausible due to the nature of the approximations used. Instead we show that a generic motion detection model which uses a slow-and-smooth assumption about the motion field [8,9] gives similar performance to human subjects under a range of experimental conditions. A simpler approach using a slowness assumption alone does not match new experimental data that we present. We conclude that human observers are not ideal, in the sense that they do not perform inference using the model that the experimenter has chosen to generate the data, but may instead use a general purpose model perhaps adapted to the motion statistics of natural images.

2 Bayes Decision Theory and Ideal Observers

We now give the basic elements of Bayes Decision Theory. The input data is D and we seek to estimate a binary state W (e.g. coherent or incoherent motion, horizontal motion to the right or to the left). We assume models P(D|W) and P(W). We define a decision rule α(D) and a loss function L(α(D), W) = 1 − δ_{α(D),W}. The risk is

$$R(\alpha) = \sum_{D,W} L(\alpha(D), W)\, P(D|W)\, P(W).$$

Optimal performance is given by the Bayes rule: α* = arg min_α R(α). The fundamental limits are given by the Bayes risk: R* = R(α*). The Bayes risk is the best performance that can be achieved; it corresponds to ideal performance.
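For discrete data the Bayes rule and Bayes risk above can be computed exhaustively. A minimal sketch (our own illustrative code; distributions are passed as dictionaries):

```python
def bayes_rule_and_risk(p_d_given_w, p_w):
    """0-1 loss Bayes rule and Bayes risk for a discrete state W.

    p_d_given_w -- dict mapping state w to a dict {data d: P(d|w)}
    p_w         -- dict mapping state w to its prior P(w)
    """
    states = list(p_w)
    data = set()
    for w in states:
        data |= set(p_d_given_w[w])
    rule, risk = {}, 0.0
    for d in data:
        # Posterior-proportional scores P(d|w)P(w); the Bayes rule picks the max.
        scores = {w: p_d_given_w[w].get(d, 0.0) * p_w[w] for w in states}
        best = max(scores, key=scores.get)
        rule[d] = best
        # Under 0-1 loss, the risk contribution of d is the non-chosen mass.
        risk += sum(s for w, s in scores.items() if w != best)
    return rule, risk
```

Running this on a toy binary problem returns both the optimal decision for each observation and the Bayes risk R*, i.e. the ideal error rate.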
Barlow and Tripathy's (BT) model does not achieve the Bayes risk. This is because they used simplifications to derive it using concepts from Signal Detection Theory (SDT). SDT is essentially the application of Bayes Decision Theory to the task of signal detection but, for historical reasons, SDT restricts itself to a limited class of probability models and is unable to capture the complexity of the motion problem.

3 Experimental Setup and Correspondence Noise

We now give the details of Barlow and Tripathy's stimuli, their model, and their experiments. The stimuli consist of two image frames with N dots in each frame. The dots in the first frame are at random positions. For coherent stimuli, see figure (1), CN of the dots move coherently left or right horizontally with a fixed translation motion with displacement T. The remaining N(1 − C) dots in the second frame are generated at random. For incoherent stimuli, the dots in both frames are generated at random. Estimating motion for these stimuli requires solving the correspondence problem to match dots between frames. For coherent motion, the noise dots act as correspondence noise and make the matching harder; see the rightmost panel in figure (1). Barlow and Tripathy perform two types of binary forced choice experiments. In detection experiments, the task is to determine whether the stimulus is coherent or incoherent motion. For discrimination experiments, the goal is to determine if the motion is to the right or the left. The experiments are performed by adjusting the fraction C of coherently moving dots until the human subject's performance is at threshold (i.e. 75 percent correct). Barlow and Tripathy's (BT) model gives the proportion of dots at threshold to be C_θ = 1/√(Q − N), where Q is the size of the image lattice. This is approximately 1/√Q (because N ≪ Q) and so is independent of the density of dots.
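The BT threshold formula is simple enough to evaluate directly; for example, on a hypothetical Q = 100 × 100 lattice with N = 100 dots it predicts a threshold of about 1%. A one-line sketch:

```python
import math

def bt_threshold(Q, N):
    """Barlow-Tripathy coherence threshold C_theta = 1/sqrt(Q - N).
    For N << Q this is approximately 1/sqrt(Q), independent of dot density."""
    return 1.0 / math.sqrt(Q - N)
```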
Barlow and Tripathy compare the thresholds of the human subjects with those of their model for a range of experimental conditions which we will discuss in later sections.

Figure 1: The left three panels show coherent stimuli with N = 20, C = 0.1; N = 20, C = 0.5; and N = 20, C = 1.0 respectively. The closed and open circles denote dots in the first and second frame respectively. The arrows show the motion of those dots which are moving coherently. Correspondence noise is illustrated by the far right panel, showing that a dot in the first frame has many candidate matches in the second frame.

4 The Bayesian Ideal Model

We now compute the Bayes rule and Bayes risk by taking into account exactly how the data is generated. We denote the dot positions in the first and second frames by D = ({x_i : i = 1, ..., N}, {y_a : a = 1, ..., N}). We define correspondence variables V_{ia}: V_{ia} = 1 if x_i → y_a, and V_{ia} = 0 otherwise. The generative model for the data is given by:

$$P(D|\mathrm{Coh}, T) = \sum_{\{V_{ia}\}} P(\{y_a\}|\{x_i\}, \{V_{ia}\}, T)\, P(\{V_{ia}\})\, P(\{x_i\}) \quad \text{(coherent)}, \qquad P(D|\mathrm{Incoh}) = P(\{y_a\})\, P(\{x_i\}) \quad \text{(incoherent)}. \tag{1}$$

The prior distributions for the dot positions, P({x_i}) and P({y_a}), allow all configurations of the dots to be equally likely. They are therefore of the form P({x_i}) = P({y_a}) = (Q−N)!/Q!, where Q is the number of lattice points. The model for coherent motion is

$$P(\{y_a\}|\{x_i\}, \{V_{ia}\}, T) = \frac{(Q-N)!}{(Q-CN)!} \prod_{ia} \big(\delta_{y_a,\, x_i+T}\big)^{V_{ia}}.$$

We set the prior P({V_{ia}}) to be the uniform distribution, subject to the constraint Σ_{ia} V_{ia} = CN (since only CN dots move coherently). This gives:

$$P(D|\mathrm{Incoh}) = \left(\frac{(Q-N)!}{Q!}\right)^2, \qquad P(D|\mathrm{Coh}, T) = \frac{(Q-N)!}{Q!}\, \frac{(Q-N)!}{(Q-CN)!} \left(\frac{(N-CN)!}{N!}\right)^2 (CN)! \sum_{\{V_{ia}\}} \prod_{ia} \big(\delta_{y_a,\, x_i+T}\big)^{V_{ia}}.$$

These can be simplified further by observing that

$$\sum_{\{V_{ia}\}} \prod_{ia} \big(\delta_{y_a,\, x_i+T}\big)^{V_{ia}} = \frac{\Psi!}{(\Psi-M)!\, M!},$$

where M = CN and Ψ is the total number of matches, i.e.
the number of dots in the first frame that have a corresponding dot at displacement T in the second frame (this includes "fake" matches due to chance alignment of noise dots in the two frames). The Bayes rules for performing the tasks are given by testing the log-likelihood ratios: (i) log[P(D|Incoh)/P(D|Coh, T)] for detection (i.e. coherent versus incoherent), and (ii) log[P(D|Coh, −T)/P(D|Coh, T)] for discrimination (i.e. motion to the right or to the left). For detection, the log-likelihood ratio is a function of Ψ. For discrimination, the log-likelihood ratio is a function of the number of matches to the right, Ψ_r, and to the left, Ψ_l. It is straightforward to calculate the Bayes risk and determine coherence thresholds. We can rederive Barlow and Tripathy's model as an approximation to the Bayesian Ideal. They make two approximations: (i) they model the distribution of Ψ as binomial, and (ii) they use d′. Both approximations are very good near threshold, except for small N. The use of d′ can be justified if P(Ψ|Coh, T) and P(Ψ|Incoh) are Gaussians with similar variance. This is true for large N = 1000 and a range of C, but not so good for small N = 100; see figure (2).

Figure 2: We plot P(Ψ|Coh, T) and P(Ψ|Incoh), shown as P(Ψ|C) and P(Ψ|N) respectively, for a range of N and C. One of Barlow and Tripathy's two approximations is justified if the distributions are Gaussian with the same variance. This is true for large N (left two panels) but fails for small N (right two panels). Note that human thresholds are roughly 30 times higher than for the BIO (the scales on the graphs differ).

We computed the coherence thresholds for the BIO and BT models for N = 100 to N = 1000; see the second and fourth panels in figure (3).
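The sufficient statistic Ψ is just a count of (true or chance) matches at displacement T, which can be sketched as follows (dot positions as lattice tuples; our own illustrative code):

```python
def count_matches(frame1, frame2, T):
    """Psi: number of dots in the first frame whose position shifted by T
    coincides with some dot in the second frame (includes chance matches)."""
    frame2_set = set(frame2)
    return sum(1 for x in frame1 if (x[0] + T[0], x[1] + T[1]) in frame2_set)
```

The detection statistic is then a function of this count alone, and the discrimination statistic compares the counts at +T and −T.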
As described earlier, the BT threshold is approximately independent of the number N of dots. Our computations showed that the BIO threshold is also roughly constant except for small N (this is not surprising in light of figure (2)). This motivated psychophysics experiments to determine how humans perform for small N (this range of dots was not explored in Barlow and Tripathy's experiments). All our data points are from 300 trials using QUEST, so the error bars are so small that we do not include them. We performed the detection and discrimination tasks with translation motion T = 16 (as in Barlow and Tripathy). For both detection and discrimination, the human subjects' thresholds showed similar trends to the thresholds for the BIO and BT models. But human performance at small N is more consistent with the BIO; see figure (3).

Figure 3: The left two panels show detection thresholds: human subjects (far left) and BIO and BT thresholds (left). The right two panels show discrimination thresholds: human subjects (right) and BIO and BT (far right).

But probably the most striking aspect of figure (3) is how poorly humans perform compared to the models. The thresholds for the BIO are always higher than those for BT, but these differences are almost negligible compared to the differences with the human subjects. The experiments also show that the human subject trends differ from the models at large N. But these are extreme conditions where there are dots on most points of the image lattice.

5 Degrading the Ideal Observer Models

We now degrade the Bayes Ideal model to see if we can obtain human performance.
We consider two mechanisms: (A) humans do not know the precise value of the motion translation T; (B) humans have poor spatial resolution. We will also combine both mechanisms. For (A), we model lack of knowledge of the velocity T by summing over different motions. We generate the stimuli as before from P(D|Incoh) or P(D|Coh, T), but we make the decision by thresholding log[Σ_T P(D|Coh, T)P(T) / P(D|Incoh)]. For (B), we model lack of spatial resolution by replacing P({y_a}|{x_i}, {V_{ia}}, T) = ((Q−N)!/(Q−CN)!) ∏_{ia} (δ_{y_a, x_i+T})^{V_{ia}} with P({y_a}|{x_i}, {V_{ia}}, T) = ((Q−N)!/(Q−CN)!) ∏_{ia} f_W(y_a, x_i + T)^{V_{ia}}. Here W is the width of a spatial window, so that f_W(a, b) = 1/W² if |a − b| < W, and f_W(a, b) = 0 otherwise. Our calculations, see figure (4), show that neither (A) nor (B) nor their combination is sufficient to account for the poor performance of human subjects. Lack of knowledge of the correct motion (and consequently summing over several models) does little to degrade performance. Decreasing spatial resolution does degrade performance, but even huge degradations are insufficient to reach human levels. Barlow and Tripathy [2] argue that they can degrade their model to reach human performance, but the degradations are huge and they occur in conditions (e.g. N = 50 or N = 100) where their model is not a good approximation to the true Bayesian Ideal Observer.

Figure 4: Comparing the degraded models to human performance. We use a log-log plot because the differences between human and model thresholds are very large.

6 Slowness and Slow-and-Smooth

We now consider an alternative explanation for why human performance differs so greatly from the Bayesian Ideal Observer. Perhaps human subjects do not use the ideal model (which is only known to the designer of the experiments) and instead use a general purpose motion model.
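The window function f_W used in degradation (B) can be written down directly. In this sketch positions are 2-D tuples, and applying |a − b| < W per component is our own reading of the definition:

```python
def f_window(a, b, W):
    """Spatial-uncertainty window f_W: uniform mass 1/W^2 when a lies within
    the W-window around b, zero otherwise (per-component comparison assumed)."""
    if abs(a[0] - b[0]) < W and abs(a[1] - b[1]) < W:
        return 1.0 / W**2
    return 0.0
```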
We now consider two possible models: (i) a slowness model, and (ii) a slow-and-smooth model.

Figure 5: The coherence threshold as a function of N for different translation motions T. From left to right: human subject (HL), human subject (RK), 2DNN (shown for T = 16 only), and 1DNN. In the two right panels we have drawn the average human performance for comparison.

The slowness model is partly motivated by Ullman's minimal mapping theory [10] and partly by the design of practical computer vision tracking systems. This model solves the correspondence problem by simply matching a dot in the first frame to the closest dot in the second frame. We consider a 2D nearest neighbour model (2DNN) and a 1D nearest neighbour model (1DNN), for which the matching is constrained to horizontal directions only. After the motion has been calculated we perform a log-likelihood test to solve the discrimination and detection tasks. This enables us to calculate coherence thresholds; see figure (5). Both 1DNN and 2DNN predict that correspondence will be easy for small translation motions even when the number of dots is very large. This motivates a new class of experiments in which we vary the translation motion. Our experiments show that 1DNN and 2DNN are poor fits to human performance. Human performance thresholds are relatively insensitive to the number N of dots and the translation motion T; see the two left panels in figure (5). By contrast, the 1DNN and 2DNN thresholds are either far lower than humans' for small N or far higher at large N, with a transition that depends on T.
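The slowness (nearest-neighbour) matching rule is straightforward to sketch. The code below is our own illustration, with horizontal_only=True giving the 1DNN restriction:

```python
def nearest_neighbour_flow(frame1, frame2, horizontal_only=False):
    """Slowness-model correspondence: match each first-frame dot to the
    closest second-frame dot (2DNN); with horizontal_only=True only dots
    on the same row are candidates (1DNN)."""
    flow = []
    for x in frame1:
        candidates = [y for y in frame2 if (not horizontal_only) or y[1] == x[1]]
        if not candidates:
            flow.append(None)  # no admissible match for this dot
            continue
        y = min(candidates, key=lambda c: (c[0] - x[0]) ** 2 + (c[1] - x[1]) ** 2)
        flow.append((y[0] - x[0], y[1] - x[1]))
    return flow
```

A log-likelihood test on the resulting flow vectors then yields the detection and discrimination decisions, as in the text.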
We conclude that the 1DNN and 2DNN models do not match human performance.

Figure 6: The motion flows from Slow-and-Smooth for N = 100 as functions of C and T. From left to right: C = 0.1, C = 0.2, C = 0.3, C = 0.5. From top to bottom: T = 4, T = 8, T = 16. The closed and open circles denote dots in the first and second frame respectively. The arrows indicate the motion flow specified by the Slow-and-Smooth model.

We now consider the Slow-and-Smooth model [8,9], which has been shown to account for a range of motion phenomena. We use a formulation [8] that was specifically designed for dealing with the correspondence problem. This gives a model of the form P(V, v|{x_i}, {y_a}) = (1/Z) exp(−E[V, v]/T_m), where

$$E[V, v] = \sum_{i=1}^{N} \sum_{a=1}^{N} V_{ia}\big(y_a - x_i - v(x_i)\big)^2 + \lambda \|Lv\|^2 + \zeta \sum_{i=1}^{N} V_{i0}, \tag{2}$$

and L is an operator that imposes the slow-and-smooth prior on the motion field and depends on a parameter σ; see Yuille and Grzywacz for details [8]. We impose the constraint Σ_{a=0}^{N} V_{ia} = 1 for all i, which enforces that each point i in the first frame is either unmatched (V_{i0} = 1) or matched to a point a in the second frame. We implemented this model using an EM algorithm to estimate the motion field v(x) that maximizes P(v|{x_i}, {y_a}) = Σ_V P(V, v|{x_i}, {y_a}). The parameter settings are T_m = 0.001, λ = 0.5, ζ = 0.01, σ = 0.2236. (The units of length are normalized by the size of the image.) The size of σ determines the spatial scale of the interaction between dots [8]. These parameter settings estimate correct motion directions in the condition in which all dots move coherently, C = 1.0. The following results, see figure (6), show that for 100 dots (N = 100) the results of the Slow-and-Smooth model are similar to those of the human subjects for a range of different translation motions.
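To make Eqn. 2 concrete, the sketch below evaluates the energy for a given assignment matrix V and motion field v. The smoothness term ||Lv||² depends on the operator L of [8], so here it is passed in as a precomputed number (an assumption of this sketch); everything else follows the equation directly.

```python
import numpy as np

def slow_smooth_energy(X, Y, V, v, lam, zeta, smoothness_penalty):
    """Evaluate the Slow-and-Smooth energy E[V, v] of Eqn. 2.

    X, Y -- (N, 2) dot positions in the first and second frames
    V    -- (N, N+1) assignment matrix; column 0 is the unmatched state V_i0
    v    -- (N, 2) motion field evaluated at the first-frame dots, v(x_i)
    smoothness_penalty -- stand-in for ||Lv||^2, supplied by the caller
    """
    data = 0.0
    N = len(X)
    for i in range(N):
        for a in range(N):
            # Data term: V_ia * ||y_a - x_i - v(x_i)||^2
            r = Y[a] - X[i] - v[i]
            data += V[i, a + 1] * float(r @ r)
    # Penalty zeta * sum_i V_i0 for leaving first-frame dots unmatched.
    unmatched = zeta * float(V[:, 0].sum())
    return data + lam * smoothness_penalty + unmatched
```

In the full model this energy is minimized over both V and v by the EM algorithm described in the text.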
Slow-and-Smooth starts giving coherence thresholds between C = 0.2 and C = 0.3, consistent with human performance. Lower thresholds occurred for slower coherent translations, in agreement with human performance. Slow-and-Smooth also gives thresholds similar to human performance when we alter the number N of dots; see figure (7). Once again, Slow-and-Smooth starts giving the correct horizontal motion between C = 0.2 and C = 0.3.

Figure 7: The motion fields of Slow-and-Smooth for T = 16 as a function of C and N. From left to right: C = 0.1, C = 0.2, C = 0.3, C = 0.5. From top to bottom: N = 50, N = 100, N = 1000. Same conventions as for the previous figure.

7 Summary

We defined a Bayesian Ideal Observer (BIO) for correspondence noise and showed that Barlow and Tripathy's (BT) model [2] can be obtained as an approximation. We performed psychophysical experiments which showed that the trends of human performance were more similar to those of the BIO (when it differed from BT). We attempted to account for humans' poor performance (compared to the BIO) by allowing for degradations of the model, such as poor spatial resolution and uncertainty about the precise translation velocity. We concluded that these degradations had to be implausibly large to account for the poorness of human performance. We noted that Barlow and Tripathy's degradation model [2] takes them into a regime where their model is a bad approximation to the BIO. Instead, we investigated the possibility that human observers perform these motion tasks using generic probability models for motion, possibly adapted to the statistics of motion in the natural world. Further psychophysical experiments showed that human performance was inconsistent with a model that prefers slow motion alone, but was consistent with the Slow-and-Smooth model [8,9].
We conclude with two metapoints. Firstly, it is possible to design ideal observer models for complex stimuli using techniques from Bayes decision theory. There is no need to restrict oneself to the traditional models described in classic signal detection books such as Green and Swets [3]. Secondly, human performance at visual tasks may be based on generic models, such as Slow-and-Smooth, rather than the ideal models for the experimental tasks (known only to the experimenter).

Acknowledgements

We thank Zili Liu for helpful discussions. We gratefully acknowledge funding support from the American Association of University Women (HL), NSF 0413214, and the W.M. Keck Foundation (ALY).

References
[1] Geisler, W.S. (2002) Ideal observer analysis. In L. Chalupa and J. Werner (Eds.), The Visual Neurosciences. Boston: MIT Press, 825-837.
[2] Barlow, H., and Tripathy, S.P. (1997) Correspondence noise and signal pooling in the detection of coherent visual motion. Journal of Neuroscience, 17(20), 7954-7966.
[3] Green, D.M., and Swets, J.A. (1966) Signal Detection Theory and Psychophysics. New York: Wiley.
[4] Morrone, M.C., Burr, D.C., and Vaina, L.M. (1995) Two stages of visual processing for radial and circular motion. Nature, 376(6540), 507-509.
[5] Neri, P., Morrone, M.C., and Burr, D.C. (1998) Seeing biological motion. Nature, 395(6705), 894-896.
[6] Song, Y., and Perona, P. (2000) A computational model for motion detection and direction discrimination in humans. IEEE Computer Society Workshop on Human Motion, Austin, Texas.
[7] Wallace, J.M., and Mamassian, P. (2004) The efficiency of depth discrimination for non-transparent and transparent stereoscopic surfaces. Vision Research, 44, 2253-2267.
[8] Yuille, A.L., and Grzywacz, N.M. (1988) A computational theory for the perception of coherent visual motion. Nature, 333, 71-74.
[9] Weiss, Y., and Adelson, E.H. (1998) Slow and smooth: A Bayesian theory for the combination of local motion signals in human vision. Technical Report 1624.
Massachusetts Institute of Technology.
[10] Ullman, S. (1979) The Interpretation of Visual Motion. MIT Press, Cambridge, MA.
| 2005 | 97 | 2,918 |
Selecting Landmark Points for Sparse Manifold Learning J. G. Silva ISEL/ISR R. Conselheiro Emidio Navarro 1950.062 Lisbon, Portugal jgs@isel.ipl.pt J. S. Marques IST/ISR Av. Rovisco Pais 1949-001 Lisbon, Portugal jsm@isr.ist.utl.pt J. M. Lemos INESC-ID/IST R. Alves Redol, 9 1000-029 Lisbon, Portugal jlml@inesc-id.pt Abstract There has been a surge of interest in learning non-linear manifold models to approximate high-dimensional data. Both for computational complexity reasons and for generalization capability, sparsity is a desired feature in such models. This usually means dimensionality reduction, which naturally implies estimating the intrinsic dimension, but it can also mean selecting a subset of the data to use as landmarks, which is especially important because many existing algorithms have quadratic complexity in the number of observations. This paper presents an algorithm for selecting landmarks, based on LASSO regression, which is well known to favor sparse approximations because it uses regularization with an l1 norm. As an added benefit, a continuous manifold parameterization, based on the landmarks, is also found. Experimental results with synthetic and real data illustrate the algorithm. 1 Introduction The recent interest in manifold learning algorithms is due, in part, to the multiplication of very large datasets of high-dimensional data from numerous disciplines of science, from signal processing to bioinformatics [6]. As an example, consider a video sequence such as the one in Figure 1. In the absence of features like contour points or wavelet coefficients, each image of size 71 × 71 pixels is a point in a space of dimension equal to the number of pixels, 71 × 71 = 5041. The observation space is, therefore, R5041. More generally, each observation is a vector y ∈ Rm where m may be very large. 
A reasonable assumption, when facing an observation space of possibly tens of thousands of dimensions, is that the data are not dense in such a space, because several of the measured variables must be dependent.

Figure 1: Example of a high-dimensional dataset: each image of size 71 × 71 pixels is a point in R^5041.

In fact, in many problems of interest, there are only a few free parameters, which are embedded in the observed variables, frequently in a non-linear way. Assuming that the number of free parameters remains the same throughout the observations, and also assuming smooth variation of the parameters, one is in fact dealing with geometric restrictions which can be well modelled as a manifold. Therefore, the data must lie on, or near (accounting for noise), a manifold embedded in observation, or ambient, space. Learning this manifold is a natural approach to the problem of modelling the data since, besides computational issues, sparse models tend to have better generalization capability. In order to achieve sparsity, considerable effort has been devoted to reducing the dimensionality of the data by some form of non-linear projection. Several algorithms ([10], [8], [3]) have emerged in recent years that follow this approach, which is closely related to the problem of feature extraction. In contrast, the problem of finding a relevant subset of the observations has received less attention. It should be noted that the complexity of most existing algorithms is, in general, dependent not only on the dimensionality but also on the number of observations. An important example is ISOMAP [10], where the computational cost is quadratic in the number of points, which has motivated the L-ISOMAP variant [3], which uses a randomly chosen subset of the points as landmarks (L is for Landmark). The proposed algorithm uses, instead, a principled approach to select the landmarks, based on the solutions of a regression problem minimizing a regularized cost functional.
When the regularization term is based on the l1 norm, the solution tends to be sparse. This is the motivation for using the Least Absolute value Subset Selection Operator (LASSO) [5]. Finding the LASSO solutions used to require solving a quadratic programming problem, until the development of the Least Angle Regression (LARS1) procedure [4], which is much faster (the cost is equivalent to that of ordinary least squares) and not only gives the LASSO solutions but also provides an estimator of the risk as a function of the regularization tuning parameter. This means that the correct amount of regularization can be found automatically. In the specific context of selecting landmarks for manifold learning, with some care in the LASSO problem formulation, one is able to avoid a difficult problem of sparse regression with Multiple Measurement Vectors (MMV), which has received considerable interest in its own right [2]. The idea is to use local information, found by local PCA as usual, and preserve the smooth variation of the tangent subspace over a larger scale, taking advantage of any known embedding. This is a natural extension of the Tangent Bundle Approximation (TBA) algorithm, proposed in [9], since the principal angles, which TBA computes anyway, are readily available and appropriate for this purpose. Nevertheless, the method proposed here is independent of TBA and could, for instance, be plugged into a global procedure like L-ISOMAP. The algorithm avoids costly global computations, that is, it doesn't attempt to preserve geodesic distances between faraway points, and yet, unlike most local algorithms, it is explicitly designed to be sparse while retaining generalization ability. The remainder of this introduction formulates the problem and establishes the notation.

1 The S in LARS stands for Stagewise and LASSO, an allusion to the relationship between the three algorithms.
The selection procedure itself is covered in section 2, which also provides a quick overview of the LASSO and LARS methods. Results are presented in section 3 and then discussed in section 4.

1.1 Problem formulation

The problem can be formulated as follows: given N vectors y ∈ R^m, suppose that the y can be approximated by a differentiable n-manifold M embedded in R^m. This means that M can be charted through one or more invertible and differentiable mappings of the type

g_i(y) = x    (1)

to vectors x ∈ R^n, so that open sets P_i ⊂ M, called patches, whose union covers M, are diffeomorphically mapped onto other open sets U_i ⊂ R^n, called parametric domains. R^n is the lower dimensional parameter space and n is the intrinsic dimension of M. The g_i are called charts, and manifolds with complex topology may require several g_i. Equivalently, since the charts are invertible, inverse mappings h_i : R^n → R^m, called parameterizations, can also be found. Arranging the original data in a matrix Y ∈ R^{m×N}, with the y as column vectors, and assuming, for now, only one mapping g, the charting process produces a matrix X ∈ R^{n×N}:

Y = \begin{bmatrix} y_{11} & \cdots & y_{1N} \\ \vdots & \ddots & \vdots \\ y_{m1} & \cdots & y_{mN} \end{bmatrix}, \qquad X = \begin{bmatrix} x_{11} & \cdots & x_{1N} \\ \vdots & \ddots & \vdots \\ x_{n1} & \cdots & x_{nN} \end{bmatrix}    (2)

The n rows of X are sometimes called features or latent variables. In manifold learning one often intends to estimate the correct intrinsic dimension, n, as well as the chart g, or at least a column-to-column mapping from Y to X. In the present case, this mapping will be assumed known, and so will n. What is intended is to select a subset of the columns of X (or of Y, since the mapping between them is known) to use as landmarks, while retaining enough information about g, resulting in a reduced n × N′ matrix with N′ < N. N′ is the number of landmarks, and should also be automatically determined.
Preserving g is equivalent to preserving its inverse mapping, the parameterization h, which is more practical because it allows the following generative model:

y = h(x) + η    (3)

in which η is zero-mean Gaussian observation noise. How can one find the fewest possible landmarks so that h can still be well approximated?

2 Landmark selection

2.1 Linear regression model

To solve the problem, it is proposed to start by converting the non-linear regression in (3) into a linear regression by offloading the non-linearity onto a kernel, as described in numerous works, such as [7]. Since there are N columns in X to start with, let K be a square, N × N, symmetric positive semidefinite matrix such that

K = \{k_{ij}\}, \quad k_{ij} = K(x_i, x_j), \quad K(x, x_j) = \exp\!\left(-\frac{\|x - x_j\|^2}{2\sigma_K^2}\right).    (4)

The function K can be readily recognized as a Gaussian kernel. This allows the reformulation, in matrix form, of (3) as

Y^T = KB + E,    (5)

where B, E ∈ R^{N×m} and each line of E is a realization of η above. Still, it is difficult to proceed directly from (5), because neither the response, Y^T, nor the regression parameters, B, are column vectors. This leads to a Multiple Measurement Vectors (MMV) problem, and while there is nothing to prevent solving it separately for each column, this makes it harder to impose sparsity in all columns simultaneously. Two alternative approaches present themselves at this point:

• Solve a sparse regression problem for each column of Y^T (and the corresponding column of B), and find a way to force several lines of B to zero.
• Re-formulate (5) in a way that turns it into a single measurement vector problem.

The second approach is better studied, and it will be the one followed here. Since the parameterization h is known and must be, at the very least, bijective and continuous, it must preserve the smoothness of quantities like the geodesic distance and the principal angles.
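As a concrete illustration, the Gaussian kernel matrix of Eq. (4) takes only a few lines to build. The sketch below uses our own naming (not the paper's) and computes K for the columns of a feature matrix X:

```python
import numpy as np

def gaussian_kernel_matrix(X, Z, sigma):
    """K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2)) for columns x_i of X, z_j of Z."""
    # pairwise squared Euclidean distances between all column pairs
    d2 = (np.sum(X ** 2, axis=0)[:, None] + np.sum(Z ** 2, axis=0)[None, :]
          - 2.0 * X.T @ Z)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

# toy example: n = 2 features, N = 100 points
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 100))
K = gaussian_kernel_matrix(X, X, sigma=1.0)
```

By construction K is symmetric with unit diagonal and positive semidefinite, which is what the linear model (5) requires of the design matrix.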
Therefore, it is proposed to re-formulate (5) as

θ = Kβ + ε    (6)

where the new response, θ ∈ R^N, as well as β ∈ R^N and ε ∈ R^N, are now column vectors, allowing the use of known subset selection procedures. The elements of θ can be, for example, the geodesic distances to the observation y_µ = h(x_µ) corresponding to the mean, x_µ, of the columns of X. This would be a possibility if an algorithm like ISOMAP were used to find the chart from Y to X. However, since the whole point of using landmarks is to know them beforehand, so as to avoid having to compute N × N geodesic distances, this is not the most interesting alternative. A better way is to use a computationally lighter quantity like the maximum principal angle between the tangent subspace at y_µ, T_{y_µ}(M), and the tangent subspaces at all other y. Given a point y_0 and its k nearest neighbors, finding the tangent subspace can be done by local PCA. The sample covariance matrix S can be decomposed as

S = \frac{1}{k} \sum_{i=1}^{k} (y_i - y_0)(y_i - y_0)^T    (7)
S = VDV^T    (8)

where the columns of V are the eigenvectors v_i and D is a diagonal matrix containing the eigenvalues λ_i, in descending order. The eigenvectors form an orthonormal basis aligned with the principal directions of the data. They can be divided into two groups: tangent and normal vectors, spanning the tangent and normal subspaces, with dimensions n and m − n, respectively. Note that m − n is the codimension of the manifold. The tangent subspace is spanned by the n most important eigenvectors. The principal angles between the tangent subspaces at two different points y_0 can be determined from the column spaces of the corresponding matrices V. An in-depth description of the principal angles, as well as efficient algorithms to compute them, can be found, for instance, in [1]. Note that, should the T_y(M) be already available from the eigenvectors found during some local PCA analysis, e.g., during estimation of the intrinsic dimension, there would be little extra computational burden. An example is [9], where the principal angles are already an integral part of the procedure, namely for partitioning the manifold into patches. Thus, it is proposed to set θ_j equal to the maximum principal angle between T_{y_µ}(M) and T_{y_j}(M), where y_j is the j-th column of Y. It remains to be explained how to achieve a sparse solution to (6).

2.2 Sparsity with LASSO and LARS

The idea is to find an estimate β̂ that minimizes the functional

E = \|θ - K\hat{β}\|^2 + γ \|\hat{β}\|_q^q.    (9)

Here, \|\hat{β}\|_q denotes the l_q norm of β̂, i.e. \left(\sum_{i=1}^{N} |\hat{β}_i|^q\right)^{1/q}, and γ is a tuning parameter that controls the amount of regularization. For the most sparseness, the ideal value of q would be zero. However, minimizing E with the l_0 norm is, in general, prohibitive in computational terms. A sub-optimal strategy is to use q = 1 instead. This is the usual formulation of a LASSO regression problem. While minimization of (9) can be done using quadratic programming, the recent development of the LARS method has made this unnecessary. For a detailed description of LARS and its relationship with the LASSO, see [4]. Very briefly, LARS starts with β̂ = 0 and adds covariates (the columns of K) to the model according to their correlation with the prediction error vector, θ − Kβ̂, setting the corresponding β̂_j to a value such that another covariate becomes equally correlated with the error and is itself added to the model (it becomes active). LARS then proceeds in a direction equiangular to all the active β̂_j, and the process is repeated until all covariates have been added. There are a total of N steps, each of which adds a new β̂_j, making it non-zero. With slight modifications, these steps correspond to a sampling of the tuning parameter γ in (9) under the LASSO.
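The local-PCA tangent estimate and the principal angles just described can be sketched numerically. In the snippet below (a hedged illustration: the helper names, the neighbourhood size k, and the use of scipy.linalg.subspace_angles are our choices, not prescribed by the paper), the tangent direction of a noisy line in R^3 is recovered at two points, and the maximum principal angle between them is near zero, as expected for a flat manifold:

```python
import numpy as np
from scipy.linalg import subspace_angles

def tangent_basis(Y, i, k, n):
    """Orthonormal basis (m x n) of the tangent subspace at column i of Y,
    estimated by local PCA over the k nearest neighbours, as in Eqs. (7)-(8)."""
    y0 = Y[:, i]
    d = np.linalg.norm(Y - y0[:, None], axis=0)
    nbrs = np.argsort(d)[1:k + 1]            # k nearest neighbours, excluding y0 itself
    D = Y[:, nbrs] - y0[:, None]             # centered neighbourhood
    U, _, _ = np.linalg.svd(D, full_matrices=False)
    return U[:, :n]                          # top-n principal directions

# noisy line in R^3: a 1-manifold, so tangent subspaces should agree everywhere
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
Y = np.vstack([t, 2 * t, -t]) + 1e-3 * rng.normal(size=(3, 200))
T0 = tangent_basis(Y, 50, k=10, n=1)
T1 = tangent_basis(Y, 150, k=10, n=1)
theta_max = subspace_angles(T0, T1)[0]       # largest principal angle, in radians
```

`subspace_angles` returns the angles in descending order, so index 0 gives the maximum principal angle used for the response θ_j.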
Moreover, [4] shows that the risk, as a function of the number, p, of non-zero β̂_j, can be estimated (under mild assumptions) as

R(\hat{β}_p) = \|θ - K\hat{β}_p\|^2 / \bar{σ}^2 - N + 2p    (10)

where σ̄² can be found from the unconstrained least squares solution of (6). Computing R(β̂_p) requires no more than the β̂_p themselves, which are provided by LARS anyway.

2.3 Landmarks and parameterization of the manifold

The landmarks are the columns x_j of X (or of Y) with the same indexes j as the non-zero elements of β_p, where

p = \arg\min_p R(β_p).    (11)

There are N′ = p landmarks, because there are p non-zero elements in β_p. This criterion ensures that the landmarks are the kernel centers that minimize the risk of the regression in (6). As an interesting byproduct, regardless of whether h was a continuous or point-to-point mapping to begin with, it is now also possible to obtain a new, continuous parameterization h_{B,X′} by solving a reduced version of (5):

Y^T = K′B + E    (12)

where K′ only has N′ columns, with the same indexes as X′. In fact, K′ ∈ R^{N×N′} is no longer square. Also, now B ∈ R^{N′×m}. The new, smaller regression (12) can be solved separately for each column of Y^T and B by unconstrained least squares. For a new feature vector, x, in the parametric domain, a new vector y ∈ M in observation space can be synthesized by

y = h_{B,X′}(x) = [y_1(x) \ldots y_m(x)]^T, \qquad y_j(x) = \sum_{x_i \in X′} b_{ij} K(x_i, x)    (13)

where the {b_ij} are the elements of B.

3 Results

The algorithm has been tested on two synthetic datasets: the traditional "swiss roll" and a sphere, both with 1000 points embedded in R^10, with a small amount of isotropic Gaussian noise (σ_y = 0.01) added in all dimensions, as shown in Figure 2. These manifolds have intrinsic dimension n = 2. A global embedding for the swiss roll was found by ISOMAP, using k = 8. On the other hand, TBA was used for the sphere, resulting in multiple patches and charts, a necessity because otherwise the sphere's topology would make ISOMAP fail.
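The selection rule of Eqs. (10)-(11) can be emulated without LARS: any LASSO solver traced over a grid of γ values and scored with the C_p-style risk picks out a sparse support in the same spirit. The sketch below is a simplified stand-in, not the paper's algorithm: it uses coordinate descent instead of LARS, a generic rectangular design matrix instead of the square kernel matrix K (so that σ̄² can be estimated sensibly from the least-squares residual), and the sample count N in the risk of Eq. (10):

```python
import numpy as np

def lasso_cd(A, theta, gam, n_sweeps=100):
    """Coordinate descent for min ||theta - A beta||^2 + gam * ||beta||_1 (Eq. (9), q = 1)."""
    N, P = A.shape
    beta = np.zeros(P)
    col_sq = np.sum(A * A, axis=0)
    r = theta - A @ beta                      # running residual
    for _ in range(n_sweeps):
        for j in range(P):
            z = A[:, j] @ r + col_sq[j] * beta[j]
            b_new = np.sign(z) * max(abs(z) - gam / 2.0, 0.0) / col_sq[j]
            if b_new != beta[j]:
                r += A[:, j] * (beta[j] - b_new)
                beta[j] = b_new
    return beta

def select_landmarks(A, theta):
    """Sweep a regularization grid and keep the C_p-style risk minimizer, as in (10)-(11)."""
    N, P = A.shape
    beta_ls, *_ = np.linalg.lstsq(A, theta, rcond=None)
    sigma_sq = np.sum((theta - A @ beta_ls) ** 2) / (N - P)   # residual variance (assumes N > P)
    best_risk, best_beta = np.inf, None
    for gam in np.logspace(3, -2, 20):
        beta = lasso_cd(A, theta, gam)
        p = np.count_nonzero(beta)
        risk = np.sum((theta - A @ beta) ** 2) / sigma_sq - N + 2 * p
        if risk < best_risk:
            best_risk, best_beta = risk, beta
    return np.flatnonzero(best_beta)

# toy check: a planted sparse support should appear among the selected indices
rng = np.random.default_rng(2)
A = rng.normal(size=(100, 50))
beta_true = np.zeros(50)
beta_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
theta = A @ beta_true + 0.05 * rng.normal(size=100)
landmarks = select_landmarks(A, theta)
```

On this toy problem the risk minimizer's support contains the planted indices; in the paper's setting A would be the kernel matrix K and θ the vector of maximum principal angles.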
Therefore, in the sphere, each patch has its own landmark points, and the manifold requires the union of all such points. All are shown in Figure 2, as selected by our procedure.

Figure 2: Above: landmarks; Middle: interpolated points using h_{B,X′}; Below: risk estimates. For the sphere, the risk plot is for the largest patch. Total landmarks: N′ = 27 for the swiss roll, 42 for the sphere.

Additionally, a real dataset was used: images from the video sequence shown above in Figure 1. This example is known [9] to be reasonably well modelled by as few as 2 free parameters. The sequence contains N = 194 frames with m = 5041 pixels. A first step was to perform global PCA in order to discard irrelevant dimensions. Since it obviously isn't possible to compute a covariance matrix of size 5000 × 5000 from 194 samples, the problem was transposed, leading to the computation of the eigenvectors of an N × N covariance, from which the first N − 1 eigenvectors of the non-transposed problem can easily be found [11]. This resulted in an estimated 15 globally significant principal directions, onto which the data were projected. After this pre-processing, the effective values of m and N were, respectively, 15 and 194. An embedding was found using TBA with 2 features (ISOMAP would have worked as well). The results obtained for this case are shown in Figure 3. Only 4 landmarks were needed, and they correspond to very distinct facial expressions.

4 Discussion

A new approach for selecting landmarks in manifold learning, based on LASSO and LARS regression, has been presented.
The proposed algorithm finds geometrically meaningful landmarks and successfully circumvents a difficult MMV problem, by using the intuition that, since the variation of the maximum principal angle is a measure of curvature, the points that are important in preserving it should also be important in preserving the overall manifold geometry. Also, a continuous manifold parameterization is given with very little additional computational cost.

Figure 3: Landmarks for the video sequence: N′ = 4, marked over a scatter plot of the first 3 eigen-coordinates. The corresponding pictures are also shown.

The entire procedure avoids expensive quadratic programming computations; its complexity is dominated by the LARS step, which has the same cost as a least squares fit [4]. The proposed approach has been validated with experiments on synthetic and real datasets.

Acknowledgments

This work was partially supported by FCT POCTI, under project 37844.

References

[1] A. Björck and G. H. Golub. Numerical methods for computing angles between linear subspaces. Mathematics of Computation, 27, 1973.
[2] J. Chen and X. Huo. Sparse representation for multiple measurement vectors (MMV) in an over-complete dictionary. ICASSP, 2005.
[3] V. de Silva and J. B. Tenenbaum. Global versus local methods in nonlinear dimensionality reduction. NIPS, 15, 2002.
[4] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 2003.
[5] T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of Statistical Learning. Springer, 2001.
[6] H. Lähdesmäki, O. Yli-Harja, W. Zhang, and I. Shmulevich. Intrinsic dimensionality in gene expression analysis. GENSIPS, 2005.
[7] T. Poggio and S. Smale. The mathematics of learning: Dealing with data. Notices of the American Mathematical Society, 2003.
[8] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding.
Science, 290:2323–2326, 2000.
[9] J. Silva, J. Marques, and J. M. Lemos. Non-linear dimension reduction with tangent bundle approximation. ICASSP, 2005.
[10] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.
[11] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3:71–86, 1991.
Statistical Convergence of Kernel CCA

Kenji Fukumizu, Institute of Statistical Mathematics, Tokyo 106-8569, Japan, fukumizu@ism.ac.jp
Francis R. Bach, Centre de Morphologie Mathematique, Ecole des Mines de Paris, France, francis.bach@mines.org
Arthur Gretton, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany, arthur.gretton@tuebingen.mpg.de

Abstract

While kernel canonical correlation analysis (kernel CCA) has been applied in many problems, the asymptotic convergence of the functions estimated from a finite sample to the true functions has not yet been established. This paper gives a rigorous proof of the statistical convergence of kernel CCA and a related method (NOCCO), which provides a theoretical justification for these methods. The result also gives a sufficient condition on the decay of the regularization coefficient in the methods to ensure convergence.

1 Introduction

Kernel canonical correlation analysis (kernel CCA) has been proposed as a nonlinear extension of CCA [1, 11, 3]. Given two random variables, kernel CCA aims at extracting the information which is shared by the two random variables, and it has been successfully applied in various practical contexts. More precisely, given two random variables X and Y, the purpose of kernel CCA is to provide nonlinear mappings f(X) and g(Y) such that their correlation is maximized. As in many statistical methods, the desired functions are in practice estimated from a finite sample. Thus, the convergence of the estimated functions to the population ones with increasing sample size is very important to justify the method. Since the goal of kernel CCA is to estimate a pair of functions, the convergence should be evaluated in an appropriate functional norm: thus, we need tools from functional analysis to characterize the type of convergence. The purpose of this paper is to rigorously prove the statistical convergence of kernel CCA, and of a related method.
The latter uses a NOrmalized Cross-Covariance Operator, and we call it NOCCO for short. Both kernel CCA and NOCCO require a regularization coefficient to enforce smoothness of the functions in the finite sample case (thus avoiding a trivial solution), but the decay of this regularization with increasing sample size has not yet been established. Our main theorems give a sufficient condition on the decay of the regularization coefficient for the finite sample estimates to converge to the desired functions in the population limit. Another important issue in establishing the convergence is an appropriate distance measure for functions. For NOCCO, we obtain convergence in the norm of reproducing kernel Hilbert spaces (RKHS) [2]. This norm is very strong: if the positive definite (p.d.) kernels are continuous and bounded, it is stronger than the uniform norm in the space of continuous functions, and thus the estimated functions converge uniformly to the desired ones. For kernel CCA, we show convergence in the L^2 norm, which is a standard distance measure for functions. We also discuss the relation between our results and two relevant studies: COCO [9] and CCA on curves [10].

2 Kernel CCA and related methods

In this section, we review kernel CCA as presented by [3], and then formulate it with covariance operators on RKHS. In this paper, a Hilbert space always refers to a separable Hilbert space, and an operator to a linear operator. \|T\| denotes the operator norm \sup_{\|\varphi\|=1} \|T\varphi\|, and R(T) denotes the range of an operator T. Throughout this paper, (H_X, k_X) and (H_Y, k_Y) are RKHS of functions on measurable spaces X and Y, respectively, with measurable p.d. kernels k_X and k_Y. We consider a random vector (X, Y): Ω → X × Y with distribution P_XY. The marginal distributions of X and Y are denoted P_X and P_Y. We always assume

E_X[k_X(X, X)] < \infty \quad \text{and} \quad E_Y[k_Y(Y, Y)] < \infty.    (1)

Note that under this assumption it is easy to see H_X and H_Y are continuously included in L^2(P_X) and L^2(P_Y), respectively, where L^2(µ) denotes the Hilbert space of square integrable functions with respect to the measure µ.

2.1 CCA in reproducing kernel Hilbert spaces

Classical CCA provides the linear mappings a^T X and b^T Y that achieve maximum correlation. Kernel CCA extends this by looking for functions f and g such that f(X) and g(Y) have maximal correlation. More precisely, kernel CCA solves

\max_{f \in H_X,\, g \in H_Y} \frac{\mathrm{Cov}[f(X), g(Y)]}{\mathrm{Var}[f(X)]^{1/2}\,\mathrm{Var}[g(Y)]^{1/2}}.    (2)

In practice, we have to estimate the desired functions from a finite sample. Given an i.i.d. sample (X_1, Y_1), \ldots, (X_n, Y_n) from P_XY, an empirical solution of Eq. (2) is

\max_{f \in H_X,\, g \in H_Y} \frac{\widehat{\mathrm{Cov}}[f(X), g(Y)]}{\big(\widehat{\mathrm{Var}}[f(X)] + \varepsilon_n \|f\|_{H_X}^2\big)^{1/2} \big(\widehat{\mathrm{Var}}[g(Y)] + \varepsilon_n \|g\|_{H_Y}^2\big)^{1/2}},    (3)

where \widehat{\mathrm{Cov}} and \widehat{\mathrm{Var}} denote the empirical covariance and variance, such as

\widehat{\mathrm{Cov}}[f(X), g(Y)] = \frac{1}{n} \sum_{i=1}^n \Big(f(X_i) - \frac{1}{n} \sum_{j=1}^n f(X_j)\Big)\Big(g(Y_i) - \frac{1}{n} \sum_{j=1}^n g(Y_j)\Big).

The positive constant ε_n is a regularization coefficient. As we shall see, the regularization terms \varepsilon_n \|f\|_{H_X}^2 and \varepsilon_n \|g\|_{H_Y}^2 make the problem statistically well-formulated, enforce smoothness, and enable operator inversion, as in Tikhonov regularization.

2.2 Representation with cross-covariance operators

Kernel CCA and related methods can be formulated using covariance operators [4, 7, 8], which make theoretical discussions easier. It is known that there exists a unique cross-covariance operator Σ_YX : H_X → H_Y for (X, Y) such that

\langle g, \Sigma_{YX} f \rangle_{H_Y} = E_{XY}\big[(f(X) - E_X[f(X)])(g(Y) - E_Y[g(Y)])\big] \;\; (= \mathrm{Cov}[f(X), g(Y)])

holds for all f ∈ H_X and g ∈ H_Y. The cross-covariance operator represents the covariance of f(X) and g(Y) as a bilinear form of f and g. In particular, if Y is equal to X, the self-adjoint operator Σ_XX is called the covariance operator. Let (X_1, Y_1), \ldots, (X_n, Y_n) be i.i.d. random vectors on X × Y with distribution P_XY.
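In terms of Gram matrices, the regularized empirical problem (3) reduces to a finite generalized eigenproblem. The following Python sketch sets it up in the spirit of the Gram-matrix formulation of [3]; the centering, the kernel bandwidth, and the (G + nεI)² normalization are implementation choices of ours, not notation from this paper:

```python
import numpy as np
from scipy.linalg import eigh

def center(G):
    """Doubly centered Gram matrix HGH with H = I - (1/n) 11^T."""
    n = G.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ G @ H

def gram(x, sigma):
    """Gaussian Gram matrix for 1-D samples."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def kcca_top_correlation(x, y, sigma=0.5, eps=1e-2):
    """Empirical correlation of the leading regularized kernel CCA pair."""
    n = len(x)
    Gx, Gy = center(gram(x, sigma)), center(gram(y, sigma))
    Rx = Gx + n * eps * np.eye(n)
    Ry = Gy + n * eps * np.eye(n)
    A = np.block([[np.zeros((n, n)), Gx @ Gy], [Gy @ Gx, np.zeros((n, n))]])
    B = np.block([[Rx @ Rx, np.zeros((n, n))], [np.zeros((n, n)), Ry @ Ry]])
    w, V = eigh(A, B)                      # generalized symmetric eigenproblem
    alpha, beta = V[:n, -1], V[n:, -1]     # pair with the largest eigenvalue
    f, g = Gx @ alpha, Gy @ beta           # function values at the sample points
    return np.corrcoef(f, g)[0, 1]

# nonlinear dependence that ordinary CCA would miss
rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 200)
y = x ** 2 + 0.05 * rng.normal(size=200)
rho = kcca_top_correlation(x, y)
```

For this quadratic relation the leading canonical pair should attain a high empirical correlation, illustrating why kernel CCA detects dependence that linear CCA cannot.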
The empirical cross-covariance operator \hat{\Sigma}^{(n)}_{YX} is defined as the cross-covariance operator with respect to the empirical distribution \frac{1}{n}\sum_{i=1}^n \delta_{X_i}\delta_{Y_i}. By definition, for any f ∈ H_X and g ∈ H_Y, the operator \hat{\Sigma}^{(n)}_{YX} gives the empirical covariance as follows:

\langle g, \hat{\Sigma}^{(n)}_{YX} f \rangle_{H_Y} = \widehat{\mathrm{Cov}}[f(X), g(Y)].

Let Q_X and Q_Y be the orthogonal projections which respectively map H_X onto R(Σ_XX) and H_Y onto R(Σ_YY). It is known [4] that Σ_YX can be represented as

\Sigma_{YX} = \Sigma_{YY}^{1/2} V_{YX} \Sigma_{XX}^{1/2},    (4)

where V_YX : H_X → H_Y is a unique bounded operator such that \|V_{YX}\| \le 1 and V_YX = Q_Y V_YX Q_X. We often write V_YX as \Sigma_{YY}^{-1/2} \Sigma_{YX} \Sigma_{XX}^{-1/2} in an abuse of notation, even when \Sigma_{XX}^{-1/2} or \Sigma_{YY}^{-1/2} are not appropriately defined as operators. With cross-covariance operators, the kernel CCA problem can be formulated as

\sup_{f \in H_X,\, g \in H_Y} \langle g, \Sigma_{YX} f \rangle_{H_Y} \quad \text{subject to} \quad \langle f, \Sigma_{XX} f \rangle_{H_X} = 1, \;\; \langle g, \Sigma_{YY} g \rangle_{H_Y} = 1.    (5)

As with classical CCA, the solution of Eq. (5) is given by the eigenfunctions corresponding to the largest eigenvalue of the following generalized eigenproblem:

\begin{pmatrix} O & \Sigma_{XY} \\ \Sigma_{YX} & O \end{pmatrix} \begin{pmatrix} f \\ g \end{pmatrix} = \rho_1 \begin{pmatrix} \Sigma_{XX} & O \\ O & \Sigma_{YY} \end{pmatrix} \begin{pmatrix} f \\ g \end{pmatrix}.    (6)

Similarly, the empirical estimator in Eq. (3) is obtained by solving

\sup_{f \in H_X,\, g \in H_Y} \langle g, \hat{\Sigma}^{(n)}_{YX} f \rangle_{H_Y} \quad \text{subject to} \quad \langle f, (\hat{\Sigma}^{(n)}_{XX} + \varepsilon_n I) f \rangle_{H_X} = 1, \;\; \langle g, (\hat{\Sigma}^{(n)}_{YY} + \varepsilon_n I) g \rangle_{H_Y} = 1.    (7)

Let us assume that the operator V_YX is compact, and let φ and ψ be the unit eigenfunctions of V_YX corresponding to the largest singular value; that is,

\langle \psi, V_{YX} \varphi \rangle_{H_Y} = \max_{f \in H_X,\, g \in H_Y,\, \|f\|_{H_X} = \|g\|_{H_Y} = 1} \langle g, V_{YX} f \rangle_{H_Y}.    (8)

Given φ ∈ R(Σ_XX) and ψ ∈ R(Σ_YY), the kernel CCA solution in Eq. (6) is

f = \Sigma_{XX}^{-1/2} \varphi, \quad g = \Sigma_{YY}^{-1/2} \psi.    (9)

In the empirical case, let \hat{\varphi}_n ∈ H_X and \hat{\psi}_n ∈ H_Y be the unit eigenfunctions corresponding to the largest singular value of the finite rank operator

\hat{V}^{(n)}_{YX} := \big(\hat{\Sigma}^{(n)}_{YY} + \varepsilon_n I\big)^{-1/2} \hat{\Sigma}^{(n)}_{YX} \big(\hat{\Sigma}^{(n)}_{XX} + \varepsilon_n I\big)^{-1/2}.    (10)

As in Eq. (9), the empirical estimators \hat{f}_n and \hat{g}_n in Eq. (7) are equal to

\hat{f}_n = (\hat{\Sigma}^{(n)}_{XX} + \varepsilon_n I)^{-1/2} \hat{\varphi}_n, \quad \hat{g}_n = (\hat{\Sigma}^{(n)}_{YY} + \varepsilon_n I)^{-1/2} \hat{\psi}_n.    (11)

(A bounded operator T : H_1 → H_2 is called compact if any bounded sequence {u_n} ⊂ H_1 has a subsequence {u_{n′}} such that Tu_{n′} converges in H_2. One of the useful properties of a compact operator is that it admits a singular value decomposition; see [5, 6].)

Note that all the above empirical operators and estimators can be expressed in terms of Gram matrices. The solutions \hat{f}_n and \hat{g}_n are exactly the same as those given in [3], and are obtained by linear combinations of k_X(\cdot, X_i) - \frac{1}{n}\sum_{j=1}^n k_X(\cdot, X_j) and k_Y(\cdot, Y_i) - \frac{1}{n}\sum_{j=1}^n k_Y(\cdot, Y_j). The functions \hat{\varphi}_n and \hat{\psi}_n are obtained similarly. There exist additional, related methods to extract nonlinear dependence. The constrained covariance (COCO) [9] uses the unit eigenfunctions of Σ_YX:

\max_{f \in H_X,\, g \in H_Y,\, \|f\|_{H_X} = \|g\|_{H_Y} = 1} \langle g, \Sigma_{YX} f \rangle_{H_Y} = \max_{f \in H_X,\, g \in H_Y,\, \|f\|_{H_X} = \|g\|_{H_Y} = 1} \mathrm{Cov}[f(X), g(Y)].

The statistical convergence of COCO has been proved in [8]. Instead of normalizing the covariance by the variances, COCO normalizes it by the RKHS norms of f and g. Kernel CCA is a more direct nonlinear extension of CCA than COCO. COCO tends to find functions with large variance for f(X) and g(Y), which may not be the most correlated features. On the other hand, kernel CCA may encounter situations where it finds functions with moderately large covariance but very small variance for f(X) or g(Y), since Σ_XX and Σ_YY can have arbitrarily small eigenvalues. A possible compromise is to use φ and ψ for V_YX, the NOrmalized Cross-Covariance Operator (NOCCO). While the statistical meaning of NOCCO is not as direct as kernel CCA, it can incorporate the normalization by Σ_XX and Σ_YY. We will establish the convergence of kernel CCA and NOCCO in Section 3.

3 Main theorems: convergence of kernel CCA and NOCCO

We show the convergence of NOCCO in the RKHS norm, and of kernel CCA in the L^2 sense. The results may easily be extended to the convergence of the eigenspace corresponding to the m-th largest eigenvalue.

Theorem 1.
Let (ε_n)_{n=1}^{\infty} be a sequence of positive numbers such that

\lim_{n \to \infty} \varepsilon_n = 0, \quad \lim_{n \to \infty} n^{1/3} \varepsilon_n = \infty.    (12)

Assume V_YX is compact, and the eigenspaces given by Eq. (8) are one-dimensional. Let φ, ψ, \hat{\varphi}_n, and \hat{\psi}_n be the unit eigenfunctions of Eqs. (8) and (10). Then

|\langle \hat{\varphi}_n, \varphi \rangle_{H_X}| \to 1, \quad |\langle \hat{\psi}_n, \psi \rangle_{H_Y}| \to 1

in probability, as n goes to infinity.

Theorem 2. Let (ε_n)_{n=1}^{\infty} be a sequence of positive numbers which satisfies Eq. (12). Assume that φ and ψ are included in R(Σ_XX) and R(Σ_YY), respectively, and that V_YX is compact. Then, for f, g, \hat{f}_n, and \hat{g}_n in Eqs. (9) and (11), we have

\big\| (\hat{f}_n - E_X[\hat{f}_n(X)]) - (f - E_X[f(X)]) \big\|_{L^2(P_X)} \to 0, \quad \big\| (\hat{g}_n - E_Y[\hat{g}_n(Y)]) - (g - E_Y[g(Y)]) \big\|_{L^2(P_Y)} \to 0

in probability, as n goes to infinity.

The convergence of NOCCO in the RKHS norm is a very strong result. If k_X and k_Y are continuous and bounded, the RKHS norm is stronger than the uniform norm on the continuous functions. In such cases, Theorem 1 implies \hat{\varphi}_n and \hat{\psi}_n converge uniformly to φ and ψ, respectively. This uniform convergence is useful in practice, because in many applications the function value at each point is important. For any complete orthonormal systems (CONS) \{\varphi_i\}_{i=1}^{\infty} of H_X and \{\psi_i\}_{i=1}^{\infty} of H_Y, the compactness assumption on V_YX requires that the correlation of \Sigma_{XX}^{-1/2} \varphi_i(X) and \Sigma_{YY}^{-1/2} \psi_i(Y) decay to zero as i → ∞. This is not necessarily satisfied in general. A trivial example is the case of variables with Y = X, in which V_YX = I is not compact. In this case, NOCCO is solved by an arbitrary function. Moreover, kernel CCA does not have solutions if Σ_XX has arbitrarily small eigenvalues. Leurgans et al. [10] discuss CCA on curves, which are represented by stochastic processes on an interval, and use the Sobolev space of functions with square integrable second derivative. Since the Sobolev space is an RKHS, their method is an example of kernel CCA. They also show the convergence of estimators under the condition n^{1/2} \varepsilon_n \to \infty. Although the proof can be extended to a general RKHS, convergence is measured by the correlation,

\frac{|\langle \hat{f}_n, \Sigma_{XX} f \rangle_{H_X}|}{(\langle \hat{f}_n, \Sigma_{XX} \hat{f}_n \rangle_{H_X})^{1/2} (\langle f, \Sigma_{XX} f \rangle_{H_X})^{1/2}} \to 1,

which is weaker than the L^2 convergence in Theorem 2. In fact, using \langle f, \Sigma_{XX} f \rangle_{H_X} = 1, it is easy to derive the above convergence from Theorem 2. On the other hand, convergence of the correlation does not necessarily imply \langle (\hat{f}_n - f), \Sigma_{XX} (\hat{f}_n - f) \rangle_{H_X} \to 0. From the equality

\langle (\hat{f}_n - f), \Sigma_{XX} (\hat{f}_n - f) \rangle_{H_X} = \Big( \langle \hat{f}_n, \Sigma_{XX} \hat{f}_n \rangle_{H_X}^{1/2} - \langle f, \Sigma_{XX} f \rangle_{H_X}^{1/2} \Big)^2 + 2 \Big\{ 1 - \frac{\langle \hat{f}_n, \Sigma_{XX} f \rangle_{H_X}}{\|\Sigma_{XX}^{1/2} \hat{f}_n\|_{H_X} \|\Sigma_{XX}^{1/2} f\|_{H_X}} \Big\} \|\Sigma_{XX}^{1/2} \hat{f}_n\|_{H_X} \|\Sigma_{XX}^{1/2} f\|_{H_X},

we require \langle \hat{f}_n, \Sigma_{XX} \hat{f}_n \rangle_{H_X} \to \langle f, \Sigma_{XX} f \rangle_{H_X} = 1 in order to guarantee that the left hand side converges to zero.
However, with the normalization \langle \hat{f}_n, (\hat{\Sigma}^{(n)}_{XX} + \varepsilon_n I) \hat{f}_n \rangle_{H_X} = 1, convergence of \langle \hat{f}_n, \Sigma_{XX} \hat{f}_n \rangle_{H_X} is not clear. We use the stronger assumption n^{1/3} \varepsilon_n \to \infty to prove \langle (\hat{f}_n - f), \Sigma_{XX} (\hat{f}_n - f) \rangle_{H_X} \to 0 in Theorem 2.

4 Outline of the proof of the main theorems

We show only the outline of the proof in this paper. See [6] for the details.

4.1 Preliminary lemmas

We introduce some definitions for our proofs. Let H_1 and H_2 be Hilbert spaces. An operator T : H_1 → H_2 is called Hilbert-Schmidt if \sum_{i=1}^{\infty} \|T \varphi_i\|_{H_2}^2 < \infty for a CONS \{\varphi_i\}_{i=1}^{\infty} of H_1. Obviously \|T\| \le \|T\|_{HS}. For Hilbert-Schmidt operators, the Hilbert-Schmidt norm and inner product are defined as

\|T\|_{HS}^2 = \sum_{i=1}^{\infty} \|T \varphi_i\|_{H_2}^2, \quad \langle T_1, T_2 \rangle_{HS} = \sum_{i=1}^{\infty} \langle T_1 \varphi_i, T_2 \varphi_i \rangle_{H_2}.

These definitions are independent of the CONS. For more details, see [5] and [8]. For a Hilbert space F, a Borel measurable map F : Ω → F from a measurable space Ω is called a random element in F. For a random element F in F with E\|F\| < \infty, there exists a unique element E[F] ∈ F, called the expectation of F, such that \langle E[F], g \rangle_F = E[\langle F, g \rangle_F] (\forall g \in F) holds. If random elements F and G in F satisfy E[\|F\|^2] < \infty and E[\|G\|^2] < \infty, then \langle F, G \rangle_F is integrable. Moreover, if F and G are independent, we have

E[\langle F, G \rangle_F] = \langle E[F], E[G] \rangle_F.    (13)

It is easy to see that under the condition Eq. (1), the random element k_X(\cdot, X) k_Y(\cdot, Y) in the direct product H_X \otimes H_Y is integrable, i.e. E[\|k_X(\cdot, X) k_Y(\cdot, Y)\|_{H_X \otimes H_Y}] < \infty. Combining Lemma 1 in [8] and Eq. (13), we obtain the following lemma.

Lemma 3. The cross-covariance operator Σ_YX is Hilbert-Schmidt, and

\|\Sigma_{YX}\|_{HS}^2 = \Big\| E_{YX}\big[ (k_X(\cdot, X) - E_X[k_X(\cdot, X)]) (k_Y(\cdot, Y) - E_Y[k_Y(\cdot, Y)]) \big] \Big\|_{H_X \otimes H_Y}^2.

The law of large numbers implies \lim_{n \to \infty} \langle g, \hat{\Sigma}^{(n)}_{YX} f \rangle_{H_Y} = \langle g, \Sigma_{YX} f \rangle_{H_Y} for each f and g in probability. The following lemma shows a much stronger uniform result.

Lemma 4.
\big\| \hat{\Sigma}^{(n)}_{YX} - \Sigma_{YX} \big\|_{HS} = O_p(n^{-1/2}) \quad (n \to \infty).

Proof. Write for simplicity F = k_X(\cdot, X) - E_X[k_X(\cdot, X)], G = k_Y(\cdot, Y) - E_Y[k_Y(\cdot, Y)], F_i = k_X(\cdot, X_i) - E_X[k_X(\cdot, X)], and G_i = k_Y(\cdot, Y_i) - E_Y[k_Y(\cdot, Y)]. Then F, F_1, \ldots, F_n are i.i.d. random elements in H_X, and a similar property also holds for G, G_1, \ldots, G_n. Lemma 3 and the same argument as its proof imply

\big\| \hat{\Sigma}^{(n)}_{YX} \big\|_{HS}^2 = \Big\| \frac{1}{n} \sum_{i=1}^n \Big( F_i - \frac{1}{n} \sum_{j=1}^n F_j \Big)\Big( G_i - \frac{1}{n} \sum_{j=1}^n G_j \Big) \Big\|_{H_X \otimes H_Y}^2, \quad \langle \Sigma_{YX}, \hat{\Sigma}^{(n)}_{YX} \rangle_{HS} = \Big\langle E[FG],\; \frac{1}{n} \sum_{i=1}^n \Big( F_i - \frac{1}{n} \sum_{j=1}^n F_j \Big)\Big( G_i - \frac{1}{n} \sum_{j=1}^n G_j \Big) \Big\rangle_{H_X \otimes H_Y}.

From these equations, we have
\big\| \hat{\Sigma}^{(n)}_{YX} - \Sigma_{YX} \big\|_{HS}^2 = \Big\| \frac{1}{n} \sum_{i=1}^n \Big( F_i - \frac{1}{n} \sum_{j=1}^n F_j \Big)\Big( G_i - \frac{1}{n} \sum_{j=1}^n G_j \Big) - E[FG] \Big\|_{H_X \otimes H_Y}^2 = \Big\| \frac{1}{n}\Big(1 - \frac{1}{n}\Big) \sum_{i=1}^n F_i G_i - \frac{1}{n^2} \sum_{i<j} (F_i G_j + F_j G_i) - E[FG] \Big\|_{H_X \otimes H_Y}^2.

Using E[F_i] = E[G_i] = 0 and E[\langle F_i G_j, F_k G_\ell \rangle] = 0 for i \ne j, \{k, \ell\} \ne \{i, j\}, we have

E \big\| \hat{\Sigma}^{(n)}_{YX} - \Sigma_{YX} \big\|_{HS}^2 = \frac{1}{n} E\big[ \|FG\|_{H_X \otimes H_Y}^2 \big] - \frac{1}{n} \|E[FG]\|_{H_X \otimes H_Y}^2 + O(1/n^2).

The proof is completed by Chebyshev's inequality.

The following two lemmas are essential parts of the proof of the main theorems.

Lemma 5. Let ε_n be a positive number such that ε_n → 0 (n → ∞). Then
\Big\| \hat{V}^{(n)}_{YX} - (\Sigma_{YY} + \varepsilon_n I)^{-1/2} \Sigma_{YX} (\Sigma_{XX} + \varepsilon_n I)^{-1/2} \Big\| = O_p(\varepsilon_n^{-3/2} n^{-1/2}).

Proof. The operator on the left hand side is equal to

\big\{ (\hat{\Sigma}^{(n)}_{YY} + \varepsilon_n I)^{-1/2} - (\Sigma_{YY} + \varepsilon_n I)^{-1/2} \big\} \hat{\Sigma}^{(n)}_{YX} (\hat{\Sigma}^{(n)}_{XX} + \varepsilon_n I)^{-1/2} + (\Sigma_{YY} + \varepsilon_n I)^{-1/2} \big\{ \hat{\Sigma}^{(n)}_{YX} - \Sigma_{YX} \big\} (\hat{\Sigma}^{(n)}_{XX} + \varepsilon_n I)^{-1/2} + (\Sigma_{YY} + \varepsilon_n I)^{-1/2} \Sigma_{YX} \big\{ (\hat{\Sigma}^{(n)}_{XX} + \varepsilon_n I)^{-1/2} - (\Sigma_{XX} + \varepsilon_n I)^{-1/2} \big\}.    (14)

From the equality A^{-1/2} - B^{-1/2} = A^{-1/2} (B^{3/2} - A^{3/2}) B^{-3/2} + (A - B) B^{-3/2}, the first term in Eq. (14) is equal to

(\hat{\Sigma}^{(n)}_{YY} + \varepsilon_n I)^{-1/2} \big\{ \Sigma_{YY}^{3/2} - \hat{\Sigma}^{(n)\,3/2}_{YY} + \hat{\Sigma}^{(n)}_{YY} - \Sigma_{YY} \big\} (\hat{\Sigma}^{(n)}_{YY} + \varepsilon_n I)^{-3/2} \hat{\Sigma}^{(n)}_{YX} (\hat{\Sigma}^{(n)}_{XX} + \varepsilon_n I)^{-1/2}.

From \|(\hat{\Sigma}^{(n)}_{YY} + \varepsilon_n I)^{-1/2}\| \le 1/\sqrt{\varepsilon_n}, \|(\hat{\Sigma}^{(n)}_{YY} + \varepsilon_n I)^{-1/2} \hat{\Sigma}^{(n)}_{YX} (\hat{\Sigma}^{(n)}_{XX} + \varepsilon_n I)^{-1/2}\| \le 1, and Lemma 7, the norm of the above operator is upper-bounded by

\frac{1}{\varepsilon_n} \Big( \frac{3}{\sqrt{\varepsilon_n}} \max\big( \|\Sigma_{YY}\|^{3/2}, \|\hat{\Sigma}^{(n)}_{YY}\|^{3/2} \big) + 1 \Big) \big\| \hat{\Sigma}^{(n)}_{YY} - \Sigma_{YY} \big\|.

A similar bound applies to the third term of Eq. (14), and the second term is upper-bounded by \frac{1}{\varepsilon_n} \|\Sigma_{YX} - \hat{\Sigma}^{(n)}_{YX}\|. Thus, Lemma 4 completes the proof.

Lemma 6. Assume V_YX is compact. Then, for a sequence ε_n → 0,
\Big\| (\Sigma_{YY} + \varepsilon_n I)^{-1/2} \Sigma_{YX} (\Sigma_{XX} + \varepsilon_n I)^{-1/2} - V_{YX} \Big\| \to 0 \quad (n \to \infty).

Proof. It suffices to prove that \|\{(\Sigma_{YY} + \varepsilon_n I)^{-1/2} - \Sigma_{YY}^{-1/2}\} \Sigma_{YX} (\Sigma_{XX} + \varepsilon_n I)^{-1/2}\| and \|\Sigma_{YY}^{-1/2} \Sigma_{YX} \{(\Sigma_{XX} + \varepsilon_n I)^{-1/2} - \Sigma_{XX}^{-1/2}\}\| converge to zero. The former is equal to

\Big\| \big\{ (\Sigma_{YY} + \varepsilon_n I)^{-1/2} \Sigma_{YY}^{1/2} - I \big\} V_{YX} \Big\|.    (15)

Note that R(V_{YX}) \subset R(\Sigma_{YY}), as remarked in Section 2.2. Let v = \Sigma_{YY} u be an arbitrary element in R(V_{YX}) \cap R(\Sigma_{YY}). We have

\big\| \{(\Sigma_{YY} + \varepsilon_n I)^{-1/2} \Sigma_{YY}^{1/2} - I\} v \big\|_{H_Y} = \big\| (\Sigma_{YY} + \varepsilon_n I)^{-1/2} \Sigma_{YY}^{1/2} \{\Sigma_{YY}^{1/2} - (\Sigma_{YY} + \varepsilon_n I)^{1/2}\} \Sigma_{YY}^{1/2} u \big\|_{H_Y} \le \big\| \Sigma_{YY}^{1/2} - (\Sigma_{YY} + \varepsilon_n I)^{1/2} \big\| \, \big\| \Sigma_{YY}^{1/2} u \big\|_{H_Y}.

Since (\Sigma_{YY} + \varepsilon_n I)^{1/2} \to \Sigma_{YY}^{1/2} in norm, we obtain

\{(\Sigma_{YY} + \varepsilon_n I)^{-1/2} \Sigma_{YY}^{1/2} - I\} v \to 0 \quad (n \to \infty)    (16)

for all v \in R(V_{YX}) \cap R(\Sigma_{YY}). Because V_{YX} is compact, Lemma 8 in the Appendix shows Eq. (15) converges to zero. The convergence of the second norm is similar.

4.2 Proof of the main theorems

Proof of Theorem 1. This follows from Lemmas 5, 6, and Lemma 9 in the Appendix.

Proof of Theorem 2. We show only the convergence of \hat{f}_n. Without loss of generality, we can assume \hat{\varphi}_n \to \varphi in H_X. From

\|\Sigma_{XX}^{1/2} (\hat{f}_n - f)\|_{H_X}^2 = \|\Sigma_{XX}^{1/2} \hat{f}_n\|_{H_X}^2 - 2 \langle \varphi, \Sigma_{XX}^{1/2} \hat{f}_n \rangle_{H_X} + \|\varphi\|_{H_X}^2,

it suffices to show \Sigma_{XX}^{1/2} \hat{f}_n converges to φ in probability. We have

\|\Sigma_{XX}^{1/2} \hat{f}_n - \varphi\|_{H_X} \le \|\Sigma_{XX}^{1/2} \{(\hat{\Sigma}^{(n)}_{XX} + \varepsilon_n I)^{-1/2} - (\Sigma_{XX} + \varepsilon_n I)^{-1/2}\} \hat{\varphi}_n\|_{H_X} + \|\Sigma_{XX}^{1/2} (\Sigma_{XX} + \varepsilon_n I)^{-1/2} (\hat{\varphi}_n - \varphi)\|_{H_X} + \|\Sigma_{XX}^{1/2} (\Sigma_{XX} + \varepsilon_n I)^{-1/2} \varphi - \varphi\|_{H_X}.

Using the same argument as the bound on the first term in Eq. (14), the first term on the right hand side of the above inequality is shown to converge to zero. The convergence of the second term is obvious. Using the assumption φ ∈ R(Σ_XX), the same argument as the proof of Eq. (16) applies to the third term, which completes the proof.

5 Concluding remarks

We have established the statistical convergence of kernel CCA and NOCCO, showing that the finite sample estimators of the nonlinear mappings converge to the desired population functions. This convergence is proved in the RKHS norm for NOCCO, and in the L^2 norm for kernel CCA. These results give a theoretical justification for using the empirical estimates of NOCCO and kernel CCA in practice. We have also derived a sufficient condition, n^{1/3} \varepsilon_n \to \infty, for the decay of the regularization coefficient ε_n, which ensures the convergence described above.
As [10] suggests, the order of the sufficient condition seems to depend on the function norm used to determine convergence. An interesting consideration is whether the order $n^{1/3}\varepsilon_n \to \infty$ can be improved for convergence in the $L^2$ or RKHS norm. Another question that remains to be addressed is when to use kernel CCA, COCO, or NOCCO in practice. The answer probably depends on the statistical properties of the data. It might consequently be helpful to determine the relation between the spectral properties of the data distribution and the solutions of these methods.

Acknowledgements

This work is partially supported by KAKENHI 15700241 and the Inamori Foundation.

References

[1] S. Akaho. A kernel method for canonical correlation analysis. Proc. Intern. Meeting of the Psychometric Society (IMPS2001), 2001.
[2] N. Aronszajn. Theory of reproducing kernels. Trans. American Mathematical Society, 69(3):337–404, 1950.
[3] F. R. Bach and M. I. Jordan. Kernel independent component analysis. J. Machine Learning Research, 3:1–48, 2002.
[4] C. R. Baker. Joint measures and cross-covariance operators. Trans. American Mathematical Society, 186:273–289, 1973.
[5] N. Dunford and J. T. Schwartz. Linear Operators, Part II. Interscience, 1963.
[6] K. Fukumizu, F. R. Bach, and A. Gretton. Consistency of kernel canonical correlation. Research Memorandum 942, Institute of Statistical Mathematics, 2005.
[7] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. J. Machine Learning Research, 5:73–99, 2004.
[8] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. Tech Report 140, Max-Planck-Institut für biologische Kybernetik, 2005.
[9] A. Gretton, A. Smola, O. Bousquet, R. Herbrich, B. Schölkopf, and N. Logothetis. Behaviour and convergence of the constrained covariance. Tech Report 128, Max-Planck-Institut für biologische Kybernetik, 2004.
[10] S. Leurgans, R. Moyeed, and B. Silverman. Canonical correlation analysis when the data are curves. J. Royal Statistical Society, Series B, 55(3):725–740, 1993.
[11] T. Melzer, M. Reiter, and H. Bischof. Nonlinear feature extraction using generalized canonical correlation analysis. Proc. Intern. Conf. Artificial Neural Networks (ICANN2001), 353–360, 2001.

A Lemmas used in the proofs

We list the lemmas used in Section 4. See [6] for the proofs.

Lemma 7. Suppose $A$ and $B$ are positive self-adjoint operators on a Hilbert space such that $0 \le A \le \lambda I$ and $0 \le B \le \lambda I$ hold for a positive constant $\lambda$. Then $\|A^{3/2} - B^{3/2}\| \le 3\lambda^{1/2}\|A - B\|$.

Lemma 8. Let $\mathcal{H}_1$ and $\mathcal{H}_2$ be Hilbert spaces, and let $\mathcal{H}_0$ be a dense linear subspace of $\mathcal{H}_2$. Suppose $A_n$ and $A$ are bounded operators on $\mathcal{H}_2$, and $B$ is a compact operator from $\mathcal{H}_1$ to $\mathcal{H}_2$, such that $A_n u \to A u$ for all $u \in \mathcal{H}_0$ and $\sup_n \|A_n\| \le M$ for some $M > 0$. Then $A_n B$ converges to $A B$ in norm.

Lemma 9. Let $A$ be a compact positive operator on a Hilbert space $\mathcal{H}$, and let $A_n$ ($n \in \mathbb{N}$) be bounded positive operators on $\mathcal{H}$ such that $A_n$ converges to $A$ in norm. Assume the eigenspace of $A$ corresponding to the largest eigenvalue is one-dimensional and spanned by a unit eigenvector $\varphi$, and the maximum of the spectrum of $A_n$ is attained by a unit eigenvector $\varphi_n$. Then we have $|\langle \varphi_n, \varphi \rangle_{\mathcal{H}}| \to 1$ as $n \to \infty$.
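Lemma 9 can be illustrated numerically with hypothetical finite-dimensional matrices (my own toy example, not from the paper): perturb an operator with a simple top eigenvalue by symmetric noise of shrinking norm and watch the top eigenvectors align.

```python
import numpy as np

# Numerical illustration of Lemma 9 with hypothetical matrices: A has a simple top
# eigenvalue with unit eigenvector phi; A_n = A + E_n with ||E_n|| -> 0, and the
# top eigenvectors phi_n of A_n align with phi, i.e. |<phi_n, phi>| -> 1.
rng = np.random.default_rng(1)
A = np.diag([3.0, 1.0, 0.5, 0.1])          # simple largest eigenvalue 3
phi = np.eye(4)[:, 0]                       # its unit top eigenvector

overlaps = []
for n in (1, 10, 100, 1000):
    E = rng.standard_normal((4, 4))
    E = (E + E.T) / (2 * n)                 # symmetric perturbation, norm ~ 1/n
    vals, V = np.linalg.eigh(A + E)
    phi_n = V[:, np.argmax(vals)]           # unit top eigenvector of A_n
    overlaps.append(abs(phi_n @ phi))
print(overlaps)                             # climbs toward 1
```

The one-dimensionality assumption in the lemma is what makes this work: with a degenerate top eigenvalue, the perturbed top eigenvector could rotate freely inside the eigenspace.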
|
2005
|
99
|
2,920
|
Ordinal Regression by Extended Binary Classification

Ling Li, Learning Systems Group, California Institute of Technology, ling@caltech.edu
Hsuan-Tien Lin, Learning Systems Group, California Institute of Technology, htlin@caltech.edu

Abstract

We present a reduction framework from ordinal regression to binary classification based on extended examples. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranking rule from the binary classifier. A weighted 0/1 loss of the binary classifier would then bound the mislabeling cost of the ranking rule. Our framework allows us not only to design good ordinal regression algorithms based on well-tuned binary classification approaches, but also to derive new generalization bounds for ordinal regression from known bounds for binary classification. In addition, our framework unifies many existing ordinal regression algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms, which demonstrates the usefulness of our framework.

1 Introduction

We work on a type of supervised learning problem called ranking or ordinal regression, where examples are labeled by an ordinal scale called the rank. For instance, the rating that a customer gives on a movie might be one of do-not-bother, only-if-you-must, good, very-good, and run-to-see. The ratings have a natural order, which distinguishes ordinal regression from general multiclass classification. Recently, many algorithms for ordinal regression have been proposed from a machine learning perspective. For instance, Crammer and Singer [1] generalized the online perceptron algorithm with multiple thresholds to do ordinal regression.
In their approach, a perceptron maps an input vector to a latent potential value, which is then thresholded to obtain a rank. Shashua and Levin [2] proposed new support vector machine (SVM) formulations to handle multiple thresholds. Some other formulations were studied by Rajaram et al. [3] and Chu and Keerthi [4]. All these algorithms share a common property: they are modified from well-known binary classification approaches.

Since binary classification is much better studied than ordinal regression, a general framework to systematically reduce the latter to the former can introduce two immediate benefits. First, well-tuned binary classification approaches can be readily transformed into good ordinal regression algorithms, which saves immense effort in design and implementation. Second, new generalization bounds for ordinal regression can be easily derived from known bounds for binary classification, which saves tremendous effort in theoretical analysis.

In this paper, we propose such a reduction framework. The framework is based on extended examples, which are extracted from the original examples and a given mislabeling cost matrix. The binary classifier trained from the extended examples can then be used to construct a ranking rule. We prove that the mislabeling cost of the ranking rule is bounded by a weighted 0/1 loss of the binary classifier. Hence, binary classifiers that generalize well could introduce ranking rules that generalize well. The advantages of the framework in algorithmic design and in theoretical analysis are both demonstrated in the paper. In addition, we show that our framework provides a unified view for many existing ordinal regression algorithms. The experiments on some benchmark data sets validate the usefulness of our framework in practice.

The paper is organized as follows. In Section 2, we introduce our reduction framework. A unified view of some existing algorithms based on the framework is discussed in Section 3.
Theoretical guarantees on the reduction, including derivations of new generalization bounds for ordinal regression, are provided in Section 4. We present experimental results of several new algorithms in Section 5, and conclude in Section 6.

2 The reduction framework

In an ordinal regression problem, an example $(x, y)$ is composed of an input vector $x \in \mathcal{X}$ and an ordinal label (i.e., rank) $y \in \mathcal{Y} = \{1, 2, \dots, K\}$. Each example is assumed to be drawn i.i.d. from some unknown distribution $P(x, y)$ on $\mathcal{X} \times \mathcal{Y}$. The generalization error of a ranking rule $r\colon \mathcal{X} \to \mathcal{Y}$ is then defined as

$C(r, P) \stackrel{\mathrm{def}}{=} \mathbb{E}_{(x,y)\sim P}\, C_{y,r(x)},$

where $C$ is a $K \times K$ cost matrix with $C_{y,k}$ being the cost of predicting an example $(x, y)$ as rank $k$. Naturally we assume $C_{y,y} = 0$ and $C_{y,k} > 0$ for $k \neq y$. Given a training set $S = \{(x_n, y_n)\}_{n=1}^{N}$ containing $N$ examples, the goal is to find a ranking rule $r$ that generalizes well, i.e., associates with a small $C(r, P)$.

The setting above looks similar to that of a multiclass classification problem, except that the ranks are ordered. The ordinal information can be interpreted in several ways. In statistics, the information is assumed to reflect a stochastic ordering on the conditional distributions $P(y \le k \mid x)$ [5]. Another interpretation is that the mislabeling cost depends on the "closeness" of the prediction. Consider an example $(x, 4)$ with $r_1(x) = 3$ and $r_2(x) = 1$. The rule $r_2$ should pay more for the erroneous prediction than the rule $r_1$. Thus, we generally want each row of $C$ to be V-shaped. That is, $C_{y,k-1} \ge C_{y,k}$ if $k \le y$ and $C_{y,k} \le C_{y,k+1}$ if $k \ge y$. A simple $C$ with V-shaped rows is the classification cost matrix, with entries $C_{y,k} = [\![y \neq k]\!]$ (see footnote 1). The classification cost is widely used in multiclass classification. However, because the cost is invariant for all kinds of mislabelings, the ordinal information is not taken into account. The absolute cost matrix, which is defined by $C_{y,k} = |y - k|$, is a popular choice that better reflects the ordering preference.
Its rows are not only V-shaped, but also convex. That is, $C_{y,k+1} - C_{y,k} \ge C_{y,k} - C_{y,k-1}$ for $1 < k < K$. The convex rows encode a stronger preference for making the prediction "close." In this paper, we shall always assume that the ordinal regression problem under study comes with a cost matrix of V-shaped rows, and discuss how to reduce the ordinal regression problem to a binary classification problem. Some of the results may require the rows to be convex.

2.1 Reducing ordinal regression to binary classification

The ordinal information allows ranks to be compared. Consider, for instance, that we want to know how good a movie $x$ is. An associated question would be: "is the rank of $x$ greater than $k$?" For a fixed $k$, such a question is exactly a binary classification problem, and the rank of $x$ can be determined by asking multiple questions for $k = 1, 2$, until $(K-1)$. Frank and Hall [6] proposed to solve each binary classification problem independently and combine the binary outputs into a rank. Although their approach is simple, the generalization performance of the combination step cannot be easily analyzed. Our framework works differently. First, all the binary classification problems are solved jointly to obtain a single binary classifier. Second, a simpler step is used to convert the binary outputs to a rank, and generalization analysis can immediately follow.

(Footnote 1: The Boolean test $[\![\cdot]\!]$ is 1 if the inner condition is true, and 0 otherwise.)

Assume that $f_b(x, k)$ is a binary classifier for all the associated questions above. Consistent answers would be $f_b(x, k) = 1$ ("yes") for $k = 1$ until $(y'-1)$ for some $y'$, and 0 ("no") afterwards. Then, a reasonable ranking rule based on the binary answers is $r(x) = y'$; equivalently,

$r(x) \stackrel{\mathrm{def}}{=} 1 + \sum_{k=1}^{K-1} f_b(x, k).$

Although the definition can be flexibly applied even when $f_b$ is not consistent, a consistent $f_b$ is usually desired in order to introduce a good ranking rule $r$.
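The two cost matrices above and the V-shaped/convex row conditions can be spot-checked with a short script; $K = 5$ and the 0-based indexing are illustrative choices (the paper uses 1-based ranks), and the matrices follow the definitions in the text.

```python
import numpy as np

# Spot check of the two cost matrices and the row conditions from the text, with an
# illustrative K = 5 (indices are 0-based here, 1-based in the paper).
K = 5
y, k = np.indices((K, K))
C_class = (y != k).astype(float)       # classification cost: C[y,k] = [y != k]
C_abs = np.abs(y - k).astype(float)    # absolute cost:       C[y,k] = |y - k|

def v_shaped(C):
    """Each row decreases up to the diagonal and increases after it."""
    return all(all(C[r, j - 1] >= C[r, j] for j in range(1, r + 1)) and
               all(C[r, j] <= C[r, j + 1] for j in range(r, C.shape[1] - 1))
               for r in range(C.shape[0]))

def convex_rows(C):
    """C[y,k+1] - C[y,k] >= C[y,k] - C[y,k-1] for every interior k."""
    return bool((np.diff(C, n=2, axis=1) >= 0).all())

print(v_shaped(C_class), convex_rows(C_class))  # -> True False
print(v_shaped(C_abs), convex_rows(C_abs))      # -> True True
```

As the text notes, both matrices are V-shaped, but only the absolute cost has convex rows — the distinction that some later results depend on.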
Furthermore, the ordinal information can help to model the relative confidence in the binary outputs. That is, when $k$ is farther from the rank of $x$, the answer $f_b(x, k)$ should be more confident. The confidence can be modeled by a real-valued function $f\colon \mathcal{X} \times \{1, 2, \dots, K-1\} \to \mathbb{R}$, with $f_b(x, k) = [\![f(x, k) > 0]\!]$ and the confidence encoded in the magnitude of $f$. Accordingly,

$r(x) \stackrel{\mathrm{def}}{=} 1 + \sum_{k=1}^{K-1} [\![f(x, k) > 0]\!]. \quad (1)$

The ordinal information would naturally require $f$ to be rank-monotonic, i.e., $f(x, 1) \ge f(x, 2) \ge \cdots \ge f(x, K-1)$ for every $x$. Note that a rank-monotonic function $f$ introduces consistent answers $f_b$. Again, although the construction (1) can be applied to cases where $f$ is not rank-monotonic, a rank-monotonic $f$ is usually desired.

When $f$ is rank-monotonic, we have $f(x, k) > 0$ for $k < r(x)$, and $f(x, k) \le 0$ for $k \ge r(x)$. Thus the cost of the ranking rule $r$ on an example $(x, y)$ is

$C_{y,r(x)} = \sum_{k=r(x)}^{K-1} (C_{y,k} - C_{y,k+1}) + C_{y,K} = \sum_{k=1}^{K-1} (C_{y,k} - C_{y,k+1})\,[\![f(x, k) \le 0]\!] + C_{y,K}. \quad (2)$

Define the extended examples $(x^{(k)}, y^{(k)})$ with weights $w_{y,k}$ as

$x^{(k)} = (x, k), \quad y^{(k)} = 2[\![k < y]\!] - 1, \quad w_{y,k} = |C_{y,k} - C_{y,k+1}|. \quad (3)$

Because row $y$ in $C$ is V-shaped, the binary variable $y^{(k)}$ equals the sign of $(C_{y,k} - C_{y,k+1})$ if the latter is not zero. Continuing from (2),

$C_{y,r(x)} = \sum_{k=1}^{y-1} w_{y,k} \cdot y^{(k)}\,[\![f(x^{(k)}) \le 0]\!] + \sum_{k=y}^{K-1} w_{y,k} \cdot y^{(k)}\,\big(1 - [\![f(x^{(k)}) > 0]\!]\big) + C_{y,K}$
$= \sum_{k=1}^{y-1} w_{y,k}\,[\![y^{(k)} f(x^{(k)}) \le 0]\!] + C_{y,y} + \sum_{k=y}^{K-1} w_{y,k}\,[\![y^{(k)} f(x^{(k)}) < 0]\!]$
$\le \sum_{k=1}^{K-1} w_{y,k}\,[\![y^{(k)} f(x^{(k)}) \le 0]\!]. \quad (4)$

Inequality (4) shows that the cost of $r$ on example $(x, y)$ is bounded by a weighted 0/1 loss of $f$ on the extended examples. It becomes an equality if the degenerate case $f(x^{(k)}) = 0$ does not happen. When $f$ is not rank-monotonic but row $y$ of $C$ is convex, the inequality (4) can alternatively be proved from

$\sum_{k=r(x)}^{K-1} (C_{y,k} - C_{y,k+1}) \le \sum_{k=1}^{K-1} (C_{y,k} - C_{y,k+1})\,[\![f(x^{(k)}) \le 0]\!].$
The inequality above holds because $(C_{y,k} - C_{y,k+1})$ is decreasing due to the convexity, and there are exactly $(r(x)-1)$ zeros and $(K-r(x))$ ones in the values of $[\![f(x^{(k)}) \le 0]\!]$ in (1).

Altogether, our reduction framework consists of the following steps: we first use (3) to transform all training examples $(x_n, y_n)$ to extended examples $(x_n^{(k)}, y_n^{(k)})$ with weights $w_{y_n,k}$ (also denoted as $w_n^{(k)}$). All the extended examples would then be jointly learned by a binary classifier $f$ with confidence outputs, aiming at a low weighted 0/1 loss. Finally, a ranking rule $r$ is constructed from $f$ using (1). The cost bound in (4) leads to the following theorem.

Theorem 1 (reduction) An ordinal regression problem with a V-shaped cost matrix $C$ can be reduced to a binary classification problem with the extended examples in (3) and the ranking rule $r$ in (1). If $f$ is rank-monotonic or every row of $C$ is convex, for any example $(x, y)$ and its extended examples $(x^{(k)}, y^{(k)})$, the weighted sum of the 0/1 loss of $f(x^{(k)})$ bounds the cost of $r(x)$.

2.2 Thresholded model

From Theorem 1 and the illustrations above, a rank-monotonic $f$ is preferred for our framework. A popular approach to obtain such a function $f$ is to use a thresholded model [1, 4, 5, 7]: $f(x, k) = g(x) - \theta_k$. As long as the threshold vector $\theta$ is ordered, i.e., $\theta_1 \le \theta_2 \le \cdots \le \theta_{K-1}$, the function $f$ is rank-monotonic. The question is then, "when can a binary classification algorithm return ordered thresholds?" A mild but sufficient condition is shown as follows.

Theorem 2 (ordered thresholds) If every row of the cost matrix is convex, and the binary classification algorithm minimizes the loss

$\Lambda(g) + \sum_{n=1}^{N} \sum_{k=1}^{K-1} w_n^{(k)} \cdot \ell\big(y_n^{(k)} (g(x_n) - \theta_k)\big), \quad (5)$

where $\ell(\rho)$ is non-increasing in $\rho$, there exists an optimal solution $(g^*, \theta^*)$ such that $\theta^*$ is ordered.

PROOF For an optimal solution $(g, \theta)$, assume that $\theta_k > \theta_{k+1}$ for some $k$. We shall prove that switching $\theta_k$ and $\theta_{k+1}$ would not increase the objective value of (5).
First, consider an example with $y_n = k + 1$. Since $y_n^{(k)} = 1$ and $y_n^{(k+1)} = -1$, switching the thresholds changes the objective value by

$w_n^{(k)} \big[\ell(g(x_n) - \theta_{k+1}) - \ell(g(x_n) - \theta_k)\big] + w_n^{(k+1)} \big[\ell(\theta_k - g(x_n)) - \ell(\theta_{k+1} - g(x_n))\big]. \quad (6)$

Because $\ell(\rho)$ is non-increasing, the change is non-positive. For an example with $y_n < k + 1$, we have $y_n^{(k)} = y_n^{(k+1)} = -1$. The change in the objective is

$\big(w_n^{(k)} - w_n^{(k+1)}\big) \big[\ell(\theta_{k+1} - g(x_n)) - \ell(\theta_k - g(x_n))\big].$

Note that row $y_n$ of the cost matrix being convex leads to $w_n^{(k)} \le w_n^{(k+1)}$ if $y_n < k + 1$. Since $\ell(\rho)$ is non-increasing, the change above is also non-positive. The case for examples with $y_n > k + 1$ is similar, and the change there is also non-positive. Thus, by switching adjacent pairs of strictly decreasing thresholds, we can actually obtain a solution $(g^*, \theta^*)$ with a smaller or equal objective value in (5), and $g^* = g$. The optimality of $(g, \theta)$ shows that $(g^*, \theta^*)$ is also optimal. ■

Note that if $\ell(\rho)$ is strictly decreasing for $\rho < 0$, and there are training examples for every rank, the change (6) is strictly negative. Thus, the optimal $\theta^*$ for any $g^*$ is always ordered.

3 Algorithms based on the framework

So far the reduction works only by assuming that $x^{(k)} = (x, k)$ is a pair understandable by $f$. Actually, any lossless encoding from $(x, k)$ to a vector can be used to encode the pair. With proper choices of the cost matrix, the encoding scheme of $(x, k)$, and the binary learning algorithm, many existing ordinal regression algorithms can be unified in our framework. In this section, we will briefly discuss some of them. It happens that a simple encoding scheme for $(x, k)$ via a coding matrix $E$ of $(K-1)$ rows works for all these algorithms. To form $x^{(k)}$, the vector $e_k$, which denotes the $k$-th row of $E$, is appended after $x$. We will mostly work with $E = \gamma I_{K-1}$, where $\gamma$ is a positive scalar and $I_{K-1}$ is the $(K-1) \times (K-1)$ identity matrix.
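The three reduction steps, with the simple encoding $x^{(k)} = (x, \gamma e_k)$, can be sketched end to end. Everything below other than Eqs. (1) and (3) — the 1-D data generator, the thresholds, the plain-perceptron trainer, and all parameter values — is an illustrative assumption, not the paper's experimental setup; any binary learner could replace the perceptron.

```python
import numpy as np

# Illustrative sketch, not the paper's setup: 1-D inputs, K = 4 ranks generated by
# thresholding x at b (with a safety margin so the extended examples are linearly
# separable), absolute cost (so every weight w_{y,k} = 1 here), encoding
# x^(k) = (x, gamma*e_k), and a plain perceptron as the binary learner.
rng = np.random.default_rng(3)
K, gamma, b = 4, 1.0, np.array([-1.0, 0.0, 1.0])

x = rng.uniform(-2, 2, 400)
x = x[np.min(np.abs(x[:, None] - b), axis=1) > 0.3][:200]   # keep a 0.3 margin
y = np.digitize(x, b)                                        # ranks 0..K-1 (0-based)

def extend(xi):
    """Extended inputs x^(k) = (xi, gamma*e_k) for k = 0..K-2."""
    return np.hstack([np.full((K - 1, 1), xi), gamma * np.eye(K - 1)])

# Step 1: extract extended examples with binary labels y^(k) = 2[k < y] - 1.
exts = np.vstack([extend(xi) for xi in x])
labels = np.concatenate([np.where(np.arange(K - 1) < yi, 1.0, -1.0) for yi in y])

# Step 2: learn one joint binary classifier f(x,k) = <(u, -theta), x^(k)>.
w = np.zeros(1 + K - 1)
for _ in range(300):
    for xe, lab in zip(exts, labels):
        if lab * (w @ xe) <= 0:          # perceptron update on a mistake
            w += lab * xe

# Step 3: ranking rule (1), 0-based: r(x) = number of positive answers.
rank = lambda xi: int(np.sum(extend(xi) @ w > 0))
train_cost = np.mean([abs(yi - rank(xi)) for xi, yi in zip(x, y)])
print(train_cost)   # -> 0.0 once the perceptron converges on this separable sample
```

Because the data admit a thresholded model $(u, -\theta) = (1, -b)$ with margin at least 0.3, the classical perceptron mistake bound caps the total number of updates, so the loop converges and the recovered ranks match exactly.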
3.1 Perceptron-based algorithms

The perceptron ranking (PRank) algorithm proposed by Crammer and Singer [1] is an online ordinal regression algorithm that employs the thresholded model with $f(x, k) = \langle u, x \rangle - \theta_k$. Whenever a training example is not predicted correctly, the current $u$ and $\theta$ are updated in a way similar to the perceptron learning rule [8]. The algorithm was proved to keep an ordered $\theta$, and a mistake bound was also proposed [1]. With the simple encoding scheme $E = I_{K-1}$, we can see that $f(x, k) = \langle (u, -\theta),\, x^{(k)} \rangle$. Thus, when the absolute cost matrix is taken and a modified perceptron learning rule (see footnote 2) is used as the underlying binary classification algorithm, the PRank algorithm is a specific instance of our framework. The orderliness of the thresholds is guaranteed by Theorem 2, and the mistake bound is a direct application of the well-known perceptron mistake bound (see for example Freund and Schapire [8]). Our framework not only simplifies the derivation of the mistake bound, but also allows the use of other perceptron algorithms, such as a batch-mode algorithm rather than an online one.

3.2 SVM-based algorithms

SVM [9] can be thought of as a generalized perceptron with a kernel that computes the inner product on transformed input vectors $\phi(x)$. For the extended examples $(x, k)$, we can suitably define the extended kernel as the original kernel plus the inner product between the extensions,

$\mathcal{K}\big((x, k), (x', k')\big) = \langle \phi(x), \phi(x') \rangle + \langle e_k, e_{k'} \rangle.$

Then, several SVM-based approaches for ordinal regression are special instances of our framework. For example, the approach of Rajaram et al. [3] is equivalent to using the classification cost matrix, the coding matrix $E$ defined with $e_{k,i} = \gamma \cdot [\![k \le i]\!]$ for some $\gamma > 0$, and the hard-margin SVM.
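The extended kernel above can be sketched in a few lines. The Gaussian base kernel and the choice $E = \gamma I_{K-1}$ (which makes $\langle e_k, e_{k'} \rangle = \gamma^2 [\![k = k']\!]$) are illustrative assumptions, as are all parameter values.

```python
import numpy as np

# Sketch of the extended kernel K((x,k),(x',k')) = <phi(x),phi(x')> + <e_k,e_{k'}>.
# The Gaussian base kernel and E = gamma * I_{K-1} (so <e_k,e_{k'}> = gamma^2 [k==k'])
# are illustrative choices, as are all parameter values.
def base_kernel(x1, x2, sigma=1.0):
    return np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma ** 2))

def extended_kernel(x1, k1, x2, k2, gamma=1.0):
    return base_kernel(x1, x2) + gamma ** 2 * float(k1 == k2)

x, xp = np.array([0.0, 1.0]), np.array([1.0, 0.0])
print(extended_kernel(x, 0, x, 0))    # -> 2.0 (base kernel 1 plus gamma^2)
print(extended_kernel(x, 0, xp, 1))   # base kernel only, exp(-1) here
```

Since the extension term is itself a valid kernel on the index $k$, the sum stays positive semidefinite, so any off-the-shelf kernel machine can consume the extended examples unchanged.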
When $E = \gamma I_{K-1}$ and the traditional soft-margin SVM are used in our framework, the binary classifier $f(x, k)$ has the form $\langle u, \phi(x) \rangle - \theta_k - b$, and can be obtained by solving

$\min_{u,\theta,b}\; \|u\|^2 + \|\theta\|^2/\gamma^2 + \kappa \sum_{n=1}^{N} \sum_{k=1}^{K-1} w_n^{(k)} \max\big\{0,\; 1 - y_n^{(k)} (\langle u, \phi(x_n) \rangle - \theta_k - b)\big\}. \quad (7)$

The explicit (SVOR-EXP) and implicit (SVOR-IMC) approaches of Chu and Keerthi [4] can be regarded as instances of our framework with a modified soft-margin SVM formulation (since they excluded the term $\|\theta\|^2/\gamma^2$ and added some constraints on $\theta$). Thus, many of their results can be alternatively explained with our reduction framework. For example, their proof for ordered $\theta$ of SVOR-IMC is implied from Theorem 2. In addition, they found that SVOR-EXP performed better in terms of the classification cost, and SVOR-IMC prevailed in terms of the absolute cost. This finding can also be explained by reduction: SVOR-EXP is an instance of our framework using the classification cost, and SVOR-IMC comes from using the absolute cost.

Note that Chu and Keerthi put much effort into designing and implementing suitable optimizers for their modified formulation. If the unmodified soft-margin SVM (7) is directly used in our framework with the absolute cost, we obtain a new support vector ordinal regression formulation (see footnote 3). From Theorem 2, the thresholds $\theta$ would be ordered. The dual of (7) can be easily solved with state-of-the-art SVM optimizers, and the formulations of Chu and Keerthi can be approximated by setting $\gamma$ to a large value. As we shall see in Section 5, even a simple setting of $\gamma = 1$ performs similarly to the approaches of Chu and Keerthi in practice.

4 Generalization bounds

With the extended examples, new generalization bounds can be derived for ordinal regression problems with any cost matrix.
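Before the formal bounds, the core inequality (4) that they build on can be sanity-checked numerically; the sketch below uses the convex-row absolute cost with an arbitrary (non-rank-monotonic) $f$, and the sample count and $K$ are arbitrary illustrative choices.

```python
import numpy as np

# Numerical spot check of inequality (4): with the convex-row absolute cost and an
# arbitrary (not necessarily rank-monotonic) f, the cost of the constructed ranking
# rule never exceeds the weighted 0/1 loss on the extended examples.
rng = np.random.default_rng(2)
K = 5
C = np.abs(np.subtract.outer(np.arange(K), np.arange(K))).astype(float)

max_gap = -np.inf
for _ in range(1000):
    y = int(rng.integers(K))                        # true rank (0-based)
    f = rng.standard_normal(K - 1)                  # arbitrary confidences f(x, k)
    r = int(np.sum(f > 0))                          # ranking rule (1), 0-based
    y_ext = np.where(np.arange(K - 1) < y, 1, -1)   # labels y^(k) from (3)
    w = np.abs(C[y, :-1] - C[y, 1:])                # weights w_{y,k} from (3)
    max_gap = max(max_gap, C[y, r] - np.sum(w * (y_ext * f <= 0)))
print(max_gap)   # never positive: the bound (4) holds on every sample
```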
A simple result that comes immediately from (4) is:

Theorem 3 (reduction of generalization error) Let $c_y = C_{y,1} + C_{y,K}$ and $c = \max_y c_y$. If $f$ is rank-monotonic or every row of $C$ is convex, there exists a distribution $\hat P$ on $(X, Y)$, where $X$ contains the encoding of $(x, k)$ and $Y$ is a binary label, such that

$\mathbb{E}_{(x,y)\sim P}\, C_{y,r(x)} \le c \cdot \mathbb{E}_{(X,Y)\sim \hat P}\, [\![Y f(X) \le 0]\!].$

PROOF We prove by constructing $\hat P$. Given the conditions, following (4), we have

$C_{y,r(x)} \le \sum_{k=1}^{K-1} w_{y,k}\, [\![y^{(k)} f(x^{(k)}) \le 0]\!] = c_y \cdot \mathbb{E}_{k \sim P_k}\, [\![y^{(k)} f(x^{(k)}) \le 0]\!],$

where $P_k(k \mid y) = w_{y,k}/c_y$ is a probability distribution because $c_y = \sum_{k=1}^{K-1} w_{y,k}$. Equivalently, we can define a distribution $\hat P(x^{(k)}, y^{(k)})$ that generates $(x^{(k)}, y^{(k)})$ by drawing the tuple $(x, y, k)$ from $P(x, y)$ and $P_k(k \mid y)$. Then, the generalization error of $r$ is

$\mathbb{E}_{(x,y)\sim P}\, C_{y,r(x)} \le \mathbb{E}_{(x,y)\sim P}\, c_y \cdot \mathbb{E}_{k \sim P_k}\, [\![y^{(k)} f(x^{(k)}) \le 0]\!] \le c \cdot \mathbb{E}_{(x^{(k)},y^{(k)})\sim \hat P}\, [\![y^{(k)} f(x^{(k)}) \le 0]\!]. \quad (8)$ ■

(Footnote 2: To precisely replicate the PRank algorithm, the $(K-1)$ extended examples sprouted from the same example should be considered altogether in updating the perceptron weight vector. Footnote 3: The formulation was only briefly mentioned in a footnote, but not studied, by Chu and Keerthi [4].)

Theorem 3 shows that, if the binary classifier $f$ generalizes well when examples are sampled from $\hat P$, the constructed ranking rule would also generalize well. The terms $y^{(k)} f(x^{(k)})$, which are exactly the margins of the associated binary classifier $f_b(x, k)$, would be analogously called the margins for ordinal regression, and are expected to be positive and large for correct and confident predictions. Herbrich et al. [5] derived a large-margin bound for an SVM-based thresholded model using pairwise comparisons between examples. However, the bound is complicated because $O(N^2)$ pairs are taken into consideration, and the bound is restricted because it is only applicable to hard-margin cases, i.e., for all $n$, the margins $y_n^{(k)} f(x_n^{(k)}) \ge \Delta > 0$. Another large-margin bound was derived by Shashua and Levin [2].
However, the bound is not data-dependent, and hence does not fully explain the generalization performance of large-margin ranking rules in reality (for more discussions on data-dependent bounds, see the work of, for example, Bartlett and Shawe-Taylor [10]). Next, we show how a novel data-dependent bound for SVM-based ordinal regression approaches can be derived from our reduction framework. Our bound includes only $O(KN)$ extended examples, and applies to both hard-margin and soft-margin cases, i.e., the margins $y^{(k)} f(x^{(k)})$ can be negative. Similar techniques can be used to derive generalization bounds when AdaBoost is the underlying classifier (see the work of Lin and Li [7] for one such bound).

Theorem 4 (data-dependent bound for support vector ordinal regression) Assume that

$f(x, k) \in \big\{ f : (x, k) \mapsto \langle u, \phi(x) \rangle - \theta_k,\ \|u\|^2 + \|\theta\|^2 \le 1,\ \|\phi(x)\|^2 + 1 \le R^2 \big\}.$

If $\theta$ is ordered or every row of $C$ is convex, for any margin criterion $\Delta$, with probability at least $1-\delta$, every ranking rule $r$ based on $f$ has generalization error no more than

$\frac{\beta}{N} \sum_{n=1}^{N} \sum_{k=1}^{K-1} w_n^{(k)}\, [\![y_n^{(k)} f(x_n^{(k)}) \le \Delta]\!] + O\!\left(\frac{\log N}{\sqrt{N}}, \frac{R}{\Delta}, \sqrt{\log\tfrac{1}{\delta}}\right),$

where $\beta = \max_y c_y / \min_y c_y$.

PROOF Consider the extended training set $\hat S = \{(x_n^{(k)}, y_n^{(k)})\}$, which contains $N(K-1)$ elements. Each element is a possible outcome from the distribution $\hat P$ constructed in Theorem 3. Note, however, that these elements are not all independent. Thus, we cannot directly use the whole extended set as i.i.d. outcomes from $\hat P$. Nevertheless, some subsets of $\hat S$ do contain i.i.d. outcomes from $\hat P$. One way to extract such a subset is to choose independent $k_n$ from $P_k(k \mid y_n)$ for each $(x_n, y_n)$. The subset would be named $T = \{(x_n^{(k_n)}, y_n^{(k_n)})\}_{n=1}^{N}$. Bartlett and Shawe-Taylor [10] showed that with probability at least $(1-\delta/2)$ over the choice of $N$ i.i.d. outcomes from $\hat P$, which is the case of $T$,

$\mathbb{E}_{(x^{(k)},y^{(k)})\sim \hat P}\, [\![y^{(k)} f(x^{(k)}) \le 0]\!] \le \frac{1}{N} \sum_{n=1}^{N} [\![y_n^{(k_n)} f(x_n^{(k_n)}) \le \Delta]\!] + O\!\left(\frac{\log N}{\sqrt{N}}, \frac{R}{\Delta}, \sqrt{\log\tfrac{1}{\delta}}\right). \quad (9)$

Let $b_n = [\![y_n^{(k_n)} f(x_n^{(k_n)}) \le \Delta]\!]$ be a Boolean random variable introduced by $k_n \sim P_k(k \mid y_n)$. The variable has mean $c_{y_n}^{-1} \cdot \sum_{k=1}^{K-1} w_n^{(k)}\, [\![y_n^{(k)} f(x_n^{(k)}) \le \Delta]\!]$. An extended Chernoff bound shows that when each $b_n$ is chosen independently, with probability at least $(1-\delta/2)$ over the choice of $b_n$,

$\frac{1}{N} \sum_{n=1}^{N} b_n \le \frac{1}{N} \sum_{n=1}^{N} \frac{1}{c_{y_n}} \sum_{k=1}^{K-1} w_n^{(k)}\, [\![y_n^{(k)} f(x_n^{(k)}) \le \Delta]\!] + O\!\left(\frac{1}{\sqrt{N}}, \sqrt{\log\tfrac{1}{\delta}}\right). \quad (10)$

The desired result can be obtained by combining (8), (9), and (10) with a union bound. ■

5 Experiments

We performed experiments with eight benchmark data sets that were used by Chu and Keerthi [4]. The data sets were produced by quantizing some metric regression data sets with $K = 10$. We used the same training/test ratio and also averaged the results over 20 trials. Thus, with the absolute cost matrix, we can fairly compare our results with those of SVOR-IMC [4]. We tested our framework with $E = \gamma I_{K-1}$ and three different binary classification algorithms. The first binary algorithm is Quinlan's C4.5 [11]. The second is AdaBoost-stump, which uses AdaBoost to aggregate 500 decision stumps. The third one is SVM with the perceptron kernel [12], with a simple setting of $\gamma = 1$.

Table 1: Test error with absolute cost (columns 2-4: reduction with the listed binary algorithm; columns 5-6: SVOR-IMC with the listed kernel)

| data set | C4.5 | AdaBoost-stump | SVM-perceptron | SVOR-IMC perceptron | SVOR-IMC Gaussian [4] |
| pyrimidines | 1.565 ± 0.072 | 1.360 ± 0.054 | 1.304 ± 0.040 | 1.315 ± 0.039 | 1.294 ± 0.046 |
| machine | 0.987 ± 0.024 | 0.875 ± 0.017 | 0.842 ± 0.022 | 0.814 ± 0.019 | 0.990 ± 0.026 |
| boston | 0.950 ± 0.016 | 0.846 ± 0.015 | 0.732 ± 0.013 | 0.729 ± 0.013 | 0.747 ± 0.011 |
| abalone | 1.560 ± 0.006 | 1.458 ± 0.005 | 1.383 ± 0.004 | 1.386 ± 0.005 | 1.361 ± 0.003 |
| bank | 1.700 ± 0.005 | 1.481 ± 0.002 | 1.404 ± 0.002 | 1.404 ± 0.002 | 1.393 ± 0.002 |
| computer | 0.701 ± 0.003 | 0.604 ± 0.002 | 0.565 ± 0.002 | 0.565 ± 0.002 | 0.596 ± 0.002 |
| california | 0.974 ± 0.004 | 0.991 ± 0.003 | 0.940 ± 0.001 | 0.939 ± 0.001 | 1.008 ± 0.001 |
| census | 1.263 ± 0.003 | 1.210 ± 0.001 | 1.143 ± 0.002 | 1.143 ± 0.002 | 1.205 ± 0.002 |
Note that the Gaussian kernel was used by Chu and Keerthi [4]. We used the perceptron kernel instead to gain the advantage of faster parameter selection. The parameter $\kappa$ of the soft-margin SVM was determined by a 5-fold cross-validation procedure with $\log_2 \kappa = -17, -15, \dots, 3$, and LIBSVM [13] was adopted as the solver. For a fair comparison, we also implemented SVOR-IMC with the perceptron kernel and the same parameter selection procedure in LIBSVM.

We list the mean and the standard error of all test results in Table 1, with entries within one standard error of the lowest one marked in bold. With our reduction framework, all three binary learning algorithms could be better than SVOR-IMC with the Gaussian kernel on some of the data sets, which demonstrates that they achieve decent out-of-sample performance. Among the three algorithms, SVM-perceptron is significantly better than the other two. Within the three SVM-based approaches, the two with the perceptron kernel are better than SVOR-IMC with the Gaussian kernel in test performance. Our direct reduction to the standard SVM performs similarly to SVOR-IMC with the same perceptron kernel, but is much easier to implement. In addition, our direct reduction is significantly faster than SVOR-IMC in training, which is illustrated in Figure 1 using the four largest data sets (footnote 4: the results are averaged CPU time gathered on a 1.7G dual Intel Xeon machine with 1GB of memory).

[Figure 1: Training time (including automatic parameter selection) of the SVM-based approaches with the perceptron kernel; x-axis: bank, computer, california, census; y-axis: avg. training time (hour), 0-6; legend: reduction, SVOR-IMC.]

The main cause of the time difference is the speedup heuristics. While, to the best of our knowledge, not much has been done to improve the original SVOR-IMC algorithm, plenty of heuristics, such as shrinking and advanced working set selection in LIBSVM, can be seamlessly adopted by our direct reduction.
This difference demonstrates another advantage of our reduction framework: improvements to binary classification approaches can be immediately inherited by reduction-based ordinal regression algorithms.

6 Conclusion

We presented a reduction framework from ordinal regression to binary classification based on extended examples. The framework has the flexibility to work with any reasonable cost matrix and any binary classifiers. We demonstrated the algorithmic advantages of the framework in designing new ordinal regression algorithms and explaining existing algorithms. We also showed that the framework can be used to derive new generalization bounds for ordinal regression. Furthermore, the usefulness of the framework was empirically validated by comparing three new algorithms constructed from our framework with the state-of-the-art SVOR-IMC algorithm.

Acknowledgments

We wish to thank Yaser S. Abu-Mostafa, Amrit Pratap, John Langford, and the anonymous reviewers for valuable discussions and comments. Ling Li was supported by the Caltech SISL Graduate Fellowship, and Hsuan-Tien Lin was supported by the Caltech EAS Division Fellowship.

References

[1] K. Crammer and Y. Singer. Pranking with ranking. In T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Information Processing Systems 14, vol. 1, pp. 641–647. MIT Press, 2002.
[2] A. Shashua and A. Levin. Ranking with large margin principle: Two approaches. In S. Becker, S. Thrun, and K. Obermayer, eds., Advances in Neural Information Processing Systems 15, pp. 961–968. MIT Press, 2003.
[3] S. Rajaram, A. Garg, X. S. Zhou, and T. S. Huang. Classification approach towards ranking and sorting problems. In N. Lavrač, D. Gamberger, H. Blockeel, and L. Todorovski, eds., Machine Learning: ECML 2003, vol. 2837 of Lecture Notes in Artificial Intelligence, pp. 301–312. Springer-Verlag, 2003.
[4] W. Chu and S. S. Keerthi. New approaches to support vector ordinal regression. In L. D. Raedt and S. Wrobel, eds., ICML 2005: Proceedings of the 22nd International Conference on Machine Learning, pp. 145–152. Omnipress, 2005.
[5] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, eds., Advances in Large Margin Classifiers, chapter 7, pp. 115–132. MIT Press, 2000.
[6] E. Frank and M. Hall. A simple approach to ordinal classification. In L. D. Raedt and P. Flach, eds., Machine Learning: ECML 2001, vol. 2167 of Lecture Notes in Artificial Intelligence, pp. 145–156. Springer-Verlag, 2001.
[7] H.-T. Lin and L. Li. Large-margin thresholded ensembles for ordinal regression: Theory and practice. In J. L. Balcázar, P. M. Long, and F. Stephan, eds., Algorithmic Learning Theory: ALT 2006, vol. 4264 of Lecture Notes in Artificial Intelligence, pp. 319–333. Springer-Verlag, 2006.
[8] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296, 1999.
[9] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 2nd edition, 1999.
[10] P. Bartlett and J. Shawe-Taylor. Generalization performance of support vector machines and other pattern classifiers. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, eds., Advances in Kernel Methods: Support Vector Learning, chapter 4, pp. 43–54. MIT Press, 1998.
[11] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81–106, 1986.
[12] H.-T. Lin and L. Li. Novel distance-based SVM kernels for infinite ensemble learning. In Proceedings of the 12th International Conference on Neural Information Processing, pp. 761–766, 2005.
[13] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
| 2006 | 1 | 2,921 |
Multiple timescales and uncertainty in motor adaptation Konrad P. Körding Rehabilitation Institute of Chicago Northwestern University, Dept. PM&R Chicago, IL 60611 konrad@koerding.com Joshua B. Tenenbaum Massachusetts Institute of Technology Cambridge, MA 02139 jbt@mit.edu Reza Shadmehr Johns Hopkins University Baltimore, MD 21205 reza@bme.jhu.edu Abstract Our motor system changes due to causes that span multiple timescales. For example, muscle response can change because of fatigue, a condition where the disturbance has a fast timescale, or because of disease, where the disturbance is much slower. Here we hypothesize that the nervous system adapts in a way that reflects the temporal properties of such potential disturbances. According to a Bayesian formulation of this idea, movement error results in a credit assignment problem: what timescale is responsible for this disturbance? The adaptation schedule influences the behavior of the optimal learner, changing estimates at different timescales as well as the uncertainty. A system that adapts in this way predicts many properties observed in saccadic gain adaptation. It accurately predicts the timecourses of motor adaptation in cases of partial sensory deprivation and reversals of the adaptation direction. 1 Introduction Saccades are rapid eye movements that shift the direction of gaze from one target to another. The eyes move so fast [1] that visual feedback cannot usually be used during the movement. For that reason, without adaptation any changes in the properties of the oculomotor plant would lead to inaccurate saccades [2]. Motor gain is the ratio of actual and desired movement distances. If the motor gain decreases to below one, then the nervous system must send a stronger command to produce a movement of the same size. Indeed, it has been observed that if saccades overshoot the target, the gain tends to decrease, and if they undershoot, the gain tends to increase.
The saccadic jump paradigm [3] is often used to probe such adaptation [4]: while the subject moves its eyes towards a target, the target is moved. To the subject this is not distinguishable from a change in the properties of the oculomotor plant [5]. Using this paradigm it is possible to probe the mechanism that is normally used to adapt to ongoing changes of the oculomotor plant. 1.1 Disturbances to the motor plant Properties of the oculomotor plant may change due to a variety of disturbances, such as various kinds of fatigue and disease. The fundamental characteristic of these disturbances is that their effects unfold over a wide range of timescales. Here we model each disturbance as a random walk with a characteristic timescale (Figures 1A and B) over which the disturbance is expected to go away: disturbanceτ(t + Δ) = (1 − 1/τ) disturbanceτ(t) + ϵτ (1) where ϵτ is drawn from a mean-zero normal distribution of width στ, and τ is the timescale. The larger τ, the closer (1 − 1/τ) is to 1 and the longer a disturbance typically lasts. 1.2 Parameter choice For the experiments that we want to explain, only those timescales will matter that are not much longer than the overall time of the experiments (because they would already have been integrated out) and that are not much shorter than the time of an individual saccade (because they would average out). For that reason we chose the distribution of τ to be 30 values exponentially scaled between 1 and 33333 saccades. The distribution of expected gains thus only depends on the distribution of στ, a characterization of how important disturbances are at various timescales. It seems plausible that disturbances that have a short timescale tend to be more variable than those that have a long timescale, and we choose: στ = c/τ, where c is one of the two free parameters of our model.
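To make eq. (1) concrete, here is a small simulation of the 30 disturbance random walks under the stated parameter choices. This is a sketch: a unit time step per saccade and the random seed are assumptions.

```python
# Simulate the multi-timescale disturbances of eq. (1):
# disturbance_tau(t+1) = (1 - 1/tau) * disturbance_tau(t) + eps_tau,
# with sigma_tau = c / tau and 30 timescales log-spaced in [1, 33333].
import numpy as np

rng = np.random.default_rng(0)
taus = np.logspace(0, np.log10(33333), 30)    # characteristic timescales
c = 0.002
sigmas = c / taus                             # faster timescales are noisier

T = 1000
d = np.zeros((T, len(taus)))                  # one random walk per timescale
for t in range(1, T):
    d[t] = (1.0 - 1.0 / taus) * d[t - 1] + rng.normal(0.0, sigmas)

print(d[:, 0].std(), d[:, -1].std())          # fast walks fluctuate more
```

Over a short experiment the fast walks dominate the moment-to-moment variability, while the slow walks drift very little, which is exactly the property the inference below exploits.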
Moreover, as we expect each disturbance to be relatively small, we assume linearity and that the motor gain is simply one plus the sum of all the disturbances: gain(t) = 1 + Στ disturbanceτ(t) (2) If the motor plant underwent such changes in its properties, and if the nervous system produced the same motor commands without adaptation, then saccade gain would differ from one, resulting in motor error. However, with each saccade, the brain observes consequences of the motor commands. We assume that this observation is corrupted by noise: observation(t) = gain(t) + w (3) where w is the second free parameter of our model, the observation noise with a width σw. Throughout this paper we choose σw = 0.05, which we estimated from the spread of saccade gains over typical periods of 200 saccades, and c = 0.002 because that yielded good fits to the data by Hopp and Fuchs [2]. We chose to model all data using the same set of parameters to avoid issues of overfitting. 1.3 Inference Given this explicit model, Bayesian statistics allows deriving an optimal adaptation strategy. We observe that the system is equivalent to the generative model of the Kalman filter [6] with a diagonal transition matrix M = diag(1 − 1/τ), an observation matrix H that is a vector consisting of one 1 for each of the 30 potential disturbances, and a diagonal process noise matrix Q = diag(τ⁻¹). Process noise is what drives the changes of each of the disturbances. We obtain the solution that is well known from the Kalman filter literature. We use the Kalman filter toolbox written by Kevin Murphy to numerically solve these equations. An optimally adapting system needs to explicitly represent the contribution of each timescale. Because the contribution of each timescale can never be known precisely, the Bayesian learner represents what it knows as a probability distribution. As the model is linear and the noises are Gaussian, it is sufficient to keep first and second order statistics.
And so the learner represents what it knows about the contribution of each timescale as a best estimate, but also keeps a measure of uncertainty around this estimate (Fig 1C). Any point along the +0% gain line is a point where the fast and slow timescale cancel each other. There is a line associated with any possible gain (e.g. +30% and -30%). Every timestep the system starts with the belief that it has from the previous timestep (sketched in yellow) and combines this with information from the current saccade (sketched in blue) to come up with a new estimate (sketched in red). Two important changes happen to the belief of the learner over time. (1) When time passes, disturbances can be expected to get smaller, but at the same time our uncertainty about them increases. (2) When a movement error is observed, this biases the sum of the disturbances towards the observed error value and also decreases the uncertainty. These effects are sketched in Figure 1D. Normally the adaptation mechanism is responding to the small drifts that happen to the oculomotor plant, and the estimate from the saccade largely overlaps with the prior belief and with the new belief. When the light is turned off, the estimate of each of the disturbances slowly creeps towards zero. At the same time, however, the uncertainty increases considerably, and larger uncertainty allows faster learning because the new information is more precise than the prior information. In the saccadic jump paradigm the error is much larger than it would be during normal life; this is first interpreted by the learner as a fast change and, as it persists, progressively interpreted as a slow change. When the saccadic jump ends, the fast timescale goes negative quickly and the slow timescale slowly approaches zero. In a reversal setting the fast timescale becomes very negative and the slow timescale goes towards zero. Already with two timescales the optimal learner can thus exhibit a large number of interesting properties.

Figure 1: A generative model for changes in the motor plant and the corresponding optimal inference. A) Various disturbances d evolve over time as independent random walks. The gain is a linear function of all these random walks. The observed error is a noisy version of the gain. B) An example of a system with two timescales (fast and slow), and the resulting gain. C) Optimal inference during a saccade adaptation experiment. For illustrative purposes, here we assume only two timescales. The yellow cloud represents the learner's belief about the current combination of disturbances (prior). The system observes a saccade with an error of +30%. The region about the blue line is the uncertainty about the observation (i.e., the likelihood). Combining this information with the prior belief (yellow) leads to the posterior estimate (red). After a single observation of the +30% condition, the most probable estimate thus is that it is a fast disturbance. D) The changes of estimates under various perturbations. Here we simulated a saccade on every 10th time step of the model. Each column shows three consecutive trials (top to bottom). Only in the darkness case saccades 1, 3 and 50 are shown. In the dark, parameter uncertainties increase because the learner is not allowed to make observations (sensory noise is effectively infinite). In a gain increase paradigm, initially most of the error is associated with the fast perturbations. After 30 saccades in the gain increase paradigm, most of the error is associated with slow perturbations. Washout trials that follow gain increase do not return the system to a naive state. Rather, estimates of fast and slow perturbations cancel each other. Gain decrease following gain increase training will mostly affect the fast system.

Figure 2: Saccadic gain adaptation in a target jump paradigm. A) Data replotted from Hopp and Fuchs [2] with permission. Each dot is one saccade; the thick lines are exponential fits to the intervals [0, 1400] and [1400, 2800]. Starting at saccade number 0 the target jumps 30% short of the target, giving the impression of muscles that are too strong. The gain then decreases until the manipulation is ended at saccade number 1400. B) The same plot is shown for the optimal Bayesian learner. Changes without feedback. C) Data reprinted from [7]. Normal saccadic gain change paradigm as in Figure 2, however now the monkey spends its nights without vision and the paradigm is continued for many days. D) The same plot as in C) but for the Bayesian learner. E) Comparison of the saccadic gain change timecourses obtained by fitting an exponential. F) The same figure as in E) for the Bayesian learner.

2 Results: Comparison with experimental data 2.1 Saccadic gain adaptation In an impressive range of experiments started by McLaughlin [3], investigators have examined how monkeys adapt their saccadic gain. Figure 2A shows how the gain changes over time so that saccades progressively become more precise. The rate of adaptation typically starts fast and then progressively gets slower.
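This fast-then-slow timecourse can be reproduced with a minimal version of the learner. The sketch below uses only two illustrative timescales (τ = 10 and τ = 1000) rather than the paper's thirty, with the paper's c = 0.002 and σw = 0.05:

```python
# A two-timescale Bayesian learner adapting to a sustained -30% gain error.
import numpy as np

taus = np.array([10.0, 1000.0])               # one fast, one slow timescale
M = np.diag(1.0 - 1.0 / taus)                 # per-timescale decay
H = np.ones((1, 2))                           # the error observes the summed disturbance
Q = np.diag((0.002 / taus) ** 2)              # process noise, sigma_tau = c / tau
R = np.array([[0.05 ** 2]])                   # observation noise, sigma_w = 0.05

x, P = np.zeros((2, 1)), np.eye(2) * 1e-4
estimates = []
for t in range(2000):
    x, P = M @ x, M @ P @ M.T + Q             # predict
    y = np.array([[-0.3]])                    # persistent -30% error (target jump)
    S = H @ P @ H.T + R
    K = P @ H.T / S                           # Kalman gain
    x = x + K @ (y - H @ x)                   # correct
    P = (np.eye(2) - K @ H) @ P
    estimates.append(float(H @ x))

early = estimates[50] - estimates[0]          # large change early in training
late = estimates[1950] - estimates[1900]      # small change late in training
print(early, late)
```

Even this reduced model adapts quickly at first and progressively more slowly later, as the credit for the persistent error shifts from the fast to the slow timescale.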
This is a classic pattern that is reflected in numerous motor adaptation paradigms [8, 9]. The same patterns are seen for the Bayesian multiscale learner (Figure 2B). Fast timescale disturbances are assumed to increase and decrease faster than slow timescale disturbances. Therefore, when the gain rapidly changes, it is a priori most likely that it will go away fast (Fig. 1D, saccadic jump). Between trials, the estimates of the fast disturbances decay fast, but this decay is smaller in the slower timescales. If the gain change is maintained, the relative contribution of the fast timescales diminishes in comparison to the slow timescales (Fig. 1D, +30 saccades). As fast timescales adapt fast but decay fast as well, and slow timescales adapt and decay slowly, this implies that the gain change is driven by progressively slower timescales, resulting in the transition from initially fast to progressively slower adaptation. 2.2 Saccadic gain adaptation after sensory deprivation The effects of a wide range of timescales and uncertainty about the causes of changes of the oculomotor plant will largely be hidden if experiments are of a relatively short duration and no uncertainty is produced. However, in a recent experiment Robinson et al. analyzed saccadic gain adaptation [7] in a way that allowed insight into many timescales as well as insight into the way the nervous system deals with uncertainty. The adaptation target was set to -50%. The monkey adapted for about 1500 saccades every day for 21 consecutive days. Because of the long duration many different timescales are involved in this process. Interestingly, during the rest of the day the monkey wore goggles that blocked vision. During these breaks monkeys will accumulate uncertainty about the state of their oculomotor plant. Figure 2C shows results from such an experiment and Figure 2D shows the results we are getting from the Bayesian learner.
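The effect of such feedback-free breaks on the learner can be sketched with a single scalar timescale: without observations the predict step inflates the variance, so the Kalman gain on the next observation is larger. All parameter values here are illustrative.

```python
# Why darkness speeds up subsequent learning: the posterior variance grows
# during the observation-free period, and the next observation therefore
# receives a larger Kalman gain (i.e., drives a larger update).
a, q, r = 0.999, 1e-6, 0.05 ** 2              # decay, process noise, obs. noise

def predict_var(P):                           # variance update with no feedback
    return a * P * a + q

def kalman_gain(P):
    return P / (P + r)

P = 1e-4                                      # variance after extended training
K_before = kalman_gain(predict_var(P))

for _ in range(500):                          # 500 saccades in the dark
    P = predict_var(P)

K_after = kalman_gain(predict_var(P))
print(K_before, K_after)                      # larger gain after darkness
```

This is the mechanism behind the faster learning on day two: accumulated uncertainty makes new information relatively more precise than the prior.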
The results are surprisingly similar given that we used the same model parameters that we had inferred from the Hopp and Fuchs data. Two effects are visible in the data. (1) There are several timescales during adaptation: there is a fast (100 saccades) and a slow (10 days) timescale. Closer examination of the data reveals a wide spectrum of timescales. (2) The state estimate is affected by the periods of darkness. During the breaks that are paired with darkness the system decays back towards a gain change of zero, as predicted by the model. Moreover, darkness leads to increased uncertainty. Increased uncertainty means that new information is relatively more precise than old information, which in turn leads to faster learning. Consequently monkeys learn faster during the second day (after spending a night without feedback) than during the first (quantified in Figure 2E and F). The finding that the Bayesian learner seems to change faster than the monkey may be related to the context being somewhat different than in the Hopp and Fuchs experiment. The system seems to represent uncertainty and clearly represents the way the motor plant is expected to change in the absence of feedback. It has been proposed that the nervous system may use a set of integrators where one is learning fast and the other is learning slowly [10, 11]. The Bayesian learner, however, keeps a measure of uncertainty about its estimates. For that reason only the Bayesian learner can explain the fact that sensory deprivation appears to enhance learning rates. 2.3 Gain adaptation with reversals Kojima et al. [12] reported a host of surprising behavioral results during saccade adaptation. In these experiments the adaptation direction was changed 3 times. The saccadic gain was initially increased, then decreased until it reached unity, and finally increased again (Figure 3A). The saccadic gain increased faster during the second gain-up session than during the first (Figure 3B).
Therefore, the reversal learning did not wash out the system. The Bayesian learner shows a similar phenomenon and provides a rationale: At the end of the first gain-up session for the Bayesian learner, most of the gain change is associated with a slow timescale (Figure 3C). In the subsequent gain-down session, errors produce rapid changes in the fast timescales, so that by the time the gain estimate reaches unity, the fast and slow timescales have opposite estimates. Therefore, the gain-down session did not reset the system; rather, the latent variables store the history of adaptation. In the subsequent gain-up session, the rate of re-adaptation is faster than initial adaptation because the fast timescales decay upwards in between trials (Figure 3D). After about 100 saccades the speed gain from the low frequencies is over and turns into a slowed increase due to the decreased error term. In a second experiment, Kojima et al. [12] found that saccade gains could change despite the fact that the animal was provided with no feedback to guide its performance. In this experiment the monkeys were again trained in a gain-up followed by a gain-down session. Afterwards they spent some time in the dark. When they came out of the dark, their gain had spontaneously increased (Figure 3E). The same effect is seen for the Bayesian learner (Figure 3F). In the dark period, the system makes no observations and therefore cannot learn from error. However, the estimates are still affected by their timescales of change: the estimate moves up fast along the fast timescales but slowly along the slow timescales. At the start of the darkness period there is a positive upward and a negative downward disturbance inferred by the system (Figure 1C, reversal). Consequently, by the end of the dark period, the estimate has become gain-up, the gain learned in the initial session. This produces the apparent spontaneous recovery observed in Figure 3F.
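The spontaneous recovery in the dark can be sketched with two timescales whose estimates nearly cancel after the reversal; the initial estimates of -0.18 and +0.20 below are hypothetical values chosen for illustration.

```python
# Spontaneous recovery without feedback: after gain-up then gain-down
# training, the slow estimate is positive and the fast estimate negative.
# In the dark the estimates only decay (predict step), each at its own
# rate, so the fast (negative) part vanishes first and the sum drifts up.
tau_fast, tau_slow = 10.0, 1000.0
x_fast, x_slow = -0.18, 0.20                  # illustrative post-reversal estimates

total = []
for t in range(300):                          # 300 saccades in darkness
    x_fast *= (1.0 - 1.0 / tau_fast)
    x_slow *= (1.0 - 1.0 / tau_slow)
    total.append(x_fast + x_slow)

print(total[0], total[-1])                    # the summed gain estimate rises
```

No observation is needed for this drift: the inferred dynamics of the plant alone pull the estimate back toward the originally learned gain-up state.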
Updating without feedback leads the system to infer unobserved dynamics of the oculomotor plant, and these dynamics lead to the observed changes.

Figure 3: The double reversal paradigm. A) The gain is first adapted up until it reaches about 1.2 with a negative target jump of 35%. Then it is adapted down with a positive target jump of 35%. Once the gain reaches 1 again it is adapted up with a positive target jump again. Data reprinted from [12] with permission. B) The speed of adaptation is compared between the first adaptation and the second positive adaptation. C) The same as in A) for the Bayesian learner. D) The same as in B) for the Bayesian learner. E) Double reversal paradigm with darkness, reprinted from [12]. The gain used by the monkey is changing during this interval. F) The same graph is shown for the Bayesian learner.

3 Discussion Traditional models of adaptation simply change motor commands to reduce prediction errors [13]. Our approach differs from traditional approaches in three major ways. (1) The system represents its knowledge of the properties of the motor system at different timescales and explicitly models how these disturbances evolve over time. (2) It represents the uncertainty it has about the magnitude of the disturbances. (3) It formulates the computational aim of adaptation in terms of optimally predicting ongoing changes in the properties of the motor plant. Multiple studies address each of these points on its own. Multi-timescale learning is a classical phenomenon described frequently [14, 8]. Two timescales have been proposed in the context of connectionist learning theory [11]. In the context of motor adaptation Smith et al.
[10] proposed a model where the motor system responds to error with two systems: one that is highly sensitive to error but rapidly forgets, and another that has poor sensitivity to error but has strong retention. In the context of classical conditioning, it has been proposed that the nervous system should keep a measure of uncertainty about its current parameter estimates to allow an optimal combination of new information with current knowledge [15]. Even the earliest studies of oculomotor adaptation realized that the objective of adaptation is to allow precise movement with a relentlessly changing motor plant [3]. Our approach unifies these ideas in a consistent computational framework and explains a wide range of experiments. Multi-timescale adaptation and learning is a near-universal phenomenon [14, 8, 16, 17]. Within the area of psychology it was found that learning follows multiscale behavior [17]. It has been proposed that multiscale learning may arise from chunking effects [14, 18]. The work presented here suggests a different interpretation. Multiscale learning in cognitive systems may be a result of a system that originally evolved to deal with ever-changing motor problems. Multiscale adaptation can also be seen in the way visual neurons adapt to changing visual stimuli [16]. The phenomenon of spontaneous recovery in classical conditioning [19, 20] is largely equivalent to the findings of Kojima et al. [12] and can also be explained within the Bayesian multiscale learner framework. The presented model obviously does not explain all known effects in motor or even saccadic gain adaptation. For example, it has been found that adapting up usually has a somewhat different timecourse than adapting down [21, 16, 12]. Moreover, it seems that the adaptation speed of monkeys can be very different from one day to the next and from one experimental setting to another (e.g. Figure 2E and F).
In learning reach control, there is more direct evidence that people can actually modify their rates of adaptation as a function of the auto-correlations of the perturbation [22]. This can be seen as the system learning about the size of the change parameter στ in this theory. Moreover, we certainly estimate the uncertainty we have about a visual stimulus in a continuous fashion: uncertainty is smallest for a high-contrast stimulus in our fovea and progressively larger with decreasing contrast and increasing eccentricity. An important question for further enquiry is how the nervous system solves problems that require multiple-timescale adaptation. The necessary effects could potentially be implemented directly by synapses that exhibit LTP with power-law characteristics [23, 24]. Alternatively, small groups of neurons may jointly represent the estimates along with their uncertainties. In summary, if we begin with the assumption that the nervous system optimally solves the problem of producing reliable movements with a motor plant that is affected by perturbations that have multiple timescales, then the learner will exhibit numerous properties that appear to match those reported in saccade and reach adaptation experiments. References [1] W. Becker. Metrics. In R. H. Wurtz and M. Goldberg, editors, The Neurobiology of Saccadic Eye Movements, pages 13–67. Elsevier, Amsterdam, 1989. [2] J. J. Hopp and A. F. Fuchs. The characteristics and neuronal substrate of saccadic eye movement plasticity. Prog Neurobiol, 72(1):27–53, 2004. [3] S. C. McLaughlin. Parametric adjustment in saccadic eye movement. Percept. Psychophys., 2:359–362, 1967. [4] J. Wallman and A. F. Fuchs. Saccadic gain modification: visual error drives motor adaptation. J Neurophysiol, 80(5):2405–16, 1998. [5] D. O. Bahcall and E. Kowler. Illusory shifts in visual direction accompany adaptation of saccadic eye movements. Nature, 400(6747):864–6, 1999. [6] R. E. Kalman.
A new approach to linear filtering and prediction problems. J. of Basic Engineering (ASME), 82D:35–45, 1960. [7] F. R. Robinson, R. Soetedjo, and C. Noto. Distinct short-term and long-term adaptation to reduce saccade size in monkey. J Neurophysiol, 2006. [8] K. M. Newell. Motor skill acquisition. Annu Rev Psychol, 42:213–37, 1991. [9] J. W. Krakauer, C. Ghez, and M. F. Ghilardi. Adaptation to visuomotor transformations: consolidation, interference, and forgetting. J Neurosci, 25(2):473–8, 2005. [10] A. M. Smith, A. Ghazzizadeh, and R. Shadmehr. Interacting adaptive processes with different timescales underlie short-term motor learning. PLoS Biol, 4(e179), 2006. [11] G. Hinton and C. Plaut. Using fast weights to deblur old memories. In 9th Annual Conference of the Cognitive Science Society, pages 177–186, Hillsdale, NJ, 1987. Erlbaum. [12] Y. Kojima, Y. Iwamoto, and K. Yoshida. Memory of learning facilitates saccadic adaptation in the monkey. J Neurosci, 24(34):7531–9, 2004. [13] K. A. Thoroughman and R. Shadmehr. Learning of action through adaptive combination of motor primitives. Nature, 407(6805):742–7, 2000. [14] John R. Anderson. The adaptive character of thought. Erlbaum, Hillsdale, NJ, 1990. [15] A. J. Yu and P. Dayan. Uncertainty, neuromodulation, and attention. Neuron, 46(4):681–92, 2005. [16] A. L. Fairhall, G. D. Lewen, W. Bialek, and R. R. de Ruyter Van Steveninck. Efficiency and ambiguity in an adaptive neural code. Nature, 412(6849):787–92, 2001. [17] H. P. Bahrick, L. E. Bahrick, A. S. Bahrick, and P. O. Bahrick. Maintenance of foreign language vocabulary and the spacing effect. Psychological Science, 4:31321, 1993. [18] P. I. Pavlik and J. R. Anderson. An ACT-R model of the spacing effect. In F. Detje, D. Doerner, and H. Schaub, editors, Proceedings of the Fifth International Conference on Cognitive Modeling, pages 177–182, Bamberg, Germany, 2003. Universitäts-Verlag Bamberg. [19] D. C. Brooks and M. E. Bouton.
A retrieval cue for extinction attenuates spontaneous recovery. J Exp Psychol Anim Behav Process, 19(1):77–89, 1993. [20] R. A. Rescorla. Spontaneous recovery varies inversely with the training-extinction interval. Learn Behav, 32(4):401–8, 2004. [21] J. M. Miller, T. Anstis, and W. B. Templeton. Saccadic plasticity: parametric adaptive control by retinal feedback. J Exp Psychol Hum Percept Perform, 7(2):356–66, 1981. [22] M. Smith, E. Hwang, and R. Shadmehr. Learning to learn: optimal adjustment of the rate at which the motor system adapts. In Proceedings of the Society for Neuroscience, 2004. [23] C. A. Barnes. Memory deficits associated with senescence: a neurophysiological and behavioral study in the rat. J Comp Physiol Psychol, 93(1):74–104, 1979. [24] S. Fusi, P. J. Drew, and L. F. Abbott. Cascade models of synaptically stored memories. Neuron, 45(4):599–611, 2005.
| 2006 | 10 | 2,922 |
Efficient Learning of Sparse Representations with an Energy-Based Model Marc'Aurelio Ranzato Christopher Poultney Sumit Chopra Yann LeCun Courant Institute of Mathematical Sciences New York University, New York, NY 10003 {ranzato,crispy,sumit,yann}@cs.nyu.edu Abstract We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses a linear encoder, and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector. Given an input, the optimal code minimizes the distance between the output of the decoder and the input patch while being as similar as possible to the encoder output. Learning proceeds in a two-phase EM-like fashion: (1) compute the minimum-energy code vector, (2) adjust the parameters of the encoder and decoder so as to decrease the energy. The model produces "stroke detectors" when trained on handwritten numerals, and Gabor-like filters when trained on natural image patches. Inference and learning are very fast, requiring no preprocessing, and no expensive sampling. Using the proposed unsupervised method to initialize the first layer of a convolutional network, we achieved an error rate slightly lower than the best reported result on the MNIST dataset. Finally, an extension of the method is described to learn topographical filter maps. 1 Introduction Unsupervised learning methods are often used to produce pre-processors and feature extractors for image analysis systems. Popular methods such as Wavelet decomposition, PCA, Kernel-PCA, Non-Negative Matrix Factorization [1], and ICA produce compact representations with somewhat uncorrelated (or independent) components [2]. Most methods produce representations that either preserve or reduce the dimensionality of the input.
However, several recent works have advocated the use of sparse-overcomplete representations for images, in which the dimension of the feature vector is larger than the dimension of the input, but only a small number of components are non-zero for any one image [3, 4]. Sparse-overcomplete representations present several potential advantages. Using high-dimensional representations increases the likelihood that image categories will be easily (possibly linearly) separable. Sparse representations can provide a simple interpretation of the input data in terms of a small number of “parts” by extracting the structure hidden in the data. Furthermore, there is considerable evidence that biological vision uses sparse representations in early visual areas [5, 6]. It seems reasonable to consider a representation “complete” if it is possible to reconstruct the input from it, because the information contained in the input would need to be preserved in the representation itself. Most unsupervised learning methods for feature extraction are based on this principle, and can be understood in terms of an encoder module followed by a decoder module. The encoder takes the input and computes a code vector, for example a sparse and overcomplete representation. The decoder takes the code vector given by the encoder and produces a reconstruction of the input. Encoder and decoder are trained in such a way that reconstructions provided by the decoder are as similar as possible to the actual input data, when these input data have the same statistics as the training samples. Methods such as Vector Quantization, PCA, auto-encoders [7], Restricted Boltzmann Machines [8], and others [9] have exactly this architecture but with different constraints on the code and learning algorithms, and different kinds of encoder and decoder architectures. In other approaches, the encoding module is missing but its role is taken by a minimization in code space which retrieves the representation [3]. 
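As a concrete instance of this encoder-decoder view, the linear case (PCA, which the text lists among methods with exactly this architecture) can be written down in closed form. This is only an illustration of the architecture, not the paper's sparse model.

```python
# Encoder-decoder view of PCA: the encoder projects an input onto k
# principal directions (the code), the decoder (the transpose) maps the
# code back, and reconstruction error measures how "complete" the code is.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (500, 8))               # 500 input vectors of dimension 8
Xc = X - X.mean(axis=0)

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

def reconstruction_error(k):
    W_enc = Vt[:k]                           # encoder: input -> k-dim code
    codes = Xc @ W_enc.T
    recon = codes @ W_enc                    # decoder: transpose of encoder
    return float(np.mean((Xc - recon) ** 2))

err_partial = reconstruction_error(4)        # lossy 4-dimensional code
err_complete = reconstruction_error(8)       # complete code: lossless
print(err_partial, err_complete)
```

A code of full dimension reconstructs the input exactly, which is the sense of "complete" used in the text; sparse-overcomplete methods instead use more code dimensions than input dimensions but keep only a few active.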
Likewise, in non-causal models the decoding module is missing and sampling techniques must be used to reconstruct the input from a code [4]. In sec. 2, we describe an energy-based model which has both an encoding and a decoding part. After training, the encoder allows very fast inference because finding a representation does not require solving an optimization problem. The decoder provides an easy way to reconstruct input vectors, thus allowing the trainer to assess directly whether the representation extracts most of the information from the input. Most methods find representations by minimizing an appropriate loss function during training. In order to learn sparse representations, a term enforcing sparsity is added to the loss. This term usually penalizes those code units that are active, aiming to make the distribution of their activities highly peaked at zero with heavy tails [10] [4]. A drawback for these approaches is that some action might need to be taken in order to prevent the system from always activating the same few units and collapsing all the others to zero [3]. An alternative approach is to embed a sparsifying module, e.g. a non-linearity, in the system [11]. This in general forces all the units to have the same degree of sparsity, but it also makes a theoretical analysis of the algorithm more complicated. In this paper, we present a system which achieves sparsity by placing a non-linearity between encoder and decoder. Sec. 2.1 describes this module, dubbed the “Sparsifying Logistic”, which is a logistic function with an adaptive bias that tracks the mean of its input. This non-linearity is parameterized in a simple way which allows us to control the degree of sparsity of the representation as well as the entropy of each code unit. 
Unfortunately, learning the parameters in encoder and decoder can not be achieved by simple backpropagation of the gradients of the reconstruction error: the Sparsifying Logistic is highly non-linear and resets most of the gradients coming from the decoder to zero. Therefore, in sec. 3 we propose to augment the loss function by considering not only the parameters of the system but also the code vectors as variables over which the optimization is performed. Exploiting the fact that 1) it is fairly easy to determine the weights in encoder and decoder when “good” codes are given, and 2) it is straightforward to compute the optimal codes when the parameters in encoder and decoder are fixed, we describe a simple iterative coordinate descent optimization to learn the parameters of the system. The procedure can be seen as a sort of deterministic version of the EM algorithm in which the code vectors play the role of hidden variables. The learning algorithm described turns out to be particularly simple, fast and robust. No pre-processing is required for the input images, beyond a simple centering and scaling of the data. In sec. 4 we report experiments of feature extraction on handwritten numerals and natural image patches. When the system has a linear encoder and decoder (remember that the Sparsifying Logistic is a separate module), the filters resemble “object parts” for the numerals, and localized, oriented features for the natural image patches. Applying these features for the classification of the digits in the MNIST dataset, we have achieved by a small margin the best accuracy ever reported in the literature. We conclude by showing a hierarchical extension which suggests the form of simple and complex cell receptive fields, and leads to a topographic layout of the filters which is reminiscent of the topographic maps found in area V1 of the visual cortex. 2 The Model The proposed model is based on three main components, as shown in fig. 
1:

• The encoder: A set of feed-forward filters, parameterized by the rows of matrix W_C, that computes a code vector from an image patch X.
• The Sparsifying Logistic: A non-linear module that transforms the code vector Z into a sparse code vector Z̄ with components in the range [0, 1].
• The decoder: A set of reverse filters, parameterized by the columns of matrix W_D, that computes a reconstruction of the input image patch from the sparse code vector Z̄.

The energy of the system is the sum of two terms:

E(X, Z, W_C, W_D) = E_C(X, Z, W_C) + E_D(X, Z, W_D)   (1)

The first term is the code prediction energy, which measures the discrepancy between the output of the encoder and the code vector Z. In our experiments, it is defined as:

E_C(X, Z, W_C) = (1/2) ||Z − Enc(X, W_C)||² = (1/2) ||Z − W_C X||²   (2)

The second term is the reconstruction energy, which measures the discrepancy between the reconstructed image patch produced by the decoder and the input image patch X. In our experiments, it is defined as:

E_D(X, Z, W_D) = (1/2) ||X − Dec(Z̄, W_D)||² = (1/2) ||X − W_D Z̄||²   (3)

where Z̄ is computed by applying the Sparsifying Logistic non-linearity to Z.

Figure 1: Architecture of the energy-based model for learning sparse-overcomplete representations. The input image patch X is processed by the encoder to produce an initial estimate of the code vector. The encoding prediction energy E_C measures the squared distance between the code vector Z and its estimate. The code vector Z is passed through the Sparsifying Logistic non-linearity, which produces a sparsified code vector Z̄. The decoder reconstructs the input image patch from the sparse code. The reconstruction energy E_D measures the squared distance between the reconstruction and the input image patch. The optimal code vector Z* for a given patch minimizes the sum of the two energies. The learning process finds the encoder and decoder parameters that minimize the energy for the optimal code vectors averaged over a set of training samples.

Figure 2: Toy example of sparsifying rectification produced by the Sparsifying Logistic for different choices of the parameters η and β. The input is a sequence of Gaussian random variables. The output, computed by using eq. 4, is a sequence of spikes whose rate and amplitude depend on the parameters η and β. In particular, increasing β has the effect of making the output approximately binary, while increasing η increases the firing rate of the output signal.

2.1 The Sparsifying Logistic

The Sparsifying Logistic module is a non-linear front-end to the decoder that transforms the code vector into a sparse vector with positive components. Let us consider how it transforms the k-th training sample. Let z_i(k) be the i-th component of the code vector and z̄_i(k) be its corresponding output, with i ∈ [1..m], where m is the number of components in the code vector. The relation between these variables is given by:

z̄_i(k) = η e^{β z_i(k)} / ζ_i(k),  i ∈ [1..m],  with  ζ_i(k) = η e^{β z_i(k)} + (1 − η) ζ_i(k − 1)   (4)

where it is assumed that η ∈ [0, 1]. ζ_i(k) is the weighted sum of values of e^{β z_i(n)} corresponding to previous training samples n, with n ≤ k. The weights in this sum are exponentially decaying, as can be seen by unrolling the recursive equation in 4. This non-linearity can be easily understood as a weighted softmax function applied over consecutive samples of the same code unit. This produces a sequence of positive values which, for large values of β and small values of η, is characterized by brief, punctate activity in time. This behavior is reminiscent of the spiking behavior of neurons. η controls the sparseness of the code by determining the “width” of the time window over which samples are summed up. β controls the degree of “softness” of the function. Large β values yield quasi-binary outputs, while small β values produce more graded responses; fig. 2 shows how these parameters affect the output when the input is a Gaussian random variable.
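As a concrete illustration, here is a minimal NumPy sketch of eq. 4 applied along a sequence of samples of a single code unit, together with the adaptive-bias logistic form of eq. 5; the function names and default values are ours, not the paper's.

```python
import numpy as np

def sparsifying_logistic(z, eta=0.02, beta=1.0, zeta_init=1.0):
    """Eq. 4 over a sequence z[0..K-1] of samples of one code unit:
    zbar[k] = eta*exp(beta*z[k]) / zeta[k],
    zeta[k] = eta*exp(beta*z[k]) + (1 - eta)*zeta[k-1]."""
    zbar = np.empty(len(z))
    zeta = zeta_init
    for k, zk in enumerate(z):
        num = eta * np.exp(beta * zk)
        zeta = num + (1.0 - eta) * zeta
        zbar[k] = num / zeta
    return zbar

def adaptive_logistic(zk, zeta_prev, eta=0.02, beta=1.0):
    """Eq. 5: a logistic whose bias tracks the running sum zeta(k-1)."""
    bias = np.log((1.0 - eta) / eta * zeta_prev) / beta
    return 1.0 / (1.0 + np.exp(-beta * (zk - bias)))
```

Because the running sum ζ is dominated by the largest recent inputs, most outputs are pushed toward zero and only occasional "spikes" approach one, which is the temporal sparsity the text describes.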
Another view of the Sparsifying Logistic is as a logistic function with an adaptive bias that tracks the average input; by dividing the numerator and denominator of the right-hand side of eq. 4 by η e^{β z_i(k)}, we have:

z̄_i(k) = [1 + e^{−β(z_i(k) − (1/β) log((1−η)/η · ζ_i(k−1)))}]^{−1},  i ∈ [1..m]   (5)

Notice how β directly controls the gain of the logistic. Large values of this parameter will turn the non-linearity into a step function and will make Z̄(k) a binary code vector. In our experiments, ζ_i is treated as a trainable parameter and kept fixed after the learning phase. In this case, the Sparsifying Logistic reduces to a logistic function with a fixed gain and a learned bias. For large β, in the continuous-time limit, the spikes can be shown to follow a homogeneous Poisson process. In this framework, sparsity is a “temporal” property characterizing each single unit in the code, rather than a “spatial” property shared among all the units in a code. Spatial sparsity usually requires some sort of ad-hoc normalization to ensure that the components of the code that are “on” are not always the same ones. Our solution tackles this problem differently: each unit must be sparse when encoding different samples, independently of the activities of the other components in the code vector. Unlike other methods [10], no ad-hoc rescaling of the weights or code units is necessary.

3 Learning

Learning is accomplished by minimizing the energy in eq. 1. Indicating with superscripts the indices referring to the training samples, and making explicit the dependencies on the code vectors, we can rewrite the energy of the system as:

E(W_C, W_D, Z¹, . . . , Z^P) = Σ_{i=1}^{P} [E_D(X^i, Z^i, W_D) + E_C(X^i, Z^i, W_C)]   (6)

This is also the loss function we propose to minimize during training. The parameters of the system, W_C and W_D, are found by solving the following minimization problem:

{W*_C, W*_D} = argmin_{W_C, W_D} min_{Z¹,...,Z^P} E(W_C, W_D, Z¹, . . . , Z^P)   (7)

It is easy to minimize this loss with respect to W_C and W_D when the Z^i are known and, particularly for our experiments where encoder and decoder are a set of linear filters, this is a convex quadratic optimization problem. Likewise, when the parameters in the system are fixed, it is straightforward to minimize with respect to the codes Z^i. These observations suggest a coordinate descent optimization procedure. First, we find the optimal Z^i for a given set of filters in encoder and decoder. Then, we update the weights in the system, fixing Z^i to the value found at the previous step. We iterate these two steps in alternation until convergence. In our experiments we used an on-line version of this algorithm, which can be summarized as follows:

1. propagate the input X through the encoder to get a codeword Z_init;
2. minimize the loss in eq. 6, the sum of reconstruction and code prediction energy, with respect to Z by gradient descent, using Z_init as the initial value;
3. compute the gradient of the loss with respect to W_C and W_D, and perform a gradient step;

where the superscripts have been dropped because we are referring to a generic training sample. Since the code vector Z minimizes both energy terms, it not only minimizes the reconstruction energy, but is also as similar as possible to the code predicted by the encoder. After training, the decoder settles on filters that produce low reconstruction errors from minimum-energy, sparsified code vectors Z̄*, while the encoder simultaneously learns filters that predict the corresponding minimum-energy codes Z*. In other words, the system converges to a state where minimum-energy code vectors not only reconstruct the image patch but can also be easily predicted by the encoder filters. Moreover, starting the minimization over Z from the prediction given by the encoder allows convergence in very few iterations. After the first few thousand training samples, the minimization over Z requires just 4 iterations on average.
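The three on-line steps can be sketched as follows for a single training sample. This is our illustrative reading of the procedure, assuming a linear encoder and decoder and, to keep the sketch self-contained, a plain fixed-bias logistic in place of the adaptive Sparsifying Logistic; the learning rates are arbitrary, not the paper's values.

```python
import numpy as np

def logistic(z, beta=1.0):
    return 1.0 / (1.0 + np.exp(-beta * z))

def online_step(x, Wc, Wd, beta=1.0, lr_z=0.05, z_iters=50, lr_w=0.01):
    """One alternated-descent pass for one sample x (steps 1-3)."""
    # 1. propagate the input through the encoder: Z_init = Wc x
    z = Wc @ x
    # 2. minimize E = Ec + Ed over Z by gradient descent from Z_init
    for _ in range(z_iters):
        zbar = logistic(z, beta)
        resid = x - Wd @ zbar                  # reconstruction residual
        grad = (z - Wc @ x) - beta * zbar * (1 - zbar) * (Wd.T @ resid)
        z = z - lr_z * grad
    # 3. one gradient step on the encoder and decoder weights
    zbar = logistic(z, beta)
    Wc = Wc + lr_w * np.outer(z - Wc @ x, x)          # -dEc/dWc
    Wd = Wd + lr_w * np.outer(x - Wd @ zbar, zbar)    # -dEd/dWd
    return z, Wc, Wd
```

With a small code-space step size, the inner loop decreases the total energy for the sample before the weights are updated.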
When training is complete, a simple pass through the encoder will produce an accurate prediction of the minimum-energy code vector. In the experiments, two regularization terms are added to the loss in eq. 6: a “lasso” term equal to the L1 norm of W_C and W_D, and a “ridge” term equal to their L2 norm. These have been added to encourage the filters to localize and to suppress noise. Notice that we could weight the encoding and the reconstruction energies differently in the loss function. In particular, assigning a very large weight to the encoding energy corresponds to turning the penalty on the encoding prediction into a hard constraint. The code vector would be assigned the value predicted by the encoder, and the minimization would reduce to a mean square error minimization through back-propagation, as in a standard autoencoder. Unfortunately, this autoencoder-like learning fails because the Sparsifying Logistic is almost always highly saturated (otherwise the representation would not be sparse). Hence, the gradients back-propagated to the encoder are likely to be very small. This causes the direct minimization over encoder parameters to fail, but does not seem to adversely affect the minimization over code vectors. We surmise that the large number of degrees of freedom in code vectors (relative to the number of encoder parameters) makes the minimization problem considerably better conditioned. In other words, the alternated descent algorithm performs a minimization over a much larger set of variables than regular back-prop, and hence is less likely to fall victim to local minima. The alternated descent over code and parameters can be seen as a kind of deterministic EM.

Figure 3: Results of feature extraction from 12x12 patches taken from the Berkeley dataset, showing the 200 filters learned by the decoder.
It is related to gradient-descent over parameters (standard back-prop) in the same way that the EM algorithm is related to gradient ascent for maximum likelihood estimation. This learning algorithm is not only simple but also very fast. For example, in the experiments of sec. 4.1 it takes less than 30 minutes on a 2GHz processor to learn 200 filters from 100,000 patches of size 12x12, and after just a few minutes the filters are already very similar to the final ones. This is much more efficient and robust than what can be achieved using other methods. For example, in Olshausen and Field’s [10] linear generative model, inference is expensive because minimization in code space is necessary during testing as well as training. In Teh et al. [4], learning is very expensive because the decoder is missing, and sampling techniques [8] must be used to provide a reconstruction. Moreover, most methods rely on pre-processing of the input patches such as whitening, PCA and low-pass filtering in order to improve results and speed up convergence. In our experiments, we need only center the data by subtracting a global mean and scale by a constant. 4 Experiments In this section we present some applications of the proposed energy-based model. Two standard data sets were used: natural image patches and handwritten digits. As described in sec. 2, the encoder and decoder learn linear filters. As mentioned in sec. 3, the input images were only trivially pre-processed. 4.1 Feature Extraction from Natural Image Patches In the first experiment, the system was trained on 100,000 gray-level patches of size 12x12 extracted from the Berkeley segmentation data set [12]. Pre-processing of images consists of subtracting the global mean pixel value (which is about 100), and dividing the result by 125. We chose an overcomplete factor approximately equal to 2 by representing the input with 200 code units1. The Sparsifying Logistic parameters η and β were equal to 0.02 and 1, respectively. 
The learning rate for updating W_C was set to 0.005 and for W_D to 0.001. These are decreased progressively during training. The coefficients of the L1 and L2 regularization terms were about 0.001. The learning rate for the minimization in code space was set to 0.1, and was multiplied by 0.8 every 10 iterations, for at most 100 iterations. Some components of the sparse code must be allowed to take continuous values to account for the average value of a patch. For this reason, during training we saturated the running sums ζ to allow some units to be always active. Values of ζ were saturated to 10⁹. We verified empirically that subtracting the local mean from each patch eliminates the need for this saturation. However, saturation during training makes testing less expensive. Training on this data set takes less than half an hour on a 2GHz processor. Examples of learned encoder and decoder filters are shown in figure 3. They are spatially localized, and have different orientations, frequencies and scales. They are somewhat similar to, but more localized than, Gabor wavelets, and are reminiscent of the receptive fields of V1 neurons. Interestingly, the encoder and decoder filter values are nearly identical up to a scale factor. After training, inference is extremely fast, requiring only a simple matrix-vector multiplication.

(Footnote 1: Overcompleteness must be evaluated by considering the number of code units and the effective dimensionality of the input as given by PCA.)

Figure 4: Top: A randomly selected subset of encoder filters learned by our energy-based model when trained on the MNIST handwritten digit dataset. Bottom: An example of reconstruction of a digit randomly extracted from the test data set. The reconstruction is made by adding “parts”: it is the additive linear combination of a few basis functions of the decoder with positive coefficients.
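The code-space learning-rate schedule stated above (an initial rate of 0.1, multiplied by 0.8 every 10 iterations, for at most 100 iterations) can be written out explicitly; this small helper is ours:

```python
def code_space_lr(lr0=0.1, decay=0.8, every=10, max_iters=100):
    """Learning rates for the code-space minimization described above."""
    lrs, lr = [], lr0
    for it in range(max_iters):
        if it > 0 and it % every == 0:
            lr *= decay
        lrs.append(lr)
    return lrs
```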
4.2 Feature Extraction from Handwritten Numerals The energy-based model was trained on 60,000 handwritten digits from the MNIST data set [13], which contains quasi-binary images of size 28x28 (784 pixels). The model is the same as in the previous experiment. The number of components in the code vector was 196. While 196 is less than the 784 inputs, the representation is still overcomplete, because the effective dimension of the digit dataset is considerably less than 784. Pre-processing consisted of dividing each pixel value by 255. Parameters η and β in the temporal softmax were 0.01 and 1, respectively. The other parameters of the system have been set to values similar to those of the previous experiment on natural image patches. Each one of the filters, shown in the top part of fig. 4, contains an elementary “part” of a digit. Straight stroke detectors are present, as in the previous experiment, but curly strokes can also be found. Reconstruction of most single digits can be achieved by a linear additive combination of a small number of filters since the output of the Sparsifying Logistic is sparse and positive. The bottom part of fig. 4 illustrates this reconstruction by “parts”. 4.3 Learning Local Features for the MNIST dataset Deep convolutional networks trained with backpropagation hold the current record for accuracy on the MNIST dataset [14, 15]. While back-propagation produces good low-level features, it is well known that deep networks are particularly challenging for gradient-descent learning. Hinton et al. [16] have recently shown that initializing the weights of a deep network using unsupervised learning before performing supervised learning with back-propagation can significantly improve the performance of a deep network. This section describes a similar experiment in which we used the proposed method to initialize the first layer of a large convolutional network. We used an architecture essentially identical to LeNet-5 as described in [15]. 
However, because our model produces sparse features, our network had a considerably larger number of feature maps: 50 for layers 1 and 2, 50 for layers 3 and 4, 200 for layer 5, and 10 for the output layer. The corresponding numbers for LeNet-5 were 6, 16, 100, and 10. We refer to our larger network as the 50-50-200-10 network. We trained this network on 55,000 samples from MNIST, keeping the remaining 5,000 training samples as a validation set. When the error on the validation set reached its minimum, an additional five sweeps were performed on the training set augmented with the validation set (unless this increased the training loss). Then the learning was stopped, and the final error rate on the test set was measured. When the weights are initialized randomly, the 50-50-200-10 network achieves a test error rate of 0.7%, to be compared with the 0.95% obtained by [15] with the 6-16-100-10 network. In the next experiment, the proposed sparse feature learning method was trained on 5x5 image patches extracted from the MNIST training set. The model had a 50-dimensional code. The encoder filters were used to initialize the first layer of the 50-50-200-10 net. The network was then trained in the usual way, except that the first layer was kept fixed for the first 10 epochs through the training set. The 50 filters after training are shown in fig. 5. The test error rate was 0.6%. To our knowledge, this is the best result ever reported with a method trained on the original MNIST set, without deskewing or augmenting the training set with distorted samples. The training set was then augmented with samples obtained by elastically distorting the original training samples, using a method similar to [14]. The error rate of the 50-50-200-10 net with random initialization was 0.49% (to be compared to 0.40% reported in [14]). By initializing the first layer with the filters obtained with the proposed method, the test error rate dropped to 0.39%.
While this is the best numerical result ever reported on MNIST, it is not statistically different from [14].

Figure 5: Filters in the first convolutional layer after training, when the network is randomly initialized (top row) and when the first layer of the network is initialized with the features learned by the unsupervised energy-based model (bottom row).

Architecture       | 20K         | 60K         | 60K + Distortions
6-16-100-10 [15]   | -           | 0.95        | 0.60
5-50-100-10 [14]   | -           | -           | 0.40
50-50-200-10       | 1.01 / 0.89 | 0.70 / 0.60 | 0.49 / 0.39

Table 1: Comparison of test error rates on the MNIST dataset using convolutional networks with various training set sizes: 20,000; 60,000; and 60,000 plus 550,000 elastic distortions. For each size, the 50-50-200-10 results are reported with randomly initialized filters (first number) and with first-layer filters initialized using the proposed algorithm (second number, bold face in the original).

4.4 Hierarchical Extension: Learning Topographic Maps

It has already been observed that features extracted from natural image patches resemble Gabor-like filters, see fig. 3. It has recently been pointed out [6] that these filters produce codes with somewhat uncorrelated but not independent components. In order to capture higher-order dependencies among code units, we propose to extend the encoder architecture by adding to the linear filter bank a second layer of units. In this hierarchical model of the encoder, the units produced by the filter bank are laid out on a two-dimensional grid and filtered according to a fixed weighted-mean kernel. This assigns a larger weight to the central unit and a smaller weight to the units in the periphery. In order to activate a unit at the output of the Sparsifying Logistic, all the afferent unrectified units in the first layer must agree in giving a strong positive response to the input patch. As a consequence, neighboring filters will exhibit similar features. Also, the top-level units will encode features that are more translation and rotation invariant, de facto modeling complex cells.
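The fixed weighted-mean filtering over the grid of code units can be sketched as below. The 3x3 kernel values are the ones shown alongside fig. 6, and the wrap-around matches the toroidal boundary conditions described in the text; the placement of this smoothing inside the full encoder is simplified here.

```python
import numpy as np

# fixed weighted-mean kernel from fig. 6 (center weight largest)
K = np.array([[0.08, 0.12, 0.08],
              [0.12, 0.23, 0.12],
              [0.08, 0.12, 0.08]])

def toroidal_smooth(units):
    """Filter a 2D grid of code units with K, wrapping at the borders."""
    out = np.zeros_like(units, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += K[di + 1, dj + 1] * np.roll(units, (di, dj), axis=(0, 1))
    return out
```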
Using a neighborhood of size 3x3, toroidal boundary conditions, and computing code vectors with 400 units from 12x12 input patches from the Berkeley dataset, we have obtained the topographic map shown in fig. 6. Filters exhibit features that are locally similar in orientation, position, and phase. There are two low-frequency clusters and pinwheel regions similar to what is experimentally found in cortical topography.

Figure 6: Example of filter maps learned by the topographic hierarchical extension of the model; the outline of the model is shown on the right. The fixed convolution kernel is K = [0.08 0.12 0.08; 0.12 0.23 0.12; 0.08 0.12 0.08].

5 Conclusions

An energy-based model was proposed for unsupervised learning of sparse overcomplete representations. Learning to extract sparse features from data has applications in classification, compression, denoising, inpainting, segmentation, and super-resolution interpolation. The model has none of the inefficiencies and idiosyncrasies of previously proposed sparse-overcomplete feature learning methods. The decoder produces accurate reconstructions of the patches, while the encoder provides a fast prediction of the code without the need for any particular preprocessing of the input images. It seems that a non-linearity that directly sparsifies the code is considerably simpler to control than a sparsity term added to the loss function, which generally requires ad-hoc normalization procedures [3]. In the current work, we used linear encoders and decoders for simplicity, but the model allows non-linear modules, as long as gradients can be computed and back-propagated through them. As briefly presented in sec. 4.4, it is straightforward to extend the original framework to hierarchical architectures in the encoder, and the same is possible in the decoder.
Another possible extension would stack multiple instances of the system described in the paper, with each system as a module in a multi-layer structure where the sparse code produced by one feature extractor is fed to the input of a higher-level feature extractor. Future work will include the application of the model to various tasks, including facial feature extraction, image denoising, image compression, inpainting, classification, and invariant feature extraction for robotics applications. Acknowledgments We wish to thank Sebastian Seung and Geoff Hinton for helpful discussions. This work was supported in part by the NSF under grants No. 0325463 and 0535166, and by DARPA under the LAGR program. References [1] Lee, D.D. and Seung, H.S. (1999) Learning the parts of objects by non-negative matrix factorization. Nature, 401:788-791. [2] Hyvarinen, A. and Hoyer, P.O. (2001) A 2-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research, 41:2413-2423. [3] Olshausen, B.A. (2002) Sparse codes and spikes. R.P.N. Rao, B.A. Olshausen and M.S. Lewicki Eds. MIT press:257-272. [4] Teh, Y.W. and Welling, M. and Osindero, S. and Hinton, G.E. (2003) Energy-based models for sparse overcomplete representations. Journal of Machine Learning Research, 4:1235-1260. [5] Lennie, P. (2003) The cost of cortical computation. Current biology, 13:493-497 [6] Simoncelli, E.P. (2005) Statistical modeling of photographic images. Academic Press 2nd ed. [7] Hinton, G.E. and Zemel, R.S. (1994) Autoencoders, minimum description length, and Helmholtz free energy. Advances in Neural Information Processing Systems 6, J. D. Cowan, G. Tesauro and J. Alspector (Eds.), Morgan Kaufmann: San Mateo, CA. [8] Hinton, G.E. (2002) Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771-1800. [9] Doi E., Balcan, D.C. and Lewicki, M.S. 
(2006) A theoretical analysis of robust coding over noisy overcomplete channels. Advances in Neural Information Processing Systems 18, MIT Press. [10] Olshausen, B.A. and Field, D.J. (1997) Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Research, 37:3311-3325. [11] Foldiak, P. (1990) Forming sparse representations by local anti-hebbian learning. Biological Cybernetics, 64:165-170. [12] The berkeley segmentation dataset http://www.cs.berkeley.edu/projects/vision/grouping/segbench/ [13] The MNIST database of handwritten digits http://yann.lecun.com/exdb/mnist/ [14] Simard, P.Y. Steinkraus, D. and Platt, J.C. (2003) Best practices for convolutional neural networks. ICDAR [15] LeCun, Y. Bottou, L. Bengio, Y. and Haffner, P. (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324. [16] Hinton, G.E., Osindero, S. and Teh, Y. (2006) A fast learning algorithm for deep belief nets. Neural Computation 18, pp 1527-1554.
Learning Time-Intensity Profiles of Human Activity using Non-Parametric Bayesian Models Alexander T. Ihler Padhraic Smyth Donald Bren School of Information and Computer Science U.C. Irvine ihler@ics.uci.edu smyth@ics.uci.edu

Abstract

Data sets that characterize human activity over time through collections of timestamped events or counts are of increasing interest in application areas such as human-computer interaction, video surveillance, and Web data analysis. We propose a non-parametric Bayesian framework for modeling collections of such data. In particular, we use a Dirichlet process framework for learning a set of intensity functions corresponding to different categories, which form a basis set for representing individual time-periods (e.g., several days) depending on which categories the time-periods are assigned to. This allows the model to learn in a data-driven fashion what “factors” are generating the observations on a particular day, including (for example) weekday versus weekend effects or day-specific effects corresponding to unique (single-day) occurrences of unusual behavior, sharing information where appropriate to obtain improved estimates of the behavior associated with each category. Applications to real-world data sets of count data involving both vehicles and people are used to illustrate the technique.

1 Introduction

As sensor and storage technologies continue to improve in terms of both cost and performance, increasingly rich data sets are becoming available that characterize the rhythms of human activity over time. Examples include logs of radio frequency identification (RFID) tags, freeway traffic over time (loop-sensor data), crime statistics, email and Web access logs, and many more.
Such data can be used to support a variety of different applications, such as classification of human or animal activities, detection of unusual events, or the broad understanding of behavior in a particular context, such as the temporal patterns of Web usage. To ground the discussion, consider data consisting of a collection of individual or aggregated events from a single sensor, e.g., a time-stamped log recording every entry and exit from a building, or the timing and number of highway traffic accidents. For example, Figure 1 shows several days' worth of data from a building log, smoothed so that the similarities in patterns are more readily visible. Of interest is the modeling of the underlying intensity of the process generating the data, where intensity here refers to the rate at which events occur. These processes are typically inhomogeneous in time (as in Figure 1), as they arise from the aggregated behavior of individuals, and thus exhibit a temporal dependence linked to the rhythms of the underlying human activity. The complexity of this temporal dependence is application-dependent and generally unknown before observing the data, suggesting that non- or semi-parametric methods (methods whose complexity is capable of growing as the number of observations increases) may be particularly appropriate. Formulating the underlying event generation as an inhomogeneous Poisson process is a common first step (see, e.g., [1, 4]), as it allows the application of various classic density estimation techniques to estimate the time-dependent intensity function (a normalized version of the rate function; see Section 2). Techniques used in this context include kernel density estimation [2], wavelet analysis [3], discretization [1], and nonparametric Bayesian models [4, 5].

Figure 1: Count data from a building entry log observed on ten Mondays, each smoothed using a kernel function [2, 6] to enable visual comparison.
Among these, nonparametric Bayesian approaches have a number of appealing advantages. First, they allow us to represent and reason about uncertainty in the intensity function, providing not just a single estimate but a distribution over functions. Second, the Bayesian framework provides natural methods for model selection, allowing the data to be naturally explained by a parsimonious set of intensity functions, rather than using the most complex explanation (though similar effects may be achieved using penalized likelihood functions [3]). Finally, Bayesian methods generalize to multiple or hierarchical models, which allow information to be shared among several related but differing sets of observations (e.g., multiple days of data). This last point is crucial for many problems, as we rarely obtain many observations of exactly the same process under exactly the same conditions; instead, we observe multiple instances which are thought to be similar, but may in fact represent any number of slightly differing circumstances. For example, behavior may be dependent not only on time of day but also on day of week, type of day (weekend or weekday), unobserved factors such as the weather, or other unusual circumstances. Sharing information allows us to improve our model, but we should only do so where appropriate (itself best indicated by similarity in the data). By being Bayesian, we can remain agnostic about what data should be shared and reason over our uncertainty in this structure. In what follows we propose a non-parametric Bayesian framework for modeling intensity functions for event data over time. In particular, we describe a Dirichlet process framework for learning the unknown rate functions, and learn a set of such functions corresponding to different categories. Individual time-periods (e.g., individual days) are then represented as additive combinations of intensity functions, depending on which categories are assigned to each time-period.
This allows the model to learn in a data-driven fashion what “factors” are generating the observations on a particular day, including (for example) weekday versus weekend effects as well as day-specific effects corresponding to unusual behavior present only on a single day. Applications to two real-world data sets, a building access log and accident statistics, are used to illustrate the technique. We will discuss in more detail in the sections that follow how our proposed approach is related to prior work on similar topics. Broadly speaking, from the viewpoint of modeling of inhomogeneous time-series of counts, our work extends the work of [4] to allow sharing of information among multiple, related processes (e.g., different days). Our approach can also be viewed as an alternative to the hierarchical Dirichlet process (HDP, [7]) for problems where the patterns across different groups are much more constrained than would be expected under an HDP model.

2 Poisson processes

A common model for continuous-time event (counting) data is the Poisson process [8]. As the discrete Poisson distribution is characterized by a rate parameter λ, the Poisson process is characterized by a rate function λ(t); it has the property that over any given time interval T, the number of events occurring within that time is Poisson with rate given by λ = ∫_T λ(t) dt. We shall use a Bayesian semi-parametric model for λ(t), described next. (Footnote 1: Here, we shall use the term Poisson process interchangeably with inhomogeneous Poisson process, meaning that the rate is a non-constant function of time t.)

Let us suppose that we have a single collection of event times {τ_i} arising from a Poisson process with rate function λ(t), i.e.,

{τ_i} ∼ P[τ; λ(t)]   (1)

where λ(t) is defined on t ∈ (−∞, ∞). We may write λ(t) = γ f(t), where γ = ∫_{−∞}^{∞} λ(t) dt and f(t) is the intensity function, a normalized version of the rate function.
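To make the rate function concrete, event times with a given λ(t) can be simulated by thinning (the Lewis-Shedler construction, which is standard but not described in the text); this sketch assumes λ(t) is bounded by a known constant.

```python
import numpy as np

def sample_poisson_process(rate_fn, t_max, rate_max, rng):
    """Draw event times on [0, t_max] from an inhomogeneous Poisson
    process with rate rate_fn(t) <= rate_max, by thinning a homogeneous
    process of rate rate_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t >= t_max:
            break
        if rng.uniform() < rate_fn(t) / rate_max:
            events.append(t)
    return np.array(events)
```

Over any interval, the number of accepted points is then Poisson with mean ∫ λ(t) dt, matching the property quoted above.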
A Bayesian model places prior distributions on these quantities; by selecting a parametric prior for γ and a nonparametric prior for f(t), we obtain a semi-parametric prior for λ(t). Specifically, we choose

γ ∼ Γ(a, b),   f(t) = ∫ K(t; θ) dG(θ),   G ∼ DP[α G₀]

where Γ is the gamma distribution, K is a kernel function (for example a Gaussian distribution) and DP is a Dirichlet process [9] with parameter α and base distribution G₀. The Dirichlet process provides a nonparametric prior for f(t), such that (with probability one) f has the form of a mixture model with infinitely many components: f(t) = Σ_j w_j K(t; θ_j). If desired, we may also place prior distributions on some or all of these quantities (e.g., α, {a, b}, or the parameters of G₀) as well. Dirichlet processes and their variations [7, 9–11] have gained recent attention for their ability to provide representations consisting of arbitrarily large mixture models. In particular, they have been the subject of recent work in modeling intensity functions for Poisson processes defined over time [4] and space-time [5].

2.1 Monte Carlo Inference

For the Poisson process model just described, the likelihood of the data {τ_i}, i = 1 . . . N, at some time T is given by

p({τ_i}; γ, f(·)) = exp(−∫_{−∞}^{T} γ f(t) dt) · γ^N · Π_i f(τ_i)

which, as T → ∞ (i.e., as we observe a complete data set), becomes

p({τ_i}; γ, f(·)) = [exp(−γ) γ^N] [Π_i f(τ_i)]   (2)

The rightmost term (the term involving f) has the same form as the likelihood of the τ_i as i.i.d. samples from the mixture model distribution defined by f. As in many mixture model applications, it will be helpful to create auxiliary assignment variables z_i for each τ_i, indicating with which of the mixture components the sample τ_i is associated. The complete data likelihood is then

p({τ_i, z_i}; γ, f(·)) = [exp(−γ) γ^N] [Π_i w_{z_i} K(τ_i; θ_{z_i})].

Inference is typically accomplished using Markov chain Monte Carlo (MCMC) sampling [9].
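Once γ and f are fixed, the log of eq. 2 is trivial to evaluate; a small sketch (the function name is ours):

```python
import math

def complete_loglik(taus, gamma, f):
    """log of eq. 2: -gamma + N*log(gamma) + sum_i log f(tau_i)."""
    return -gamma + len(taus) * math.log(gamma) + sum(math.log(f(t)) for t in taus)
```

Note the factorization the text points out: the γ part is a Poisson likelihood for the total count N, and the f part is an i.i.d. mixture likelihood for the event times.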
Specifically, although the posterior for γ has a simple closed form, p(γ|{τi}) ∝ Γ(N + a, 1 + b), sampling from f is more complicated. Samples from f can be drawn in a variety of ways. One of the most common methods is the so-called “Chinese Restaurant Process” (CRP, [7, 9]), in which the relative weights wj are marginalized out while drawing the assignment variables zi. Such exact sampling approaches work by exploiting the fact that only a finite number of the mixture components are occupied by the data; by treating the unoccupied clusters as a single group, the infinite number of potential associations can be treated as a finite number. The operations involved (such as sampling values for θj given a collection of associated event times τi) are easier for certain choices of K and G than others; for example, using a Gaussian kernel and normal-Wishart distribution, the necessary quantities have convenient closed forms [9]. Another, more brute-force way around the issue of having infinitely many mixture components is to perform approximate sampling using a “truncated” Dirichlet process representation [12, 13]. As described in [12], for a given α, data set size N, and tolerance ϵ, one can compute a maximum number of components M necessary to approximate the Dirichlet process with a Dirichlet distribution using the relation

ϵ ≈ 4N exp[−(M − 1)/α]

and in this manner, can work with finite numbers of mixture components. This representation will prove useful in Section 3. The truncated DP approximation is helpful primarily because it allows us to sample the (complete) function f(t) (as compared to only the “occupied” part in the CRP formulation). Given a set of assignments {zi} occupying (arbitrarily numbered) clusters 1 . . . J, we can sample the weights wj in two steps.
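The truncation relation ϵ ≈ 4N exp[−(M − 1)/α] can be inverted to choose the number of components: M ≈ 1 + α ln(4N/ϵ). A small helper following that bound:

```python
import math

def truncation_level(alpha, N, eps):
    """Smallest integer M satisfying 4*N*exp(-(M-1)/alpha) <= eps,
    per the truncation bound of Ishwaran & James cited in the text."""
    return 1 + math.ceil(alpha * math.log(4.0 * N / eps))

M = truncation_level(alpha=2.0, N=500, eps=1e-4)
```

Note the logarithmic dependence on N: for fixed α and ϵ, a tenfold increase in data size barely changes M.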
First, we sample the occupied mixture weights wj (j ≤ J) and the total unoccupied weight w̄ = Σ_{j=J+1}^{∞} wj, by drawing independent, Gamma-distributed random variables according to Γ(Nj, 1) and Γ(α, 1), respectively, and normalizing them to sum to one. The values of the weights wj in the unoccupied clusters (j > J) can then be sampled given w̄ using the stick-breaking representation of Sethuraman [14]. Note that the truncated DP approximation highlights the importance of also sampling α if we hope for our representation to behave non-parametrically, in the sense that it may grow more complex as the data increase, since for a fixed α and ϵ the number of components M is quite insensitive to N. For more details on sampling such hyper-parameters see, e.g., [10].

2.2 Finite Time Domains

Our description of non-parametric Bayesian techniques for Poisson processes has so far made implicit use of the fact that the domain of f(t) is infinite. When the domain of f is finite, for example [0, 1], a few minor complications arise. For example, the kernel functions K(·) should properly be defined as positive only on this interval. One possible solution to this issue is to use an alternate kernel function, such as the Beta distribution [4]. However, this means that posterior sampling of the parameters θ is no longer possible in closed form. Although methods such as Metropolis-Hastings may be used [4], they can be highly dependent on the choice of proposal density. Here, we take a slightly different approach, drawing truncated Gaussian kernels with parameters sampled from a truncated normal-Wishart distribution. Specifically, we define

K(t; θ = [µ, σ²]) = N(t; µ, σ²) χ₁(t) / ∫₀¹ N(x; µ, σ²) dx,    [µ, σ²] ∼ χ₁(µ) χ₁(σ) NW(µ, σ²)

where χ₁(·) is one on [0, 1] and zero elsewhere and NW is the normal-Wishart distribution. Sampling in this model turns out to be relatively simple and efficient using rejection methods.
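The two-step weight sampling above can be sketched as follows: the occupied weights and the leftover mass are drawn as jointly normalized Gamma variables, and the leftover mass is then stick-broken across the remaining truncated components. The occupancy counts below are invented for illustration:

```python
import random

def sample_weights(counts, alpha, M, seed=2):
    """Given occupancy counts N_1..N_J and concentration alpha, draw occupied
    weights w_j ~ Gamma(N_j, 1) and total unoccupied mass wbar ~ Gamma(alpha, 1)
    (normalized to sum to one), then stick-break wbar over the remaining
    M - J truncated components."""
    rng = random.Random(seed)
    raw = [rng.gammavariate(n, 1.0) for n in counts] + [rng.gammavariate(alpha, 1.0)]
    total = sum(raw)
    weights = [g / total for g in raw[:-1]]   # occupied weights
    stick = raw[-1] / total                   # leftover mass wbar
    for _ in range(M - len(counts) - 1):
        v = rng.betavariate(1.0, alpha)
        weights.append(stick * v)
        stick *= 1.0 - v
    weights.append(stick)                     # last truncated component
    return weights

w = sample_weights(counts=[40, 25, 10], alpha=1.5, M=12)
```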
Given the restrictions imposed on µ and σ, one can show that the normalizing quantity Z = ∫₀¹ N(x; µ, σ²) dx is always greater than one-third. Thus, to sample from the posterior we simply draw from the original, closed-form posterior distribution, discarding (and re-sampling) if µ ∉ [0, 1], σ ∉ [0, 1], or with probability 1 − (3Z)⁻¹.

3 Categorical Models

As mentioned in the introduction, we often have several collections d = 1 . . . D of observations, {τdi} with i = 1 . . . Nd, arising from D instances of the same or similar processes. If these processes are known to be identical and independent, sharing information among them is relatively easy—we obtain D observations Nd with which to estimate γ, and the τdi are collectively used to estimate f(t). However, if these processes are not necessarily identical, sharing information becomes more difficult. Yet it is just this situation which is most typical. Again consider Figure 1, which shows event data from ten different Mondays. Clearly, there is a great deal of consistency in both size and shape, although not every day is exactly the same, and one or two stand out as different. Were we to also look at, for example, Sundays or Tuesdays (as we do in Section 4), we would see that although Sunday and Monday appear quite different and, one suspects, have little shared information, Monday and Tuesday appear relatively similar and this similarity can probably be used to improve our rate estimates for both days. In this example, we might reasonably assume that the category memberships are known (for example, whether a given day is a weekday or weekend, or a Monday or Tuesday), though we shall relax this assumption in later sections. Then, given a structure of potential relationships, what is a reasonable model for sharing information among categories? There are, of course, many possible choices; we use a simple additive model, described in the next section.
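The rejection step can be implemented directly: draw from the untruncated closed-form posterior, discard draws with µ or σ outside [0, 1], and keep the rest with probability 1/(3Z). In this sketch the closed-form posterior draw is a hypothetical stand-in (a simple Gaussian proposal), since the actual normal-Wishart posterior depends on the data:

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_truncated_params(draw_posterior, seed=3):
    """Rejection step from the text: draw (mu, sigma) from the untruncated
    posterior, discard if mu or sigma leaves [0, 1], otherwise keep with
    probability 1/(3Z), where Z = integral_0^1 N(x; mu, sigma^2) dx > 1/3."""
    rng = random.Random(seed)
    while True:
        mu, sigma = draw_posterior(rng)
        if not (0.0 <= mu <= 1.0 and 0.0 < sigma <= 1.0):
            continue
        Z = phi((1.0 - mu) / sigma) - phi((0.0 - mu) / sigma)
        if rng.random() < 1.0 / (3.0 * Z):
            return mu, sigma

# Hypothetical stand-in for the closed-form normal-Wishart posterior draw.
proposal = lambda rng: (rng.gauss(0.5, 0.3), abs(rng.gauss(0.2, 0.1)) + 1e-3)
mu, sigma = sample_truncated_params(proposal)
```

The worst case under the constraints is µ on the boundary with σ = 1, giving Z = Φ(1) − Φ(0) ≈ 0.341 > 1/3, so the acceptance probability 1/(3Z) is always valid.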
3.1 Additive Models

The intuition behind an additive model is that the data arises from the superposition of several underlying causes present during the period of interest. Again, we initially assume that the category memberships are known; thus, if a category is associated with a particular day, the activity profile associated with that category will be observed, along with additional activity arising from each of the other categories present. Let us associate a rate function λc(t) = γc fc(t) with each category in our model. We define the rate function of a given day d to be the sum of the rate functions of each category to which d belongs. Denoting by sdc the (binary-valued) membership indicator, i.e., that category c is present during day d, we have that

λd(t) = Σ_{c : sdc = 1} λc(t).

At first, this model might seem quite restrictive. However, it matches our intuition of how the data is generated, stemming from the presence or absence of a particular behavioral pattern associated with some underlying cause (such as it being a work day). In fact, we do not want a model which is too flexible, such as a linear combination of patterns, since it is not physically meaningful to say, for example, that a day is only “part” Monday. To learn the profiles associated with a given cause (e.g., things that happen every day versus only on weekdays or only on Mondays), it makes sense to take an “all or nothing” model where the pattern is either present, or not. This also suggests that other methods of coupling Dirichlet processes, such as the hierarchical Dirichlet process [7], may be too flexible. The HDP couples the parameters of components across levels, but only loosely relates the actual shape of the profile, since it allows components to be larger or smaller (or even disappear completely). In [7], this is a desirable quality, but in our application it is not.
Using an additive model allows both a consistent size and shape to emerge for each category, while associating deviations from that profile to categories further down in the hierarchy. Inference in this system is not significantly more difficult than in the single rate function case (Section 2). We define the association as [ydi, zdi], where ydi indicates which of the categories generated event τdi. It is easy to sample ydi according to

p(ydi = c | {λc(t)}) ∝ λc(τdi) / Σ_{c′} λc′(τdi).

3.2 Sampling Membership

Of course, it is frequently the case that the membership(s) of each collection of data are not known precisely. In an extreme case, we may have no idea which collections are similar and should be grouped together and wish to find profiles in an unsupervised manner. More commonly, however, we have some prior knowledge and interpretation of the profiles but do not wish to strictly enforce a known membership. For example, if we create categories with assigned meanings (weekdays, weekends, Sundays, Mondays, and so on), a day which is nominally a Monday but also happens to be a holiday, closure, or other unusual circumstances may be completely different from other Monday profiles. Similarly, a day with unusual extra activity (receptions, talks, etc.) may see behavior unique to its particular circumstances and warrant an additional category to represent it. We can accommodate both these possibilities by also sampling the values of the membership indicator variables sdc, i.e., the binary indicator that day d sees behavior from category c. To this end, let us assume we have some prior knowledge of these membership probabilities, pdc(sdc); we may then re-sample from their posterior distributions at each iteration of MCMC. This sampling step is difficult to do outside the truncated representation.
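Sampling the event-to-category assignment ydi is a categorical draw with weights λc(τdi) over the categories present on day d. A sketch with two invented category rates ("all days" flat, "weekdays" peaked in the morning):

```python
import random

def sample_event_category(t, rates, present, rng):
    """Sample y_di: the category responsible for an event at time t,
    with p(y = c) proportional to s_dc * lambda_c(t) (Section 3.1)."""
    weights = [rates[c](t) if present[c] else 0.0 for c in range(len(rates))]
    u = rng.random() * sum(weights)
    acc = 0.0
    for c, wc in enumerate(weights):
        acc += wc
        if u <= acc:
            return c
    return len(weights) - 1   # numerical guard against rounding

# Two invented category rates: flat "all days" and a peaked "weekdays".
rates = [lambda t: 2.0, lambda t: 10.0 if 8.0 <= t <= 10.0 else 0.5]
rng = random.Random(4)
draws = [sample_event_category(9.0, rates, present=[True, True], rng=rng)
         for _ in range(1000)]
```

At t = 9 the peaked category has rate 10 versus 2, so roughly five-sixths of the draws should land in category 1.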
Although up until this point we could easily have elected to use, for example, the CRP formulation for sampling, the association variables {ydi, zdi} are tightly coupled with the memberships sdc, since if any ydi = c we must have that sdc = 1. Instead, to sample the sdc we condition on the truncated rate functions λc(t), with truncation depth M chosen to provide arbitrarily high precision. The likelihood of the data under these rate functions for any values of {sdc} can then be computed directly via (2), where

γ = Σ_c sdc γc    and    f(t) = (1/γ) Σ_c sdc λc(t).

In practice, we propose changing the value of each membership variable sdc individually given the others, though more complex moves could also be applied. This gives the following sequence of MCMC sampling: (1) given a truncated representation of the {λc(t)}, sample membership variables {sdc}; (2) given {λc(t)} and {sdc}, sample associations {zdi}; (3) given associations {zdi}, sample category magnitudes {γc} and a truncated representation of each fc(t) consisting of weights {wj} and parameters {θj}.

[Figure 2: Posterior mean estimates of rate functions for building entry log data, estimated individually for each day (dotted) and learned by sharing information among multiple days (solid) for (a) Sundays, (b) Mondays, and (c) Tuesdays. Sharing information among similar days gives greatly improved estimates of the rate functions, resolving otherwise obscured features such as the decrease during and increase subsequent to lunchtime.]

4 Experiments

In this section we consider the application of our model to two data sets, one (mentioned previously) from the entry log of people entering a large campus building (produced by optical sensors at the front door), and the other from a log of vehicular traffic accidents. By design, both data sets contain about ten weeks worth of observations.
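Returning to the membership-sampling step of Section 3.2: resampling a single indicator sdc is a Bernoulli draw whose odds combine the prior pdc with the day's likelihood (2) under the two candidate rate functions. A self-contained sketch with invented rates and event times (the actual model draws the γc and fc from their posteriors rather than fixing them):

```python
import math
import random

def day_loglik(times, s, gammas, fs):
    """Log of likelihood (2) for one day under membership vector s:
    log[exp(-gamma) * prod_i lambda(tau_i)], with gamma = sum_c s_c*gamma_c
    and lambda(t) = sum_c s_c * gamma_c * f_c(t)."""
    ll = -sum(sc * g for sc, g in zip(s, gammas))
    for t in times:
        ll += math.log(sum(sc * g * f(t) for sc, g, f in zip(s, gammas, fs)))
    return ll

def gibbs_flip(c, times, s, gammas, fs, prior, rng):
    """Resample the single indicator s_c given the others, combining the
    membership prior p_dc with the data likelihood."""
    logp = []
    for v in (0, 1):
        cand = list(s)
        cand[c] = v
        pr = prior[c] if v == 1 else 1.0 - prior[c]
        logp.append(math.log(pr) + day_loglik(times, cand, gammas, fs))
    d = min(700.0, max(-700.0, logp[0] - logp[1]))   # clamp to avoid overflow
    s[c] = 1 if rng.random() < 1.0 / (1.0 + math.exp(d)) else 0

# Invented setup: a flat "all days" rate plus a morning-peaked category.
gammas = [5.0, 5.0]
fs = [lambda t: 1.0 / 24.0,
      lambda t: math.exp(-0.5 * (t - 9.0) ** 2) / math.sqrt(2.0 * math.pi)]
times = [8.6, 8.8, 8.9, 9.0, 9.1, 9.2, 9.3, 9.4, 9.6, 9.8]  # clustered events
s = [1, 0]
gibbs_flip(1, times, s, gammas, fs, prior=[0.99, 0.5], rng=random.Random(5))
```

With ten events clustered at the peak, the likelihood overwhelmingly favors turning the peaked category on, so the flip sets s[1] = 1.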
In both cases, we have a plausible prior structure for and interpretation of the categories, i.e., that similar days will have similar profiles. To this end, we create categories for “all days”, “weekends”, “weekdays”, and “Sundays” through “Saturdays”. Each of these categories has a high probability (pdc = .99) of membership for each eligible day. To account for the possibility of unusual increases in activity, we also add categories unique to each day, with lower prior probability (pdc = .20) of membership. This allows each day to add a new category when there is evidence of unusual activity, while discouraging it from doing so otherwise.

4.1 Building Entry Data

To see the improvement in the estimated rate functions when information is shared among similar days, Figure 2 shows results from three different days of the week (Sunday, Monday, Tuesday). Each panel shows the estimated profiles of each of the ten days estimated individually (using only that day’s observations) under a Dirichlet process mixture model (dotted lines). Superimposed in each panel is a single, black curve corresponding to the total profile for that day of week estimated using our categorical model; so, (a) shows the sum of the rate functions for “all days”, “weekends”, and “Sundays”, while (b) shows the sum of “all days”, “weekdays”, and “Mondays”. We use the same prior distributions for both the individual estimates and the shared estimate. Several features are worth noting. First, by sharing several days’ worth of observations, the model can produce a much more accurate estimate of the profiles. In this case, no single day contains enough observations to be confident about the details of the rate function, so each individually-estimated rate function appears relatively smooth. However, when information from other days is included, the rate function begins to resolve into a clearly bi-modal shape for weekdays.
This “bi-modal” rate behavior is quite real, and corresponds to the arrival of occupants in the morning (first mode), a lull during lunchtime, and a larger, narrower second peak as most occupants return from lunch. Second, although Monday and Tuesday profiles appear similar, they also have distinct behavior, such as increased activity late Tuesday morning. This behavior too has some basis in reality, corresponding to a regular weekly meeting held around lunchtime over most (though not quite all) of the weeks in question. The breakdown of a particular day (the first Tuesday) into its component categories is shown in Figure 3. As we might expect, there is little consistency between weekdays and weekends, quite a bit of similarity among weekdays and among just Tuesdays, and (for this particular day) very little to set it apart from other Tuesdays. We can also check to see that the category memberships sdc are being used effectively. One of the Mondays in our data set fell on a holiday (the individual profile very near zero). If we average the probabilities computed during MCMC to estimate the posterior probability of the sdc for that particular day, we find that it has near-zero probability of belonging to either the weekday or Monday categories, and uses only the all-day and unique categories.

[Figure 3: Posterior mean estimates of the rate functions for each category to which the first Tuesday data might belong. For comparison, the total rate (sum of all categories) is shown as the dotted line. (a) The “all days” category is small, indicating little consistency in the data between weekdays and weekends; (b) the “weekdays” category is larger, and contains a component which appears to correspond to the occupants’ return from lunch; (c) the “Tuesday” category has modes in the morning and afternoon, perhaps capturing regular meetings or classes; (d) the “unique” category (a category unique to this particular day) shows little or no activity.]

We can also examine days which have high probability of requiring their own category (indicating unusual activity). For this data set, we also have partial ground truth, consisting of a number of dates and times when activities were scheduled to take place in the building. Figure 4 shows three such days, and the corresponding rate profiles associated with their single-day categories. Again, all three days are estimated to have additional activity, and the period of time for that activity corresponds well with the actual start and end time shown in the schedule (dashed vertical lines).

[Figure 4: Profiles associated with individual-day categories in the entry log data for several days with known events (periods between dashed vertical lines). The model successfully learns which days have significant unusual activity and associates reasonable profiles with that activity (note that increases in entrance count rate typically occur shortly before or at the beginning of the event time).]

[Figure 5: Posterior mean and uncertainty for a single day of accident data, estimated individually (red) and with data sharing (black). Sharing data considerably reduces the posterior uncertainty in the profile shape.]

4.2 Vehicular Accident Data

Our second data set consists of a database of vehicular accident times recorded by North Carolina police departments. As we might expect of driving patterns, there is still less activity on weekends, but far more than was observed in the campus building log. As before, sharing information allows us to decrease our posterior uncertainty on the rate for any particular day.
Figure 5 quantifies this idea by showing the posterior means and (pointwise) two-sigma confidence intervals for the rate function estimated for the same day (the first Monday in the data set) using that day’s data only (red curves) and using the category-based additive model (black). The additive model leverages the additional data to produce much tighter estimates of the rate profile. As with the previous example, the additional data also helps resolve detailed features of each day’s profile, as seen in Figure 6. For example, the weekday profiles show a tri-modal shape, with one mode corresponding to the morning commute, a small mode around noon, and another large mode around the evening commute. This also helps make the pattern of deviation on Friday clear, showing (as we would expect) increased activity at night.

[Figure 6: Posterior mean estimates of rate functions for vehicular accidents, estimated individually for each day (dotted) and with sharing among multiple days (solid) for (a) Sundays, (b) Mondays, and (c) Fridays. As in Figure 2, sharing information helps resolve features which the individual days do not have enough data to reliably estimate.]

5 Conclusions

The increasing availability of logs of “human activity” data provides interesting opportunities for the application of statistical learning techniques. In this paper we proposed a non-parametric Bayesian approach to learning time-intensity profiles for such activity data, based on an inhomogeneous Poisson process framework. The proposed approach allows collections of observations (e.g., days) to be grouped together by category (day of week, weekday/weekend, etc.) which in turn leverages data across different collections to yield higher quality profile estimates.
When the categorization of days is not a priori certain (e.g., days that fall on a holiday or days with unusual non-recurring additional activity) the model can infer the appropriate categorization, allowing (for example) automated detection of unusual events. On two large real-world data sets the model was able to infer interpretable activity profiles that correspond to real-world phenomena. Directions for further work in this area include richer models that allow for incorporation of observed covariates such as weather and other exogenous phenomena, as well as modeling of multiple spatially-correlated sensors (e.g., loop sensor data for freeway traffic). References [1] S. Scott and P. Smyth. The Markov modulated Poisson process and Markov Poisson cascade with applications to web traffic data. Bayesian Statistics, 7:671–680, 2003. [2] R. Helmers, I.W. Mangku, and R. Zitikis. Consistent estimation of the intensity function of a cyclic Poisson process. J. Multivar. Anal., 84(1):19–39, January 2003. [3] R. Willett and R. Nowak. Multiscale Poisson intensity and density estimation. submitted to IEEE Trans. IT, January 2005. [4] A. Kottas. Bayesian nonparametric mixture modeling for the intensity function of non-homogeneous Poisson processes. Technical Report ams2005-02, Department of Applied Math and Statistics, U.C. Santa Cruz, Santa Cruz, CA, 2005. [5] A. Kottas and B. Sanso. Bayesian mixture modeling for spatial Poisson process intensities, with applications to extreme value analysis. Technical Report ams2005-19, Dept. of Applied Math and Statistics, U.C. Santa Cruz, Santa Cruz, CA, 2005. [6] B.W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall, NY, 1986. [7] Y.W. Teh, M.I. Jordan, M.J. Beal, and D.M. Blei. Hierarchical Dirichlet processes. In NIPS 17, 2004. [8] D.R. Cox. Some statistical methods connected with series of events. J. R. Stat. Soc. B, 17:129–164, 1955. [9] R.M. Neal. 
Markov chain sampling methods for Dirichlet process mixture models. J. of Comp. Graph. Stat., 9:283–297, 2000. [10] M.D. Escobar and M. West. Bayesian density estimation and inference using mixtures. J. Amer. Stat. Assoc., 90:577–588, 1995. [11] L.F. James. Functionals of Dirichlet processes, the Cifarelli-Reganzzini identity and Beta-Gamma processes. Ann. Stat., 33(2):647–660, 2005. [12] H. Ishwaran and L.F. James. Gibbs sampling methods for stick-breaking priors. J. Amer. Stat. Assoc., 96:161–173, 2001. [13] H. Ishwaran and L.F. James. Approximate Dirichlet process computing in finite normal mixtures: smoothing and prior information. J. Comp. Graph. Statist., 11:508–532, 2002. [14] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
Using Combinatorial Optimization within Max-Product Belief Propagation

John Duchi   Daniel Tarlow   Gal Elidan   Daphne Koller
Department of Computer Science, Stanford University, Stanford, CA 94305-9010
{jduchi,dtarlow,galel,koller}@cs.stanford.edu

Abstract

In general, the problem of computing a maximum a posteriori (MAP) assignment in a Markov random field (MRF) is computationally intractable. However, in certain subclasses of MRF, an optimal or close-to-optimal assignment can be found very efficiently using combinatorial optimization algorithms: certain MRFs with mutual exclusion constraints can be solved using bipartite matching, and MRFs with regular potentials can be solved using minimum cut methods. However, these solutions do not apply to the many MRFs that contain such tractable components as sub-networks, but also other non-complying potentials. In this paper, we present a new method, called COMPOSE, for exploiting combinatorial optimization for sub-networks within the context of a max-product belief propagation algorithm. COMPOSE uses combinatorial optimization for computing exact max-marginals for an entire sub-network; these can then be used for inference in the context of the network as a whole. We describe highly efficient methods for computing max-marginals for subnetworks corresponding both to bipartite matchings and to regular networks. We present results on both synthetic and real networks encoding correspondence problems between images, which involve both matching constraints and pairwise geometric constraints. We compare to a range of current methods, showing that the ability of COMPOSE to transmit information globally across the network leads to improved convergence, decreased running time, and higher-scoring assignments.

1 Introduction

Markov random fields (MRFs) [12] have been applied to a wide variety of real-world problems.
However, the probabilistic inference task in MRFs — computing the posterior distribution of one or more variables — is tractable only in small tree-width networks, which are not often an appropriate model in practice. Thus, one typically must resort to the use of approximate inference methods, most commonly (in recent years) some variant of loopy belief propagation [11]. An alternative approach, whose popularity has grown in recent years, is based on the maximum a posteriori (MAP) inference problem — computing the single most likely assignment relative to the distribution. Somewhat surprisingly, there are certain classes of networks where MAP inference can be performed very efficiently using combinatorial optimization algorithms, even though posterior probability inference is intractable. So far, two main such classes of networks have been studied. Regular (or associative) networks [18], where the potentials encode a preference for adjacent variables to take the same value, can be solved optimally or almost optimally using a minimum cut algorithm. Conversely, matching networks, where the potentials encode a type of mutual exclusion constraints between values of adjacent variables, can be solved using matching algorithms. These types of networks have been shown to be applicable in a variety of applications, such as stereo reconstruction [13] and segmentation for regular networks, and image correspondence [15] or word alignment for matching networks [19]. In many real-world applications, however, the problem formulation does not fall neatly into one of these tractable subclasses. The problem may well have a large component that can be well-modeled as regular or as a matching problem, but there may be additional constraints that take it outside this restricted scope. 
For example, in a task of registering features between two images or 3D scans, we may formulate the task as a matching problem, but may also want to encode constraints that enforce the preservation of local or global geometry [1]. Unfortunately, once the network contains some “non-complying” potentials, it is not clear if and how one can apply the combinatorial optimization algorithm, even if only as a subroutine. In practice, in such networks, one often simply resorts to applying standard inference methods, such as belief propagation. Unfortunately, belief propagation may be far from an ideal procedure for these types of networks. In many cases, the MRF structures associated with the tractable components are quite dense and contain many small loops, leading to convergence problems and bad approximations. Indeed, recent empirical studies [17] show that belief propagation methods perform considerably worse than min-cut-based methods when applied to a variety of (purely) regular MRFs. Thus, falling back on belief propagation methods for these MRFs may result in poor performance. The main contribution of this paper is a message-passing scheme for max-product inference that can exploit combinatorial optimization algorithms for tractable subnetworks. The basic idea in our algorithm, called COMPOSE (Combinatorial Optimization for Max-Product on Subnetworks), is that the network can often be partitioned into a number of subnetworks whose union is equivalent to the original distribution. If we can efficiently solve the MAP problem for each of these subnetworks, we would like to combine these results in order to find an approximate MAP for the original problem. The obvious difficulty is that a MAP solution, by itself, provides only a single assignment, and one cannot simply combine different assignments. The key insight is that we can combine the information from the different sub-networks by computing max-marginals for each one.
A max-marginal for an individual variable X is a vector that specifies, for each value x, the probability of the MAP assignment in which X = x. If we have a black box that computes a max-marginal for each variable X in a subnetwork, we can embed that black box as a subroutine in a max-product belief propagation algorithm, without changing the algorithm’s basic properties. In the remainder of this paper, we define the COMPOSE scheme, and show how combinatorial algorithms for both regular networks and matching networks can be embedded in this framework. In particular, we also describe efficient combinatorial optimization algorithms for both types of networks that can compute all the max-marginals in the network at a cost similar to that of finding the single MAP assignment. We evaluate the applicability of COMPOSE on synthetic networks and on an image registration task for scans of a cell obtained using an electron microscope, all of which are matching problems with additional pairwise constraints. We compare COMPOSE to variants of both max-product and sum-product belief propagation, as well as to straight matching. Our results demonstrate that the ability of COMPOSE to transmit information globally across the network leads to improved convergence, decreased running time, and higher-scoring assignments.

2 Markov Random Fields

In this paper, for simplicity of presentation, we restrict our discussion to pairwise Markov networks (or Markov Random Fields) over discrete variables X = {X1, . . . , XN}. We emphasize that our results extend easily to the more general case of non-pairwise Markov networks. We denote an assignment of values to X with x, and an assignment of a value to a single variable Xi with xi. A pairwise Markov network M is defined as a graph G = (V, E) and set of potentials F that include both node potentials φi(xi) and edge potentials φij(xi, xj).
The network encodes a joint probability distribution via an unnormalized density

P′_F(x) = ∏_{i=1}^{N} φi(xi) ∏_{(i,j)∈E} φij(xi, xj),

defining the distribution as PF(x) = (1/Z) P′_F(x), where Z is the partition function given by Z = Σ_{x′} P′_F(x′). There are different types of queries that one may want to compute on a Markov network. Most common are (conditional) probability queries, where the task is to compute the marginal probability of one or more variables, possibly given some evidence. This type of inference is essentially equivalent to computing the partition function, which sums up exponentially many assignments, a computation which is currently intractable except in networks of low tree width. An alternative type of inference task is the maximum a posteriori (MAP) problem — finding arg max_x PF(x) = arg max_x P′_F(x). In the MAP problem, we can avoid computing the partition function, so there are certain classes of networks for which the MAP assignment can be computed effectively, even though computing the partition function can be shown to be intractable; we describe two such important classes in Section 4. In general, however, an exact solution to the MAP problem is also intractable. Max-product belief propagation (MPBP) [20] is a commonly-used method for finding an approximate solution. In this algorithm, each node Xi passes to its neighboring nodes Ni a message which is a vector defining a value for each value xj:

δ_{i→j}(xj) := max_{xi} φi(xi) φij(xi, xj) ∏_{k∈Ni−{j}} δ_{k→i}(xi).

At convergence, each variable can compute its own local belief as bi(xi) = φi(xi) ∏_{k∈Ni} δ_{k→i}(xi). In a tree-structured MRF, if such messages are passed from the leaves towards a single root, the value of the message passed by Xi towards the root encodes a partial max-marginal: the entry for xi is the probability of the most likely assignment, to the subnetwork emanating from Xi away from the root, where we force Xi = xi.
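On a tree (here, a chain) the message recursion above yields exact max-marginals. A small sketch, with invented potentials for a three-node binary chain:

```python
def max_product_chain(node_pot, edge_pot):
    """Exact max-marginals on a chain MRF via max-product message passing:
    forward/backward messages delta, then beliefs
    b_i(x_i) = phi_i(x_i) * prod_k delta_{k->i}(x_i)."""
    n, K = len(node_pot), len(node_pot[0])
    fwd = [[1.0] * K for _ in range(n)]   # fwd[i]: message from node i-1 into i
    bwd = [[1.0] * K for _ in range(n)]   # bwd[i]: message from node i+1 into i
    for i in range(1, n):
        fwd[i] = [max(node_pot[i-1][a] * fwd[i-1][a] * edge_pot[i-1][a][b]
                      for a in range(K)) for b in range(K)]
    for i in range(n - 2, -1, -1):
        bwd[i] = [max(node_pot[i+1][b] * bwd[i+1][b] * edge_pot[i][a][b]
                      for b in range(K)) for a in range(K)]
    return [[node_pot[i][x] * fwd[i][x] * bwd[i][x] for x in range(K)]
            for i in range(n)]

# Invented 3-node binary chain with an attractive ("regular") pairwise potential.
node = [[1.0, 2.0], [1.0, 1.0], [3.0, 1.0]]
edge = [[[2.0, 1.0], [1.0, 2.0]]] * 2     # same-value score 2, different 1
beliefs = max_product_chain(node, edge)
```

Each belief entry is the score of the best complete assignment consistent with that value; for these potentials every entry attains the MAP score 12 except the entry for the third node taking value 1, which is 8.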
At the root, we obtain exact max-marginals for the entire joint distribution. However, applied to a network with loops, MPBP often does not converge, even when combined with techniques such as smoothing and asynchronous message passing, and the answers obtained can be quite approximate.

3 Composing Max-Product Inference on Subnetworks

We now describe the COMPOSE scheme for decomposing the network into hopefully more tractable components, and allowing approximate max-product computation over the network as a whole to be performed by iteratively computing max-product in one component and passing approximate max-marginals to the other(s). As the unnormalized probability of an assignment in a Markov network is a product of local potentials, we can partition the potentials in an MRF into an ensemble of k subgraphs G1, . . . , Gk over the same set of nodes V, associated edges E1, . . . , Ek and sets of factors F1, . . . , Fk. We require that the product of the potentials in these subnetworks maintain the same information as the original MRF. That is, if we originally have a factor φi ∈ F and associated factors φi^(1) ∈ F1, . . . , φi^(k) ∈ Fk, we must have that ∏_{l=1}^{k} φi^(l)(Xi) = φi(Xi). One method of partitioning that achieves this equality is simply to select, for each potential φi, one subgraph in which it appears unchanged, and set all of the other φi^(l) to be 1. Even if MAP inference in the original network is intractable, it may be tractable in each of the sub-networks in the ensemble. But how do we combine the results from MAP inference in an ensemble of networks over the same set of variables? Our approach draws its motivation from the MPBP algorithm, which computes messages that correspond to pseudo-max-marginals over single variables (approximate max-marginals, that do not account for the loops in the network). We begin by conceptually reformulating the ensemble as a set of networks over disjoint sets of variables {X1^(l), . . . , Xn^(l)} for l = 1, . . .
, k; we enforce consistency of the joint assignment using a set of “communicator” variables X1, . . . , Xn, such that each X(l) i must take the same value as Xi. We assume that each subnetwork is associated with an algorithm that can “read in” pseudo-max-marginals over the communicator variables, and compute pseudo-max-marginals over these variables. More precisely, let δ(l)→i be the message sent from subnetwork l to Xi and δi→(l) the opposite message. Then we define the COMPOSE message passing scheme as follows: δ(l)→i(xi) = max x(l) : X(l) i =xi PFl(x(l)) Y j̸=i δj→(l)(X(l) j ) (1) δi→(l) = Y l′̸=l δ(l′)→i. (2) That is, each subnetwork computes its local pseudo-max-marginals over each of the individual variables, given, as input, the pseudo-max-marginals over the others. The separate pseudo-maxmarginals are integrated via the communicator variables. It is not difficult to see that this message passing scheme is equivalent to a particular scheduling algorithm for max-product belief propagation over the ensemble of networks, assuming that the max-product computation in each of the subnetworks is computed exactly using a black-box subroutine. We note that this message passing scheme is somewhat related to the tree-reweighted maxproduct (TRW) method of Wainwright et al. [8], where the network distribution is partitioned as a weighted combination of trees, which also communicate pseudo-max-marginals with each other. 4 Efficient Computation of Max-Marginals In this section, we describe two important classes of networks where the MAP problem can be solved efficiently using combinatorial algorithms: matching networks, which can be solved using bipartite matching algorithms; and regular networks, which can be solved using (iterated application of) minimum cut algorithms. We show how the same algorithms can be adapted, at minimal computational cost, for computing not only the single MAP assignment, but also the set of max-marginals. 
This allows these algorithms to be used as one of our "black boxes" in the COMPOSE framework.

Bipartite matching. Many problems can be well formulated as maximum-score (or minimum-weight) bipartite matching: we are given a graph $G = (\mathcal{A}, U)$ whose nodes are partitioned into disjoint sets, $\mathcal{A} = A \cup B$. Each edge $(a, b)$ in $G$ has one endpoint in $A$ and the other in $B$, and an associated score $c(a, b)$. A bipartite matching is a subset of the edges $W \subset U$ such that each node appears in at most one edge. The notion of a matching can be relaxed to include other types of degree constraints, e.g., constraining certain nodes to appear in at most $k$ edges. The score of the matching is simply the sum of the scores of the edges in $W$. The matching problem can also be formulated as an MRF, in several different ways. For example, in the degree-1 case (each node in $A$ is matched to one node in $B$), we can have a variable $X_a$ for each $a \in A$ whose possible values are the nodes in $B$. The edge scores in the matching graph are then simply singleton potentials in the MRF, where $\phi_a(X_a = b) = \exp(c(a, b))$. Unfortunately, while the costs can be easily encoded in an MRF, the degree constraints on the matching induce a set of pairwise mutual-exclusion potentials on all pairs of variables in the MRF, leading to a fully connected network. Thus, standard methods for MRF inference cannot handle the networks associated with matching problems. Nevertheless, finding the maximum-score bipartite matching (with any set of degree constraints) can be accomplished easily using standard combinatorial optimization algorithms (e.g., [6]). However, we also need to find all the max-marginals. Fortunately, we can adapt the standard algorithm for finding a single best matching to also find all of the max-marginals.
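The quantities involved can be illustrated with a naive, exponential-time computation on a tiny hypothetical instance: for each pair $(a, b)$, re-solve the matching with that pair forced. This is for intuition only (the scores below are made up); the efficient residual-graph construction in the text computes the same values without re-solving.

```python
# Brute-force matching max-marginals on a 2x2 instance (illustration only).
from itertools import permutations

scores = [[4.0, 1.0],
          [2.0, 3.0]]   # scores[a][b] = c(a, b), hypothetical values
n = len(scores)

def best_matching_score(forced=None):
    """Best 1-to-1 matching score; optionally force pair (a, b) to be matched."""
    best = float("-inf")
    for perm in permutations(range(n)):      # perm[a] = the b matched to a
        if forced is not None and perm[forced[0]] != forced[1]:
            continue
        best = max(best, sum(scores[a][perm[a]] for a in range(n)))
    return best

# Max-marginal for each pair: score of the best matching containing that pair.
max_marginals = {(a, b): best_matching_score((a, b))
                 for a in range(n) for b in range(n)}
```

For pairs already in the optimal matching, the max-marginal equals the optimal score $w^*$; for the others it is strictly lower, exactly as the residual-graph argument predicts.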
A standard solution to the max-matching problem reduces it to a max-weight flow problem by introducing an additional "source" node that connects to all the nodes in $A$, and an additional "sink" node that connects to all the nodes in $B$; the capacity of each such edge is the degree constraint of the node (1 for a 1-to-1 matching). We now run a standard max-weight flow algorithm, and define an edge to be in the matching if it bears flow. Standard results show that, if the edge capacities are integers, then the flow too is integral, so that it defines a matching. Let $w^*$ be the weight of the flow in the graph. A flow in the graph defines a residual graph, in which each edge's capacity is the amount of additional flow it can carry relative to the current flow. Thus, for example, if the current solution carries a unit of flow along a particular edge $(a, b)$ in the original graph, the residual graph will have an edge with unit capacity going in the reverse direction, corresponding to the fact that we can now choose to "eliminate" the flow from $a$ to $b$. The scores on these reverse edges are negated, corresponding to the fact that score is lost when we reduce the flow. Our goal now is to find, for each pair $(a, b)$, the score of the optimal matching in which we force this pair to be matched. If this pair is matched in the current solution, then the score is simply $w^*$. Otherwise, we find the highest-scoring path from $b$ to $a$ in the residual graph. Any edges on this new path from $A$ to $B$ will be included in the new matching; any edges from $B$ to $A$ were included in the old matching, but are not in the new matching because of the augmenting path. This path is the best way of changing the flow so as to force flow from $a$ to $b$. Letting $\Delta$ be the weight of this augmenting path, the overall score of the new flow is $w^* + \Delta$.
It follows that the cost of this path is necessarily negative, for otherwise it would have been optimal to apply it to the original flow, improving its score. We can therefore find the highest-scoring path by simply negating all edge costs and finding the shortest path in the graph. To compute all of the max-marginals, we thus need only find the shortest path from every node $a \in A$ to every node $b \in B$. We can do this using the Floyd-Warshall all-pairs shortest-paths algorithm, which runs in $O((n_A + n_B)^3)$ time, for $n_A = |A|$ and $n_B = |B|$; or we can run a single-source shortest-path algorithm for each node in $B$, at a total cost of $O(n_B \cdot n_A n_B \log(n_A n_B))$. By comparison, the cost of solving the initial flow problem is $O(n_A^3 \log(n_A))$.

Minimum cuts. A very different class of networks that admits an efficient solution is based on the application of a minimum cut algorithm to a graph. At a high level, these networks encode situations where adjacent variables prefer to take "similar" values. There are many variants of this condition. The simplest variant applies to pairwise MRFs over binary-valued random variables. In this case, a potential is said to be regular if

$$\phi_{ij}(X_i = 1, X_j = 1) \cdot \phi_{ij}(X_i = 0, X_j = 0) \;\geq\; \phi_{ij}(X_i = 0, X_j = 1) \cdot \phi_{ij}(X_i = 1, X_j = 0).$$

For MRFs with only regular potentials, the MAP solution can be found as the minimum cut of a weighted graph constructed from the MRF [9]. This construction can be extended in various ways (see [9] for a survey), including to the class of networks with non-binary variables whose negative log-probability is a convex function [5]. Moreover, for a range of conditions on the potentials, an $\alpha$-expansion procedure [2], which iteratively applies a min-cut to a series of graphs, can be used to find a solution with guaranteed approximation error relative to the optimal MAP assignment. As above, a single joint assignment does not suffice for our purposes.
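The regularity condition stated above is easy to check mechanically for a binary pairwise potential; the two example potentials here are hypothetical.

```python
# Check the regularity (submodularity-like) condition for a binary pairwise
# potential: phi(1,1) * phi(0,0) >= phi(0,1) * phi(1,0).
def is_regular(phi):
    """phi maps (xi, xj) in {0,1}^2 to a positive potential value."""
    return phi[(1, 1)] * phi[(0, 0)] >= phi[(0, 1)] * phi[(1, 0)]

# An "attractive" potential (prefers agreement) is regular;
# a "repulsive" one (prefers disagreement) is not.
attractive = {(0, 0): 2.0, (1, 1): 2.0, (0, 1): 1.0, (1, 0): 1.0}
repulsive  = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 2.0, (1, 0): 2.0}
```

The mutual-exclusion potentials of the matching networks discussed earlier fail this test, which is why they fall outside the min-cut-solvable class.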
In recent work, Kohli and Torr [7], studying the problem of confidence estimation in MAP problems, showed how all of the max-marginals in a regular network can be computed using dynamic algorithms for flow computations. Their method also applies to non-binary networks with convex potentials (as in [5]), but not to networks for which $\alpha$-expansion is used to find an approximate MAP assignment.

5 Experimental Results

We evaluate COMPOSE on the image correspondence problem, which is characteristic of matching problems with geometric constraints. We compare against both max-product tree-reparameterization (TRMP) [8] and asynchronous max-product (AMP). The axes along which we compare all algorithms are: the ability to achieve convergence, the time it takes to reach a solution, and the quality (log of the unnormalized likelihood) of the solution found, in the Markov network that defines the problem. We use standard message damping of 0.3 for the max-product algorithms and a convergence threshold of $10^{-3}$ for all propagation algorithms. All tests were run on a 3.4 GHz Pentium 4 processor with 2 GB of memory. We focus our experiments on an image correspondence task, where the goal is to find a 1-to-1 mapping between landmarks in two images. Here, we have a set of template points $S = \{x_1, \ldots, x_n\}$ and a set $T$ of target points, $\{x'_1, \ldots, x'_n\}$. We encode our MRF with a variable $X_i$ for each marker $x_i$ in the source image, whose value corresponds to its aligned candidate $x'_j$ in the target image. Our MRF contains singleton potentials $\phi_i$, which may encode local appearance information, so that a marker $x_i$ prefers to be aligned to a candidate $x'_j$ whose neighborhood in the target image looks similar to $x_i$'s, or a distance potential, so that markers $x_i$ prefer to be aligned to candidates $x'_j$ in locations close to their own in the source image. The MRF also contains pairwise potentials $\{\phi_{ij}\}$ that can encode dependencies between the landmark assignments.
In particular, we may want to encode geometric potentials, which enforce a preference for preservation of distance or orientation for pairs of markers $x_i, x_j$ and their assigned targets $x'_k, x'_l$. Finally, as the goal is to find a 1-to-1 mapping between landmarks in the source and target images, we also encode a set of mutual exclusion potentials over pairs of variables, enforcing the constraint that no two markers are assigned to the same candidate $x'_k$. Our task is to find the MAP solution in this MRF.

Synthetic Networks. We first experimented with synthetically generated networks that follow the above form. To generate the networks, we first create a source "image" containing a set of template points $S = \{x_1, \ldots, x_n\}$, chosen by uniformly sampling locations from a two-dimensional plane. Next, the target set of points $T = \{x'_1, \ldots, x'_n\}$ is generated by sampling one point for each template point $x_i$ from a Gaussian distribution with mean $x_i$ and diagonal covariance matrix $\sigma^2 I$. As there was no true local information, the matching (or singleton) potentials for both types of synthetic networks were generated uniformly at random on $[0, 1)$. The "correct" matching point, the one the template variable generates, was given weight 0.7, ensuring that the correct matching gets a non-negligible weight without making the correspondence too obvious. We consider two different formulations for the geometric potentials: the first utilizes a minimum spanning tree connecting the points in $S$, and the second simply a chain. In both cases, we generate pairwise geometric potentials $\phi_{ij}(X_i, X_j)$ that are Gaussian with mean $\mu = (x_i - x_j)$ and standard deviation proportional to the Euclidean distance between $x_i$ and $x_j$, scaled by $\sigma$. Results for the two constructions were similar, so, due to lack of space, we present results only for the line networks. Fig. 1(a) shows the cumulative percentage of convergent runs as a function of CPU time.
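The synthetic construction just described can be sketched as follows; the plane size, $n$, and $\sigma$ here are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of the synthetic correspondence setup: uniform template points,
# Gaussian-perturbed targets, and uniform-random singleton potentials with
# the correct candidate weighted 0.7 (values here are illustrative).
import random

random.seed(0)
n, sigma = 30, 3.0
# Template points sampled uniformly from a 2D plane.
template = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(n)]
# Each target is a Gaussian perturbation of its template point
# (mean x_i, diagonal covariance sigma^2 I).
target = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
          for (x, y) in template]
# Singleton (matching) potentials: uniform on [0, 1), with the "correct"
# candidate (index i for template i) given weight 0.7.
match_potential = [[random.random() for _ in range(n)] for _ in range(n)]
for i in range(n):
    match_potential[i][i] = 0.7
```

Larger $\sigma$ scatters the targets more, which is exactly the difficulty axis varied in the experiments below.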
COMPOSE converges significantly more often than either AMP or state-of-the-art TRMP. For TRMP, we created one tree over all the geometric and singleton potentials to quickly pass information through the entire graph; the rest of the trees chosen for TRMP were over a singleton potential, all the neighboring mutual exclusion potentials, and pairwise potentials neighboring the singleton, allowing us to maintain the mutual exclusion constraints during different reparameterization steps in TRMP.

Figure 1: (a) Cumulative percentage of convergent runs versus CPU time on networks with 30 variables and $\sigma$ ranging from 3 to 9. (b) The effect of changing the number of variables on the log score. Shown is the difference between the log score of each algorithm and the score found by AMP. (c) Direct comparison of COMPOSE to TRMP on individual runs from the same set of networks as in (b), grouped by algorithm convergence. (d) Score of assignment based on intermediate beliefs versus time for COMPOSE, TRMP, and matching on 100-variable networks. All algorithms were allowed to run for 5000 seconds.

Since sum-product algorithms are known in general to be less susceptible to oscillation than their max-product counterparts, we also compared against sum-product asynchronous belief propagation. In our experiments, however, sum-product BP did not achieve good scores even on runs in which it did converge, perhaps because the distribution was fairly diffuse, leading to an averaging of diverse solutions; we omit these results for lack of space. Fig. 1(b) shows the average difference in log scores between each algorithm's result and the average log score of AMP as a function of the number of variables in the networks. COMPOSE clearly outperforms the other algorithms, gaining a larger score margin as the size of the problem increases. In the synthetic tests we ran for (b) and (c), COMPOSE achieved the best score in over 90% of cases.
This difference was greatest on more difficult problems, where there is greater variance in the locations of candidates in the target image, making a 1-to-1 correspondence harder to achieve. In Fig. 1(c), we further examine scores from individual runs, comparing COMPOSE directly to the strongest competitor, TRMP. COMPOSE consistently outperforms TRMP and never loses by more than a small margin; COMPOSE often achieves scores on the order of 240 times better than those achieved by TRMP. Interestingly, there appears to be no strong correlation between relative performance and whether or not the algorithms converged. Fig. 1(d) examines the intermediate scores obtained by COMPOSE and TRMP on assignments reached during the inference process, for large (100-variable) problems. Though COMPOSE does not reach convergence in messages, it quickly takes large steps to a very good score on the large networks. TRMP also takes larger steps near the beginning, but it is less consistent and never achieves a score as high as COMPOSE's. This indicates that COMPOSE scales better than TRMP to larger problems. This behavior may also help to explain the results from (c), where we see that, even when COMPOSE does not converge in messages, it is still able to achieve good scores. Overall, these results indicate that we can use intermediate results from COMPOSE even before convergence.

Real Networks. We now consider real networks generated for the task of electron microscope tomography: the three-dimensional reconstruction of cell and organelle structures based on a series of images obtained at different tilt angles. The problem is to localize and track markers in images across time, and it is a difficult one; traditional methods like cross-correlation and graph matching often result in many errors. We can, however, encode the problem as an MRF, as described above.
In this case, the geometric constraints were more elaborate, and it was not clear how to construct a good set of spanning trees. We therefore used a variant of AMP called residual max-product (RMP) [3] that schedules messages in an informed way over the network; in this work and others, we have found this variant to achieve better performance than TRMP on difficult networks. Fig. 2(a) shows a source set of markers in an electron tomography image; Fig. 2(b) shows the correspondence our algorithm achieves, and Fig. 2(c) shows the correspondence that RMP achieves. Note that, in Fig. 2(c), multiple points from the source image are assigned to the same point in the target image, whereas COMPOSE does not exhibit this failure. Of the twelve pairs of images we tested, RMP failed to converge on eleven within 20 minutes, whereas COMPOSE failed to converge on only two. Because the network structure was difficult for loopy approximate methods, we also ran experiments in which we replaced the mutual exclusion constraints with soft location constraints on individual landmarks; while convergence improved, actual performance was inferior. Fig. 2(d) shows the scores for the different methods, using RMP as the baseline. It is clear that, though RMP and TRMP run on a simpler network with soft mutual exclusion constraints are competitive with, and even very slightly better than, COMPOSE on simple problems, COMPOSE clearly dominates as problems become more difficult (more variance in target images). We also compare COMPOSE to simply finding the best matching of markers to candidates without any geometric information; COMPOSE dominates this approach, never scoring worse than the matching.
Figure 2: (a) Labeled markers in a source electron microscope image. (b) Candidates COMPOSE assigns in the target image. (c) Candidates RMP assigns in the target image (note the Xs through incorrect or duplicate assignments). (d) A score comparison of COMPOSE, matching, and RMP on the image correspondences.

6 Discussion

In this paper, we have presented COMPOSE, an algorithm that exploits the presence of tractable substructures in MRFs within the context of max-product belief propagation. Motivated by the existence of very efficient algorithms for extracting all max-marginals from combinatorial substructures, we presented a variation of belief propagation that uses these max-marginals to take large steps in inference. We also demonstrated that COMPOSE significantly outperforms state-of-the-art methods on challenging synthetic and real problems. We believe that one of the major reasons that belief propagation algorithms have difficulty with the augmented matching problems described above is that the mutual exclusion constraints create a phenomenon where small changes to local regions of the network can have strong effects on distant parts of the network, and it is difficult for belief propagation to adequately propagate this information. Some existing variants of belief propagation (such as TRMP) attempt to speed the exchange of information across opposing sides of the network by means of intelligent message scheduling. Even intelligently scheduled message passing is limited, however, as messages are inherently local. If there are oscillations across a wide diameter, due to global interactions in the network, they may contribute significantly to the poor performance of BP algorithms. COMPOSE slices the network along a different axis, using subnetworks that are global in nature but that do not have all of the information about any subset of variables.
If the component of the network that is difficult for belief propagation can be encoded in an efficient special-purpose subnetwork such as a matching, then we have a means of effectively propagating global information. We conjecture that COMPOSE's ability to pass information globally contributes both to its improved convergence and to the better results it obtains even without convergence. Some very recent work explores the case where a regular MRF contains terms that are not regular [14, 13], but this work is largely specific to certain types of "close-to-regular" MRFs. It would be interesting to compare COMPOSE and these methods on a range of networks containing regular subgraphs. Our work is also related to work on the quadratic assignment problem (QAP) [10], a class of problems of which our generalized matching networks are a special case. Standard algorithms for QAP include simulated annealing, tabu search, branch and bound, and ant algorithms [16]; the latter have some of the flavor of message passing, walking trails over the graph representing a QAP and iteratively updating the scores of different assignments. To the best of our knowledge, however, none of these previous methods attempts to use a combinatorial algorithm as a component in a general message-passing algorithm, thereby exploiting the structure of the pairwise constraints. There are many interesting directions arising from this work. It would be interesting to perform a theoretical analysis of the COMPOSE approach, perhaps establishing conditions under which it is guaranteed to achieve a certain level of approximation. A second major direction is the identification of other tractable components within real-world MRFs that can be solved using combinatorial optimization methods or other efficient approaches.
For example, the constraint satisfaction community has studied several special-purpose constraint types that can be solved more efficiently than with generic methods [4]; it would be interesting to explore whether these constraints arise within MRFs and, if so, whether the special-purpose procedures can be integrated into the COMPOSE framework. Overall, we believe that real-world MRFs often contain large structured sub-parts that can be solved efficiently with special-purpose algorithms; the combination of special-purpose solvers within a general inference scheme may allow us to solve problems that are intractable for any current method.

Acknowledgments

This research was supported by the Defense Advanced Research Projects Agency (DARPA) under the Transfer Learning Program. We also thank David Karger for useful conversations and insights.

References

[1] D. Anguelov, D. Koller, P. Srinivasan, S. Thrun, H. Pang, and J. Davis. The correlated correspondence algorithm for unsupervised registration of nonrigid surfaces. In NIPS, 2004.
[2] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. In ICCV, 1999.
[3] G. Elidan, I. McGraw, and D. Koller. Residual belief propagation. In UAI, 2006.
[4] J. Hooker, G. Ottosson, E. S. Thorsteinsson, and H. J. Kim. A scheme for unifying optimization and constraint satisfaction methods. Knowledge Engineering Review, 2000.
[5] H. Ishikawa. Exact optimization for Markov random fields with convex priors. PAMI, 2003.
[6] J. Kleinberg and E. Tardos. Algorithm Design. Addison-Wesley, 2005.
[7] P. Kohli and P. Torr. Measuring uncertainty in graph cut solutions - efficiently computing min-marginal energies using dynamic graph cuts. In ECCV, 2006.
[8] V. Kolmogorov and M. Wainwright. On the optimality of tree-reweighted max-product message-passing. In UAI, 2005.
[9] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? In ECCV, 2002.
[10] E. Lawler. The quadratic assignment problem. Management Science, 1963.
[11] K. Murphy and Y. Weiss. Loopy belief propagation for approximate inference: An empirical study. In UAI, 1999.
[12] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[13] A. Raj, G. Singh, and R. Zabih. MRFs for MRIs: Bayesian reconstruction of MR images via graph cuts. In CVPR, 2006. To appear.
[14] C. Rother, S. Kumar, V. Kolmogorov, and A. Blake. Digital tapestry. In CVPR, 2005.
[15] J. Shi and J. Malik. Normalized cuts and image segmentation. PAMI, 2000.
[16] T. Stützle and M. Dorigo. ACO algorithms for the quadratic assignment problem. In New Ideas in Optimization, 1999.
[17] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother. A comparative study of energy minimization methods for Markov random fields. In ECCV, 2006.
[18] B. Taskar, V. Chatalbashev, and D. Koller. Learning associative Markov networks. In ICML, 2004.
[19] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: a large margin approach. In ICML, 2005.
[20] Y. Weiss and W. Freeman. On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs. IEEE Transactions on Information Theory, 47, 2001.
Branch and Bound for Semi-Supervised Support Vector Machines

Olivier Chapelle1, Max Planck Institute, Tübingen, Germany, chapelle@tuebingen.mpg.de
Vikas Sindhwani, University of Chicago, Chicago, USA, vikass@cs.uchicago.edu
S. Sathiya Keerthi, Yahoo! Research, Santa Clara, USA, selvarak@yahoo-inc.com

Abstract

Semi-supervised SVMs (S3VM) attempt to learn low-density separators by maximizing the margin over labeled and unlabeled examples. The associated optimization problem is non-convex. To examine the full potential of S3VMs modulo local minima problems in current implementations, we apply branch and bound techniques for obtaining exact, globally optimal solutions. Empirical evidence suggests that the globally optimal solution can return excellent generalization performance in situations where other implementations fail completely. While our current implementation is only applicable to small datasets, we discuss variants that can potentially lead to practically useful algorithms.

1 Introduction

A major line of research on extending SVMs to handle partially labeled datasets is based on the following idea: solve the standard SVM problem while treating the unknown labels as additional optimization variables. By maximizing the margin in the presence of unlabeled data, one learns a decision boundary that traverses through low data-density regions while respecting labels in the input space. In other words, this approach implements the cluster assumption for semi-supervised learning: that points in a data cluster have similar labels. This idea was first introduced in [14] under the name Transductive SVM, but since it learns an inductive rule defined over the entire input space, we refer to this approach as Semi-Supervised SVM (S3VM).
Since its first implementation in [9], a wide spectrum of techniques has been applied to solve the non-convex optimization problem associated with S3VMs, e.g., local combinatorial search [9], gradient descent [6], continuation techniques [3], convex-concave procedures [7], and deterministic annealing [12]. While non-convexity is partly responsible for this diversity of methods, it is also a departure from one of the nicest features of SVMs. Several experimental studies have established that S3VM implementations show varying degrees of empirical success. This is conjectured to be closely tied to their susceptibility to local minima problems. The following questions motivate this paper: How well do current S3VM implementations approximate the exact, globally optimal solution of the non-convex problem associated with S3VMs? Can one expect significant improvements in generalization performance by better approaching the global solution? We believe that these questions are of fundamental importance for S3VM research and are largely unresolved. This is partly due to the lack of simple implementations that practitioners can use to benchmark new algorithms against the global solution, even on small-sized problems.

1Now part of Yahoo! Research, chap@yahoo-inc.com

Our contribution in this paper is to outline a class of branch and bound algorithms that are guaranteed to provide the globally optimal solution for S3VMs. Branch and bound techniques have previously been noted in the context of S3VMs in [16], but no details were presented there. We implement and evaluate a branch and bound strategy that can serve as an upper baseline for S3VM algorithms. This strategy is not practical for typical semi-supervised settings where large amounts of unlabeled data are available, but we believe it opens up new avenues of research that can potentially lead to more efficient variants.
Empirical results on some semi-supervised tasks presented in Section 7 show that the exact solution found by branch and bound has excellent generalization performance, while other S3VM implementations perform poorly. These results also show that S3VMs can compete with and even outperform graph-based techniques (e.g., [17, 13]) on problems where the latter class of methods has typically excelled.

2 Semi-Supervised Support Vector Machines

We consider the problem of binary classification. The training set consists of $l$ labeled examples $\{(x_i, y_i)\}_{i=1}^{l}$, $y_i = \pm 1$, and of $u$ unlabeled examples $\{x_i\}_{i=l+1}^{n}$, with $n = l + u$. In the linear case, the following objective function is minimized over both the hyperplane parameters $w$ and $b$, and the label vector $y_u := [y_{l+1} \ldots y_n]^\top$:

$$\min_{w, b, y_u, \xi_i \geq 0} \; \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{l} \xi_i^p + C^* \sum_{i=l+1}^{n} \xi_i^p \quad (1)$$

under the constraints $y_i(w \cdot x_i + b) \geq 1 - \xi_i$, $1 \leq i \leq n$. Nonlinear decision boundaries can be constructed using the kernel trick [15]. While in general any convex loss function can be used, it is common to penalize the training errors either linearly ($p = 1$) or quadratically ($p = 2$). In the rest of the paper, we consider $p = 2$. The first two terms in (1) correspond to a standard SVM. The last one takes into account the unlabeled points and can be seen as an implementation of the cluster assumption [11] or low density separation assumption [6]; indeed, it drives the outputs of the unlabeled points away from 0 (see Figure 1).

Figure 1: With $p = 2$ in (1), the loss of a point with label $y$ and signed output $t$ is $\max(0, 1 - yt)^2$. For an unlabeled point, this is $\min_y \max(0, 1 - yt)^2 = \max(0, 1 - |t|)^2$.

For simplicity, we take $C^* = C$. In practice, however, it is important to set these two values independently, because $C$ reflects our confidence in the labels of the training points, while $C^*$ corresponds to our belief in the low density separation assumption.
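The two losses described in Figure 1 can be written down directly (a minimal sketch for $p = 2$):

```python
# Squared hinge loss for a labeled point and the symmetric "hat" loss
# max(0, 1 - |t|)^2 for an unlabeled point, as in Figure 1.
def labeled_loss(y, t):
    """Squared hinge loss for label y in {-1, +1} and signed output t."""
    return max(0.0, 1.0 - y * t) ** 2

def unlabeled_loss(t):
    """min over y in {-1, +1} of the labeled loss: max(0, 1 - |t|)^2."""
    return max(0.0, 1.0 - abs(t)) ** 2
```

The unlabeled loss peaks at $t = 0$ and vanishes for $|t| \geq 1$, which is how the objective pushes unlabeled outputs away from the decision boundary.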
In addition, we add the following balancing constraint to (1):

$$\frac{1}{u} \sum_{i=l+1}^{n} \max(y_i, 0) = r. \quad (2)$$

This constraint is necessary to avoid unbalanced solutions and was also used in the original implementation [9]. Ideally, the parameter $r$ should be set to the ratio of positive points in the unlabeled set. Since this ratio is unknown, $r$ is usually estimated from the class ratio in the labeled set; in that case, one may wish to "soften" the constraint, as in [6]. For the sake of simplicity, in the rest of the paper, we set $r$ to the true ratio of positive points in the unlabeled set. Let us call $\mathcal{I}$ the objective function to be minimized:

$$\mathcal{I}(w, b, y_u) = \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{n} \max(0, 1 - y_i(w \cdot x_i + b))^2.$$

There are two main strategies to minimize $\mathcal{I}$: (1) For fixed $w$ and $b$, the optimal $y_u$ is simply given by the signs of $w \cdot x_i + b$. A continuous optimization over $w$ and $b$ can then be carried out [6], though the constraint (2) is not straightforward to enforce in this setting. (2) For a given $y_u$, the optimization over $w$ and $b$ is a standard SVM training. Let us define

$$\mathcal{J}(y_u) = \min_{w, b} \mathcal{I}(w, b, y_u). \quad (3)$$

The goal is now to minimize $\mathcal{J}$ over a set of binary variables (and each evaluation of $\mathcal{J}$ is a standard SVM training). This was the approach followed in [9] and the one that we take in this paper. The constraint (2) is implemented by setting $\mathcal{J}(y_u) = +\infty$ for all vectors $y_u$ not satisfying it.

3 Branch and bound

3.1 Branch and bound basics

Suppose we want to minimize a function $f$ over a space $X$, where $X$ is usually discrete. A branch and bound algorithm has two main ingredients:

Branching: the region $X$ is recursively split into smaller subregions. This yields a tree structure where each node corresponds to a subregion.

Bounding: consider two disjoint subregions (i.e., nodes) $A$ and $B \subset X$. Suppose that an upper bound (say $a$) on the best value of $f$ over $A$ is known, a lower bound (say $b$) on the best value of $f$ over $B$ is known, and $a < b$.
Then we know there is an element in the subset $A$ that is better than all elements of $B$. So, when searching for the global minimizer, we can safely discard the elements of $B$ from the search: the subtree corresponding to $B$ is pruned.

3.2 Branch and bound for S3VM

The aim is to minimize (3) over all $2^u$ possible choices for the vector $y_u$,1 which constitute the set $X$ introduced above. The binary search tree has the following structure. Each node corresponds to a partial labeling of the data set, and its two children correspond to the two possible labelings of some unlabeled point. One can thus associate with any node a labeled set $L$ containing both the original labeled examples and a subset $S$ of unlabeled examples $\{(x_j, y_j)\}_{j \in S \subseteq [l+1 \ldots n]}$ to which labels $y_j$ have been assigned. One can also associate an unlabeled set $U = [l+1 \ldots n] \setminus S$ corresponding to the subset of unlabeled points which have not yet been assigned a label. The size of the subtree rooted at this node is thus $2^{|U|}$. The root of the tree has only the original set of labeled examples associated with it, i.e., $S$ is empty. The leaves of the tree correspond to a complete labeling of the dataset, i.e., $U$ is empty. All other nodes correspond to partial labelings. As for any branch and bound algorithm, we have to make the following choices:

Branching: For a given node in the tree (i.e., a partial labeling of the unlabeled set), what should its two children be (i.e., which unlabeled point should be labeled next)?

Bounding: Which upper and lower bounds should be used?

Exploration: In which order will the search tree be examined? In other words, which subtree should be explored next?

Note that the tree is not built explicitly but on the fly as we explore it.

1There are actually only $\binom{u}{ur}$ effective choices because of the constraint (2).
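A generic depth-first branch and bound loop of the kind described in Section 3.1, with pluggable bounding and branching, can be sketched as follows. This is a minimal illustration on a toy separable objective over binary labelings, not the paper's S3VM implementation (where evaluating a node requires an SVM training).

```python
# Depth-first branch and bound over binary label vectors (toy objective).
# costs[i] = (cost of assigning label 0, cost of assigning label 1);
# lower_bound sums the best-case cost of still-unlabeled positions.
costs = [(1.0, 3.0), (4.0, 2.0), (5.0, 1.0)]

def lower_bound(partial):
    fixed = sum(costs[i][y] for i, y in enumerate(partial))
    free = sum(min(c0, c1) for c0, c1 in costs[len(partial):])
    return fixed + free

def branch_and_bound():
    best_val, best_sol = float("inf"), None
    stack = [[]]                        # each entry is a partial labeling
    while stack:
        partial = stack.pop()
        if lower_bound(partial) >= best_val:
            continue                    # prune: cannot beat the incumbent
        if len(partial) == len(costs):  # leaf: complete labeling
            best_val, best_sol = lower_bound(partial), partial
        else:
            for y in (0, 1):            # branch on the next unlabeled position
                stack.append(partial + [y])
    return best_val, best_sol

best_val, best_sol = branch_and_bound()
```

In the paper's setting, the leaf evaluation and the lower bound would both be SVM objective values, and the branching order would be chosen by the confidence criterion of Section 5 rather than by position.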
Concerning the upper bound, we decided on the following simple strategy: for a leaf node, the upper bound is simply the value of the function; for a non-leaf node, there is no upper bound. In other words, the upper bound is the best objective function value found so far. Coming back to the notations of section 3.1, the set A is the leaf corresponding to the best solution found so far and the set B is the subtree that we are considering to explore. Because of this choice for the upper bound, a natural way to explore the tree is depth-first search. Indeed, it is important to reach the leaves as often as possible in order to have a tight upper bound and thus perform aggressive pruning. The choice of the lower bound and the branching strategy are presented next.

4 Lower bound

We consider a simple lower bound based on the following observation. The minimum of the objective function (1) is smaller when C* = 0 than when C* > 0. But C* = 0 corresponds to a standard SVM, ignoring the unlabeled data. We can therefore compute a lower bound at a given node by optimizing a standard SVM on the labeled set associated with this node.

We now present a more general framework for computing lower bounds. It is based on the dual objective function of SVMs. Let D(α, y_U) be the dual objective function, where y_U corresponds to the labels of the unlabeled points which have not been assigned a label yet,

$$D(\alpha, y_U) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j \left( K(x_i, x_j) + \frac{\delta_{ij}}{2C} \right). \qquad (4)$$

The dual feasibility conditions are

$$\alpha_i \geq 0 \quad \text{and} \quad \sum \alpha_i y_i = 0. \qquad (5)$$

Now suppose that we have a strategy that, given y_U, finds a vector α(y_U) satisfying (5). Since the dual is maximized,

$$D(\alpha(y_U), y_U) \leq \max_{\alpha} D(\alpha, y_U) = J(y_U),$$

where J has been defined in (3). Let Q(y_U) := D(α(y_U), y_U) and let lb be a lower bound on (or the value of) min Q(y_U), where the minimum is taken over all y_U satisfying the balancing constraint (2). Then lb is also a lower bound on the value of the objective function corresponding to that node.
The goal is thus to find a choice of α(y_U) such that a lower bound on Q can be computed efficiently. The choice corresponding to the lower bound presented above is the following: train an SVM on the labeled points, obtain the vector α, and complete it with zeros for the unlabeled points. Then Q(y_U) is the same for all possible labelings of the unlabeled points, and the lower bound is the SVM objective function on the labeled points.

Here is a sketch of another possibility for α(y_U) that one can explore: instead of completing the vector α with zeros, we complete it with a constant γ which would typically be of the same order of magnitude as the α_i. Then

$$Q(y_U) = \sum \alpha_i - \frac{1}{2} y^\top H y, \quad \text{where } H_{ij} = \alpha_i \alpha_j K_{ij}.$$

To lower bound Q, one can use results from the quadratic zero-one programming literature [10] or solve a constrained eigenvalue problem [8]. Finally, note that unless $\sum_U y_i = 0$, the constraint $\sum \alpha_i y_i = 0$ will not be satisfied. One remedy is to train the supervised SVM with the constraint $\sum \alpha_i y_i = -\gamma \sum_U y_i = \gamma(n - 2ru + \sum_L y_i)$ (because of (2)). In the primal, this amounts to penalizing the bias term b.

5 Branching

At a given node, some unlabeled points have already been assigned a label. Which unlabeled point should be labeled next? Since our strategy is to reach a good solution as soon as possible (see the last paragraph of section 3.2), it seems natural to assign the label that we are most confident about. A simple possibility would be to branch on the unlabeled point which is nearest to a labeled point under a reliable distance metric. But we now present a more principled approach based on an analysis of the objective value. We say that we are "confident" about a particular label of an unlabeled point when assigning the opposite label results in a big increase of the objective value: that partial solution would then be unlikely to lead to the optimal one. Let us formalize this strategy.
Remember from section 3.2 that a node is associated with a set L of currently labeled examples and a set U of unlabeled examples. Let s(L) be the SVM objective function trained on the labeled set,

$$s(L) = \min_{w,b}\, \frac{1}{2}\|w\|^2 + C \sum_{(x_i, y_i) \in L} \max\big(0,\, 1 - y_i(w \cdot x_i + b)\big)^2. \qquad (6)$$

As discussed in the previous section, the lower bound is s(L). Now our branching strategy consists in selecting the following point in U,

$$\arg\max_{x \in U,\ y \in \pm 1}\, s(L \cup \{(x, y)\}). \qquad (7)$$

In other words, we want to find the unlabeled point x* and the label y* which would make the objective function increase as much as possible. We then branch on x*, but start exploring the branch with the most likely label −y*. This strategy has an intuitive link with the "label propagation" idea [17]: an unlabeled point which is near a labeled point is likely to share its label; otherwise, the objective function would be large.

A main disadvantage of this approach is that solving (7) requires a lot of SVM trainings. It is however possible to compute s(L ∪ {(x, y)}) approximately. The idea is similar to the fast approximation of the leave-one-out solution [5]. Here the situation is "add-one-in": if an SVM has been trained on the set L, it is possible to efficiently compute the solution when one point is added to the training set. This is under the assumption that the set of support vectors does not change when adding this point. In practice, the set is likely to change and the solution will only be approximate.

Proposition 1 Consider training an SVM on a labeled set L with quadratic penalization of the errors (cf. (6) or (4)). Let f be the learned function and sv be the set of support vectors. Then, if sv does not change while adding a point (x, y) to the training set,

$$s(L \cup \{(x, y)\}) = s(L) + \frac{\max(0,\, 1 - y f(x))^2}{2 S_x^2 + 1/C}, \qquad (8)$$

where

$$S_x^2 = K(x, x) - v^\top K_{sv}^{-1} v, \qquad K_{sv} = \begin{pmatrix} \left( K(x_i, x_j) + \frac{\delta_{ij}}{2C} \right)_{i,j \in sv} & \mathbf{1} \\ \mathbf{1}^\top & 0 \end{pmatrix}, \qquad v^\top = \big( (\tilde{K}(x_i, x))_{i \in sv},\ 1 \big).$$

The proof is omitted for lack of space.
It is based on the fact that $s(L) = \frac{1}{2} y_{sv}^\top K_{sv}^{-1} y_{sv}$ and relies on the block matrix inverse formula.

6 Algorithm

The algorithm is implemented recursively (see Algorithm 1). At the beginning, the upper bound can either be set to +∞ or to a solution found by another algorithm. Note that the SVM trainings are incremental: whenever we go down the tree, one point is added to the labeled set. For this reason, the retraining can be done efficiently (also see [2]) since effectively, we just need to update the inverse of a matrix.

Algorithm 1 Branch and bound for S3VM (BB).
Function: (Y*, v) ← S3VM(Y, ub)   % Recursive implementation
Input: Y: a partly labeled vector (0 for unlabeled)
       ub: an upper bound on the optimal objective value
Output: Y*: optimal fully labeled vector
        v: corresponding objective function value
if ∑ max(0, Y_i) > ur OR ∑ max(0, −Y_i) > n − ur then
    return   % Constraint (2) cannot be satisfied → do not explore this subtree
end if
v ← SVM(Y)   % Compute the SVM objective function on the labeled points
if v > ub then
    return   % The lower bound is higher than the upper bound → do not explore this subtree
end if
if Y is fully labeled then
    Y* ← Y; return   % We are at a leaf
end if
Find index i and label y as in (7)   % Find next unlabeled point to label
Y_i ← −y   % Start first with the most likely label
(Y*, v) ← S3VM(Y, ub)   % Find (recursively) the best solution
Y_i ← −Y_i   % Switch the label
(Y2*, v2) ← S3VM(Y, min(ub, v))   % Explore other branch with updated upper bound
if v2 < v then
    Y* ← Y2* and v ← v2   % Keep the best solution
end if

7 Experiments

We consider here two datasets where other S3VM implementations are unable to achieve satisfying test error rates. This naturally raises the following questions: Is this weak performance due to the unsuitability of the S3VM objective function for these problems, or do these methods get stuck at highly sub-optimal local minima?

7.1 Two moons

The "two moons" dataset is now a standard benchmark for semi-supervised learning algorithms.
Most graph-based methods such as [17] easily solve this problem, but so far all S3VM algorithms find it difficult to construct the right boundary (an exception is [12], which uses an L1 loss). We drew 100 random realizations of this dataset, fixed the bandwidth of an RBF kernel to σ = 0.5 and set C = 10. Each moon contained 50 unlabeled points. We compared ∇S3VM [6], cS3VM [3], CCCP [7], SVMlight [9] and DA [12]. For the first 3 methods, there is no direct way to enforce the constraint (2). However, these methods have a constraint that the mean output on the unlabeled points should be equal to some constant. This constant is normally fixed to the mean of the labels, but for the sake of consistency we did a dichotomy search on this constant in order to have (2) satisfied. Results are presented in table 1. Note that the test errors of the other S3VM implementations are likely to be improved by hyperparameter tuning, but they would still stay very high. For comparison, we have also included the results of a state-of-the-art graph-based method, LapSVM [13], whose hyperparameters were optimized for the test error and whose threshold was adjusted to satisfy the constraint (2). Matlab source code and a demo of our algorithm on the "two moons" dataset are accessible as supplementary material with this paper.

7.2 COIL

Extensive benchmark results reported in [4, benchmark chapter] show that on problems where classes are expected to reside on low-dimensional non-linear manifolds, e.g., handwritten digits, graph-based algorithms significantly outperform S3VM implementations.

Table 1: Results on the two moons dataset (averaged over 100 random realizations)

           Test error (%)   Objective function
∇S3VM          59.3              13.64
cS3VM          45.7              13.25
CCCP           64                39.55
SVMlight       66.2              20.94
DA             34.1              46.85
BB              0                 7.81
LapSVM          3.7              N/A

We consider here such a dataset by selecting three confusable classes from the COIL20 dataset [6] (see figure 2).
There are 72 images per class, corresponding to rotations of 5 degrees (thus yielding a one-dimensional manifold). We randomly selected 2 images per class to be in the labeled set, the rest being unlabeled. Results are reported in table 2. The hyperparameters were chosen to be σ = 3000 and C = 100.

Figure 2: The 3 cars from the COIL dataset, subsampled to 32×32.

Table 2: Results on the Coil dataset (averaged over 10 random realizations)

           Test error (%)   Objective function
∇S3VM          60.6             267.4
cS3VM          60.6             235
CCCP           47.5             588.3
SVMlight       55.3             341.6
DA             48.2             611
BB              0               110.7
LapSVM          7.5             N/A

From tables 1 and 2, it appears clearly that (1) the S3VM objective function leads to excellent test errors; (2) other S3VM implementations fail completely in finding a good minimum of the objective function²; and (3) the global S3VM solution can actually outperform graph-based alternatives even when other S3VM implementations are not found to be competitive. Concerning the running time, it is of the order of a minute for both datasets. We do not expect this algorithm to be able to handle datasets much larger than a couple of hundred points.

8 Discussion and Conclusion

We implemented and evaluated one strategy amongst many in the class of branch and bound methods to find the globally optimal solution of S3VMs. The work of [1] is the most closely related to our method. However, that paper presents an algorithm for linear S3VMs and relies on generic mixed integer programming, which does not make use of the problem structure as our method can. This basic implementation can perhaps be made more efficient by choosing better bounding and branching schemes. Also, by fixing the upper bound as the currently best objective value, we restricted our implementation to follow depth-first search.

²The reported test errors are somewhat irrelevant and should not be used for ranking the different algorithms. They should just be interpreted as "failure".
It is conceivable that breadth-first search is equally or more effective in conjunction with alternative upper bounding schemes. Pruning can be done more aggressively to speed up termination at the expense of obtaining a solution that is suboptimal within some tolerance (i.e. prune B if a < b − ε). Finally, we note that a large family of well-tested branch and bound procedures from the zero-one quadratic programming literature can be immediately applied to the S3VM problem for the special case of squared loss. An interesting open question is whether one can provide a guarantee of polynomial time convergence under some assumptions on the data and the kernel.

Concerning the running time of our current implementation, we have observed that it is most efficient whenever the global minimum is significantly smaller than most local minima: in that case, the tree can be pruned efficiently. This happens when the clusters are well separated and C and σ are not too small. For these reasons, we believe that this implementation does not scale to large datasets, but should instead be considered as a proof of concept: the S3VM objective function is very well suited for semi-supervised learning and more effort should be devoted to efficiently finding good local minima.

References

[1] K. Bennett and A. Demiriz. Semi-supervised support vector machines. In Advances in Neural Information Processing Systems 12, 1998.
[2] G. Cauwenberghs and T. Poggio. Incremental and decremental support vector machine learning. In Advances in Neural Information Processing Systems, pages 409–415, 2000.
[3] O. Chapelle, M. Chi, and A. Zien. A continuation method for semi-supervised SVMs. In International Conference on Machine Learning, 2006.
[4] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, 2006. In press. www.kyb.tuebingen.mpg.de/ssl-book/.
[5] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee.
Choosing multiple parameters for support vector machines. Machine Learning, 46:131–159, 2002.
[6] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In Tenth International Workshop on Artificial Intelligence and Statistics, 2005.
[7] R. Collobert, F. Sinz, J. Weston, and L. Bottou. Large scale transductive SVMs. Journal of Machine Learning Research, 7:1687–1712, 2006.
[8] W. Gander, G. H. Golub, and U. von Matt. A constrained eigenvalue problem. Linear Algebra and its Applications, 114/115:815–839, 1989.
[9] T. Joachims. Transductive inference for text classification using support vector machines. In International Conference on Machine Learning, 1999.
[10] P. M. Pardalos and G. P. Rodgers. Computational aspects of a branch and bound algorithm for quadratic zero-one programming. Computing, 45:131–144, 1990.
[11] M. Seeger. A taxonomy of semi-supervised learning methods. In O. Chapelle, B. Schölkopf, and A. Zien, editors, Semi-Supervised Learning. MIT Press, 2006.
[12] V. Sindhwani, S. Keerthi, and O. Chapelle. Deterministic annealing for semi-supervised kernel machines. In International Conference on Machine Learning, 2006.
[13] V. Sindhwani, P. Niyogi, and M. Belkin. Beyond the point cloud: From transductive to semi-supervised learning. In International Conference on Machine Learning, 2005.
[14] V. Vapnik and A. Sterin. On structural risk minimization or overall risk in a problem of pattern recognition. Automation and Remote Control, 10(3):1495–1503, 1977.
[15] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, Inc., New York, 1998.
[16] W. Wapnik and A. Tscherwonenkis. Theorie der Zeichenerkennung. Akademie Verlag, Berlin, 1979.
[17] X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report 02-107, CMU-CALD, 2002.
Differential Entropic Clustering of Multivariate Gaussians Jason V. Davis Inderjit Dhillon Dept. of Computer Science University of Texas at Austin Austin, TX 78712 {jdavis,inderjit}@cs.utexas.edu Abstract Gaussian data is pervasive and many learning algorithms (e.g., k-means) model their inputs as a single sample drawn from a multivariate Gaussian. However, in many real-life settings, each input object is best described by multiple samples drawn from a multivariate Gaussian. Such data can arise, for example, in a movie review database where each movie is rated by several users, or in time-series domains such as sensor networks. Here, each input can be naturally described by both a mean vector and covariance matrix which parameterize the Gaussian distribution. In this paper, we consider the problem of clustering such input objects, each represented as a multivariate Gaussian. We formulate the problem using an information theoretic approach and draw several interesting theoretical connections to Bregman divergences and also Bregman matrix divergences. We evaluate our method across several domains, including synthetic data, sensor network data, and a statistical debugging application. 1 Introduction Gaussian data is pervasive in all walks of life and many learning algorithms—e.g. k-means, principal components analysis, linear discriminant analysis, etc—model each input object as a single sample drawn from a multivariate Gaussian. For example, the k-means algorithm assumes that each input is a single sample drawn from one of k (unknown) isotropic Gaussians. The goal of k-means can be viewed as the discovery of the mean of each Gaussian and recovery of the generating distribution of each input object. However, in many real-life settings, each input object is naturally represented by multiple samples drawn from an underlying distribution. 
For example, a student’s scores in reading, writing, and arithmetic can be measured at each of four quarters throughout the school year. Alternately, consider a website where movies are rated on the basis of originality, plot, and acting. Here, several different users may rate the same movie. Multiple samples are also ubiquitous in time-series data such as sensor networks, where each sensor device continually monitors its environmental conditions (e.g. humidity, temperature, or light). Clustering is an important data analysis task used in many of these applications. For example, clustering sensor network devices has been used for optimizing routing of the network and also for discovering trends between sensor nodes. If the k-means algorithm is employed, then only the means of the distributions will be clustered, ignoring all second order covariance information. Clearly, a better solution is needed. In this paper, we consider the problem of clustering input objects, each of which can be represented by a multivariate Gaussian distribution. The “distance” between two Gaussians can be quantified in an information theoretic manner, in particular by their differential relative entropy. Interestingly, the differential relative entropy between two multivariate Gaussians can be expressed as the convex combination of two Bregman divergences—a Mahalanobis distance between mean vectors and a Burg matrix divergence between the covariance matrices. We develop an EM style clustering algorithm and show that the optimal cluster parameters can be cheaply determined via a simple, closed-form solution. Our algorithm is a Bregman-like clustering method that clusters both means and covariances of the distributions in a unified framework. We evaluate our method across several domains. First, we present results from synthetic data experiments, and show that incorporating second order information can dramatically increase clustering accuracy. 
Next, we apply our algorithm to a real-world sensor network dataset comprised of 52 sensor devices that measure temperature, humidity, light, and voltage. Finally, we use our algorithm as a statistical debugging tool by clustering the behavior of functions in a program across a set of known software bugs.

2 Preliminaries

We first present some essential background material. The multivariate Gaussian distribution is the multivariate generalization of the standard univariate case. The probability density function (pdf) of a d-dimensional multivariate Gaussian is parameterized by mean vector µ and positive definite covariance matrix Σ:

$$p(x|\mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu) \right),$$

where |Σ| is the determinant of Σ. The Bregman divergence [2] with respect to φ is defined as:

$$D_\varphi(x, y) = \varphi(x) - \varphi(y) - (x - y)^T \nabla\varphi(y),$$

where φ is a real-valued, strictly convex function defined over a convex set Q = dom(φ) ⊆ R^d such that φ is differentiable on the relative interior of Q. For example, if φ(x) = x^T x, then the resulting Bregman divergence is the standard squared Euclidean distance. Similarly, if φ(x) = x^T A^T A x for some arbitrary non-singular matrix A, then the resulting divergence is the Mahalanobis distance M_{S^{-1}}(x, y) = (x − y)^T S^{-1} (x − y), parameterized by the covariance matrix S, with S^{-1} = A^T A. Alternately, if φ(x) = ∑_i (x_i log x_i − x_i), then the resulting divergence is the (unnormalized) relative entropy. Bregman divergences generalize many properties of squared loss and relative entropy.

Bregman divergences can be naturally extended to matrices as follows:

$$D_\varphi(X, Y) = \varphi(X) - \varphi(Y) - \mathrm{tr}\big( (\nabla\varphi(Y))^T (X - Y) \big),$$

where X and Y are matrices, φ is a real-valued, strictly convex function defined over matrices, and tr(A) denotes the trace of A. Consider the function φ(X) = ‖X‖²_F. Then the corresponding Bregman matrix divergence is the squared Frobenius norm, ‖X − Y‖²_F.
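As a quick numerical illustration of the definition (our own sketch; the generator names are ours), the generic formula D_φ(x, y) = φ(x) − φ(y) − (x − y)^T ∇φ(y) reproduces the squared Euclidean distance when φ(x) = x^T x:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Generic Bregman divergence D_phi(x, y) = phi(x) - phi(y) - (x - y)^T grad_phi(y)."""
    return phi(x) - phi(y) - float((x - y) @ grad_phi(y))

# phi(x) = x^T x has gradient 2x and generates the squared Euclidean distance.
sq = lambda x: float(x @ x)
grad_sq = lambda x: 2.0 * x
```

Swapping in other generators (a quadratic form for the Mahalanobis distance, the negative entropy for relative entropy) changes only `phi` and `grad_phi`.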
The Burg matrix divergence is generated from a function of the eigenvalues λ_1, ..., λ_d of the positive definite matrix X: φ(X) = −∑_i log λ_i = −log|X|, the Burg entropy of the eigenvalues. The resulting Burg matrix divergence is:

$$B(X, Y) = \mathrm{tr}(XY^{-1}) - \log|XY^{-1}| - d. \qquad (1)$$

As we shall see later, the Burg matrix divergence arises naturally in our application. Let λ_1, ..., λ_d be the eigenvalues of X with corresponding eigenvectors v_1, ..., v_d, and let γ_1, ..., γ_d be the eigenvalues of Y with eigenvectors w_1, ..., w_d. The Burg matrix divergence can also be written as

$$B(X, Y) = \sum_i \sum_j \frac{\lambda_i}{\gamma_j} (v_i^T w_j)^2 - \sum_i \log \frac{\lambda_i}{\gamma_i} - d.$$

From the first term above, we see that the Burg matrix divergence is a function of the eigenvectors as well as of the eigenvalues of X and Y.

The differential entropy of a continuous random variable x with probability density function f is defined as

$$h(f) = -\int f(x) \log f(x)\, dx.$$

It can be shown [3] that an n-bit quantization of a continuous random variable with pdf f has Shannon entropy approximately equal to h(f) + n. The continuous analog of the discrete relative entropy is the differential relative entropy. Given a random variable x with pdfs f and g, the differential relative entropy is defined as

$$D(f\|g) = \int f(x) \log \frac{f(x)}{g(x)}\, dx.$$

3 Clustering Multivariate Gaussians via Differential Relative Entropy

Given a set of n multivariate Gaussians parameterized by mean vectors m_1, ..., m_n and covariances S_1, ..., S_n, we seek a disjoint and exhaustive partitioning of these Gaussians into k different clusters, π_1, ..., π_k. Each cluster j can be represented by a multivariate Gaussian parameterized by mean µ_j and covariance Σ_j. Using differential relative entropy as the distance measure between Gaussians, the problem of clustering may be posed as the minimization (over all clusterings) of

$$\sum_{j=1}^{k} \sum_{\{i : \pi_i = j\}} D\big(p(x|m_i, S_i)\, \big\|\, p(x|\mu_j, \Sigma_j)\big).$$
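Equation (1) is straightforward to compute. The sketch below is our own code, not the paper's; it implements the trace form and can be used to check, for instance, that B(X, X) = 0 for any positive definite X:

```python
import numpy as np

def burg_divergence(X, Y):
    """Burg matrix divergence B(X, Y) = tr(X Y^{-1}) - log|X Y^{-1}| - d
    for positive definite X and Y (eq. 1)."""
    d = X.shape[0]
    M = X @ np.linalg.inv(Y)
    _, logdet = np.linalg.slogdet(M)  # log|X Y^{-1}| via a numerically stable log-det
    return float(np.trace(M)) - logdet - d
```

For diagonal inputs this reduces to a sum over eigenvalue ratios, matching the eigendecomposition form given above.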
(2)

3.1 Differential Relative Entropy and Multivariate Gaussians

We first show that the differential relative entropy between two multivariate Gaussians can be expressed as a convex combination of a Mahalanobis distance between means and the Burg matrix divergence between covariance matrices. Consider two multivariate Gaussians, parameterized by mean vectors m and µ, and covariances S and Σ, respectively. We first note that the differential relative entropy can be expressed as

$$D(f\|g) = \int f \log f - \int f \log g = -h(f) - \int f \log g.$$

The first term is just the negative differential entropy of p(x|m, S), which can be shown [3] to be:

$$h(p(x|m, S)) = \frac{d}{2} + \frac{1}{2} \log (2\pi)^d |S|. \qquad (3)$$

We now consider the second term:

$$\begin{aligned}
\int p(x|m,S) \log p(x|\mu,\Sigma)\, dx
&= \int p(x|m,S) \left[ -\tfrac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu) - \log (2\pi)^{d/2}|\Sigma|^{1/2} \right] dx \\
&= -\tfrac{1}{2} \int p(x|m,S)\, \mathrm{tr}\big(\Sigma^{-1}(x-\mu)(x-\mu)^T\big)\, dx - \int p(x|m,S) \log (2\pi)^{d/2}|\Sigma|^{1/2}\, dx \\
&= -\tfrac{1}{2} \mathrm{tr}\big(\Sigma^{-1} E[(x-\mu)(x-\mu)^T]\big) - \tfrac{1}{2} \log (2\pi)^d |\Sigma| \\
&= -\tfrac{1}{2} \mathrm{tr}\big(\Sigma^{-1} E[((x-m)+(m-\mu))((x-m)+(m-\mu))^T]\big) - \tfrac{1}{2} \log (2\pi)^d |\Sigma| \\
&= -\tfrac{1}{2} \mathrm{tr}\big(\Sigma^{-1} S + \Sigma^{-1}(m-\mu)(m-\mu)^T\big) - \tfrac{1}{2} \log (2\pi)^d |\Sigma| \\
&= -\tfrac{1}{2} \mathrm{tr}(\Sigma^{-1} S) - \tfrac{1}{2}(m-\mu)^T \Sigma^{-1} (m-\mu) - \tfrac{1}{2} \log (2\pi)^d |\Sigma|.
\end{aligned}$$

The expectation above is taken over the distribution p(x|m, S). The second to last line follows from the definition of S = E[(x − m)(x − m)^T] and from the fact that E[(x − m)(m − µ)^T] = E[x − m](m − µ)^T = 0. Thus, we have

$$\begin{aligned}
D\big(p(x|m,S)\,\big\|\,p(x|\mu,\Sigma)\big)
&= -\frac{d}{2} - \frac{1}{2} \log (2\pi)^d |S| + \frac{1}{2} \mathrm{tr}(\Sigma^{-1}S) + \frac{1}{2} \log (2\pi)^d |\Sigma| + \frac{1}{2}(m-\mu)^T \Sigma^{-1} (m-\mu) \qquad (4) \\
&= \frac{1}{2} \left( \mathrm{tr}(S\Sigma^{-1}) - \log |S\Sigma^{-1}| - d \right) + \frac{1}{2}(m-\mu)^T \Sigma^{-1} (m-\mu) \\
&= \frac{1}{2} B(S, \Sigma) + \frac{1}{2} M_{\Sigma^{-1}}(m, \mu), \qquad (5)
\end{aligned}$$

where B(S, Σ) is the Burg matrix divergence and M_{Σ^{-1}}(m, µ) is the Mahalanobis distance, parameterized by the covariance matrix Σ. We now consider the problem of finding the optimal representative Gaussian for a set of c Gaussians with means m_1, ..., m_c and covariances S_1, ..., S_c.
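The compact form (5) can be checked numerically against the expanded form (4). The following sketch (our own code) computes both and, as the derivation promises, they agree:

```python
import numpy as np

def kl_compact(m, S, mu, Sigma):
    """Eq. (5): 1/2 B(S, Sigma) + 1/2 (m - mu)^T Sigma^{-1} (m - mu)."""
    d = len(m)
    Sinv = np.linalg.inv(Sigma)
    burg = np.trace(S @ Sinv) - np.linalg.slogdet(S @ Sinv)[1] - d
    return 0.5 * burg + 0.5 * float((m - mu) @ Sinv @ (m - mu))

def kl_expanded(m, S, mu, Sigma):
    """Eq. (4): -d/2 - 1/2 log (2pi)^d |S| + 1/2 tr(Sigma^{-1} S)
    + 1/2 log (2pi)^d |Sigma| + 1/2 (m - mu)^T Sigma^{-1} (m - mu)."""
    d = len(m)
    Sinv = np.linalg.inv(Sigma)
    return (-d / 2
            - 0.5 * (d * np.log(2 * np.pi) + np.linalg.slogdet(S)[1])
            + 0.5 * np.trace(Sinv @ S)
            + 0.5 * (d * np.log(2 * np.pi) + np.linalg.slogdet(Sigma)[1])
            + 0.5 * float((m - mu) @ Sinv @ (m - mu)))
```

Note that the (2π)^d terms in (4) cancel, which is why the compact form never needs them.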
For non-negative weights α_1, ..., α_c such that ∑_i α_i = 1, the optimal representative minimizes the cumulative differential relative entropy:

$$\begin{aligned}
p(x|\mu^*, \Sigma^*) &= \arg\min_{p(x|\mu,\Sigma)} \sum_i \alpha_i\, D\big(p(x|m_i, S_i)\,\big\|\,p(x|\mu, \Sigma)\big) \qquad (6) \\
&= \arg\min_{p(x|\mu,\Sigma)} \sum_i \alpha_i \left[ \frac{1}{2} B(S_i, \Sigma) + \frac{1}{2} M_{\Sigma^{-1}}(m_i, \mu) \right]. \qquad (7)
\end{aligned}$$

The second term can be viewed as minimizing the Bregman information with respect to some fixed (albeit unknown) Bregman divergence (i.e. the Mahalanobis distance parameterized by some covariance matrix Σ). Consequently, it has a unique minimizer [1] of the form

$$\mu^* = \sum_i \alpha_i m_i. \qquad (8)$$

Next, we note that equation (7) is strictly convex in Σ^{-1}. Thus, we can derive the optimal Σ* by setting the gradient of (7) with respect to Σ^{-1} to 0:

$$\frac{\partial}{\partial \Sigma^{-1}} \sum_{i=1}^{n} \alpha_i\, D\big(p(x|m_i, S_i)\,\big\|\,p(x|\mu, \Sigma)\big) = \sum_{i=1}^{n} \alpha_i \left[ S_i - \Sigma + (m_i - \mu^*)(m_i - \mu^*)^T \right].$$

Setting this to zero yields

$$\Sigma^* = \sum_i \alpha_i \left[ S_i + (m_i - \mu^*)(m_i - \mu^*)^T \right]. \qquad (9)$$

Figure 1 illustrates optimal representatives of two 2-dimensional Gaussians with means marked by points A and B, and covariances outlined with solid lines. The optimal Gaussian representatives are denoted with dotted covariances; the representative on the left uses weights (α_A = 2/3, α_B = 1/3), while the representative on the right uses weights (α_A = 1/3, α_B = 2/3). As we can see from equation (8), the optimal representative mean is the weighted average of the means of the constituent Gaussians. Interestingly, the optimal covariance turns out to be the average of the constituent covariances plus rank-one updates. These rank-one changes account for the deviations from the individual means to the representative mean.

3.2 Algorithm

Algorithm 1 presents our clustering algorithm for the case where each Gaussian has equal weight α_i = 1/n. The method works in an EM-style framework. Initially, cluster assignments are chosen (these can be assigned randomly). The algorithm then proceeds iteratively, until convergence.
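The closed-form updates (8) and (9) translate directly into code. A small sketch (our own names, not the paper's) that also makes the rank-one correction explicit:

```python
import numpy as np

def representative(ms, Ss, alphas):
    """Optimal representative Gaussian of {N(m_i, S_i)} with weights alpha_i:
    mu*    = sum_i alpha_i m_i                                    (eq. 8)
    Sigma* = sum_i alpha_i (S_i + (m_i - mu*)(m_i - mu*)^T)       (eq. 9)."""
    mu = sum(a * m for a, m in zip(alphas, ms))
    Sigma = sum(a * (S + np.outer(m - mu, m - mu))
                for a, m, S in zip(alphas, ms, Ss))
    return mu, Sigma
```

With equal weights, the representative mean is the plain average of the means, while the representative covariance inflates the average covariance by how far each constituent mean sits from µ*.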
First, the mean and covariance parameters for the cluster representative distributions are optimally computed given the cluster assignments. These parameters are updated as shown in (8) and (9). Next, the cluster assignments π are updated for each input Gaussian. This is done by assigning the ith Gaussian to the cluster j whose representative Gaussian is closest in differential relative entropy.

Figure 1: Optimal Gaussian representatives (shown with dotted lines) of two Gaussians centered at A and B (for two different sets of weights). While the optimal mean of each representative is the average of the individual means, the optimal covariance is the average of the individual covariances plus rank-one corrections.

Since both of these steps are locally optimal, convergence of the algorithm to a local optimum can be shown. Note that the problem is NP-hard, so convergence to a global optimum cannot be guaranteed.

We next consider the running time of Algorithm 1 when the input Gaussians are d-dimensional. Lines 6 and 9 compute the optimal means and covariances for each cluster, which requires O(nd) and O(nd²) total work, respectively. Line 12 computes the differential relative entropy between each input Gaussian and each cluster representative Gaussian. As only the arg min over all Σ_j is needed, we can reduce the Burg matrix divergence computation (equation (1)) to tr(S_i Σ_j^{-1}) − log|Σ_j^{-1}|. Once the inverse of each cluster covariance is computed (for a cost of O(kd³)), the first term can be computed in O(d²) time. The second term can similarly be computed once per cluster for a total cost of O(kd³). Computing the Mahalanobis distance is an O(d²) operation. Thus, the total cost of line 12 is O(kd³ + nkd²) and the total running time of the algorithm, given τ iterations, is O(τkd²(n + d)).
Algorithm 1 Differential Entropic Clustering of Multivariate Gaussians
1: {m_1, ..., m_n} ← means of input Gaussians
2: {S_1, ..., S_n} ← covariance matrices of input Gaussians
3: π ← initial cluster assignments
4: while not converged do
5:   for j = 1 to k do {update cluster means}
6:     µ_j ← (1/|{i : π_i = j}|) ∑_{i: π_i = j} m_i
7:   end for
8:   for j = 1 to k do {update cluster covariances}
9:     Σ_j ← (1/|{i : π_i = j}|) ∑_{i: π_i = j} [S_i + (m_i − µ_j)(m_i − µ_j)^T]
10:  end for
11:  for i = 1 to n do {assign each Gaussian to the closest cluster representative Gaussian}
12:    π_i ← argmin_{1≤j≤k} B(S_i, Σ_j) + M_{Σ_j^{-1}}(m_i, µ_j) {B is the Burg matrix divergence and M_{Σ_j^{-1}} is the Mahalanobis distance parameterized by Σ_j}
13:  end for
14: end while

4 Experiments

We now present experimental results for our algorithm across three different domains: a synthetic dataset, sensor network data, and a statistical debugging application.

4.1 Synthetic Data

Our synthetic datasets consist of a set of 200 objects, each of which consists of 30 samples drawn from one of k randomly generated d-dimensional multivariate Gaussians. The k Gaussians are generated by choosing a mean vector uniformly at random from the unit simplex and randomly selecting a covariance matrix from the set of matrices with eigenvalues 1, 2, ..., d.

Figure 2: Clustering quality of synthetic data. Traditional k-means clustering uses only first-order information (i.e. the mean), whereas our Gaussian clustering algorithm also incorporates second-order covariance information. Here, we see that our algorithm achieves higher clustering quality for datasets composed of four-dimensional Gaussians with a varied number of clusters (left), as well as for varied dimensionality of the input Gaussians with k = 5 (right).
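Line 12 of Algorithm 1 is the only step that needs the divergence itself. A sketch of that assignment step (our own code; it computes the full Burg + Mahalanobis cost rather than the reduced form used above for efficiency):

```python
import numpy as np

def assign(ms, Ss, mus, Sigmas):
    """Assignment step (line 12): map each input Gaussian (m_i, S_i) to the
    cluster j minimizing B(S_i, Sigma_j) + M_{Sigma_j^{-1}}(m_i, mu_j)."""
    d = len(ms[0])
    pis = []
    for m, S in zip(ms, Ss):
        costs = []
        for mu, Sigma in zip(mus, Sigmas):
            Sinv = np.linalg.inv(Sigma)
            burg = np.trace(S @ Sinv) - np.linalg.slogdet(S @ Sinv)[1] - d
            mahal = float((m - mu) @ Sinv @ (m - mu))
            costs.append(burg + mahal)
        pis.append(int(np.argmin(costs)))
    return pis
```

Alternating this step with the closed-form updates (8) and (9) yields the full EM-style loop.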
In Figure 2, we compare our algorithm to the k-means algorithm, which clusters each object solely on the mean of the samples. Accuracy is quantified in terms of normalized mutual information (NMI) between discovered clusters and the true clusters, a standard technique for determining the quality of clusters. Figure 2 (left) shows the clustering quality as a function of the number of clusters when the dimensionality of the input Gaussians is fixed (d = 4). Figure 2 (right) gives clustering quality for five clusters across a varying number of dimensions. All results represent averaged NMI values across 50 experiments. As can be seen in Figure 2, our multivariate Gaussian clustering algorithm yields significantly higher NMI values than k-means for all experiments. 4.2 Sensor Networks Sensor networks are wireless networks composed of small, low-cost sensors that monitor their surrounding environment. An open question in sensor networks research is how to minimize communication costs between the sensors and the base station: wireless communication requires a relatively large amount of power, a limited resource on current sensor devices (which are usually battery powered). A recently proposed sensor network system, BBQ [4], reduces communication costs by modelling sensor network data at each sensor device using a time-varying multivariate Gaussian and transmitting only model parameters to the base station. We apply our multivariate Gaussian clustering algorithm to cluster sensor devices from the Intel Lab at Berkeley [8]. Clustering has been used in sensor network applications to determine efficient routing schemes, as well as for discovering trends between groups of sensor devices. The Intel sensor network consists of 52 working sensors, each of which monitors ambient temperature, humidity, light levels, and voltage every thirty seconds. Conditioned on time, the sensor readings can be fit quite well by a multivariate Gaussian. 
Figure 3 shows the results of our multivariate Gaussian clustering algorithm applied to this sensor network data. For each device, we compute the sample mean and covariance from sensor readings between noon and 2pm each day, for 36 total days. To account for varying scales of measurement, we normalize all variables to have unit variance. The second cluster (denoted by '2' in Figure 3) has the largest variance among all clusters: many of the sensors in this cluster are located in high-traffic areas, including the large conference room at the top of the lab, and the smaller tables at the bottom of the lab. Since the measurements were taken during lunchtime, we expect higher traffic in these areas. Interestingly, this cluster shows very high co-variation between humidity and voltage. Cluster one is characterized by high temperatures, which is not surprising, as there are several windows on the left side of the lab. This side faces west and has an unobstructed view of the ocean. Finally, cluster three has a moderate level of total variation, with relatively low light levels. The cluster is primarily located in the center and the right of the lab, away from outside windows.

Figure 3: To reduce communication costs in sensor networks, each sensor device may be modelled by a multivariate Gaussian. The above plot shows the results of applying our algorithm to cluster sensors into three groups, denoted by labels '1', '2', and '3'.

4.3 Statistical Debugging

Leveraging program runtime statistics for the purpose of software debugging has received recent research attention [12]. Here we apply our algorithm to cluster functional behavior patterns over software bugs in the LaTeX document preparation program. The data is taken from the Navel system [7], a system that uses machine learning to provide better error messaging. The dataset contains four software bugs, each of which is caused by an unsuccessful LaTeX compilation (e.g.
specifying an incorrect number of columns in an array environment) with ambiguous or unclear error messages provided. LaTeX has notoriously cryptic error messages for document compilation failures; for example, the message "LaTeX Error: There's no line here to end" can be caused by numerous problems in the source document. Each function in the program's source is measured by the frequency with which it is called across each of the four software bugs. We model this distribution as a 4-dimensional multivariate Gaussian, one dimension for each bug. The distributions are estimated from a set of samples; each sample corresponds to a single LaTeX file drawn from a set of grant proposals and submitted computer science research papers. For each file and for each of the four bugs, the LaTeX compiler is executed over a slightly modified version of the file that has been changed to exhibit the bug. During program execution, function counts are measured and recorded. More details can be found in [7].

Clustering these function counts can yield important debugging insight to assist a software engineer in understanding error-dependent program behavior. Figure 4 shows three covariance matrices from a sample clustering of eight clusters. To capture the dependencies between bugs, we normalize each input Gaussian to have zero mean and unit variance. Cluster (a) represents functions that are highly error-independent, i.e. the matrix shows high levels of covariation among all pairs of error classes. Conversely, clusters (b) and (c) show that some functions are highly error-dependent. Cluster (b) shows a high dependency between bugs 1 and 4, while cluster (c) exhibits high covariation between bugs 1 and 3, and between bugs 2 and 4.
Cluster (a):
1.00 0.94 0.94 0.94
0.94 1.00 0.94 0.94
0.94 0.94 1.00 0.94
0.94 0.94 0.94 1.00

Cluster (b):
1.00 0.58 0.58 0.91
0.58 1.00 0.55 0.67
0.58 0.55 1.00 0.68
0.91 0.67 0.68 1.00

Cluster (c):
1.00 0.58 0.95 0.58
0.58 1.00 0.58 0.95
0.95 0.58 1.00 0.58
0.58 0.95 0.58 1.00

Figure 4: Covariance matrices for three clusters discovered by clustering functional behavior of the LaTeX document preparation program. Cluster (a) corresponds to functions which are error-independent, while clusters (b) and (c) represent two groups of functions that exhibit different types of error-dependent behavior.

5 Related Work

In this work, we showed that the differential relative entropy between two multivariate Gaussian distributions can be expressed as a convex combination of the Mahalanobis distance between their mean vectors and the Burg matrix divergence between their covariances. This is in contrast to information-theoretic clustering [5], where each input is taken to be a probability distribution over some finite set. In [5], no parametric form is assumed, and the Kullback-Leibler divergence (i.e. discrete relative entropy) can be computed directly from the distributions. The differential entropy between two multivariate Gaussians was considered in [10] in the context of solving Gaussian mixture models. Although an algebraic expression for this differential entropy was given in [10], no connection to the Burg matrix divergence was made there. Our algorithm is based on the standard expectation-maximization style clustering algorithm [6]. Although the closed-form updates used by our algorithm are similar to those employed by a Bregman clustering algorithm [1], we note that the computation of the optimal covariance matrix (equation (9)) involves the optimal mean vector. In [9], the problem of clustering Gaussians with respect to the symmetric differential relative entropy, D(f||g) + D(g||f), is considered in the context of learning HMM parameters for speech recognition.
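The decomposition stated in Section 5 can be checked numerically: for Gaussians f = N(m0, S0) and g = N(m1, S1), D(f||g) equals half the Burg matrix divergence between the covariances plus half the squared Mahalanobis distance between the means. The equal 1/2 weights are our reading of the "convex combination"; the sketch below is ours, not the paper's code.

```python
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """Closed-form KL divergence D(N(m0, S0) || N(m1, S1))."""
    d = len(m0)
    S1inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1inv @ S0) - d
                  + diff @ S1inv @ diff
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def burg(S0, S1):
    # B(S0, S1) = tr(S0 S1^-1) - log det(S0 S1^-1) - d
    d = S0.shape[0]
    M = S0 @ np.linalg.inv(S1)
    return np.trace(M) - np.log(np.linalg.det(M)) - d

def mahal_sq(m0, m1, S1):
    # Squared Mahalanobis distance parameterized by S1^-1
    diff = m1 - m0
    return diff @ np.linalg.inv(S1) @ diff
```

Expanding the Burg term shows the identity directly: B(S0, S1) = tr(S0 S1^-1) + log(det S1 / det S0) - d, so D = (1/2)[B(S0, S1) + Mahalanobis^2], matching the closed-form KL term by term.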
The resulting algorithm, however, is much more computationally expensive than ours: whereas in our method the optimal mean and covariance parameters can be computed via a simple closed-form solution, in [9] no such solution is presented and an iterative method must instead be employed. The problem of finding the optimal Gaussian with respect to the first argument (note that equation (6) is minimized with respect to the second argument) is considered in [11] for the problem of speaker interpolation. Here, only one source is assumed, and thus clustering is not needed.

6 Conclusions

We have presented a new algorithm for the problem of clustering multivariate Gaussian distributions. Our algorithm is derived in an information-theoretic context, which leads to interesting connections with the differential entropy between multivariate Gaussians, and Bregman divergences. Unlike existing clustering algorithms, our algorithm optimizes both first- and second-order information in the data. We have demonstrated the use of our method on sensor network data and a statistical debugging application.

References
[1] A. Banerjee, S. Merugu, I. Dhillon, and S. Ghosh. Clustering with Bregman divergences. In SIAM International Conference on Data Mining, pages 234–245, 2004.
[2] L. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. In USSR Comp. of Mathematics and Mathematical Physics, volume 7, pages 200–217, 1967.
[3] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley Series in Telecommunications, 1991.
[4] A. Deshpande, C. Guestrin, S. Madden, J. Hellerstein, and W. Hong. Model-based approximate querying in sensor networks. In International Journal of Very Large Data Bases, 2005.
[5] I. Dhillon, S. Mallela, and R. Kumar. A divisive information-theoretic feature clustering algorithm for text classification.
In Journal of Machine Learning Research, volume 3, pages 1265–1287, 2003.
[6] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley and Sons, Inc., 2001.
[7] J. Ha, H. Ramadan, J. Davis, C. Rossbach, I. Roy, and E. Witchel. Navel: Automating software support by classifying program behavior. Technical Report TR-06-11, University of Texas at Austin, 2006.
[8] S. Madden. Intel lab data. http://berkeley.intel-research.net/labdata, 2004.
[9] T. Myrvoll and F. Soong. On divergence based clustering of normal distributions and its application to HMM adaptation. In Eurospeech, pages 1517–1520, 2003.
[10] Y. Singer and M. Warmuth. Batch and on-line parameter estimation of Gaussian mixtures based on the joint entropy. In Neural Information Processing Systems, 1998.
[11] T. Yoshimura, T. Masuko, K. Tokuda, T. Kobayashi, and T. Kitamura. Speaker interpolation in HMM-based speech synthesis. In European Conference on Speech Communication and Technology, 1997.
[12] A. Zheng, M. Jordan, B. Liblit, and A. Aiken. Statistical debugging of sampled programs. In Neural Information Processing Systems, 2004.
Multi-Instance Multi-Label Learning with Application to Scene Classification

Zhi-Hua Zhou    Min-Ling Zhang
National Laboratory for Novel Software Technology
Nanjing University, Nanjing 210093, China
{zhouzh,zhangml}@lamda.nju.edu.cn

Abstract

In this paper, we formalize multi-instance multi-label learning, where each training example is associated with not only multiple instances but also multiple class labels. Such a problem can occur in many real-world tasks, e.g. an image usually contains multiple patches each of which can be described by a feature vector, and the image can belong to multiple categories since its semantics can be recognized in different ways. We analyze the relationship between multi-instance multi-label learning and the learning frameworks of traditional supervised learning, multi-instance learning and multi-label learning. Then, we propose the MIMLBOOST and MIMLSVM algorithms which achieve good performance in an application to scene classification.

1 Introduction

In traditional supervised learning, an object is represented by an instance (or feature vector) and associated with a class label. Formally, let X denote the instance space (or feature space) and Y the set of class labels. Then the task is to learn a function f : X → Y from a given data set {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}, where x_i ∈ X is an instance and y_i ∈ Y the known label of x_i. Although the above formalization is prevailing and successful, there are many real-world problems which do not fit this framework well, where a real-world object may be associated with a number of instances and a number of labels simultaneously. For example, an image usually contains multiple patches, each of which can be represented by an instance, while in image classification such an image can belong to several classes simultaneously, e.g. an image can belong to mountains as well as Africa.
Another example is text categorization, where a document usually contains multiple sections each of which can be represented as an instance, and the document can be regarded as belonging to different categories if it is viewed from different aspects, e.g. a document can be categorized as scientific novel, Jules Verne's writing or even books on travelling. Web mining is a further example, where each of the links can be regarded as an instance while the web page itself can be recognized as news page, sports page, soccer page, etc.

In order to deal with such problems, in this paper we formalize multi-instance multi-label learning (abbreviated as MIML). In this learning framework, a training example is described by multiple instances and associated with multiple class labels. Formally, let X denote the instance space and Y the set of class labels. Then the task is to learn a function f_MIML : 2^X → 2^Y from a given data set {(X_1, Y_1), (X_2, Y_2), ..., (X_m, Y_m)}, where X_i ⊆ X is a set of instances {x^(i)_1, x^(i)_2, ..., x^(i)_{n_i}}, x^(i)_j ∈ X (j = 1, 2, ..., n_i), and Y_i ⊆ Y is a set of labels {y^(i)_1, y^(i)_2, ..., y^(i)_{l_i}}, y^(i)_k ∈ Y (k = 1, 2, ..., l_i). Here n_i denotes the number of instances in X_i and l_i the number of labels in Y_i. After analyzing the relationship between MIML and the frameworks of traditional supervised learning, multi-instance learning and multi-label learning, we propose two MIML algorithms, MIMLBOOST and MIMLSVM. Application to scene classification shows that solving some real-world problems in the MIML framework can achieve better performance than solving them in existing frameworks such as multi-instance learning and multi-label learning.

2 Multi-Instance Multi-Label Learning

We start by investigating the relationship between MIML and the frameworks of traditional supervised learning, multi-instance learning and multi-label learning, and then we develop some solutions.
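The MIML formalization above is easy to make concrete: each training example pairs a set of instances X_i with a set of labels Y_i, and a learned f_MIML maps an instance set to a label set. The toy data and the exact-match evaluation helper below are our own illustration, not anything from the paper.

```python
def subset_accuracy(f_miml, dataset):
    """Fraction of MIML examples whose predicted label set exactly matches Y_i."""
    hits = sum(1 for X, Y in dataset if f_miml(X) == Y)
    return hits / len(dataset)

# Toy MIML data: bags of 2-D instances paired with label sets
# (feature values are arbitrary, for illustration only).
toy = [
    ([(0.1, 0.2), (0.4, 0.4)], {"mountains", "Africa"}),  # n_1 = 2, l_1 = 2
    ([(0.9, 0.8)], {"sea"}),                              # n_2 = 1, l_2 = 1
]

# A deliberately trivial constant predictor f_MIML : 2^X -> 2^Y.
constant = lambda X: {"sea"}
```

The constant predictor matches only the second example, so its subset accuracy on the toy data is 0.5.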
Multi-instance learning [4] studies the problem where a real-world object described by a number of instances is associated with one class label. Formally, the task is to learn a function f_MIL : 2^X → {−1, +1} from a given data set {(X_1, y_1), (X_2, y_2), ..., (X_m, y_m)}, where X_i ⊆ X is a set of instances {x^(i)_1, x^(i)_2, ..., x^(i)_{n_i}}, x^(i)_j ∈ X (j = 1, 2, ..., n_i), and y_i ∈ {−1, +1} is the label of X_i.[1] Multi-instance learning techniques have been successfully applied to diverse applications including scene classification [3, 7].

Multi-label learning [8] studies the problem where a real-world object described by one instance is associated with a number of class labels. Formally, the task is to learn a function f_MLL : X → 2^Y from a given data set {(x_1, Y_1), (x_2, Y_2), ..., (x_m, Y_m)}, where x_i ∈ X is an instance and Y_i ⊆ Y a set of labels {y^(i)_1, y^(i)_2, ..., y^(i)_{l_i}}, y^(i)_k ∈ Y (k = 1, 2, ..., l_i).[2] Multi-label learning techniques have also been successfully applied to scene classification [1].

In fact, these multi-instance and multi-label learning frameworks result from the ambiguity in representing real-world objects. Multi-instance learning studies the ambiguity in the input space (or instance space), where an object has many alternative input descriptions, i.e. instances; multi-label learning studies the ambiguity in the output space (or label space), where an object has many alternative output descriptions, i.e. labels; while MIML considers the ambiguity in the input and output spaces simultaneously. We illustrate the differences among these learning frameworks in Figure 1.
Figure 1: Four different learning frameworks: (a) traditional supervised learning; (b) multi-instance learning; (c) multi-label learning; (d) multi-instance multi-label learning.

Traditional supervised learning is evidently a degenerated version of multi-instance learning as well as a degenerated version of multi-label learning, while traditional supervised learning, multi-instance learning and multi-label learning are all degenerated versions of MIML. Thus, we can tackle MIML by identifying its equivalence in the traditional supervised learning framework, using multi-instance learning or multi-label learning as the bridge.

[1] According to notions used in multi-instance learning, (X_i, y_i) is a labeled bag while X_i is an unlabeled bag.
[2] Although most works on multi-label learning assume that an instance can be associated with multiple valid labels, there are also works assuming that only one of the labels associated with an instance is correct [6]. We adopt the former assumption in this paper.

Solution 1: Using multi-instance learning as the bridge. We can transform a MIML learning task, i.e. to learn a function f_MIML : 2^X → 2^Y, into a multi-instance learning task, i.e. to learn a function f_MIL : 2^X × Y → {−1, +1}. For any y ∈ Y, f_MIL(X_i, y) = +1 if y ∈ Y_i and −1 otherwise. The proper labels for a new example X* can be determined according to Y* = {y | arg_{y∈Y} [f_MIL(X*, y) = +1]}. We can transform this multi-instance learning task further into a traditional supervised learning task, i.e. to learn a function f_SISL : X × Y → {−1, +1}, under a constraint specifying how to derive f_MIL(X_i, y) from f_SISL(x^(i)_j, y) (j = 1, ..., n_i). For any y ∈ Y, f_SISL(x^(i)_j, y) = +1 if y ∈ Y_i and −1 otherwise. Here the constraint can be f_MIL(X_i, y) = sign[Σ_{j=1}^{n_i} f_SISL(x^(i)_j, y)], which has been used in transforming multi-instance learning tasks into traditional supervised learning tasks [9].[3] Note that other kinds of constraint can also be used here.
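The first half of Solution 1 is a purely mechanical data transformation, sketched below in Python (our sketch, under the stated convention that a bag gets label +1 exactly when its paired class is in the example's label set).

```python
def miml_to_mi_bags(dataset, label_space):
    """Solution 1, first step: each MIML example (X, Y) becomes one labeled
    multi-instance bag [(X, y), +1 if y in Y else -1] per class y."""
    bags = []
    for X, Y in dataset:
        for y in label_space:
            bags.append(((X, y), +1 if y in Y else -1))
    return bags

def predict_miml(f_mil, X, label_space):
    """Recover the MIML prediction Y* = {y : f_MIL(X*, y) = +1}."""
    return {y for y in label_space if f_mil(X, y) == +1}
```

An m-example dataset over |Y| classes yields m × |Y| bags, matching the count stated for MIMLBOOST's first step.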
Solution 2: Using multi-label learning as the bridge. We can also transform a MIML learning task, i.e. to learn a function f_MIML : 2^X → 2^Y, into a multi-label learning task, i.e. to learn a function f_MLL : Z → 2^Y. For any z_i ∈ Z, f_MLL(z_i) = f_MIML(X_i) if z_i = φ(X_i), φ : 2^X → Z. The proper labels for a new example X* can be determined according to Y* = f_MLL(φ(X*)). We can transform this multi-label learning task further into a traditional supervised learning task, i.e. to learn a function f_SISL : Z × Y → {−1, +1}. For any y ∈ Y, f_SISL(z_i, y) = +1 if y ∈ Y_i and −1 otherwise. That is, f_MLL(z_i) = {y | arg_{y∈Y} [f_SISL(z_i, y) = +1]}. Here the mapping φ can be implemented with constructive clustering, which has been used in transforming multi-instance bags into traditional single instances [11]. Note that other kinds of mapping can also be used here.

3 Algorithms

In this section, we propose two algorithms for solving MIML problems: MIMLBOOST works along the first solution described in Section 2, while MIMLSVM works along the second solution.

3.1 MIMLBOOST

Given any set Ω, let |Ω| denote its size, i.e. the number of elements in Ω; given any predicate π, let [[π]] be 1 if π holds and 0 otherwise; given (X_i, Y_i), for any y ∈ Y, let Ψ(X_i, y) = +1 if y ∈ Y_i and −1 otherwise, where Ψ is a function Ψ : 2^X × Y → {−1, +1}. The MIMLBOOST algorithm is presented in Table 1. In the first step, each MIML example (X_u, Y_u) (u = 1, 2, ..., m) is transformed into a set of |Y| multi-instance bags, i.e. {[(X_u, y_1), Ψ(X_u, y_1)], [(X_u, y_2), Ψ(X_u, y_2)], ..., [(X_u, y_|Y|), Ψ(X_u, y_|Y|)]}. Note that [(X_u, y_v), Ψ(X_u, y_v)] (v = 1, 2, ..., |Y|) is a labeled multi-instance bag, where (X_u, y_v) is a bag containing n_u instances, i.e. {(x^(u)_1, y_v), (x^(u)_2, y_v), ..., (x^(u)_{n_u}, y_v)}, and Ψ(X_u, y_v) ∈ {+1, −1} is the label of this bag. Thus, the original MIML data set is transformed into a multi-instance data set containing m × |Y| bags, i.e.
{[(X_1, y_1), Ψ(X_1, y_1)], ..., [(X_1, y_|Y|), Ψ(X_1, y_|Y|)], [(X_2, y_1), Ψ(X_2, y_1)], ..., [(X_m, y_|Y|), Ψ(X_m, y_|Y|)]}. Let [(X^(i), y^(i)), Ψ(X^(i), y^(i))] denote the ith of these m × |Y| bags; that is, (X^(1), y^(1)) denotes (X_1, y_1), ..., (X^(|Y|), y^(|Y|)) denotes (X_1, y_|Y|), ..., (X^(m×|Y|), y^(m×|Y|)) denotes (X_m, y_|Y|), where (X^(i), y^(i)) contains n_i instances, i.e. {(x^(i)_1, y^(i)), (x^(i)_2, y^(i)), ..., (x^(i)_{n_i}, y^(i))}. Then, from the data set a multi-instance learning function f_MIL can be learned, which can accomplish the desired MIML function because f_MIML(X*) = {y | arg_{y∈Y} (sign[f_MIL(X*, y)] = +1)}. Here we use MIBOOSTING [9] to implement f_MIL.

For convenience, let (B, g) denote the bag [(X, y), Ψ(X, y)]. Then the goal here is to learn a function F(B) minimizing the bag-level exponential loss E_B E_{G|B}[exp(−g F(B))], which ultimately

[3] This constraint assumes that all instances contribute equally and independently to a bag's label, which is different from the standard multi-instance assumption that there is one 'key' instance in a bag that triggers whether the bag's class label will be positive or negative. Nevertheless, it has been shown that this assumption is reasonable and effective [9]. Note that the standard multi-instance assumption does not always hold, e.g. the label Africa of an image is usually triggered by several patches jointly instead of by only one patch.

Table 1: The MIMLBOOST algorithm
1 Transform each MIML example (X_u, Y_u) (u = 1, 2, ..., m) into |Y| multi-instance bags {[(X_u, y_1), Ψ(X_u, y_1)], ..., [(X_u, y_|Y|), Ψ(X_u, y_|Y|)]}. Thus, the original data set is transformed into a multi-instance data set containing m × |Y| multi-instance bags, denoted by {[(X^(i), y^(i)), Ψ(X^(i), y^(i))]} (i = 1, 2, ..., m × |Y|).
2 Initialize the weight of each bag to W^(i) = 1/(m × |Y|) (i = 1, 2, ..., m × |Y|).
3 Repeat for t = 1, 2, ..., T iterations:
3a Set W^(i)_j = W^(i)/n_i (i = 1, 2, ..., m × |Y|), assign the bag's label Ψ(X^(i), y^(i)) to each of its instances (x^(i)_j, y^(i)) (j = 1, 2, ..., n_i), and build an instance-level predictor h_t[(x^(i)_j, y^(i))] ∈ {−1, +1}.
3b For the ith bag, compute the error rate e^(i) ∈ [0, 1] by counting the number of misclassified instances within the bag, i.e. e^(i) = (Σ_{j=1}^{n_i} [[h_t[(x^(i)_j, y^(i))] ≠ Ψ(X^(i), y^(i))]]) / n_i.
3c If e^(i) < 0.5 for all i ∈ {1, 2, ..., m × |Y|}, go to Step 4.
3d Compute c_t = argmin_{c_t} Σ_{i=1}^{m×|Y|} W^(i) exp[(2e^(i) − 1) c_t].
3e If c_t ≤ 0, go to Step 4.
3f Set W^(i) = W^(i) exp[(2e^(i) − 1) c_t] (i = 1, 2, ..., m × |Y|) and re-normalize such that 0 ≤ W^(i) ≤ 1 and Σ_{i=1}^{m×|Y|} W^(i) = 1.
4 Return Y* = {y | arg_{y∈Y} sign(Σ_j Σ_t c_t h_t[(x*_j, y)]) = +1} (x*_j is X*'s jth instance).

estimates the bag-level log-odds function (1/2) log [Pr(g = 1|B) / Pr(g = −1|B)]. In each boosting round, the aim is to expand F(B) into F(B) + c f(B), i.e. adding a new weak classifier, so that the exponential loss is minimized. Assuming all instances in a bag contribute equally and independently to the bag's label, f(B) = (1/n_B) Σ_j h(b_j) can be derived, where h(b_j) ∈ {−1, +1} is the prediction of the instance-level classifier h(·) for the jth instance in bag B, and n_B is the number of instances in B. It has been shown by [9] that the best f(B) to be added can be achieved by seeking h(·) which maximizes Σ_i Σ_{j=1}^{n_i} [(1/n_i) W^(i) g^(i) h(b^(i)_j)], given the bag-level weights W = exp(−g F(B)). By assigning each instance the label of its bag and the corresponding weight W^(i)/n_i, h(·) can be learned by minimizing the weighted instance-level classification error. This actually corresponds to Step 3a of MIMLBOOST. When f(B) is found, the best multiplier c > 0 can be obtained by directly optimizing the exponential loss:

E_B E_{G|B}[exp(−g F(B) − c g f(B))] = Σ_i W^(i) exp[−c g^(i) (Σ_j h(b^(i)_j)) / n_i] = Σ_i W^(i) exp[(2e^(i) − 1) c],

where e^(i) = (1/n_i) Σ_j [[h(b^(i)_j) ≠ g^(i)]] (computed in Step 3b). Minimization of this expectation actually corresponds to Step 3d, where numeric optimization techniques such as the quasi-Newton method can be used. Finally, the bag-level weights are updated in Step 3f according to the additive structure of F(B).

3.2 MIMLSVM

Given (X_i, Y_i) and z_i = φ(X_i) where φ : 2^X → Z, for any y ∈ Y, let Φ(z_i, y) = +1 if y ∈ Y_i and −1 otherwise, where Φ is a function Φ : Z × Y → {−1, +1}. The MIMLSVM algorithm is presented in Table 2. In the first step, the X_u of each MIML example (X_u, Y_u) (u = 1, 2, ..., m) is collected and put into a data set Γ. Then, in the second step, k-medoids clustering is performed on Γ. Since each

Table 2: The MIMLSVM algorithm
1 For MIML examples (X_u, Y_u) (u = 1, 2, ..., m), let Γ = {X_u | u = 1, 2, ..., m}.
2 Randomly select k elements from Γ to initialize the medoids M_t (t = 1, 2, ..., k); repeat until all M_t do not change:
2a Γ_t = {M_t} (t = 1, 2, ..., k).
2b For each X_u ∈ (Γ − {M_t | t = 1, 2, ..., k}): index = argmin_{t∈{1,...,k}} d_H(X_u, M_t); Γ_index = Γ_index ∪ {X_u}.
2c M_t = argmin_{A∈Γ_t} Σ_{B∈Γ_t} d_H(A, B) (t = 1, 2, ..., k).
3 Transform (X_u, Y_u) into a multi-label example (z_u, Y_u) (u = 1, 2, ..., m), where z_u = (z_{u1}, z_{u2}, ..., z_{uk}) = (d_H(X_u, M_1), d_H(X_u, M_2), ..., d_H(X_u, M_k)).
4 For each y ∈ Y, derive a data set D_y = {(z_u, Φ(z_u, y)) | u = 1, 2, ..., m}, and then train an SVM h_y = SVMTrain(D_y).
5 Return Y* = {argmax_{y∈Y} h_y(z*)} ∪ {y | h_y(z*) ≥ 0, y ∈ Y}, where z* = (d_H(X*, M_1), d_H(X*, M_2), ..., d_H(X*, M_k)).

data item in Γ, i.e. X_u, is an unlabeled multi-instance bag instead of a single instance, we employ the Hausdorff distance [5] to measure the distance.
In detail, given two bags A = {a_1, a_2, ..., a_{n_A}} and B = {b_1, b_2, ..., b_{n_B}}, the Hausdorff distance between A and B is defined as

d_H(A, B) = max{ max_{a∈A} min_{b∈B} ||a − b||, max_{b∈B} min_{a∈A} ||b − a|| }

where ||a − b|| measures the distance between the instances a and b, which takes the form of the Euclidean distance here. After the clustering process, we divide the data set Γ into k partitions whose medoids are M_t (t = 1, 2, ..., k), respectively. With the help of these medoids, we transform the original multi-instance example X_u into a k-dimensional numerical vector z_u, where the ith (i = 1, 2, ..., k) component of z_u is the distance between X_u and M_i, that is, d_H(X_u, M_i). In other words, z_{ui} encodes some structure information of the data, that is, the relationship between X_u and the ith partition of Γ. This process resembles the constructive clustering process used by [11] in transforming multi-instance examples into single-instance examples, except that in [11] the clustering is executed at the instance level while here we execute it at the bag level. Thus, the original MIML examples (X_u, Y_u) (u = 1, 2, ..., m) have been transformed into multi-label examples (z_u, Y_u) (u = 1, 2, ..., m), which corresponds to Step 3 of MIMLSVM. Note that this transformation may lose information; nevertheless, the performance of MIMLSVM is still good. This suggests that MIML is a powerful framework which has captured more of the original information than other learning frameworks. Then, from the data set a multi-label learning function f_MLL can be learned, which can accomplish the desired MIML function because f_MIML(X*) = f_MLL(z*). Here we use MLSVM [1] to implement f_MLL.
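The Hausdorff distance and the bag-to-vector transformation of Step 3 are short enough to sketch directly (our sketch, using NumPy for the pairwise Euclidean distances):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two bags of instances, with Euclidean
    instance distance, as used in Step 2 of MIMLSVM."""
    A = np.asarray(A, float)
    B = np.asarray(B, float)
    # D[i, j] = ||a_i - b_j||, via broadcasting
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def bag_to_vector(X, medoids):
    """Step 3: represent bag X by its Hausdorff distances to the k medoids."""
    return [hausdorff(X, M) for M in medoids]
```

Note that d_H is symmetric and d_H(A, A) = 0, so each bag's vector has a zero in the component of its own medoid when the bag itself is a medoid.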
Concretely, MLSVM decomposes the multi-label learning problem into multiple independent binary classification problems (one per class), where each example associated with the label set Y is regarded as a positive example when building the SVM for any class y ∈ Y, and as a negative example when building the SVM for any class y ∉ Y, as shown in Step 4 of MIMLSVM. In making predictions, the T-Criterion [1] is used, which corresponds to Step 5 of the MIMLSVM algorithm. That is, the test example is labeled by all the class labels with positive SVM scores, except that when all the SVM scores are negative, the test example is labeled by the class label with the top (least negative) score.

4 Application to Scene Classification

The data set consists of 2,000 natural scene images belonging to the classes desert, mountains, sea, sunset, and trees, as shown in Table 3. Some images were from the COREL image collection while some were collected from the Internet. Over 22% of the images belong to multiple classes simultaneously.

Table 3: The image data set (d: desert, m: mountains, s: sea, su: sunset, t: trees)
label  # images    label   # images    label   # images    label       # images
d      340         d + m   19          m + su  19          d + m + su  1
m      268         d + s   5           m + t   106         d + su + t  3
s      341         d + su  21          s + su  172         m + s + t   6
su     216         d + t   20          s + t   14          m + su + t  1
t      378         m + s   38          su + t  28          s + su + t  4

4.1 Comparison with Multi-Label Learning Algorithms

Since the scene classification task has been successfully tackled by multi-label learning algorithms [1], we compare the MIML algorithms with established multi-label learning algorithms ADABOOST.MH [8] and MLSVM [1]. The former is the core of a successful multi-label learning system, BOOSTEXTER [8], while the latter has achieved excellent performance in scene classification [1]. For MIMLBOOST and MIMLSVM, each image is represented as a bag of nine instances generated by the SBN method [7].
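The T-Criterion described above is a one-line decision rule over the per-class SVM scores. The sketch below is ours; following Table 2's Step 5, a score of exactly zero counts as positive.

```python
def t_criterion(scores):
    """T-Criterion of MLSVM (Step 5 of MIMLSVM): keep every label whose SVM
    score is non-negative; if all scores are negative, fall back to the
    single top-scoring (least negative) label."""
    kept = {y for y, s in scores.items() if s >= 0}
    if kept:
        return kept
    return {max(scores, key=scores.get)}
```

This guarantees the predicted label set is never empty, which matters for metrics such as hamming loss that compare predicted and true label sets directly.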
Here each instance actually corresponds to an image patch, and better performance can be expected with a better image patch generation method. For ADABOOST.MH and MLSVM, each image is represented as a feature vector obtained by concatenating the instances of MIMLBOOST or MIMLSVM. A Gaussian kernel LIBSVM [2] is used to implement MLSVM, where the cross-training strategy is used to build the classifiers while the T-Criterion is used to label the images [1]. The MIMLSVM algorithm is also realized with a Gaussian kernel, while the parameter k is set to 20% of the number of training images.[4] Note that the instance-level predictor used in Step 3a of MIMLBOOST is also a Gaussian kernel LIBSVM (with default parameters). Since ADABOOST.MH and MLSVM make multi-label predictions, the performance of the compared algorithms is evaluated according to five multi-label evaluation metrics, as shown in Tables 4 to 7, where '↓' indicates 'the smaller the better' while '↑' indicates 'the bigger the better'. Details of these evaluation metrics can be found in [8]. Tenfold cross-validation is performed and 'mean ± std' is presented in the tables, where the best performance achieved by each algorithm is bolded. Note that since in each boosting round MIMLBOOST performs more operations than ADABOOST.MH does, for fair comparison the boosting rounds used by ADABOOST.MH are set to ten times those used by MIMLBOOST such that their time costs are comparable.

Table 4: The performance of MIMLBOOST with different boosting rounds
rounds  hamm. loss ↓  one-error ↓  coverage ↓   rank. loss ↓  ave. prec. ↑
5       .202±.011     .373±.045    1.026±.093   .208±.028     .764±.027
10      .197±.010     .362±.040    1.013±.109   .191±.027     .770±.026
15      .195±.009     .361±.034    1.004±.101   .186±.025     .772±.023
20      .193±.008     .355±.037    .996±.102    .183±.025     .775±.024
25      .189±.009     .351±.039    .989±.103    .181±.026     .777±.025

Table 5: The performance of ADABOOST.MH with different boosting rounds
rounds  hamm. loss ↓  one-error ↓  coverage ↓   rank. loss ↓  ave. prec. ↑
50      .228±.013     .473±.031    1.299±.099   .263±.022     .695±.022
100     .234±.019     .465±.042    1.292±.138   .259±.030     .698±.033
150     .233±.020     .465±.053    1.279±.140   .255±.032     .700±.033
200     .232±.012     .453±.031    1.269±.107   .253±.022     .706±.020
250     .231±.018     .451±.046    1.258±.137   .250±.031     .708±.030

[4] In preliminary experiments, several percentage values were tested, ranging from 20% to 100% with an interval of 20%. The results show that these values do not significantly affect the performance of MIMLSVM.

Table 6: The performance of MIMLSVM with different γ used in the Gaussian kernel
γ       hamm. loss ↓  one-error ↓  coverage ↓   rank. loss ↓  ave. prec. ↑
.1      .181±.017     .332±.036    1.024±.089   .187±.018     .780±.021
.2      .180±.017     .327±.033    1.022±.085   .187±.018     .783±.020
.3      .188±.016     .344±.032    1.065±.094   .196±.020     .772±.020
.4      .193±.014     .358±.030    1.080±.099   .202±.022     .764±.021
.5      .196±.014     .370±.033    1.109±.101   .209±.023     .757±.023

Table 7: The performance of MLSVM with different γ used in the Gaussian kernel
γ       hamm. loss ↓  one-error ↓  coverage ↓   rank. loss ↓  ave. prec. ↑
1       .200±.014     .379±.032    1.125±.115   .214±.020     .751±.022
2       .196±.013     .368±.032    1.115±.122   .211±.023     .756±.022
3       .195±.015     .370±.034    1.129±.113   .214±.022     .754±.023
4       .196±.016     .372±.034    1.151±.122   .220±.024     .751±.023
5       .202±.015     .388±.032    1.181±.128   .229±.026     .741±.023

Comparing Tables 4 to 7, we can find that both MIMLBOOST and MIMLSVM are apparently better than ADABOOST.MH and MLSVM.
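Two of the five metrics reported in Tables 4 to 7 are simple enough to sketch directly; the definitions below follow the conventions of [8], but the implementations are our own simplified sketches.

```python
import numpy as np

def hamming_loss(preds, truths, label_space):
    """Fraction of (example, label) pairs on which the predicted label set
    and the true label set disagree."""
    wrong = sum(len(p ^ t) for p, t in zip(preds, truths))  # symmetric difference
    return wrong / (len(preds) * len(label_space))

def one_error(rankings, truths):
    """Fraction of examples whose top-ranked label is not a true label.
    `rankings` lists labels from most to least confident per example."""
    return float(np.mean([r[0] not in t for r, t in zip(rankings, truths)]))
```

Coverage, ranking loss, and average precision additionally need the full ranking of all labels per example, which is why they are omitted from this sketch.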
Impressively, pair-wise t-tests with .05 significance level reveal that the worst performance of MIMLBOOST (with 5 boosting rounds) is even significantly better than the best performance of ADABOOST.MH (with 250 boosting rounds) on all the evaluation metrics, and is significantly better than the best performance of MLSVM (with γ = 2) in terms of coverage while comparable on the remaining metrics; the worst performance of MIMLSVM (with γ = .5) is even comparable to the best performance of MLSVM and is significantly better than the best performance of ADABOOST.MH on all the evaluation metrics. These observations confirm that formalizing the scene classification task as a MIML problem to solve by MIMLBOOST or MIMLSVM is better than formalizing it as a multi-label learning problem to solve by ADABOOST.MH or MLSVM.

4.2 Comparison with Multi-Instance Learning Algorithms

Since the scene classification task has been successfully tackled by multi-instance learning algorithms [7], we compare the MIML algorithms with established multi-instance learning algorithms DIVERSE DENSITY [7] and EM-DD [10]. The former is one of the most influential multi-instance learning algorithms and has achieved excellent performance in scene classification [7], while the latter has achieved excellent performance on multi-instance benchmark tests [10]. Here all the compared algorithms use the same input representation. That is, each image is represented as a bag of nine instances generated by the SBN method [7]. The parameters of DIVERSE DENSITY and EM-DD are set according to the settings that resulted in the best performance [7, 10]. The MIMLBOOST and MIMLSVM algorithms are implemented as described in Section 4.1, with 25 boosting rounds for MIMLBOOST and γ = .2 for MIMLSVM. Since DIVERSE DENSITY and EM-DD make single-label predictions, the performance of the compared algorithms is evaluated according to predictive accuracy, i.e. classification accuracy on the test set.
Note that for MIMLBOOST and MIMLSVM, the top-ranked class is regarded as the single-label prediction. Tenfold cross-validation is performed and ‘mean ± std’ is presented in Table 8, where the best performance on each image class is bolded. Note that besides the predictive accuracies on each class, the overall accuracy is also presented, denoted by ‘overall’. We can find from Table 8 that MIMLBOOST achieves the best performance on the image classes desert and trees while MIMLSVM achieves the best performance on the remaining image classes. Overall, MIMLSVM achieves the best performance. Pair-wise t-tests with .05 significance level reveal that the overall performance of MIMLSVM is comparable to that of MIMLBOOST, and both are significantly better than those of DIVERSE DENSITY and EM-DD. These observations confirm that formalizing the scene classification task as a MIML problem to solve by MIMLBOOST or MIMLSVM is better than formalizing it as a multi-instance learning problem to solve by DIVERSE DENSITY or EM-DD.

Table 8: Compare predictive accuracy of MIMLBOOST, MIMLSVM, DIVERSE DENSITY and EM-DD
Image class   MIMLBOOST   MIMLSVM     DIVERSE DENSITY   EM-DD
desert        .869±.014   .868±.026   .768±.037         .751±.047
mountains     .791±.024   .820±.022   .721±.030         .717±.036
sea           .729±.026   .730±.030   .587±.038         .639±.063
sunset        .864±.033   .883±.023   .841±.036         .815±.063
trees         .801±.015   .798±.017   .781±.028         .632±.060
overall       .811±.022   .820±.024   .739±.034         .711±.054

5 Conclusion
In this paper, we formalize multi-instance multi-label learning, where an example is associated with multiple instances and multiple labels simultaneously. Although some previous works have investigated the ambiguity of alternative input descriptions or alternative output descriptions associated with an object, this is the first work studying both of these ambiguities simultaneously.
We show that an MIML problem can be solved by identifying its equivalence in the traditional supervised learning framework, using multi-instance learning or multi-label learning as the bridge. The proposed algorithms, MIMLBOOST and MIMLSVM, have achieved good performance in the application to scene classification. An interesting future issue is to develop MIML versions of other popular machine learning algorithms. Moreover, it remains an open problem whether MIML can be tackled directly, possibly by exploiting the connections between the instances and the labels. It is also interesting to discover the relationships between the instances and the labels. By unravelling these mixed connections, we may gain a deeper understanding of ambiguity.
Acknowledgments
This work was supported by the National Science Foundation of China (60325207, 60473046).
References
[1] M. R. Boutell, J. Luo, X. Shen, and C. M. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9):1757–1771, 2004.
[2] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. Technical report, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, 2001.
[3] Y. Chen and J. Z. Wang. Image categorization by learning and reasoning with regions. Journal of Machine Learning Research, 5:913–939, 2004.
[4] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31–71, 1997.
[5] G. A. Edgar. Measure, Topology, and Fractal Geometry. Springer, Berlin, 1990.
[6] R. Jin and Z. Ghahramani. Learning with multiple labels. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 897–904. MIT Press, Cambridge, MA, 2003.
[7] O. Maron and A. L. Ratan. Multiple-instance learning for natural scene classification.
In Proceedings of the 15th International Conference on Machine Learning, pages 341–349, Madison, WI, 1998.
[8] R. E. Schapire and Y. Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2-3):135–168, 2000.
[9] X. Xu and E. Frank. Logistic regression and boosting for labeled bags of instances. In H. Dai, R. Srikant, and C. Zhang, editors, Lecture Notes in Artificial Intelligence 3056, pages 272–281. Springer, Berlin, 2004.
[10] Q. Zhang and S. A. Goldman. EM-DD: An improved multi-instance learning technique. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 1073–1080. MIT Press, Cambridge, MA, 2002.
[11] Z.-H. Zhou and M.-L. Zhang. Solving multi-instance problems with classifier ensemble based on constructive clustering. Knowledge and Information Systems, in press.
|
2006
|
105
|
2,928
|
Analysis of Contour Motions Ce Liu William T. Freeman Edward H. Adelson Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, MA 02139, USA {celiu,billf,adelson}@csail.mit.edu
Abstract
A reliable motion estimation algorithm must function under a wide range of conditions. One regime, which we consider here, is the case of moving objects with contours but no visible texture. Tracking distinctive features such as corners can disambiguate the motion of contours, but spurious features such as T-junctions can be badly misleading. It is difficult to determine the reliability of motion from local measurements, since a full-rank covariance matrix can result from both real and spurious features. We propose a novel approach that avoids these points altogether, and derives global motion estimates by utilizing information from three levels of contour analysis: edgelets, boundary fragments and contours. Boundary fragments are chains of oriented edgelets, for which we derive motion estimates from local evidence. The uncertainties of the local estimates are disambiguated after the boundary fragments are properly grouped into contours. The grouping is done by constructing a graphical model and marginalizing it using importance sampling. We propose two equivalent representations in this graphical model, reversible switch variables attached to the ends of fragments and fragment chains, to capture both local and global statistics of boundaries. Our system is successfully applied to both synthetic and real video sequences containing high-contrast boundaries and textureless regions. The system produces good motion estimates along with properly grouped and completed contours.
1 Introduction
Humans can reliably analyze visual motion under a diverse set of conditions, including textured as well as featureless objects.
Computer vision algorithms have focused on conditions of texture, where junction or corner-like image structures are assumed to be reliable features for tracking [5, 4, 17]. But under other conditions, these features can generate spurious motions. T-junctions caused by occlusion can move in an image very differently from either of the objects involved in the occlusion event [11]. To properly analyze the motions of featureless objects requires a different approach. The spurious matching of T-junctions has been explained in [18] and [9]. We briefly restate it using the simple two-bar stimulus in Figure 1 (from [18]). The gray bar is moving rightward in front of the leftward-moving black bar, (a). If we analyze the motion locally, i.e. match to the next frame in a local circular window, the flow vectors of the corner and line points are as displayed in Figure 1 (b). The T-junctions located at the intersections of the two bars move downwards, but no such motion is performed by the depicted objects. One approach to handling the spurious motions of corners or T-junctions has been to detect such junctions and remove them from the motion analysis [18, 12]. However, T-junctions are often very difficult to detect in a static image from local, bottom-up information [9]. Motion at occluding boundaries has been studied, for example in [1]. The boundary motion is typically analyzed locally,
Figure 1: Illustration of the spurious T-junction motion. (a) The front gray bar is moving to the right and the black bar behind is moving to the left [18]. (b) Based on local window matching, the eight corners of the bars show the correct motion, whereas the T-junctions show spurious downward motion. (c) Using the boundary-based representation our system is able to correctly estimate the motion and generate the illusory boundary as well.
which can again lead to spurious junction tracking.
We are not aware of an existing algorithm that can properly analyze the motions of featureless objects. In this paper, we use a boundary-based approach which does not rely on motion estimates at corners or junctions. We develop a graphical model which integrates local information and assigns probabilities to candidate contour groupings in order to favor motion interpretations corresponding to the motions of the underlying objects. Boundary completion and discounting the motions of spurious features result from optimizing the graphical model states to explain the contours and their motions. Our system is able to automatically detect and group the boundary fragments, analyze the motion correctly, and exploit both static and dynamic cues to synthesize the illusory boundaries (Figure 1 (c)). We represent the boundaries at three levels of grouping: edgelets, boundary fragments and contours, where a fragment is a chain of edgelets and a contour is a chain of fragments. Each edgelet within a boundary fragment has a position and an orientation and carries local evidence for motion. The main task of our model is then to group the boundary fragments into contours so that the local motion uncertainties associated with the edgelets are disambiguated and occlusion or other spurious feature events are properly explained. The result is a specialized motion tracking algorithm that properly analyzes the motions of textureless objects. Our system consists of four conceptual steps, discussed over the next three sections (the last two steps happen together while finding the optimal states in the graphical model): (a) Boundary fragment extraction: Boundary fragments are detected in the first frame. (b) Edgelet tracking with uncertainties: Boundary fragments are broken into edgelets, and, based on local evidence, the probability distribution is found for the motion of each edgelet of each boundary fragment.
(c) Grouping boundary fragments into contours: Boundary fragments are grouped, using both temporal and spatial cues. (d) Motion estimation: The final fragment groupings disambiguate motion uncertainties and specify the final inferred motions. We restrict the problem to two-frame motion analysis, though the algorithm can easily be extended to multiple frames.
2 Boundary Fragment Extraction
Extracting boundaries from images is a nontrivial task by itself. We use a simple algorithm for boundary extraction, analyzing oriented energy using steerable filters [3] and tracking the boundary in a manner similar to that of the Canny edge detector [2]. A more sophisticated boundary detector can be found in [8]; occluding boundaries can also be detected using special cameras [13]. However, for our motion algorithm designed to handle the special case of textureless objects, we find that our simple boundary detection algorithm works well. Mathematically, given an image $I$, we seek to obtain a set of fragments $B = \{b_i\}$, where each fragment $b_i$ is a chain of edgelets, $b_i = \{e_{ik}\}_{k=1}^{n_i}$. Each edgelet $e_{ik} = \{p_{ik}, \theta_{ik}\}$ is a particle which embeds both location $p_{ik} \in \mathbb{R}^2$ and orientation $\theta_{ik} \in [0, 2\pi)$ information.
Figure 2: The local motion vector is estimated for each contour in isolation by selectively comparing orientation energies across frames. (a) A T-junction of the two-bar example showing the contour orientation for this motion analysis. (b) The other frame. (c) The relevant orientation energy along the boundary fragment, both for the 2nd frame. A Gaussian pdf is fit to estimate flow, weighted by the oriented energy. (d) Visualization of the Gaussian pdf. The possible contour motions are unaffected by the occluding contour at a different orientation and no spurious motion is detected at this junction.
We use H4 and G4 steerable filters [3] to filter the image and obtain orientation energy per pixel.
These filters are selected because they describe the orientation energies well, even at corners. For each pixel we find the maximum-energy orientation and check whether it is a local maximum within a slice perpendicular to this orientation. If it is, and the maximum energy is above a threshold T1, we call this point a primary boundary point. We collect a pool of primary boundary points after running this test for all the pixels. We find the primary boundary point with the maximum orientation energy in the pool and do bidirectional contour tracking, consisting of prediction and projection steps. In the prediction step, the current edgelet generates a new one by following its orientation with a certain step size. In the projection step, the orientation is locally maximized both in the orientation bands and within a small spatial window. The tracking is stopped if the energy is below a threshold T2 or if the turning angle is above a threshold. The primary boundary points that are close to the tracked trajectory are removed from the pool. This process is repeated until the pool is empty. The two thresholds T1 and T2 play the same roles as those in Canny edge detection [2]. While the boundary tracker should stop at sharp corners, it can turn around and continue tracking. We run a post-process to break the boundaries by detecting points of local curvature maxima which exceed a curvature threshold.
3 Edgelet Tracking with Uncertainties
We next break the boundary contours into very short edgelets and obtain the probabilities, based on the local motion of the boundary fragment, for the motion vector at each edgelet. We cannot use conventional algorithms, such as Lucas-Kanade [5], for local motion estimation since they rely on corners. The orientation $\theta_{ik}$ for each edgelet was obtained during boundary fragment extraction. We obtain the motion vector by finding the spatial offsets of the edgelet which match the orientation energy along the boundary fragment in this orientation.
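The corner-breaking post-process from Section 2 (splitting a tracked boundary at local maxima of curvature above a threshold) can be sketched as follows. This is a simplified illustration using discrete turning angles as the curvature measure; the function name and threshold value are ours, not from the paper:

```python
import numpy as np

def break_at_corners(points, turn_thresh=0.5):
    """Split an ordered boundary (n x 2 array of positions) at sharp corners.

    A corner is an interior point whose turning angle is a local maximum
    and exceeds turn_thresh (radians).
    """
    v = np.diff(points, axis=0)                  # segment vectors
    ang = np.arctan2(v[:, 1], v[:, 0])           # segment orientations
    # absolute turning angle at each interior point, wrapped to [-pi, pi]
    turn = np.abs(np.angle(np.exp(1j * np.diff(ang))))
    cuts = [k + 1 for k in range(len(turn))
            if turn[k] > turn_thresh
            and (k == 0 or turn[k] >= turn[k - 1])
            and (k == len(turn) - 1 or turn[k] >= turn[k + 1])]
    pieces, start = [], 0
    for c in cuts:                               # the corner point is shared by both pieces
        pieces.append(points[start:c + 1])
        start = c
    pieces.append(points[start:])
    return pieces
```

For example, an L-shaped polyline would be split into two fragments meeting at the 90-degree corner, which both resulting pieces share as an endpoint.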
We fit a Gaussian distribution $N(\mu_{ik}, \Sigma_{ik})$ to the flow, weighted by the orientation energy in the window. The mean and covariance matrix are added to the edgelet: $e_{ik} = \{p_{ik}, \theta_{ik}, \mu_{ik}, \Sigma_{ik}\}$. This procedure is illustrated in Figure 2. Grouping the boundary fragments allows the motion uncertainties to be resolved. We next discuss the mathematical model of grouping as well as the computational approach.
4 Boundary Fragment Grouping and Motion Estimation
4.1 Two Equivalent Representations for Fragment Grouping
The essential part of our model is to find the connections between the boundary fragments. There are two possible representations for grouping. One representation is the connection of each end of the boundary fragment. We formulate the probability of this connection to model the local saliency of contours. The other, equivalent representation is a chain of fragments that forms a contour, on which global statistics are formulated, e.g. structural saliency [16]. Similar local and global modeling of contour saliency was proposed in [14]; in [7], both edge saliency and curvilinear continuity were used to extract closed contours from static images. In [15], contour ends are grouped using loopy belief propagation to interpret contours. The connections between fragment ends are modeled by switch variables. For each boundary fragment $b_i$, we use a binary variable $\{0, 1\}$ to denote the two ends of the fragment, i.e. $b_i^{(0)} = e_{i1}$ and $b_i^{(1)} = e_{i,n_i}$. Let switch variable $S(i, t_i) = (j, t_j)$ denote the connection from $b_i^{(t_i)}$ to $b_j^{(t_j)}$.
Figure 3: A simple example illustrating switch variables, reversibility and fragment chains. The colored arrows show the switch variables. The empty circle indicates end 0 and the filled one indicates end 1. (a) Three boundary fragments. Theoretically $b_1^{(0)}$ can connect to any of the other ends, including itself, (b).
However, the switch variable is exclusive, i.e. there is only one connection to $b_1^{(0)}$, and reversible, i.e. if $b_1^{(0)}$ connects to $b_3^{(0)}$, then $b_3^{(0)}$ should also connect to $b_1^{(0)}$, as shown in (c). Figures (d) and (e) show two of the legal contour groupings for the boundary fragments: two open contours and a closed loop contour.
This connection is exclusive, i.e. each end of the fragment should either connect to one end of another fragment, or simply have no connection. An exclusive switch is further called reversible, i.e. if $S(i, t_i) = (j, t_j)$, then $S(j, t_j) = (i, t_i)$, or in a more compact form

$S(S(i, t_i)) = (i, t_i).$  (1)

When there is no connection to $b_i^{(t_i)}$, we simply set $S(i, t_i) = (i, t_i)$. We use the binary function $\delta[S(i, t_i) - (j, t_j)]$ to indicate whether there is a connection between $b_i^{(t_i)}$ and $b_j^{(t_j)}$. The set of all the switches is denoted as $S = \{S(i, t_i) \mid i = 1{:}N,\ t_i = 0, 1\}$. We say $S$ is reversible if every switch variable satisfies Eqn. (1). The reversibility of switch variables is shown in Figure 3 (b) and (c). From the values of the switch variables we can obtain contours, which are chains of boundary fragments. A fragment chain is defined as a series of end points $c = \{(b_{i_1}^{(x_1)}, b_{i_1}^{(\bar{x}_1)}), \cdots, (b_{i_m}^{(x_m)}, b_{i_m}^{(\bar{x}_m)})\}$, where $\bar{x}$ denotes the end opposite to $x$. The chain is specified by the fragment labels $\{i_1, \cdots, i_m\}$ and end labels $\{x_1, \cdots, x_m\}$. It can be either open or closed. The order of the chain is determined by the switch variables. Each end appears in the chain at most once. The notation of a chain is not unique. Two open chains are identical if the fragment and end labels are reversed. Two closed chains are identical if they match each other by rotating one of them. These identities are guaranteed by the reversibility of the switch variables. A set of chains $C = \{c_i\}$ can be uniquely extracted based on the values of the reversible switch variables, as illustrated in Figure 3 (d) and (e).
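The extraction of chains from a reversible switch configuration can be sketched directly. This is a hypothetical implementation (names are ours): `S` maps each fragment end `(i, t)` to the end it connects to, with a self-mapping marking a free (unconnected) end:

```python
def extract_chains(S, n_fragments):
    """Extract fragment chains (contours) from reversible switch variables.

    S : dict mapping each end (i, t) to the end (j, t') it connects to;
        S[(i, t)] == (i, t) means the end is unconnected (a free end).
    Returns a list of chains, each a list of (fragment, entry-end) pairs.
    """
    visited = set()

    def walk(i, t):
        chain = []
        while True:
            visited.add(i)
            chain.append((i, t))            # enter fragment i at end t
            j, tj = S[(i, 1 - t)]           # exit at the other end, follow the switch
            if (j, tj) == (i, 1 - t):       # free end: open chain terminates
                return chain
            if (j, tj) == chain[0]:         # back at the start: closed chain
                return chain
            i, t = j, tj

    chains = []
    for i in range(n_fragments):            # open chains start at free ends
        for t in (0, 1):
            if S[(i, t)] == (i, t) and i not in visited:
                chains.append(walk(i, t))
    for i in range(n_fragments):            # remaining fragments lie on closed loops
        if i not in visited:
            chains.append(walk(i, 0))
    return chains
```

Reversibility guarantees that each end is visited at most once, so every fragment ends up in exactly one chain, matching the uniqueness claim above.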
4.2 The Graphical Model
Given the observation $O$ (the two images) and the boundary fragments $B$, we want to estimate the flow vectors $V = \{v_i\}$, $v_i = \{v_{ik}\}$, where each $v_{ik}$ is associated with edgelet $e_{ik}$, and the grouping variables $S$ (switches) or equivalently $C$ (fragment chains). Since the grouping variable $S$ plays an essential role in the problem, we shall first infer $S$ and then infer $V$ based on $S$.
4.2.1 The Graph for Boundary Fragment Grouping
We use two equivalent representations for boundary grouping, switch variables and chains. We use $\delta[S(S(i, t_i)) - (i, t_i)]$ for each end to enforce the reversibility. Suppose otherwise $S(i_1, t_{i_1}) = S(i_2, t_{i_2}) = (j, t_j)$ for $i_1 \neq i_2$. Let $S(j, t_j) = (i_1, t_{i_1})$ without loss of generality; then $\delta[S(S(i_2, t_{i_2})) - (i_2, t_{i_2})] = 0$, which means that the switch variables are not reversible. We use a function $\lambda(S(i, t_i); B, O)$ to measure the distribution of $S(i, t_i)$, i.e. how likely $b_i^{(t_i)}$ connects to the end of another fragment. Intuitively, two ends should be connected if:
⋄ Motion similarity: the distributions of the motion of the two end edgelets are similar;
⋄ Curve smoothness: the illusory boundary connecting the two ends is smooth;
⋄ Contrast consistency: the brightness contrast at the two ends is consistent.
We write $\lambda(\cdot)$ as a product of three terms, one enforcing each criterion. We shall follow the example in Figure 4 to simplify the notation, where the task is to compute $\lambda(S(1, 0) = (2, 0))$.
Figure 4: An illustration of local saliency computation. (a) Without loss of generality we assume the two ends to be $b_1^{(0)}$ and $b_2^{(0)}$. (b) The KL divergence between the distributions of flow vectors is used to measure the motion similarity. (c) An illusory boundary $\gamma$ is generated by minimizing the energy of the curve. The sum of squared curvatures is used to measure the curve smoothness.
(d) The means of the local patches located at the two ends are extracted, i.e. $h_{11}$ and $h_{12}$ from $b_1^{(0)}$, and $h_{21}$ and $h_{22}$ from $b_2^{(0)}$, to compute contrast consistency.
The first term is the KL divergence between the two Gaussian distributions of the flow vectors,

$\exp\{-\alpha_{KL}\, KL(N(\mu_{11}, \Sigma_{11}), N(\mu_{21}, \Sigma_{21}))\},$  (2)

where $\alpha_{KL}$ is a scaling factor. The second term is the local saliency measure on the illusory boundary $\gamma$ that connects the two ends. The illusory boundary is simply generated by minimizing the energy of the curve. The saliency is defined as

$\exp\left\{-\alpha_\gamma \int_\gamma \left(\frac{d\theta}{ds}\right)^2 ds\right\},$  (3)

where $\theta(s)$ is the slope along the curve, and $\frac{d\theta}{ds}$ is the local curvature [16]. $\alpha_\gamma$ is a scaling factor. The third term is computed by extracting the means of the local patches located at the two ends,

$\exp\left\{-\frac{d_{\max}}{2\sigma_{\max}^2} - \frac{d_{\min}}{2\sigma_{\min}^2}\right\},$  (4)

where $d_1 = (h_{11} - h_{21})^2$, $d_2 = (h_{12} - h_{22})^2$, $d_{\max} = \max(d_1, d_2)$, and $d_{\min} = \min(d_1, d_2)$. $\sigma_{\max} > \sigma_{\min}$ are the scale parameters. $h_{11}, h_{12}, h_{21}, h_{22}$ are the means of the pixel values of the four patches located at the two end points. For self connection we simply set a constant value: $\lambda(S(i, t_i) = (i, t_i)) = \tau$.
We use a function $\psi(c_i; B, O)$ to model the structural saliency of contours. It was discovered in [10] that convex occluding contours are more salient, and additional T-junctions along the contour may increase or decrease the occlusion perception. Here we simply enforce that a contour should have no self-intersection: $\psi(c_i; B, O) = 1$ if there is no self-intersection and $\psi(c_i; B, O) = 0$ otherwise. Thus, the (discrete) graphical model favoring the desired fragment grouping is

$\Pr(S; B, O) = \frac{1}{Z_S} \prod_{i=1}^{N} \prod_{t_i=0}^{1} \lambda(S(i, t_i); B, O)\, \delta[S(S(i, t_i)) - (i, t_i)] \cdot \prod_{j=1}^{M} \psi(c_j; B, O),$  (5)

where $Z_S$ is a normalization constant. Note that this model measures both the switch variables $S(i, t_i)$ for local saliency and the fragment chains $c_i$ to enforce global structural saliency.
4.2.2 Gaussian MRF on Flow Vectors
Given the fragment grouping, we model the flow vectors $V$ as a Gaussian Markov random field (GMRF).
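The KL divergence between two Gaussians in term (2) has a standard closed form, $KL(N_0 \| N_1) = \frac{1}{2}[\operatorname{tr}(\Sigma_1^{-1}\Sigma_0) + (\mu_1-\mu_0)^T\Sigma_1^{-1}(\mu_1-\mu_0) - d + \ln(\det\Sigma_1/\det\Sigma_0)]$. A minimal helper (ours, not the authors' code):

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) in closed form."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    S0, S1 = np.asarray(S0, float), np.asarray(S1, float)
    d = mu0.size
    S1inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + diff @ S1inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

Note the divergence is asymmetric; the model only needs a similarity score, so either direction (or a symmetrized version) could be plugged into term (2).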
The edgelet displacement within each boundary fragment should be smooth and match the observation along the fragment. The probability density is formulated as

$\phi(v_i; b_i) = \prod_{k=1}^{n_i} \exp\{-(v_{ik} - \mu_{ik})^T \Sigma_{ik}^{-1} (v_{ik} - \mu_{ik})\} \prod_{k=1}^{n_i-1} \exp\{-\frac{1}{2\sigma^2} \|v_{ik} - v_{i,k+1}\|^2\},$  (6)

where $\mu_{ik}$ and $\Sigma_{ik}$ are the motion parameters of each edgelet estimated in Sect. 3. We use $V(i, t_i)$ to denote the flow vector of end $t_i$ of fragment $b_i$. We define $V(S(i, t_i)) = V(j, t_j)$ if $S(i, t_i) = (j, t_j)$. Intuitively the flow vectors of two connected ends should be similar, or mathematically

$\varphi(V(i, t_i), V(S(i, t_i))) = \begin{cases} 1 & \text{if } S(i, t_i) = (i, t_i), \\ \exp\{-\frac{1}{2\sigma^2} \|V(i, t_i) - V(S(i, t_i))\|^2\} & \text{otherwise}. \end{cases}$  (7)

The (continuous) graphical model of the flow vectors is therefore defined as

$\Pr(V|S; B) = \frac{1}{Z_V} \prod_{i=1}^{N} \phi(v_i; b_i) \prod_{t_i=0}^{1} \varphi(V(i, t_i), V(S(i, t_i))),$  (8)

where $Z_V$ is a normalization constant. When $S$ is given, this is a GMRF which can be solved by least squares.
4.3 Inference
Having defined the graphical model to favor the desired motion and grouping interpretations, we need to find the state parameters that best explain the image observations. The natural decomposition of $S$ and $V$ in our graphical model,

$\Pr(V, S; B, O) = \Pr(S; B, O) \cdot \Pr(V|S; B, O),$  (9)

(where $\Pr(S; B, O)$ and $\Pr(V|S; B, O)$ are defined in Eqn. (5) and (8) respectively) lends itself to two-step inference. We first infer the boundary grouping $S$, and then infer $V$ based on $S$. The second step simply solves a least-squares problem since $\Pr(V|S; B, O)$ is a GMRF. This approach does not globally optimize Eqn. (9), but it results in a reasonable solution because $V$ strongly depends on $S$. The density function $\Pr(S; B, O)$ is not a random field, so we use importance sampling [6] to obtain the marginal distributions $\Pr(S(i, t_i); B, O)$. The proposal density of each switch variable is set to be

$q(S(i, t_i) = (j, t_j)) \propto \frac{1}{Z_q} \lambda(S(i, t_i) = (j, t_j))\, \lambda(S(j, t_j) = (i, t_i)),$  (10)

where $\lambda(\cdot)$ has been normalized to sum to 1 for each end.
We found that this bidirectional measure is crucial for taking valid samples. To sample the proposal density, we first randomly select a boundary fragment, and connect to other fragments based on $q(S(i, t_i))$ to form a contour (a chain of boundary fragments). Each end is sampled only once, to ensure reversibility. This procedure is repeated until no fragment is left. In the importance step we run the binary function $\psi(c_i)$ to check that each contour has no self-intersection. If $\psi(c_i) = 0$ then this sample is rejected. The marginal distributions are estimated from the samples. Lastly, the optimal grouping is obtained by replacing random sampling with selecting the maximum-probability connection over the estimated marginal distributions. The number of samples needed depends on the number of fragments. In practice we find that $n^2$ samples are sufficient for $n$ fragments.
5 Experimental Results
Figure 6 shows the boundary extraction, grouping, and motion estimation results of our system for both real and synthetic examples1. All the results are generated using the same parameter settings. The algorithm is implemented in MATLAB, and the running time varies from ten seconds to a few minutes, depending on the number of the boundary fragments found in the image. The two-bar example in Figure 1 (a) yields fourteen detected boundary fragments in Figure 6 (a) and two contours in (b). The estimated motion matches the ground truth at the T-junctions. The fragments belonging to the same contour are plotted in the same color and the illusory boundaries are synthesized as shown in (c). The boundaries are warped according to the estimated flow and displayed in (d). The hallucinated illusory boundaries in frames 1 (c) and 2 (d) are plausible amodal completions. The second example is the Kanizsa square, where the frontal white square moves toward the bottom right. Twelve fragments are detected in (a) and five contours are grouped in (b).
The estimated motion and generated illusory boundary also match the ground truth and human perception. Notice that the arcs tend to connect to other ones if we do not impose the structural saliency $\psi(\cdot)$. We apply our system to a video of a dancer (Figure 5 (a) and (b)). In this stimulus the right leg moves downwards, but there is a weak occluding boundary at the intersection of the legs.
1 The results can be viewed online: http://people.csail.mit.edu/celiu/contourmotions/
Figure 5: Input images for the non-synthetic examples of Figure 6: (a) Dancer frame 1; (b) Dancer frame 2; (c) Chair frame 1; (d) Chair frame 2. The dancer’s right leg is moving downwards and the chair is rotating (note the changing space between the chair’s arms).
Eleven boundary fragments are extracted in (a) and five contours are extracted in (b). The estimated motion (b) matches the ground truth. The hallucinated illusory boundaries in (c) and (d) correctly connect the occluded boundary of the right leg and the invisible boundary of the left leg. The final row shows challenging images of a rotating chair (Figure 5 (c) and (d)), also showing proper contour completion and motion analysis. Thirty-seven boundary fragments are extracted and seven contours are grouped. To complete the occluded contours of this image would be nearly impossible working only from a static image. Exploiting motion as well as static information, our system is able to complete the contours properly. Note that traditional motion analysis algorithms fail to estimate motion for these examples (see supplementary videos) and would thus also fail at correctly grouping the objects based on motion cues.
6 Conclusion
We propose a novel boundary-based representation to estimate motion under the challenging visual conditions of moving textureless objects. Ambiguous local motion measurements are resolved through a graphical model relating edgelets, boundary fragments, completed contours, and their motions.
Contours are grouped and their motions analyzed simultaneously, leading to the correct handling of otherwise spurious occlusion and T-junction features. The motion cues help the contour completion task, allowing completion of contours that would be difficult or impossible using only low-level information in a static image. A motion analysis algorithm such as this one that correctly handles featureless contour motions is an essential element in a visual system’s toolbox of motion analysis methods. References [1] M. J. Black and D. J. Fleet. Probabilistic detection and tracking of motion boundaries. International Journal of Computer Vision, 38(3):231–245, 2000. [2] J. Canny. A computational approach to edge detection. IEEE Trans. Pat. Anal. Mach. Intel., 8(6):679–698, Nov 1986. [3] W. T. Freeman and E. H. Adelson. The design and use of steerable filters. IEEE Trans. Pat. Anal. Mach. Intel., 13(9):891–906, Sep 1991. [4] B. K. P. Horn and B. G. Schunck. Determing optical flow. Artificial Intelligence, 17:185–203, 1981. [5] B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 674–679, 1981. [6] D. Mackay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003. [7] S. Mahamud, L. Williams, K. Thornber, and K. Xu. Segmentation of multiple salient closed contours from real images. IEEE Trans. Pat. Anal. Mach. Intel., 25(4):433–444, 2003. [8] D. Martin, C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Pat. Anal. Mach. Intel., 26(5):530–549, May 2004. [9] J. McDermott. Psychophysics with junctions in real images. Perception, 33:1101–1127, 2004. [10] J. McDermott and E. H. Adelson. The geometry of the occluding contour and its effect on motion interpretation. Journal of Vision, 4(10):944–954, 2004. [11] J. McDermott and E. H. 
Adelson. Junctions and cost functions in motion interpretation. Journal of Vision, 4(7):552–563, 2004. (a) Extracted boundaries (b) Estimated flow (c) Frame 1 (d) Frame 2 Figure 6: Experimental results for some synthetic and real examples. The same parameter settings were used for all examples. Column (a): Boundary fragments are extracted using our boundary tracker. The red dots are the edgelets and the green ones are the boundary fragment ends. Column (b): Boundary fragments are grouped into contours and the flow vectors are estimated. Each contour is shown in its own color. Columns (c): the illusory boundaries are generated for the first and second frames. The gap between the fragments belonging to the same contour are linked exploiting both static and motion cues in Eq. (5). [12] S. J. Nowlan and T. J. Sejnowski. A selection model for motion processing in area mt primates. The Journal of Neuroscience, 15(2):1195–1214, 1995. [13] R. Raskar, K.-H. Tan, R. Feris, J. Yu, and M. Turk. Non-photorealistic camera: depth edge detection and stylized rendering using multi-flash imaging. ACM Trans. Graph. (SIGGRAPH), 23(3):679–688, 2004. [14] X. Ren, C. Fowlkes, and J. Malik. Scale-invariant contour completion using conditional random fields. In Proceedings of International Conference on Computer Vision, pages 1214–1221, 2005. [15] E. Saund. Logic and MRF circuitry for labeling occluding and thinline visual contours. In Advances in Neural Information Processing Systems 18, pages 1153–1160, 2006. [16] A. Shahua and S. Ullman. Structural saliency: the detection of globally salient structures using a locally connected network. In Proceedings of International Conference on Computer Vision, pages 321–327, 1988. [17] J. Shi and C. Tomasi. Good features to track. In IEEE Conference on Computer Vision and Pattern Recognition, pages 593–600, 1994. [18] Y. Weiss and E. H. Adelson. 
Perceptually organized EM: A framework for motion segmentaiton that combines information about form and motion. Technical Report 315, M.I.T Media Lab, 1995.
Learning to be Bayesian without Supervision

Martin Raphan
Courant Inst. of Mathematical Sciences
New York University
raphan@cims.nyu.edu

Eero P. Simoncelli
Center for Neural Science, and Courant Inst. of Mathematical Sciences
New York University
eero.simoncelli@nyu.edu

Abstract

Bayesian estimators are defined in terms of the posterior distribution. Typically, this is written as the product of the likelihood function and a prior probability density, both of which are assumed to be known. But in many situations, the prior density is not known, and is difficult to learn from data since one does not have access to uncorrupted samples of the variable being estimated. We show that for a wide variety of observation models, the Bayes least squares (BLS) estimator may be formulated without explicit reference to the prior. Specifically, we derive a direct expression for the estimator, and a related expression for the mean squared estimation error, both in terms of the density of the observed measurements. Each of these prior-free formulations allows us to approximate the estimator given a sufficient amount of observed data. We use the first form to develop practical nonparametric approximations of BLS estimators for several different observation processes, and the second form to develop a parametric family of estimators for use in the additive Gaussian noise case. We examine the empirical performance of these estimators as a function of the amount of observed data.

1 Introduction

Bayesian methods are widely used throughout engineering for estimating quantities from corrupted measurements. Those that minimize the mean squared error (known as Bayes least squares, or BLS) are particularly widespread. These estimators are usually derived assuming explicit knowledge of the observation process (expressed as the conditional density of the observation given the quantity to be estimated), and the prior density over that quantity.
Despite its appeal, this approach is often criticized for its reliance on knowledge of the prior distribution, since the true prior is usually not known, and in many cases one does not have data drawn from this distribution with which to approximate it. In this case, the prior must be learned from the same observed measurements that are available in the estimation problem. In general, learning the prior distribution from the observed data presents a difficult, if not impossible, task, even when the observation process is known. In the commonly used "empirical Bayesian" approach [1], one assumes a parametric family of densities, whose parameters are obtained by fitting the data. This prior is then used to derive an estimator that may be applied to the data. If the true prior is not a member of the assumed parametric family, however, such estimators can perform quite poorly.

An estimator may also be obtained in a supervised setting, in which one is provided with many pairs containing a corrupted observation along with the true value of the quantity to be estimated. In this case, selecting an estimator is a classic regression problem: find a function that best maps the observations to the correct values, in a least squares sense. Given a large enough number of training samples, this function will approach the BLS estimate, and should perform well on new samples drawn from the same distribution as the training samples. In many real-world situations, however, one does not have access to such training data.

In this paper, we examine the BLS estimation problem in a setting that lies between the two cases described above. Specifically, we assume the observation process (but not the prior) is known, and we assume unsupervised training data, consisting only of corrupted observations (without the correct values). We show that for many observation processes, the BLS estimator may be written directly in terms of the observation density.
We also show a dual formulation, in which the BLS estimator may be obtained by minimizing an expression for the mean squared error that is written only in terms of the observation density. A few special cases of the first formulation appear in the empirical Bayes literature [2], and of the second formulation in another branch of the statistical literature concerned with improvement of estimators [3, 4, 5]. Our work serves to unify these prior-free methods within a linear algebraic framework, and to generalize them to a wider range of cases. We develop practical nonparametric approximations of estimators for several different observation processes, demonstrating empirically that they converge to the BLS estimator as the amount of observed data increases. We also develop a parametric family of estimators for use in the additive Gaussian case, and examine their empirical convergence properties. We expect such BLS estimators, constructed from corrupted observations without explicit knowledge of, assumptions about, or samples from the prior, to prove useful in a variety of real-world estimation problems faced by machine or biological systems that must learn from examples.

2 Bayes least squares estimation

Suppose we make an observation, $Y$, that depends on a hidden variable $X$, where $X$ and $Y$ may be scalars or vectors. Given this observation, the BLS estimate of $X$ is simply the conditional expectation of the posterior density, $E\{X \mid Y = y\}$. If the prior distribution on $X$ is $P_X$, and the likelihood function is $P_{Y|X}$, then this can be written using Bayes' rule as

$$E\{X \mid Y = y\} = \int x\, P_{X|Y}(x|y)\, dx = \frac{\int x\, P_{Y|X}(y|x)\, P_X(x)\, dx}{P_Y(y)}, \quad (1)$$

where the denominator is the distribution of the observed data:

$$P_Y(y) = \int P_{Y|X}(y|x)\, P_X(x)\, dx. \quad (2)$$

If we know $P_X$ and $P_{Y|X}$, we can calculate this explicitly. Alternatively, if we do not know $P_X$ or $P_{Y|X}$, but are given independent identically distributed (i.i.d.)
samples $(X_n, Y_n)$ drawn from the joint distribution of $(X, Y)$, then we can solve for the estimator $f(y) = E\{X \mid Y = y\}$ nonparametrically, or we could choose a parametric family of estimators $\{f_\theta\}$ and choose $\theta$ to minimize the empirical squared error:

$$\hat\theta = \arg\min_\theta \frac{1}{N} \sum_{n=1}^{N} |f_\theta(Y_n) - X_n|^2 .$$

However, in many situations, one does not have access to $P_X$, or to samples drawn from $P_X$.

2.1 Prior-free reformulation of the BLS estimator

In many cases, the BLS estimate may be written without explicit reference to the prior distribution. We begin by noting that in Eq. (1), the prior appears only in the numerator

$$N(y) = \int P_{Y|X}(y|x)\, x\, P_X(x)\, dx .$$

This equation may be viewed as a composition of linear transformations of the function $P_X(x)$,

$$N(y) = (A \circ X)\{P_X\}(y),$$

where $X\{f\}(x) = x f(x)$, and the operator $A$ computes an inner product with the likelihood function, $A\{f\}(y) = \int P_{Y|X}(y|x)\, f(x)\, dx$. Similarly, Eq. (2) may be viewed as the linear transformation $A$ applied to $P_X(x)$. If the linear transformation $A$ is 1-1, and we restrict $P_Y$ to lie in the range of $A$, then we can write the numerator as a linear transformation of $P_Y$ alone, without explicit reference to $P_X$:

$$N(y) = (A \circ X \circ A^{-1})\{P_Y\}(y) = L\{P_Y\}(y). \quad (3)$$

In the discrete case, $P_Y(y)$ and $N(y)$ are each vectors, $A$ is a matrix containing $P_{Y|X}$, $X$ is a diagonal matrix containing values of $x$, and $\circ$ is matrix multiplication. This allows us to write the BLS estimator as

$$E\{X \mid Y = y\} = \frac{L\{P_Y\}(y)}{P_Y(y)}. \quad (4)$$

Note that if we wished to calculate $E\{X^n \mid Y\}$, then Eq. (3) would be replaced by $(A \circ X^n \circ A^{-1})\{P_Y\} = (A \circ X \circ A^{-1})^n\{P_Y\} = L^n\{P_Y\}$. By linearity of the conditional expectation, we may extend this to any polynomial function (and thus to any function that can be approximated with a polynomial):

$$E\Big\{\sum_{k=-N}^{M} c_k X^k \,\Big|\, Y = y\Big\} = \frac{\sum_{k=-N}^{M} c_k\, L^k\{P_Y\}(y)}{P_Y(y)} .$$

In the definition of the operator $L$, $A^{-1}$ effectively inverts the observation process, recovering $P_X$ from $P_Y$. In many situations, this operation will not be well-behaved.
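As a preview of how Eq. (4) is used in practice: for additive Gaussian noise, the operator worked out in section 3 (see Table 1) reduces, in the scalar zero-mean case, to $E\{X \mid Y = y\} = y + \sigma^2 \frac{d}{dy} \ln P_Y(y)$. The sketch below is our own illustration, not the paper's implementation: it approximates the score term with a crude histogram, and the Gaussian prior is used only to verify the answer, never by the estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0      # noise std: the observation process is assumed known
sigma_x = 2.0    # prior std: used only to check the answer, never by the estimator

# Unsupervised data: we only ever see Y = X + W, with W ~ N(0, sigma^2).
X = rng.normal(0.0, sigma_x, size=500_000)
Y = X + rng.normal(0.0, sigma, size=X.size)

# Scalar additive-Gaussian case of Eq. (4):
#   E[X | Y = y] = y + sigma^2 * d/dy ln P_Y(y).
# Crude histogram estimate of the score d/dy ln P_Y.
counts, edges = np.histogram(Y, bins=100, range=(-10.0, 10.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
score = np.gradient(np.log(counts + 1e-12), centers)

y0 = 3.0
f_prior_free = y0 + sigma**2 * score[np.argmin(np.abs(centers - y0))]

# Ground truth for this Gaussian prior is linear shrinkage of y0.
f_bls = y0 * sigma_x**2 / (sigma_x**2 + sigma**2)
print(f_prior_free, f_bls)   # both should be near 2.4
```

With a Gaussian prior the optimal estimator happens to be linear, so the two printed values agree to within sampling error; the point of the prior-free formula is that it makes no such linearity assumption.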
For example, in the case of additive Gaussian noise, $A^{-1}$ is a deconvolution operation, which is inherently unstable at high frequencies. The usefulness of Eq. (4) comes from the fact that in many cases the composite operation $L$ may be written explicitly, even when the inverse operation is poorly defined or unstable. In section 3, we develop examples of operators $L$ for a variety of observation processes.

2.2 Prior-free reformulation of the mean squared error

In some cases, developing a stable nonparametric approximation of the ratio in Eq. (4) may be difficult. However, the linear operator formulation of the BLS estimator also leads to a dual expression for the mean squared error that does not depend explicitly on the prior, and this may be used to select an optimal estimator from a parametric family of estimators. Specifically, for any estimator $f_\theta(Y)$ parameterized by $\theta$, the mean squared error may be decomposed into two orthogonal terms:

$$E\{|f_\theta(Y) - X|^2\} = E\{|f_\theta(Y) - E(X|Y)|^2\} + E\{|E(X|Y) - X|^2\}.$$

The second term is the minimum possible MSE, obtained when using the optimal estimator. Since it does not depend on $f_\theta$, it is irrelevant for optimizing $\theta$. The first term may be expanded as

$$E\{|f_\theta(Y) - E(X|Y)|^2\} = E\{|f_\theta(Y)|^2\} - 2\, E\{f_\theta(Y)\, E(X|Y)\} + E\{|E(X|Y)|^2\}.$$

Again, the last expectation does not depend on $f_\theta$. Using the prior-free formulation of the previous section, the cross term may be written as

$$E\{f_\theta(Y)\, E(X|Y)\} = E\Big\{f_\theta(Y)\, \frac{L\{P_Y\}(Y)}{P_Y(Y)}\Big\} = \int f_\theta(y)\, \frac{L\{P_Y\}(y)}{P_Y(y)}\, P_Y(y)\, dy = \int f_\theta(y)\, L\{P_Y\}(y)\, dy = \int L^*\{f_\theta\}(y)\, P_Y(y)\, dy = E\{L^*\{f_\theta\}(Y)\},$$

where $L^*$ is the dual operator of $L$ (in the discrete case, $L^*$ is the matrix transpose of $L$). Combining all of the above, we have:

$$\arg\min_\theta E\{|f_\theta(Y) - X|^2\} = \arg\min_\theta E\{|f_\theta(Y)|^2 - 2 L^*\{f_\theta\}(Y)\}, \quad (5)$$

where the expectation on the right is over the observation variable, $Y$. In practice, we can solve for an optimal $\theta$ by minimizing the sample mean of this quantity:

$$\hat\theta = \arg\min_\theta \frac{1}{N} \sum_{n=1}^{N} \big( |f_\theta(Y_n)|^2 - 2 L^*\{f_\theta\}(Y_n) \big), \quad (6)$$

where $\{Y_n\}$ is a set of observed data. Again, this does not require any knowledge of, or samples drawn from, the prior $P_X$.

3 Example estimators

In general, it can be difficult to obtain the operator $L$ directly from the definition in Eq. (3), because inversion of the operator $A$ could be unstable or undefined. Instead, a solution may often be obtained by noting that the definition implies $L \circ A = A \circ X$, or, equivalently,

$$L\{P_{Y|X}(y|x)\} = x\, P_{Y|X}(y|x).$$

This is an eigenfunction equation: for each value of $x$, the conditional density $P_{Y|X}(y|x)$ must be an eigenfunction (eigenvector, for discrete variables) of the operator $L$, with associated eigenvalue $x$.

Consider a standard example, in which the variable of interest is corrupted by independent additive noise: $Y = X + W$. The conditional density is $P_{Y|X}(y|x) = P_W(y-x)$. We wish to find an operator which, when applied to this conditional density (viewed as a function of $y$), gives

$$L\{P_W(y-x)\} = x\, P_W(y-x) \quad (7)$$

for all $x$. Subtracting $y\, P_W(y-x)$ from both sides gives

$$M\{P_W(y-x)\} = -(y-x)\, P_W(y-x), \quad (8)$$

where $M\{f\}(y) = L\{f\}(y) - y\, f(y)$ is a linear shift-invariant operator (acting in $y$). Taking Fourier transforms and using the convolution and differentiation properties gives

$$\hat M(\omega)\, \hat P_W(\omega) = -\widehat{(y P_W)}(\omega) = -i \nabla_\omega \hat P_W(\omega), \quad (9)$$

so that

$$\hat M(\omega) = \frac{-i \nabla_\omega \hat P_W(\omega)}{\hat P_W(\omega)} = -i \nabla_\omega \ln \hat P_W(\omega). \quad (10)$$

This gives us the linear operator

$$L\{f\}(y) = y\, f(y) - \mathcal{F}^{-1}\big[\, i \nabla_\omega \ln \hat P_W(\omega)\, \hat f(\omega)\, \big](y), \quad (11)$$

where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform. Note that throughout this discussion $X$ and $W$ played symmetric roles. Thus, in cases with known prior density and unknown additive noise density, one can formulate the estimator entirely in terms of the prior.

Our prior-free estimator methodology is quite general, and can often be applied to more complicated observation processes. In order to give some sense of the diversity of forms that can arise, Table 1 provides additional examples. References for the specific cases that we have found in the statistics literature are provided in the table.

| Obs. process | Obs. density $P_{Y|X}(y|x)$ | Numerator $N(y) = L\{P_Y\}(y)$ |
| Discrete | $A$ | $(A \circ X \circ A^{-1})\{P_Y\}(y)$ |
| Gen. additive | $P_W(y-x)$ | $y P_Y(y) - \mathcal{F}^{-1}\big[\, i \nabla_\omega \ln \hat P_W(\omega)\, \hat P_Y(\omega) \big](y)$ |
| Add. Gaussian [6]/[4]* | $\exp\{-\tfrac12 (y-x-\mu)^T \Lambda^{-1} (y-x-\mu)\} / \sqrt{|2\pi\Lambda|}$ | $(y-\mu)\, P_Y(y) + \Lambda \nabla_y P_Y(y)$ |
| Add. Poisson | $\tfrac{\lambda^k e^{-\lambda}}{k!}\, \delta(y - x - ks)$ | $y P_Y(y) - \lambda s\, P_Y(y-s)$ |
| Add. Laplacian | $\tfrac{1}{2\alpha}\, e^{-|(y-x)/\alpha|}$ | $y P_Y(y) + 2\alpha^2 \{P_W' \star P_Y\}(y)$ |
| Add. Cauchy | $\tfrac{1}{\pi}\, \tfrac{\alpha}{(\alpha(y-x))^2 + 1}$ | $y P_Y(y) - \big\{ \tfrac{1}{2\pi\alpha y} \star P_Y \big\}(y)$ |
| Add. uniform | $\tfrac{1}{2a}$ if $|y-x| \le a$, $0$ otherwise | $y P_Y(y) + a \sum_k \mathrm{sgn}(k)\, P_Y(y - ak) - \tfrac12 \int P_Y(\tilde y)\, \mathrm{sgn}(y - \tilde y)\, d\tilde y$ |
| Add. random # of components | $P_W(y-x)$, where $W \sim \sum_{k=0}^{K} W_k$, $W_k$ i.i.d. $(P_c)$, $K \sim \mathrm{Poiss}(\lambda)$ | $y P_Y(y) - \lambda \{(y P_c) \star P_Y\}(y)$ |
| Disc. exponential [2]/[5]* | $h(x)\, g(n)\, x^{n}$ | $\tfrac{g(n)}{g(n+1)}\, P_Y(n+1)$ |
| Disc. inverse exponential [5]* | $h(x)\, g(n)\, x^{-n}$ | $\tfrac{g(n)}{g(n-1)}\, P_Y(n-1)$ |
| Cnt. exponential [2]/[3]* | $h(x)\, g(y)\, e^{T(y) x}$ | $\tfrac{g(y)}{T'(y)}\, \tfrac{d}{dy}\big\{ \tfrac{P_Y(y)}{g(y)} \big\}$ |
| Cnt. inverse exponential [3]* | $h(x)\, g(y)\, e^{T(y)/x}$ | $g(y) \int_{-\infty}^{y} \tfrac{T'(\tilde y)}{g(\tilde y)}\, P_Y(\tilde y)\, d\tilde y$ |
| Poisson [7]/[5]* | $\tfrac{x^n e^{-x}}{n!}$ | $(n+1)\, P_Y(n+1)$ |
| Gaussian scale mixture | $\tfrac{1}{\sqrt{2\pi x}}\, e^{-y^2/2x}$ | $-E_Y\{Y;\, Y < y\}$ |
| Laplacian scale mixture | $\tfrac{1}{x}\, e^{-y/x}$; $x, y > 0$ | $P_Y\{Y > y\}$ |

Table 1: Prior-free estimation formulas. Functions written with hats, or in terms of $\omega$, denote Fourier transforms; $n$ replaces $y$ for discrete distributions. Bracketed numbers are references for the operators $L$, with * marking references for the dual operator $L^*$ used in the parametric formulation.

4 Simulations

4.1 Non-parametric examples

Since each of the prior-free estimators discussed above relies on approximating values from the observed data, the behavior of such estimators should approach that of the BLS estimator as the number of data samples grows. In Fig. 1, we examine the behavior of three non-parametric prior-free estimators based on Eq. (4). The first case corresponds to data drawn independently from a binary source, observed through a process in which bits are switched with probability 1/4. The estimator does not know the binary distribution of the source (a "fair coin" in our simulation), but does know the bit-switching probability. For this estimator, we approximate $P_Y$ using a simple histogram, and then use the matrix version of the linear operator in Eq. (3).
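This binary case can be sketched in a few lines. The channel matrix and flip probability follow the text; the implementation details below are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary source (a fair coin, unknown to the estimator) observed through a
# channel that flips bits with probability 1/4 (known to the estimator).
flip = 0.25
A = np.array([[1 - flip, flip],
              [flip, 1 - flip]])        # A[y, x] = P(Y = y | X = x)

X = rng.integers(0, 2, size=100_000)                  # hidden bits
Y = np.where(rng.random(X.size) < flip, 1 - X, X)     # observed bits

# Empirical observation density, then Eqs. (3)-(4) in matrix form:
#   L = A @ diag(x) @ inv(A);   E[X | Y = y] = (L P_Y)[y] / P_Y[y].
P_Y = np.bincount(Y, minlength=2) / Y.size
L = A @ np.diag([0.0, 1.0]) @ np.linalg.inv(A)
f = (L @ P_Y) / P_Y

print(np.round(f, 3))   # close to the true BLS values [0.25, 0.75]
```

For a fair coin, Bayes' rule gives $P(X = 1 \mid Y = 1) = 0.75$, so the prior-free estimate, computed from the observed bits alone, should converge to $[0.25, 0.75]$.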
We characterize the behavior of this estimator as a function of the number of data points, N, by running many Monte Carlo simulations for each N and plotting the mean improvement in MSE of the prior-free estimator (relative to the ML estimator, which is the identity function), the mean improvement using the conventional BLS estimation function, $E\{X \mid Y = y\}$, assuming the prior density is known, and the standard deviations of the improvements taken over our simulations.

Fig. 1: Empirical convergence of the prior-free estimator to the optimal BLS solution, as a function of the number of observed samples of Y. For each number of observations, each estimator is simulated many times. Black dashed lines show the improvement of the prior-free estimator, averaged over simulations, relative to the ML estimator. The white line shows the mean improvement using the conventional BLS solution, $E\{X \mid Y = y\}$, assuming the prior density is known. Gray regions denote ± one standard deviation. (a) Binary noise (10,000 simulations for each number of observations); (b) additive Gaussian noise (1,000 simulations); (c) Poisson noise (1,000 simulations).

Figure 1b shows similar results for additive Gaussian noise, with SNR replacing MSE; the signal density is a generalized Gaussian with exponent 0.5. In this case, we compute Eq. (4) using a more sophisticated approximation method, as described in [8]: we fit a local exponential model, similar to that used in [9], to the data in bins, with the binwidth adaptively selected so that the product of the number of points in the bin and the squared binwidth is constant.
This binwidth selection procedure, analogous to adaptive binning procedures developed in the density estimation literature [10], provides a reasonable tradeoff between bias and variance, and converges to the correct answer for any well-behaved density [8]. Note that convergence in this case is substantially slower than in the binary case, as might be expected given that we are dealing with a continuous density rather than a single scalar probability, but the variance of the estimates is very low.

Figure 1c shows the case of estimating a randomly varying rate parameter that governs an inhomogeneous Poisson process. The prior on the rate (unknown to the estimator) is exponential. The observed values Y are the (integer) counts drawn from the Poisson process. In this case the histogram of observed data was used to obtain a naive approximation of $P_Y(n)$; improved performance would be expected from a more sophisticated approximation of the ratio of densities.

4.2 Parametric examples

In this section we discuss the empirical behavior of the parametric approach applied to the additive Gaussian case. From the derivation in section 3, restricted to the scalar case, we have $L^* = y - \sigma^2 \frac{d}{dy}$. In this particular case, it is easier to represent the estimator as $f(y) = y + g(y)$. Substituting into Eq. (5) gives

$$E\{|f(Y) - X|^2\} = E\{|g(Y)|^2 + 2\sigma^2 g'(Y)\} + \mathrm{const},$$

where the constant does not depend on $g$. Therefore, if we have a parametric family $\{g_\theta\}$ of such functions, and are given data $\{Y_n\}$, we can try to minimize

$$\frac{1}{N} \sum_{n=1}^{N} \big( |g_\theta(Y_n)|^2 + 2\sigma^2 g_\theta'(Y_n) \big). \quad (12)$$

This expression, known as Stein's unbiased risk estimate (SURE) [4], favors estimators $g_\theta$ that have small amplitude and highly negative derivatives at the data values. This is intuitively sensible: the resulting estimators will "shrink" the data toward regions of high probability. Recently, an expression similar to Eq.
(12) was used as a criterion for density estimation in cases where the normalizing constant, or partition function, is difficult to obtain [11]. The prior-free approach we are discussing provides an interpretation for this procedure: the optimal density is the one which, when converted into an estimator using the formula in Table 1 for the additive Gaussian case, gives the best MSE. This may be extended to any of the linear operators in Table 1.

As an example, we parametrize $g$ as a linear combination of nonlinear "bump" functions,

$$g_\theta(y) = \sum_k \theta_k\, g_k(y), \quad (13)$$

where the functions $g_k$, illustrated in Fig. 2, are of the form

$$g_k(y) = y \cos^2\Big( \frac{1}{\alpha}\, \mathrm{sgn}(y) \log_2(|y|/\sigma + 1) - \frac{k\pi}{2} \Big).$$

Fig. 2: Example bump functions, used for the linear parameterization of estimators in Figs. 3(a) and 3(b).

Recently, linear parameterizations have been used in conjunction with Eq. (12) for image denoising in the wavelet domain [12]. We can substitute Eq. (13) into Eq. (12) and pick the coefficients $\{\theta_k\}$ to minimize this criterion, which is a quadratic function of the coefficients. For our simulations, we used a generalized Gaussian prior with exponent 0.5.

Fig. 3: Empirical convergence of the parametric prior-free method to the optimal BLS solution, as a function of the number of data observations, for three different parameterized estimators: (a) 3 bumps; (b) 15 bumps; (c) soft thresholding. All cases use a generalized Gaussian prior (exponent 0.5) and assume additive Gaussian noise.

Figure 3 shows the empirical behavior of these "SURE-bump" estimators when using three bumps (Fig. 3a) and fifteen bumps (Fig. 3b), illustrating the bias-variance tradeoff inherent in a fixed parameterization. Three bumps behave fairly well, though the asymptotic behavior for large amounts of data is biased and thus falls short of ideal.
Fifteen bumps asymptote correctly but have very large variance for small amounts of data (overfitting). For comparison purposes, we have included the behavior of SURE thresholding [13], in which Eq. (12) is used to choose an optimal threshold, $\theta$, for the function $f_\theta(y) = \mathrm{sgn}(y)\, (|y| - \theta)_+$. As can be seen, SURE thresholding shows significant asymptotic bias, although its variance behavior is nearly ideal.

5 Discussion

We have reformulated the Bayes least squares estimation problem for a setting in which one knows the observation process and has access to many observations. We do not assume the prior density is known, nor do we assume access to samples from the prior. Our formulation thus acts as a bridge between a conventional Bayesian setting, in which one derives the optimal estimator from known prior and likelihood functions, and a data-oriented regression setting, in which one learns the optimal estimator from samples of the prior paired with corrupted observations of those samples. In many cases, the prior-free estimator can be written explicitly, and we have shown a number of examples to illustrate the diversity of estimators that can arise under different observation processes. For three simple cases, we developed implementations and demonstrated that they converge to optimal BLS estimators as the amount of data grows. We also derived a prior-free formulation of the MSE, which allows selection of an estimator from a parametric family, and have shown simulations for a linear family of estimators in the additive Gaussian case. These simulations demonstrate the potential of this approach, which holds particular appeal for real-world systems (machine or biological) that must learn priors from environmental observations. Both methods can be enhanced by using data-adaptive parameterizations or fitting procedures in order to properly trade off bias and variance (see, for example, [8]).
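For concreteness, the SURE-thresholding idea can be reproduced in miniature: the threshold is chosen by minimizing the empirical SURE criterion computed from observations alone. The sparse signal below is an illustrative assumption (not the paper's generalized Gaussian prior), and the criterion uses the standard SURE form $|g|^2 + 2\sigma^2 g'$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0

# Illustrative sparse signal (an assumption for this sketch): mostly zeros,
# with occasional large values, observed in additive Gaussian noise.
X = rng.normal(0.0, 4.0, size=100_000) * (rng.random(100_000) < 0.1)
Y = X + rng.normal(0.0, sigma, size=X.size)

def empirical_sure(theta, y, sigma):
    """Empirical SURE (up to an additive constant) for soft thresholding
    f(y) = sign(y) * max(|y| - theta, 0).  With g(y) = f(y) - y we have
    |g(y)|^2 = min(|y|, theta)^2 and g'(y) = -1 wherever |y| < theta."""
    return np.mean(np.minimum(np.abs(y), theta) ** 2
                   - 2.0 * sigma**2 * (np.abs(y) < theta))

thetas = np.linspace(0.0, 4.0, 81)
risks = np.array([empirical_sure(t, Y, sigma) for t in thetas])
theta_star = thetas[risks.argmin()]
print(theta_star, risks.min())
```

A negative minimum risk indicates an improvement over the identity estimator (theta = 0, for which the criterion is exactly zero), so the selected threshold shrinks most pure-noise observations to zero while largely preserving the sparse signal values.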
It is of particular interest to develop incremental implementations, which would update the estimator based on incoming observations. This would further enhance the applicability of this approach for systems that must learn to do optimal estimation from corrupted observations.

Acknowledgments

This work was partially funded by the Howard Hughes Medical Institute, and by New York University through a McCracken Fellowship to MR.

References

[1] G. Casella, "An introduction to empirical Bayes data analysis," Amer. Statist., vol. 39, pp. 83–87, 1985.
[2] J. S. Maritz and T. Lwin, Empirical Bayes Methods. Chapman & Hall, 2nd ed., 1989.
[3] J. Berger, "Improving on inadmissible estimators in continuous exponential families with applications to simultaneous estimation of gamma scale parameters," The Annals of Statistics, vol. 8, pp. 545–571, 1980.
[4] C. M. Stein, "Estimation of the mean of a multivariate normal distribution," Annals of Statistics, vol. 9, pp. 1135–1151, November 1981.
[5] J. T. Hwang, "Improving upon standard estimators in discrete exponential families with applications to Poisson and negative binomial cases," The Annals of Statistics, vol. 10, pp. 857–867, 1982.
[6] K. Miyasawa, "An empirical Bayes estimator of the mean of a normal population," Bull. Inst. Internat. Statist., vol. 38, pp. 181–188, 1956.
[7] H. Robbins, "An empirical Bayes approach to statistics," Proc. Third Berkeley Symposium on Mathematical Statistics, vol. 1, pp. 157–163, 1956.
[8] M. Raphan and E. P. Simoncelli, "Empirical Bayes least squares estimation without an explicit prior," NYU Courant Inst. Tech. Report, 2007.
[9] C. R. Loader, "Local likelihood density estimation," Annals of Statistics, vol. 24, no. 4, pp. 1602–1618, 1996.
[10] D. W. Scott, Multivariate Density Estimation: Theory, Practice, and Visualization. John Wiley, 1992.
[11] A. Hyvärinen, "Estimation of non-normalized statistical models by score matching," Journal of Machine Learning Research, vol. 6, pp. 695–709, 2005.
[12] F. Luisier, T. Blu, and M. Unser, "SURE-based wavelet thresholding integrating inter-scale dependencies," in Proc. IEEE Int'l Conf. on Image Processing, (Atlanta, GA, USA), pp. 1457–1460, October 2006.
[13] D. Donoho and I. Johnstone, "Adapting to unknown smoothness via wavelet shrinkage," J. American Stat. Assoc., vol. 90, December 1995.
Causal inference in sensorimotor integration

Konrad P. Körding
Department of Physiology and PM&R
Northwestern University
Chicago, IL 60611
konrad@koerding.com

Joshua B. Tenenbaum
Massachusetts Institute of Technology
Cambridge, MA 02139
jbt@mit.edu

Abstract

Many recent studies analyze how data from different modalities can be combined. Often this is modeled as a system that optimally combines several sources of information about the same variable. However, it has long been realized that this information combination depends on the interpretation of the data. Two cues that are perceived by different modalities can have different causal relationships: (1) they can both have the same cause, in which case we should fully integrate both cues into a joint estimate; (2) they can have distinct causes, in which case information should be processed independently. In many cases we will not know if there is one joint cause or two independent causes that are responsible for the cues. Here we model this situation as a Bayesian estimation problem. We are thus able to explain some experiments on visual-auditory cue combination as well as some experiments on visual-proprioceptive cue integration. Our analysis shows that the problem solved by people when they combine cues to produce a movement is much more complicated than is usually assumed, because they need to infer the causal structure that is underlying their sensory experience.

1 Introduction

Our nervous system is constantly integrating information from many different sources into a unified percept. When we interact with objects, for example, we see them and feel them, and often enough we can also hear them. All these pieces of information need to be combined into a joint percept. Traditionally, cue combination is formalized as a simple weighted combination of estimates coming from each modality (Fig 1A). According to this view, the nervous system acquires these weights through some learning process [1].
Many experiments have shown that various manipulations, such as degrading the quality of the feedback from one modality, can vary the weights. Recently, these experiments have been phrased in a Bayesian framework, assuming that all the cues are about one given variable. Research often focuses on exploring in which coordinate system the problem is being solved [2, 3] and how much weight is given to each variable as a function of the uncertainty in each modality and the prior [4, 5, 6, 7, 8]. Throughout this paper we consider cue combination to estimate a position. Cue combination may, however, be equally important when estimating many other variables, such as the nature of a material, the weight of an object, or the relevant aspects of a social situation.

These studies focus on the way information is combined and assume it is known that there is just one cause for the cues. However, in many cases people cannot be certain of the causal structure. If two cues share a common cause (as in Fig 1B), they should clearly be combined. In general, however, there may either be one common cause or two separate causes (Fig 1C). In such cases people cannot know which of the two models to use, and have to estimate the causal structure of the problem along with the parameter values. The issue of causal inference has long been an exciting question in the psychological community [9, 10, 11, 12].

Figure 1: Different causal structures of two cues. Bold circles indicate the variables the subjects are interested in. A) The traditional view is sketched, where the estimate is a weighted combination of the estimates of each modality. B) One cause can be responsible for both cues. In this case the cues should be combined to infer about the single cause. C) In many cases people will be unable to know if one common cause or two independent causes are responsible for the cues. In that case people will have to estimate which causal structure is present from the properties of their sensory stimuli.

2 Cue combination: one common cause

A large number of recent studies have interpreted the results from cue combination studies in a Bayesian framework [13]. We discuss the case of visuo-auditory integration, as the statistical relations are identical in other cue combination cases. A statistical generative model for the data is formulated (see Figure 1B). It is acknowledged that if a signal is coming from a specific position, the signal received by the nervous system in each modality will be noisy. If the real position of a stimulus is $x_{real}$, the nervous system will not be able to directly know this variable, but the visual modality will obtain a noisy estimate thereof, $x_{vis}$. Typically it is assumed that the visually perceived position is a noisy version of the real position, $x_{vis} = x_{real} + \mathrm{noise}$. A statistical treatment thus results in $p(x_{vis}|x_{real}) = N(x_{real} - x_{vis}, \sigma_{vis})$, where $\sigma_{vis}$ is the standard deviation of the noise introduced by the visual modality, and $N(\mu, \sigma)$ stands for a Gaussian distribution with mean $\mu$ and standard deviation $\sigma$. If two cues are available, for example vision and audition, then it is assumed that both cues, $x_{vis}$ and $x_{aud}$, provide noisy observations of the relevant variable $x_{real}$. Using the assumption that each modality provides an independent measurement of $x_{real}$, Bayes' rule yields:

$$p(x_{real}|x_{vis}, x_{aud}) \propto p(x_{real})\, p(x_{vis}, x_{aud}|x_{real}) \quad (1)$$
$$= p(x_{real})\, p(x_{vis}|x_{real})\, p(x_{aud}|x_{real}) \quad (2)$$

The estimate that minimizes the mean squared error is then:

$$\hat x = \gamma\, x_{vis} + (1 - \gamma)\, x_{aud}, \quad (3)$$

where $\gamma = \sigma_{aud}^2 / (\sigma_{aud}^2 + \sigma_{vis}^2)$.
The optimal solution is thus a weighting of the estimates from both modalities, where the weights are a function of the variances. Given the variances of the cues, it is therefore possible to predict the weighting people should optimally use. Over the last couple of years various studies have described this approach. These papers assumed that there are two sources of information about one and the same variable, and have shown that in psychophysical experiments people often exhibit this kind of optimal integration, with weights that can be predicted from the variances [13, 14, 15, 4, 16]. However, in all these cases ample evidence is provided to the subjects that just one single variable is involved in the experiment. For example, in [4] a stimulus is felt and seen at exactly the same position.

3 Combination of visual and auditory cues: uncertainty about the causal structure

Here we consider the range of experiments where people hear a tone and simultaneously see a visual stimulus that may or may not come from the same position. Subjects are asked to estimate which direction the tone is coming from and to point in that direction with a motor response, placing this experiment in the realm of sensorimotor integration. To optimally estimate where the tone is coming from, people need to infer the causal structure (Fig 1C) and decide if they should assume a single cause or two causes. Based on this calculation they can proceed to estimate where the tone is coming from. The Schirillo group has extensively tested human behavior in such situations [17, 18]. For different distances between the visual and the auditory stimulus, they analyzed the strategies people use to estimate the position of the auditory stimuli (see Figure 2). It has long been realized that integration of different cues should only occur if the cues have the same cause [9, 10, 8, 19].
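The forced-fusion rule of Eqs. (1)-(3) is straightforward to compute. In the sketch below, the auditory noise value follows the estimate used later in the text ($\sigma_{aud} = 7.6$ deg), while the visual noise value and the cue positions are illustrative assumptions:

```python
# Eq. (3): reliability-weighted fusion under a single assumed cause.
# sigma_aud follows the estimate used later in the text; sigma_vis and
# the cue positions are illustrative assumptions.
sigma_vis, sigma_aud = 1.0, 7.6
x_vis, x_aud = -5.0, 0.0            # perceived positions, in degrees

gamma = sigma_aud**2 / (sigma_aud**2 + sigma_vis**2)   # weight on vision
x_hat = gamma * x_vis + (1.0 - gamma) * x_aud
var_fused = 1.0 / (1.0 / sigma_vis**2 + 1.0 / sigma_aud**2)

print(round(gamma, 3), round(x_hat, 3), round(var_fused, 3))
```

Because vision is far more reliable here, the fused estimate sits almost on top of the visual cue, and the fused variance is smaller than that of either cue alone, which is the standard signature of optimal integration.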
3.1 Loss function and probabilistic model

To model this choice phenomenon, we assume that the estimate should be as precise as possible, i.e., that the following error function is minimized:

$$E(x_{estimated}) = \int p(x_{true}|\mathrm{cues})\, (x_{true} - x_{estimated})^2\, dx_{true} \quad (4)$$

We assume that subjects have obtained a prior estimate, $p_{same}$, of how likely it is that a visual and an auditory signal that appear near-instantaneously have the same cause. In everyday life this will not be constant but will depend on temporal delays, visual experience, context, and many other factors. In the experiments we consider, all these factors are held constant, so we can use a constant $p_{same}$. We assume that positions are drawn from a Gaussian distribution with width $\sigma_{pos}$.

3.2 Inference

The probability that the two signals come from the same source will only weakly depend on the spatial prior, but will mostly depend on the distance $\Delta_{av} = x_{aud} - x_{vis}$ between the visually and auditorily perceived positions. We thus obtain:

$$\frac{p(same|\Delta_{av})}{p(different|\Delta_{av})} = \frac{p_{same}\, p(\Delta_{av}|same)}{(1 - p_{same})\, p(\Delta_{av}|different)} \quad (5)$$

Using $p(same|\Delta_{av}) + p(different|\Delta_{av}) = 1$, we can readily calculate the probability $p(same|\Delta_{av})$ that the two signals come from the same source. Using Equation 4, we can then calculate the optimal solution, which is:

$$\hat x = p(same|\Delta_{av})\, \hat x_{same} + (1 - p(same|\Delta_{av}))\, \hat x_{different} \quad (6)$$

We know the optimal estimate in the same-cause case already from Equation 3, and in the different-cause case the optimal estimate relies exclusively on the auditory signal. We furthermore assume that the position sensed by the sensory system is a noisy version, $x_{observed} = \hat x + \epsilon$, where $\epsilon$ is drawn from a Gaussian with zero mean and standard deviation $\sigma_{motor}$. We are thus able to calculate the optimal estimate and the expected uncertainty given our assumptions.

3.3 Model parameter estimation

The prior $p_{same}$ characterizes how likely it is a priori, given the temporal delay and other experimental parameters, that two signals have the same source.
As this characterizes a property of everyday life we cannot readily estimate this parameter, but instead fit it to the gain (α) data. To compare the predictions of our model with the experimental data we need to know the values of the variables that characterize our model. Vision is much more precise than audition in such situations. We estimate the relevant uncertainties as follows. In both auditory and visual trials the noise has two sources, motor noise and sensory noise. Even if people knew perfectly where a stimulus was coming from, they would make small errors in pointing because their motor system is not perfect. We assume that the noise in visual-only trials is essentially exclusively motor noise, stemming from motor errors and memory errors (σ_vis = 0.01). Choosing a smaller σ_vis does not change the results to any meaningful degree. From figure 2 of the experiments by Hairston et al. [17], where movements are made towards unimodally presented cues, we obtain σ_motor = 2.5 deg, and because variances add linearly, σ_aud = √(8² − 2.5²) = 7.6 deg.

Figure 2: Uncertainty if one or two causes are relevant. Experimental data reprinted with permission from [18]. A) The gain α, the relative weight of vision for the estimation of the position of the auditory signal, plotted as a function of the spatial disparity, the distance between visual and auditory stimulus. A gain value of α = 100% implies that subjects only use visual information. A negative α means that on average subjects point away from the visual stimulus. Different models of human behavior make different predictions. B) A sketch explaining the finding of negative gains.
The visual stimulus is always at −5 deg (solid line) and the auditory stimulus is always straight ahead at 0 deg (dotted line). Visual perception is very low-noise and the perceived position x_vis is shown as red dots (each dot is one trial). Auditory perception is noisy and the perceived auditory position x_aud is shown as black dots. In the white area, where subjects perceive two causes, the average position of perceived auditory signals is further to the right. This explains the negative bias in reporting: when perceiving two causes, subjects are more likely to have heard a signal to the right. Those trials that are not unified thus exhibit a selection bias that confers the negative gain. C) The measured standard deviations of the human pointing behavior as a function of the spatial disparity, together with the standard deviations predicted by the model. Same colors as in A).

We want to remark that this estimation is only approximate, because people can use priors and combine them with likelihoods and objective functions for making their estimates even in the unimodal case. We also want to emphasize that we in no way tried to tune these parameters to lead to better fits of the data. From the specifications of the experiments we know that the distribution of auditory sources has a width of 20 deg relative to the fixation point, and we assume that this width is known to the subjects from repeated trials.

3.4 Comparison of the model with the experimental data

Figure 2A shows a comparison between the gains (α) measured in the experiment of [17] and the gains predicted by the Bayesian model; p_same = 0.57 was fitted to the data. We assume that the model reports a single cause whenever one source is a posteriori more probable than two sources. The model predicts the counterintuitive finding that the trials where people inferred two causes exhibit negative gain. Figure 2B explains why negative gains are found.
The model explains 99% of the variance of the gain with just one free parameter, p_same. Very similar effects are found if we fix p_same at 0.5, assuming that fusion and segregation are equally likely; this parameter-free model still explains 98% of the variance. The simple full-combination model (shown in green), which does not allow for two sources, completely fails to predict any of these effects even when all the standard deviations are fitted, and thus explains 0% of the variance of the gains. The results clearly rule out a strategy in which all cues are always combined. On some trials, noise in the auditory signal will make it appear as if the auditory signal is very close to the visual signal. In this case the system will infer that both have the same source, and part of the reported high gain for the fused cases arises because noise already perturbed the auditory signal towards the visual one. However, on other trials the auditory signal will be randomly perturbed away from the visual signal. In this case the system will infer that the two signals very likely have different sources. Because both the estimation of position and the estimation of identity are based on the same noisy signal, the two processes are not independent of one another. This lack of independence causes the difference between the fusion and the no-fusion case.

3.5 Maximum A Posteriori over causal structure

In the derivations above we assumed that people are fully Bayesian, in the sense that they consider both possible structures for cue integration and integrate over them to arrive at an optimal decision. An alternative would be a Maximum A Posteriori (MAP) approach: people could first choose the appropriate structure, one source or two, and then use only that structure for subsequent decisions. Figure 2A shows that this model (we fitted p_same = 0.72) also predicts the main effect well and explains 98% of the variance of the gains.
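The difference between full model averaging and MAP structure selection can be made concrete with a small sketch. All names and parameter values below are illustrative; `sigma_diff` again stands in for the disparity width under the two-cause hypothesis.

```python
import math

def _gauss(x, s):
    return math.exp(-0.5 * (x / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def estimate(x_aud, x_vis, p_same, sigma_aud, sigma_vis, sigma_diff, rule):
    """Estimate the auditory position under full Bayes ('avg') or MAP ('map')."""
    delta = x_aud - x_vis
    sigma_same = math.hypot(sigma_aud, sigma_vis)
    num = p_same * _gauss(delta, sigma_same)
    ps = num / (num + (1.0 - p_same) * _gauss(delta, sigma_diff))
    w_vis = sigma_aud ** 2 / (sigma_aud ** 2 + sigma_vis ** 2)
    x_same = w_vis * x_vis + (1.0 - w_vis) * x_aud  # fused estimate
    x_diff = x_aud                                  # auditory-only estimate
    if rule == "map":
        return x_same if ps > 0.5 else x_diff       # commit to one structure
    return ps * x_same + (1.0 - ps) * x_diff        # average over structures
```

The MAP rule always returns one of the two structure-conditioned estimates, while model averaging lies between them; this is why the two strategies predict different response variability (Fig 2C).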
To test how the two models compare we looked at the standard deviations that had also been measured in the experiment of [17]. The fully Bayesian model explains 65% of the variance of the standard-deviation plot, whereas the MAP model explains 0% of the variance of that plot. This difference arises because the MAP model strongly underestimates the uncertainty in the single-cause case and strongly overestimates the uncertainty in the dual-cause case (Fig 2C). The Bayesian model, on the other hand, always considers that it could be wrong, leading to more variance in the single-cause case and less in the dual-cause case. Even the Bayesian system tends to predict overly large standard deviations in the case of two causes. This effect goes away if we assume that people underestimate the variance of the auditory source relative to the fixation spot (data not shown). A deeper analysis taking into account all the available data and its variance over subjects will be necessary to test whether a MAP strategy can be ruled out. The present analysis may lead to an understanding of the inference algorithm used by the nervous system. In summary, the problem of crossmodal integration is much more complicated than it seems, as it necessitates inference of the causal structure. People nevertheless solve this complicated problem in a way that can be understood as being close to optimal.

4 Combination of visual and proprioceptive cues

Typical experiments in movement psychophysics, where a virtual-reality display is used to disturb the perceived position of the hand, lead to an analogous problem. In these experiments subjects proprioceptively feel their hand somewhere, but they cannot see their hand; at the same time, they visually perceive a cursor somewhere.
Subjects again cannot be sure whether the seen cursor position and the felt hand position are cues about the same variable (hand = cursor) or whether each of them is independent and the experiment is simply deceiving them, leading to the same causal-structure inference problem described above. In this section we extend the model to also explain such sensorimotor integration. We model the studies by Sober and Sabes [5, 6] that inspired this work. In these experiments one hand moves a cursor to either the position of a visually displayed (v) target or the position of the other hand (p). People need to estimate two distinct variables: (1) the direction in which they are to move their arm, a visually perceived variable, the so-called movement vector (MV), and (2) a proprioceptively perceived variable, the configuration of their joints (J). Subjects obtain visual information about the position of the cursor and proprioceptive information from feeling the position of their hand. Traditionally it would have been assumed that the seen cursor position and the proprioceptively felt hand position are cues caused by one single variable, the hand. As a result, the position of the cursor would uniquely define the configuration of the joints and vice versa. As in the cue-combination case above, there should not be full cue combination; instead, each variable, (MV) and (J), should be estimated separately. In this experiment a situation is produced where the visual position of a cursor is different from the actual position of the right hand. Subjects are then asked to move their hand towards targets that lie in 8 concentric directions. The estimate of the movement vector affects movement direction in a way that is specific to the target direction. The estimate of the joint configuration affects movement direction irrespective of the target direction.
The experimental studies then report the gain α, the linear weight of vision on the estimates of (MV) and (J), in both the visual and the proprioceptive target conditions (figure 3A and B). If people inferred only one common cause, then the weight of vision should always be the same; the data thus indicate that subjects assume more than just one cause.

4.1 Coordinate systems

The probabilistic model that we use is identical to the model introduced above, with one exception: in the sensorimotor-integration case there is uncertainty about the alignment of the two coordinate systems. For example, if we held one arm under a table and were asked to indicate with the other arm where it is, we would make significant alignment errors. When using information from one coordinate system for an estimation in a different coordinate system, there is uncertainty about the alignment of the coordinate systems. This means that when we use visual information to estimate the position of a joint in joint space, our visual system appears to be noisier, and vice versa. As we are only interested in estimates along one dimension, we can model the uncertainty about the alignment of the coordinate systems as a one-dimensional Gaussian with width σ_trans. When using information from one modality to estimate a variable in the other coordinate system, we thus need to use σ²_effective = σ²_modality + σ²_trans. The two target conditions in the experiments, moving the cursor to a visual target (v) and moving the cursor to the position of the other hand (p), produce two different estimation problems. When we try to move a cursor to a visually displayed target, we must compute MV in visual space. If, to the contrary, we try to move a cursor with one hand to the position of the other hand, then we must compute MV in joint space. Loss functions, and therefore the necessary estimates, are thus defined in different spaces.
Altogether people are faced with 4 problems: they have to estimate (MV) and (J) in both the visual (v) condition and the proprioceptive (p) condition.

4.2 Probabilistic model

As above, we assume that visual and proprioceptive uncertainty lead to probability distributions in the respective spaces that are characterized by Gaussians of widths σ_vis and σ_prop. These variables are now defined in terms of position, not in terms of direction. Subjects are not asked whether they experience one or two causes. Under these circumstances it is only important how likely, on average, people find the two percepts to be unified (p_unified = p_same p(Δ_pv | same)). We assume that when moving the cursor to a visual target, the average squared deviation between cursor and target in visual space is minimized, and that when moving the cursor to a proprioceptive target, the average squared deviation between cursor and target in proprioceptive space is minimized. Apart from this difference, the derivation of the equations is identical to the one above for audio-visual integration. However, the results are not analyzed conditional on the inference of one or two causes but are averaged over these.

4.3 Tool use

Above we assumed that cursor and hand either have the same cause (the position of the hand) or different causes and are therefore unrelated. Another way of thinking about the Sober and Sabes experiments is in terms of tool use. The cursor could be seen as a tool that is displaced relative to our hand. The tip of the tool moves with our hand. As tools are typically short, the probability is largest that the tip of a tool is at the position of the hand, and this probability decays with increasing distance between the hand and the position of the tool. The distance between the tip of the tool and the hand is thus another random variable, assumed to be Gaussian with width σ_tool (see fig. 3E).
The minimal end-point-error solutions of this are:

α_MV,v = (σ²_prop + σ²_trans + σ²_tool) / (σ²_prop + σ²_vis + σ²_trans + σ²_tool)    (7)
α_J,v  = (σ²_prop + σ²_trans) / (σ²_prop + σ²_vis + σ²_trans + σ²_tool)    (8)
α_MV,p = (σ²_prop + σ²_tool) / (σ²_prop + σ²_vis + σ²_trans + σ²_tool)    (9)
α_J,p  = σ²_prop / (σ²_prop + σ²_vis + σ²_trans + σ²_tool)    (10)

We are thus able to predict the weights that people should use if they assume a causal relationship deriving from tool use.

4.4 Comparison of the model with the data

We add to the Bayesian model introduced above a part modeling the uncertainty about the alignment of the coordinate systems, and compare the results from this modified model with the data.

Figure 3: Cue combination in motor control, experiments from [6]. A) The estimated quantities. B) The two experimental conditions. C) The predictions from the model. D) The predictions obtained when using the estimates of a specialist. E) The tool-use model: the cursor will be close to the position of the hand. F) The predictions from a tool-use model. G) The predictions from a full-combination model.

The model has several parameters, most importantly the uncertainties of proprioception and of the coordinate transformation relative to the visual uncertainty. Another parameter is the probability of unification. All parameters are fit to the data. The model explains the data, which have a standard deviation of 0.32, with a residual standard deviation of only 0.08 (Figure 3C). Fitting 3 parameters to 4 data points can, however, be seen as major overfitting.
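Equations 7 through 10 can be evaluated directly from the four standard deviations. The sketch below is a straight transcription; the parameter values used in the test are illustrative, not the fitted values discussed in the text.

```python
def vision_gains(s_vis, s_prop, s_trans, s_tool):
    """Weight of vision for MV and J in the visual (v) and proprioceptive (p)
    target conditions, Eqs. (7)-(10)."""
    denom = s_prop ** 2 + s_vis ** 2 + s_trans ** 2 + s_tool ** 2
    return {
        "MV,v": (s_prop ** 2 + s_trans ** 2 + s_tool ** 2) / denom,
        "J,v": (s_prop ** 2 + s_trans ** 2) / denom,
        "MV,p": (s_prop ** 2 + s_tool ** 2) / denom,
        "J,p": s_prop ** 2 / denom,
    }
```

For any positive σ_tool and σ_trans the ordering α_MV,v > α_J,v > α_MV,p > α_J,p need not hold in general, but all four gains lie strictly between 0 and 1 and the MV gain always exceeds the J gain within each condition, matching the qualitative pattern in Figure 3.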
To avoid overfitting we guessed p_unified = 0.5 and asked one of our colleagues, Daniel Wolpert, for estimates. He estimated σ_vis = 1 cm, σ_prop = 3 cm, σ_trans = 5 cm. With these values we explain the data with a residual standard deviation of 0.13, capturing all main effects (Figure 3D). Another experimental modification in [6] deserves mentioning: the image of an arm is rendered on top of the cursor position. The experiment finds that this has the effect that people rely much more on vision for estimating their joint configuration. In our interpretation, the rendering of the arm makes it much more probable that the position of the visual display is actually the position of the hand, so p_unified would be much higher.

4.5 Analysis if subjects view a cursor as a tool

Another possible model, which seemed very likely to us, assumes that the cursor should appear somewhere close to the hand, modeling the cursor-hand relationship as another Gaussian variable (Fig 3E). We fit the 3 parameters of this model: the uncertainties of proprioception and of the coordinate transformation relative to the visual uncertainty, as well as the width of the Gaussian describing the tool. Figure 3F shows that this model, too, can fit the main results of the experiment. With a residual standard deviation of 0.14, however, it does worse than the parameter-free model above. If we take the values given by Daniel Wolpert (see above) and fit the value of σ_tool, we obtain a residual standard deviation of 0.28. This model of tool use thus seems to do worse than the model we introduced earlier. Sober and Sabes [5, 6] explain the finding that two variables are estimated by the fact that cortex exhibits two important streams of information processing, one for visual processing and the other for motor tasks [20]. The model we present here gives a reason for the estimation of distinct variables: if people see a cursor close to their hand, they do not assume that they actually see their hand.
The models that we introduced can be understood as special instantiations of a model where the cursor position relative to the hand is drawn from a general probability distribution.

5 Discussion

An impressive range of recent studies shows that people do not just estimate one variable in situations of cue combination [5, 6, 17, 18]. Here we have shown that the statistical problem that people solve in such situations involves an inference about the causal structure. People have uncertainty about the identity and number of relevant variables. The problem faced by the nervous system is similar to cognitive problems that occur in the context of causal induction. Many experiments show that people, and in particular infants, interpret events in terms of cause and effect [11, 21, 22]. The results presented here show that sensorimotor integration exhibits some of the factors that make human cognition difficult. Carefully studying and analyzing seemingly simple problems such as cue combination may provide a fascinating way of studying the human cognitive system in a quantitative fashion.

References
[1] Q. Haijiang, J. A. Saunders, R. W. Stone, and B. T. Backus. Demonstration of cue recruitment: change in visual appearance by means of pavlovian conditioning. Proc Natl Acad Sci U S A, 103(2):483–8, 2006.
[2] J. W. Krakauer, M. F. Ghilardi, and C. Ghez. Independent learning of internal models for kinematic and dynamic control of reaching. Nat Neurosci, 2(11):1026–31, 1999.
[3] R. Shadmehr and F. A. Mussa-Ivaldi. Adaptive representation of dynamics during learning of a motor task. J Neurosci, 14(5 Pt 2):3208–24, 1994.
[4] M. O. Ernst and M. S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(6870):429–33, 2002.
[5] S. J. Sober and P. N. Sabes. Multisensory integration during motor planning. J Neurosci, 23(18):6982–92, 2003.
[6] S. J. Sober and P. N. Sabes.
Flexible strategies for sensory integration during motor planning. Nat Neurosci, 8(4):490–7, 2005.
[7] K. P. Koerding and D. M. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427(6971):244–7, 2004.
[8] L. Shams, W. J. Ma, and U. Beierholm. Sound-induced flash illusion as an optimal percept. Neuroreport, 16(17):1923–7, 2005.
[9] E. Hirsch. The Concept of Identity. Oxford University Press, Oxford, 1982.
[10] A. Leslie, F. Xu, P. Tremoulet, and B. Scholl. Indexing and the object concept: "what" and "where" in infancy. Trends in Cognitive Sciences, 2:10–18, 1998.
[11] A. Gopnik, C. Glymour, D. M. Sobel, L. E. Schulz, T. Kushnir, and D. Danks. A theory of causal learning in children: causal maps and bayes nets. Psychol Rev, 111(1):3–32, 2004.
[12] T. L. Griffiths and J. B. Tenenbaum. From mere coincidences to meaningful discoveries. Cognition, 2006.
[13] Z. Ghahramani. Computation and psychophysics of sensorimotor integration. PhD thesis, Massachusetts Institute of Technology, 1995.
[14] R. A. Jacobs. Optimal integration of texture and motion cues to depth. Vision Res, 39(21):3621–9, 1999.
[15] R. J. van Beers, A. C. Sittig, and J. J. Gon. Integration of proprioceptive and visual position-information: An experimentally supported model. J Neurophysiol, 81(3):1355–64, 1999.
[16] D. Alais and D. Burr. The ventriloquist effect results from near-optimal bimodal integration. Curr Biol, 14(3):257–62, 2004.
[17] W. D. Hairston, M. T. Wallace, J. W. Vaughan, B. E. Stein, J. L. Norris, and J. A. Schirillo. Visual localization ability influences cross-modal bias. J Cogn Neurosci, 15(1):20–9, 2003.
[18] M. T. Wallace, G. E. Roberson, W. D. Hairston, B. E. Stein, J. W. Vaughan, and J. A. Schirillo. Unifying multisensory signals across time and space. Exp Brain Res, 158(2):252–8, 2004.
[19] U. Beierholm, S. Quartz, and L. Shams. Bayesian inference as a unifying model of auditory-visual integration and segregation.
In Proceedings of the Society for Neuroscience, 2005.
[20] M. A. Goodale, G. Kroliczak, and D. A. Westwood. Dual routes to action: contributions of the dorsal and ventral streams to adaptive behavior. Prog Brain Res, 149:269–83, 2005.
[21] R. Saxe, J. B. Tenenbaum, and S. Carey. Secret agents: inferences about hidden causes by 10- and 12-month-old infants. Psychol Sci, 16(12):995–1001, 2005.
[22] T. L. Griffiths and J. B. Tenenbaum. Structure and strength in causal induction. Cognit Psychol, 51(4):334–84, 2005.
A Kernel Subspace Method by Stochastic Realization for Learning Nonlinear Dynamical Systems

Yoshinobu Kawahara∗, Takehisa Yairi, Kazuo Machida
Dept. of Aeronautics & Astronautics, Research Center for Advanced Science and Technology, The University of Tokyo, Komaba 4-6-1, Meguro-ku, Tokyo, 153-8904 JAPAN
{kawahara,yairi,machida}@space.rcast.u-tokyo.ac.jp

Abstract

In this paper, we present a subspace method for learning nonlinear dynamical systems based on stochastic realization, in which state vectors are chosen using kernel canonical correlation analysis, and then state-space systems are identified through regression with the state vectors. We construct the theoretical underpinning and derive a concrete algorithm for nonlinear identification. The obtained algorithm needs no iterative optimization procedure and can be implemented on the basis of fast and reliable numerical schemes. The simulation result shows that our algorithm can express dynamics with a high degree of accuracy.

1 Introduction

Learning dynamical systems is an important problem in several fields, including engineering, physical science and social science. The objectives encompass a spectrum ranging from the control of target systems to the analysis of dynamic characterization, and for several decades, system identification for acquiring mathematical models from observed input-output data has been researched in numerous fields, such as system control. Dynamical systems are learned by, basically, two different approaches. The first approach is based on the principle of minimizing suitable distance functions between data and chosen model classes. Well-known and widely accepted examples of such functions are likelihood functions [1] and the average squared prediction errors of observed data. For multivariate models, however, this approach is known to have several drawbacks.
First, the optimization tends to lead to an ill-conditioned estimation problem because of over-parameterization, i.e., minimal parameterizations (so-called canonical forms) do not exist for multivariate systems. Second, the minimization, except in trivial cases, can only be carried out numerically using iterative algorithms. This often means that there is no guarantee of reaching a global minimum, and computational costs are high. The second approach is the subspace method, which involves geometric operations on subspaces spanned by the column or row vectors of certain block Hankel matrices formed from input-output data [2,3]. It is well known that subspace methods require no a priori choice of identifiable parameterizations and can be implemented by fast and reliable numerical schemes. The subspace method has been actively researched throughout the last few decades and several algorithms have been proposed; representative examples are based on the orthogonal decomposition of input-output data [2,4] and on stochastic realization using canonical correlation analysis [5]. Recently, nonlinear extensions have begun to be discussed for learning systems that cannot be modeled sufficiently with linear expressions. However, the nonlinear algorithms that have been proposed to date include only those in which models with specific nonlinearities are assumed [6] or those which need complicated nonlinear regression [7,8]. In this study, we extend the stochastic-realization-based subspace method [5] to the nonlinear regime by developing it on reproducing kernel Hilbert spaces [9], and derive a nonlinear subspace identification algorithm which can be executed by a procedure similar to that in the linear case.

∗URL: www.space.rcast.u-tokyo.ac.jp/kawahara/index e.html

The outline of this paper is as follows. Section 2 gives some theoretical materials for the subspace identification of dynamical systems with reproducing kernels.
In section 3, we give some approximations for deriving a practical algorithm, and then describe the algorithm concretely in section 4. Finally, an empirical result is presented in section 5, and we give conclusions in section 6.

Notation

Let x, y and z be random vectors; denote the covariance matrix of x and y by Σ_xy, and the conditional covariance matrix of x and y given z by Σ_xy|z. Let a be a vector in a Hilbert space, and B, C Hilbert spaces. Then denote the orthogonal projection of a onto B by a/B, and the oblique projection of a onto B along C by a/_C B. Let A be an m × n matrix; then L{A} := {Aα | α ∈ R^n} will be referred to as the column space and L{A′} := {A′α | α ∈ R^m} the row space of A. •′ denotes the transpose of a matrix •, and I_d ∈ R^{d×d} is the identity matrix.

2 Rationales

2.1 Problem Description and Some Definitions

Consider two discrete-time wide-sense stationary vector processes {u(t), y(t), t = 0, ±1, ...} with dimensions n_u and n_y, respectively. The first component u(t) models the input signal while the second component y(t) models the output of the unknown stochastic system, which we want to construct from observed input-output data as a nonlinear state-space system:

x(t + 1) = g(x(t), u(t)) + v,
y(t) = h(x(t), u(t)) + w,    (1)

where x(t) ∈ R^n is the state vector, and v and w are the system and observation noises. Throughout this paper, we shall assume that the joint process (u, y) is a stationary and purely nondeterministic full-rank process [3,5,10]. It is also assumed that the two processes are zero-mean and have finite joint covariance matrices. A basic step in solving this realization problem, which is also the core of the subspace identification algorithm presented later, is the construction of a state space of the system. In this paper, we will derive a practical algorithm for this problem based on stochastic realization with reproducing kernel Hilbert spaces.
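The conditional covariance Σ_xy|z used throughout the paper can be computed from samples via the Schur-complement formula Σ_xy|z = Σ_xy − Σ_xz Σ_zz⁻¹ Σ_zy. The helper below is a generic numerical sketch of that formula, not code from the paper.

```python
import numpy as np

def cond_cov(x, y, z):
    """Sample conditional covariance Sigma_xy|z = Sigma_xy - Sigma_xz Sigma_zz^{-1} Sigma_zy.
    x, y, z are arrays of shape (n_samples, dim)."""
    xc, yc, zc = (a - a.mean(axis=0) for a in (x, y, z))
    n = x.shape[0]
    s_xy = xc.T @ yc / n
    s_xz = xc.T @ zc / n
    s_zz = zc.T @ zc / n
    s_zy = zc.T @ yc / n
    return s_xy - s_xz @ np.linalg.solve(s_zz, s_zy)
```

Conditioning a variable on itself yields the zero matrix, a convenient sanity check for the implementation.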
We denote the joint input-output process w(t)′ = [y(t)′, u(t)′] ∈ R^{n_w} (n_w = n_u + n_y) and feature maps φ_u : U_t → F_u ∈ R^{n_φu}, φ_y : Y_t → F_y ∈ R^{n_φy} and φ_w : W_t → F_w ∈ R^{n_φw} with the Mercer kernels k_u, k_y and k_w, where U_t, Y_t and W_t are the Hilbert spaces generated by the second-order random variables u(t), y(t) and w(t), and F_y, F_u and F_w are the respective feature spaces. Moreover, we define the future output, future input and past input-output vectors in the feature spaces as

f^φ(t) := [φ_y(y(t))′, φ_y(y(t+1))′, ..., φ_y(y(t+l−1))′]′ ∈ R^{l n_φy},
u^φ_+(t) := [φ_u(u(t))′, φ_u(u(t+1))′, ..., φ_u(u(t+l−1))′]′ ∈ R^{l n_φu},
p^φ(t) := [φ_w(w(t−1))′, φ_w(w(t−2))′, ...]′ ∈ R^∞,    (2)

and the Hilbert spaces generated by these random variables as

P^φ_t = span{φ(w(τ)) | τ < t}, U^{φ+}_t = span{φ(u(τ)) | τ ≥ t}, Y^{φ+}_t = span{φ(y(τ)) | τ ≥ t}.    (3)

U^{φ−}_t and Y^{φ−}_t are defined similarly. These spaces are assumed to be closed with respect to the root-mean-square norm ∥ξ∥ := [E{ξ²}]^{1/2}, where E{·} denotes the expectation, and thus can be thought of as Hilbert subspaces of an ambient Hilbert space H^φ := U^φ ∨ Y^φ containing all linear functionals of the joint process in the feature spaces (φ_u(u), φ_y(y)).

2.2 Optimal Predictor in Kernel Feature Space

First, we require the following technical assumptions [3,5].

Figure 1: Optimal predictor f̂^φ(t) of the future output in feature space based on P^φ_t ∨ U^{φ+}_t.

ASSUMPTION 1. The input u is 'exogenous', i.e., there is no feedback from the output y to the input u.

ASSUMPTION 2. The input process is 'sufficiently rich'. More precisely, at each time t, the input space U_t has the direct sum decomposition U_t = U^−_t + U^+_t (U^−_t ∩ U^+_t = {0}).

Note that assumption 2 implies that the input process is purely nondeterministic and admits a spectral density matrix without zeros on the unit circle (i.e., coercive).
This is too restrictive in many practical situations, and we can instead assume only a persistently exciting (PE) condition of sufficiently high order and the finite dimensionality of an underlying "true" system from the outset. Then, we can give the following proposition, which enables us to develop a subspace method in feature space as in the linear case.

PROPOSITION 1. If assumptions 1 and 2 are satisfied, then the analogous conditions in the feature spaces hold: (1) there is no feedback from φ_y(y) to φ_u(u); (2) U^φ_t has the direct sum decomposition U^φ_t = U^{φ−}_t + U^{φ+}_t (U^{φ−}_t ∩ U^{φ+}_t = {0}).

PROOF. Condition (2) follows straightforwardly from assumption 2 and the properties of reproducing kernel Hilbert spaces. As U^+_t ⊥ Y^−_t | U^−_t (derived from assumption 1) and Y^−_t /(U^+_t ∨ U^−_t) = Y^−_t / U^−_t are equivalent, if the orthogonal complement of U_t is denoted by U^⊥_t, we obtain Y^−_t = U^−_t + U^⊥_t. Now, when representing Y^{−φ}_t using the input space in feature space U^φ_t and the orthogonal complement U^{⊥φ}_t, we can write Y^{−φ}_t = U^{−φ}_t + U^{⊥φ}_t, because U^φ_t = U^{−φ}_t + U^{φ+}_t from condition (2), U^+_t ⊥ U^⊥_t, and the properties of reproducing kernel Hilbert spaces. Therefore, U^{φ+}_t ⊥ Y^{−φ}_t | U^{φ−}_t can be shown by tracing back the argument.

Using proposition 1, we now obtain the following representation result.

THEOREM 1. Under assumptions 1 and 2, the optimal predictor f̂^φ(t) of the future output vector in feature space f^φ(t) based on P^φ_t ∨ U^{φ+}_t is uniquely given by the sum of the oblique projections

f̂^φ(t) = f^φ(t) / (P^φ_t ∨ U^{φ+}_t) = Π p^φ(t) + Ψ u^φ_+(t),    (4)

in which Π and Ψ satisfy the discrete Wiener-Hopf-type equations

Π Σ_{φp φp|φu} = Σ_{φf φp|φu},  Ψ Σ_{φu φu|φp} = Σ_{φf φu|φp}.    (5)

PROOF. From proposition 1, the proof can be carried out as in the linear case (cf. [3,5]).
2.3 Construction of State Vector

Let L_f, L_p be the square-root matrices of Σ_{φf φf|φu} and Σ_{φp φp|φu}, i.e., Σ_{φf φf|φu} = L_f L_f′ and Σ_{φp φp|φu} = L_p L_p′, and assume that the SVD of the normalized conditional covariance is given by

L_f^{−1} Σ_{φf φp|φu} (L_p^{−1})′ = U S V′,    (6)

where S ∈ R^{l n_φy × n_φp} is the matrix with all entries zero except the leading diagonal, which has entries ρ_i satisfying ρ_1 ≥ ... ≥ ρ_n > 0 for n = min(l n_φy, n_φp), and U, V are square orthogonal. We define the extended observability and controllability matrices

O := L_f U S^{1/2},  C := S^{1/2} V′ L_p′,    (7)

where rank(O) = rank(C) = n. Then, from the SVD of Eq. (6), the block Hankel matrix Σ_{φf φp|φu} has the classical rank factorization Σ_{φf φp|φu} = O C. If a 'state vector' is now defined to be the n-dimensional vector

x(t) = C Σ_{φp φp|φu}^{−1} p^φ(t) = S^{1/2} V′ L_p^{−1} p^φ(t),    (8)

it is readily seen that x(t) is a basis for the stationary oblique predictor space X_t := Y^{φ+}_t /_{U^{φ+}_t} P^φ_t, which, on the basis of general geometric principles, can be shown to be a minimal state space for the process φ_y(y), as in the linear case [3,5]. This is also assured by the fact that the oblique projection of f^φ(t) onto P^φ_t along U^{φ+}_t can be expressed, using Eqs. (5), (7) and (8), as

f^φ(t) /_{U^{φ+}_t} P^φ_t = Π p^φ(t) = Σ_{φf φp|φu} Σ_{φp φp|φu}^{−1} p^φ(t) = O x(t),    (9)

together with rank(O) = n and the nonsingularity of the variance matrix of x(t). In terms of x(t), the optimal predictor f̂^φ(t) in Eq. (4) has the form

f̂^φ(t) = O x(t) + Ψ u^φ_+(t).    (10)

It is seen that x(t) is a conditional minimal sufficient statistic carrying exactly all the information contained in P^φ_t that is necessary for estimating the future outputs, given the future inputs.
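The construction in Eqs. (6)-(8) translates almost line for line into a numerical routine. The sketch below works on unconditional sample covariances for brevity (the paper's version conditions on the input) and returns both the state map and the canonical correlations; all names are ours.

```python
import numpy as np

def state_map(S_ff, S_pp, S_fp, n):
    """Rank-n state construction x(t) = T p(t) via the SVD of
    L_f^{-1} S_fp L_p^{-T} (cf. Eqs. 6-8). Returns (T, singular_values)."""
    L_f = np.linalg.cholesky(S_ff)   # square root of S_ff
    L_p = np.linalg.cholesky(S_pp)   # square root of S_pp
    M = np.linalg.solve(L_f, S_fp) @ np.linalg.inv(L_p).T
    U, s, Vt = np.linalg.svd(M)
    T = np.diag(np.sqrt(s[:n])) @ Vt[:n] @ np.linalg.inv(L_p)
    return T, s
```

When the covariances are estimated from data, the singular values are the canonical correlations between the (featurized) future and past, so they lie in [0, 1]; the state dimension n is chosen by truncating them.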
In analogy with the linear case [3,5], the output process in feature space $\phi_y(y(t))$ now admits a minimal stochastic realization with the state vector $x(t)$ of the form
$$x(t+1) = A^\phi x(t) + B^\phi \phi_u(u(t)) + K^\phi e(t),$$
$$\phi_y(y(t)) = C^\phi x(t) + D^\phi \phi_u(u(t)) + e(t), \qquad (11)$$
where $A^\phi \in \mathbb{R}^{n\times n}$, $B^\phi \in \mathbb{R}^{n\times n_{\phi u}}$, $C^\phi \in \mathbb{R}^{n_{\phi y}\times n}$, $D^\phi \in \mathbb{R}^{n_{\phi y}\times n_{\phi u}}$ and $K^\phi \in \mathbb{R}^{n\times n_{\phi y}}$ are constant matrices, and $e(t) := \phi(y(t)) - (\phi(y(t)) \mid \mathcal{P}^\phi_t \vee \mathcal{U}^\phi_t)$ is the prediction error.

2.4 Preimage

The state-space model (11) derived in the previous section represents the output in feature space, $\phi_y(y(t))$; in this section we describe a state-space model for the output $y(t)$ itself. First, we define the feature maps $\phi_x : \mathcal{X}_t \mapsto F_x \in \mathbb{R}^{n_{\phi x}}$ and $\phi_{\bar u} : \mathcal{U}_t \mapsto F_{\bar u} \in \mathbb{R}^{n_{\phi\bar u}}$, and the linear spaces $\mathcal{X}^\phi_t, \bar{\mathcal{U}}^\phi_t$ generated by $\phi_x(x(t))$ and $\phi_{\bar u}(u(t))$. The intersection of $\mathcal{X}^\phi_t$ and $\bar{\mathcal{U}}^\phi_t$ satisfies $\mathcal{X}^\phi_t \cap \bar{\mathcal{U}}^\phi_t = \{0\}$, because $\mathcal{X}_t \cap \mathcal{U}^\phi_t = \{0\}$ and $\phi_x, \phi_{\bar u}$ are bijective. Therefore, the output $y(t)$ is represented as the direct sum of the oblique projections
$$y(t) / (\mathcal{X}^\phi_t \vee \bar{\mathcal{U}}^\phi_t) = \bar C^\phi \phi_x(x(t)) + \bar D^\phi \phi_{\bar u}(u(t)). \qquad (12)$$
As a result, we obtain the following theorem.

THEOREM 2. Under Assumptions 1 and 2, if $\mathrm{rank}\,\Sigma_{\phi f\phi p|\phi u} = n$, then the output $y$ can be represented by the state-space model
$$x(t+1) = A^\phi x(t) + B^\phi \phi_u(u(t)) + \bar K^\phi \phi_e(\bar e(t)),$$
$$y(t) = \bar C^\phi \phi_x(x(t)) + \bar D^\phi \phi_{\bar u}(u(t)) + \bar e(t), \qquad (13)$$
where $\bar e(t) := y(t) - y(t)/(\mathcal{X}^\phi_t \vee \bar{\mathcal{U}}^\phi_t)$ is the prediction error and $\bar K^\phi := K^\phi A_{\bar e}$, in which $A_{\bar e}$ is the coefficient matrix of the nonlinear regression from $\bar e(t)$ to $e(t)$.¹

¹Let $f$ be a map from $\bar e(t)$ to $e$, and minimize a regularized risk $c((\bar e_1, e_1, f(\bar e_1)), \ldots, (\bar e_m, e_m, f(\bar e_m))) + \Omega(\|f\|_H)$, where $\Omega : [0,\infty) \to \mathbb{R}$ is a strictly monotonically increasing function and $c : (\bar E \times \mathbb{R}^2)^m \to \mathbb{R} \cup \{\infty\}$ ($\bar E \in \mathrm{span}\{\bar e\}$) is an arbitrary loss function; then, by the representer theorem [9], $f$ satisfies $f \in \mathrm{span}\{\phi_e(\bar e(t))\}$, where $\phi_e$ is a feature map with the associated Mercer kernel $k_e$. Therefore, the nonlinear regression from $\bar e(t)$ to $e(t)$ can be represented as $A_{\bar e}\phi_e(\bar e(t))$.
3 Approximations

3.1 Realization with Finite Data

In practice, the state vector and the associated state-space model must be constructed from available finite data. Let the past vector $p^\phi(t)$ be truncated to finite length, i.e., $p^\phi_T(t) := [\phi_w(w(t-1))', \phi_w(w(t-2))', \ldots, \phi_w(w(t-T))']' \in \mathbb{R}^{T(n_{\phi y}+n_{\phi u})}$, where $T>0$, and define $\mathcal{P}_{[t-T,t)} := \mathrm{span}\{p^\phi_T(\tau) \mid \tau < t\}$. The following theorem then describes the construction of the state vector and the corresponding state-space system which form the finite-memory predictor $\hat f^\phi_T(t) := f^\phi(t) /_{\mathcal{U}^{\phi+}_t} \mathcal{P}^\phi_{[t-T,t)}$.

THEOREM 3. Under Assumptions 1 and 2, if $\mathrm{rank}(\Sigma_{\phi f\phi p|\phi u}) = n$, then the process $\phi_y(y)$ is expressed by the following nonstationary state-space model:
$$\hat x_T(t+1) = A^\phi \hat x_T(t) + B^\phi \phi_u(u(t)) + K^\phi(t)\hat e_T(t),$$
$$\phi_y(y(t)) = C^\phi \hat x_T(t) + D^\phi \phi_u(u(t)) + \hat e_T(t), \qquad (14)$$
where the state vector $\hat x_T(t)$ is a basis of the finite-memory predictor space $\mathcal{Y}^{\phi+}_t /_{\mathcal{U}^{\phi+}_t} \mathcal{P}^\phi_{[t-T,t)}$, and $\hat e_T(t) := \phi_y(y(t)) - (\phi_y(y(t)) \mid \mathcal{P}^\phi_{[t-T,t)} \vee \mathcal{U}^{\phi+}_t)$ is the prediction error.

The proof can be carried out as in the linear case (cf. [3,5]). In other words, we can obtain the approximated state vector $\hat x_T$ by applying the results of Section 2 to finite data. This state vector differs from $x(t)$ in Eq. (8); however, as $T \to \infty$, the difference between $\hat x_T(t)$ and $x(t)$ converges to zero and the covariance matrix of the estimation error $P^\phi$ converges to the stabilizing solution of the following algebraic Riccati equation (ARE):
$$P^\phi = A^\phi P^\phi (A^\phi)' + \Sigma^\phi_w (\Sigma^\phi_w)' - \big(A^\phi P^\phi (C^\phi)' + \Sigma^\phi_w (\Sigma^\phi_w)'\big)\big(C^\phi P^\phi (C^\phi)' + \Sigma^\phi_e (\Sigma^\phi_e)'\big)^{-1}\big(A^\phi P^\phi (C^\phi)' + \Sigma^\phi_w (\Sigma^\phi_w)'\big)'. \qquad (15)$$
Moreover, the Kalman gain $K^\phi$ converges to
$$K^\phi = \big(A^\phi P^\phi (C^\phi)' + \Sigma^\phi_w (\Sigma^\phi_w)'\big)\big(C^\phi P^\phi (C^\phi)' + \Sigma^\phi_e (\Sigma^\phi_e)'\big)^{-1}, \qquad (16)$$
where $\Sigma^\phi_w$ and $\Sigma^\phi_e$ are the covariance matrices of the errors in the state and observation equations, respectively.

3.2 Using Kernel Principal Components

Let $z$ be a random variable, $k_z$ a Mercer kernel with a feature map $\phi_z$ and a feature space $F_z$, and denote $\Phi_z := [\phi_z(z_1), \ldots, \phi_z(z_m)]'$ and the associated Gram matrix $G_z := \Phi_z \Phi_z'$.
The first $d_z$ principal components $u_{z,i} \in L\{\Phi_z'\}$ ($i = 1, \ldots, d_z$), combined in a matrix $U_z = [u_{z,1}, \ldots, u_{z,d_z}]$, form an orthonormal basis of a $d_z$-dimensional subspace $L\{U_z\} \subseteq L\{\Phi_z'\}$, and can therefore also be described as the linear combination $U_z = \Phi_z' A_z$, where the matrix $A_z \in \mathbb{R}^{m\times d_z}$ holds the expansion coefficients. $A_z$ is found, for example, by the eigendecomposition $G_z = \Gamma_z \Lambda_z \Gamma_z'$, such that $A_z$ consists of the first $d_z$ columns of $\Gamma_z \Lambda_z^{-1/2}$. Then $\Phi_z$ with respect to the principal components is given by $C_z := \Phi_z U_z = \Phi_z \Phi_z' A_z = G_z A_z$ [11]. From the orthogonality of $\Gamma_z$ (i.e., $\Gamma_z'\Gamma_z = \Gamma_z\Gamma_z' = I_m$), we can derive the following equation:
$$(A_z' G_z G_z A_z)^{-1} = \Big((\Gamma_z \Lambda_{z,d}^{-1/2})'(\Gamma_z \Lambda_z \Gamma_z')(\Gamma_z \Lambda_z \Gamma_z')(\Gamma_z \Lambda_{z,d}^{-1/2})\Big)^{-1} = \bar A_z' G_z^{-1} G_z^{-1} \bar A_z, \qquad (17)$$
where $\Lambda_{z,d}$ is the matrix consisting of the first $d_z$ columns of $\Lambda_z$, and $\bar A_z := \Gamma_z \Lambda_{z,d}^{1/2}$ satisfies $\bar A_z' A_z = A_z' \bar A_z = I_{d_z}$ and $\bar A_z A_z' = A_z \bar A_z' = I_m$.

This property of kernel principal components enables us to approximate the quantities of the previous sections in computable form. First, using Eq. (17), the conditional covariance matrix $\Sigma_{\phi f\phi f|\phi u}$ can be expressed as
$$\Sigma_{\phi f\phi f|\phi u} = \Sigma_{\phi f\phi f} - \Sigma_{\phi f\phi u}\Sigma_{\phi u\phi u}^{-1}\Sigma_{\phi u\phi f} \approx A_f'G_fG_fA_f - (A_f'G_fG_uA_u)(A_u'G_uG_uA_u)^{-1}(A_u'G_uG_fA_f) = A_f'\big(G_fG_f - G_fG_u(G_uG_u)^{-1}G_uG_f\big)A_f =: A_f'\hat\Sigma_{ff|u}A_f, \qquad (18)$$
where $\hat\Sigma_{ff|u}$ may be called the empirical conditional covariance operator; a regularized variant is obtained by replacing $G_fG_f$ and $G_uG_u$ with $(G_f+\epsilon I_m)^2$ and $(G_u+\epsilon I_m)^2$ ($\epsilon > 0$) (cf. [12,13]). $\Sigma_{\phi p\phi p|\phi u}$ and $\Sigma_{\phi f\phi p|\phi u}$ can be approximated in the same way. Moreover, using $L_*^{-1} = \hat L_*^{-1}\bar A_*$, where $\hat L_*$ is the square root matrix of $\hat\Sigma_{**|u}$ ($* = p, f$),² we can represent Eqs. (6) and (8) approximately as
$$L_f^{-1}\Sigma_{\phi f\phi p|\phi u}(L_p^{-1})' \approx (\hat L_f^{-1}\bar A_f)(A_f'\hat\Sigma_{fp|u}A_p)\bar A_p'(\hat L_p^{-1})' = \hat L_f^{-1}\hat\Sigma_{fp|u}(\hat L_p^{-1})' = \hat U\hat S\hat V', \qquad (19)$$
$$x(t) = S^{1/2}V'L_p^{-1}p^\phi(t) \approx \hat S^{1/2}\hat V'(\hat L_p^{-1}\bar A_p)(A_p'k(p(t))) = \hat S^{1/2}\hat V'\hat L_p^{-1}k(p(t)), \qquad (20)$$
where $k(p(t)) := \Phi_p p^\phi(t) = [k_p(p_1(t), p(t)), \ldots, k_p(p_m(t), p(t))]'$.
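As a sketch, the regularized empirical conditional covariance operator of Eq. (18) can be computed directly from Gram matrices; the RBF kernel choice and all data below are our own illustrative assumptions:

```python
import numpy as np

def gram_rbf(Z, sigma):
    """RBF Gram matrix G[i, j] = exp(-||z_i - z_j||^2 / (2 sigma^2))."""
    sq = np.sum(Z**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    return np.exp(-d2 / (2.0 * sigma**2))

def empirical_cond_cov(Gf, Gu, eps=0.05):
    """Regularized empirical conditional covariance (cf. Eq. (18)):
    Sigma_ff|u ~= (Gf + eps I)^2 - Gf Gu ((Gu + eps I)^2)^{-1} Gu Gf."""
    m = Gf.shape[0]
    I = np.eye(m)
    Ru = np.linalg.inv((Gu + eps * I) @ (Gu + eps * I))
    return (Gf + eps * I) @ (Gf + eps * I) - Gf @ Gu @ Ru @ Gu @ Gf

rng = np.random.default_rng(1)
U = rng.standard_normal((50, 2))   # "input" samples (synthetic)
F = rng.standard_normal((50, 3))   # "future output" samples (synthetic)
Gu = gram_rbf(U, sigma=2.5)
Gf = gram_rbf(F, sigma=3.5)
S = empirical_cond_cov(Gf, Gu)
print(S.shape)  # (50, 50)
```

The regularization keeps the operator symmetric positive definite (the subtracted term is dominated by $(G_f + \epsilon I)^2$), which is what allows the square-root factorization used in the next section.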
In addition, we can apply this kernel-PCA approximation to the state-space models derived in the previous sections. First, Eq. (11) can be approximated as
$$x(t+1) = A^\phi x(t) + B^\phi A_u' k_u(u(t)) + K^\phi e(t),$$
$$A_y' k_y(y(t)) = C^\phi x(t) + D^\phi A_u' k_u(u(t)) + e(t), \qquad (21)$$
where $A_u$ and $A_y$ are the expansion coefficient matrices found by the eigendecompositions of $G_u$ and $G_y$, respectively. Also, using the coefficient matrices $A_x$, $A_e$ and $A_{\bar u}$, Eq. (13) can be written as
$$x(t+1) = A^\phi x(t) + B^\phi A_u' k_u(u(t)) + \bar K^\phi A_e' k_e(\bar e(t)),$$
$$y(t) = \bar C^\phi A_x' k_x(x(t)) + \bar D^\phi A_{\bar u}' k_u(u(t)) + \bar e(t). \qquad (22)$$

4 Algorithm

In this section, we give a subspace identification algorithm based on the discussion in the previous sections. Denote the finite input-output data by $\{u(t), y(t),\ t = 1, 2, \ldots, N + 2l - 1\}$, where $l > 0$ is an integer larger than the system order $n$ and $N$ is a sufficiently large integer, and assume that all data are centered. First, using the Gram matrices $G_u$, $G_y$ and $G_w$ associated with the input, the output, and the joint input-output, respectively, we calculate the Gram matrices $G_U$, $G_Y$ and $G_W$ corresponding to the past input, the future output, and the past input-output, defined elementwise as
$$[G_U]_{jk} := \sum_{i=l+1}^{2l} G_{u,(i+j-1)(i+k-1)}, \qquad j, k = 1, \ldots, N, \qquad (23)$$
$$[G_W]_{jk} := \sum_{i=1}^{l} G_{w,(i+j-1)(i+k-1)}, \qquad j, k = 1, \ldots, N, \qquad (24)$$
with $G_Y$ defined analogously to $G_U$. The procedure is then as follows.

Step 1. Calculate the regularized empirical covariance operators and their square root matrices:
$$\hat\Sigma_{ff|u} = (G_Y + \epsilon I_N)^2 - G_Y G_U (G_U + \epsilon I_N)^{-2} G_U G_Y = \hat L_f \hat L_f',$$
$$\hat\Sigma_{pp|u} = (G_W + \epsilon I_N)^2 - G_W G_U (G_U + \epsilon I_N)^{-2} G_U G_W = \hat L_p \hat L_p',$$
$$\hat\Sigma_{fp|u} = G_Y G_W - G_Y G_U (G_U + \epsilon I_N)^{-2} G_U G_W. \qquad (25)$$

²This is given by $(L_*^{-1})'L_*^{-1} = \Sigma_{\phi*\phi*|\phi u}^{-1} \approx (A_*'\hat\Sigma_{**|u}A_*)^{-1} = \bar A_*'\hat\Sigma_{**|u}^{-1}\bar A_* = \bar A_*'(\hat L_*^{-1})'\hat L_*^{-1}\bar A_*$.

Step 2. Calculate the SVD of the normalized covariance matrix (cf. Eq. (19)):
$$\hat L_f^{-1}\hat\Sigma_{fp|u}(\hat L_p^{-1})' = \hat U\hat S\hat V' \approx U_1 S_1 V_1', \qquad (26)$$
where $S_1$ is obtained by neglecting the small singular values, so that the dimension of the state vector $n$ equals the dimension of $S_1$.

Step 3. Estimate the state sequence as (cf. Eq. (20))
$$\bar X_l := [x(l), x(l+1), \ldots, x(l+N-1)] = S_1^{1/2}V_1'\hat L_p^{-1}G_W, \qquad (27)$$
and define the following matrices consisting of $N-1$ columns:
$$\hat X_{l+1} = \bar X_l(:, 2:N), \qquad \hat X_l = \bar X_l(:, 1:N-1). \qquad (28)$$

Step 4. Calculate the eigendecompositions of the Gram matrices $G_u$, $G_{\bar u}$, $G_y$ and $G_x$ and the corresponding expansion coefficient matrices $A_u$, $A_{\bar u}$, $A_y$ and $A_x$. Then determine the system matrices $A^\phi$, $B^\phi$, $C^\phi$, $D^\phi$, $\bar C^\phi$ and $\bar D^\phi$ by applying regularized least-squares regression to the following equations (cf. Eqs. (21) and (22)):
$$\begin{bmatrix}\hat X_{l+1}\\ A_y'G_y(:, 2:N)\end{bmatrix} = \begin{bmatrix}A^\phi & B^\phi\\ C^\phi & D^\phi\end{bmatrix}\begin{bmatrix}\hat X_l\\ A_u'G_u(:, 1:N-1)\end{bmatrix} + \begin{bmatrix}\rho_w\\ \rho_e\end{bmatrix}, \qquad (29)$$
$$Y_{l|l} = \bar C^\phi\big(A_x'G_x(:, 2:N)\big) + \bar D^\phi\big(A_{\bar u}'G_u(:, 2:N)\big) + \bar\rho_e, \qquad (30)$$
where the matrices $\rho_w$, $\rho_e$ and $\bar\rho_e$ are the residuals.

Step 5. Calculate the covariance matrices of the residuals
$$\begin{bmatrix}\Sigma_w & \Sigma_{we}\\ \Sigma_{ew} & \Sigma_e\end{bmatrix} = \frac{1}{N-1}\begin{bmatrix}\rho_w\rho_w' & \rho_w\rho_e'\\ \rho_e\rho_w' & \rho_e\rho_e'\end{bmatrix}, \qquad (31)$$
solve the ARE (15), and, using the stabilizing solution, calculate the Kalman gain $K^\phi$ in Eq. (16).

5 Simulation Result

In this section, we illustrate the proposed algorithm for learning nonlinear dynamical systems on synthetic data. The data were generated by simulating the following system [8] using the 4th- and 5th-order Runge-Kutta method with a sampling time of 0.05 seconds:
$$\dot x_1(t) = x_2(t) - 0.1\cos(x_1(t))\big(5x_1(t) - 4x_1^3(t) + x_1^5(t)\big) - 0.5\cos(x_1(t))u(t),$$
$$\dot x_2(t) = -65x_1(t) + 50x_1^3(t) - 15x_1^5(t) - x_2(t) - 100u(t),$$
$$y(t) = x_1(t), \qquad (32)$$
where the input was a zero-order-hold white noise signal uniformly distributed between $-0.5$ and $0.5$.
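Steps 1-3 of the procedure can be sketched numerically as follows; the Gram matrices here are synthetic symmetric PSD stand-ins (real ones would be built from data via Eqs. (23)-(24)), and all variable names and sizes are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
N, eps, n = 40, 0.05, 3

def spd_gram(m):
    """Synthetic symmetric PSD stand-in for a Gram matrix."""
    Z = rng.standard_normal((m, m // 2))
    return Z @ Z.T

GU, GY, GW = spd_gram(N), spd_gram(N), spd_gram(N)
I = np.eye(N)

# Step 1: regularized empirical covariance operators (cf. Eq. (25)).
RU = np.linalg.inv((GU + eps * I) @ (GU + eps * I))
Sig_ff = (GY + eps * I) @ (GY + eps * I) - GY @ GU @ RU @ GU @ GY
Sig_pp = (GW + eps * I) @ (GW + eps * I) - GW @ GU @ RU @ GU @ GW
Sig_fp = GY @ GW - GY @ GU @ RU @ GU @ GW
Lf = np.linalg.cholesky(Sig_ff)
Lp = np.linalg.cholesky(Sig_pp)

# Step 2: SVD of the normalized covariance (cf. Eq. (26)),
# truncated to the n dominant singular values.
M = np.linalg.solve(Lf, Sig_fp) @ np.linalg.inv(Lp).T
U1, s, V1t = np.linalg.svd(M)
S1 = s[:n]

# Step 3: state sequence (cf. Eq. (27)) and its shifted copies (Eq. (28)).
X = np.diag(np.sqrt(S1)) @ V1t[:n] @ np.linalg.solve(Lp, GW)
X_next, X_curr = X[:, 1:], X[:, :-1]
print(X.shape)  # (3, 40)
```

Steps 4 and 5 would then fit the system matrices by regularized least squares on `X_next`/`X_curr` and solve the ARE for the Kalman gain; we omit them here.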
We applied our algorithm to a set of 600 data points, and then validated the obtained model on a fresh data set of 400 points. As the kernel function, we used the Gaussian RBF kernel $k(z_i, z_j) = \exp(-\|z_i - z_j\|^2/2\sigma_z)$. The parameters to be tuned for our method are thus the kernel widths $\sigma$ for $u$, $y$, $w$ and $x$, the regularization parameter $\epsilon$, and the row-block number $l$ of the Hankel matrix. In addition, we must select the order of the system and the numbers of kernel principal components $n^{pc}_*$ for $u$, $y$ and $e$. Figure 2 shows free-run simulation results of the model obtained by our algorithm, with parameters set as $\sigma_u = 2.5$, $\sigma_y = 3.5$, $\sigma_w = 4.5$, $\sigma_x = 1.0$, $n^{pc}_u = n^{pc}_y = 4$, $n^{pc}_x = 9$ and $\epsilon = 0.05$, and, for comparison, by linear subspace identification [5]. The row-block number $l$ was set to 10 in both identifications. The simulation error [2],
$$\epsilon = \frac{100}{n_y}\sum_{c=1}^{n_y}\sqrt{\frac{\sum_{i=1}^{m}\big((y_i)_c - (y^s_i)_c\big)^2}{\sum_{j=1}^{m}\big((y_j)_c\big)^2}}, \qquad (33)$$
where $y^s_i$ are the simulated values and the initial state is a least-squares estimate from the first few points, improved from 44.1 for the linear method to 40.2 for our algorithm, an accuracy improvement of about 10 percent. The system order is 8 for our algorithm, while it is 10 for the linear method in this case. We can see that our method estimates the state sequence with more information and yields a model that captures the dynamics more precisely. However, the parameter tuning involved considerable time and effort.

Figure 2: Comparison of simulated outputs. Left: kernel subspace identification method (proposed method). Right: linear subspace identification method [5]. The broken lines represent the observations and the solid lines represent the simulated values.
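The simulation error of Eq. (33) is straightforward to compute; a minimal sketch (the function name and toy data are ours):

```python
import numpy as np

def simulation_error(y, y_sim):
    """Percentage simulation error of Eq. (33): for each output channel c,
    100 * sqrt(sum_i (y_ic - ys_ic)^2 / sum_j y_jc^2), averaged over channels.
    y, y_sim: arrays of shape (m, n_y)."""
    num = np.sum((y - y_sim) ** 2, axis=0)
    den = np.sum(y ** 2, axis=0)
    return 100.0 * np.mean(np.sqrt(num / den))

# Toy check: a simulation matching the data exactly has zero error,
# and uniformly scaling the output by 0.9 gives a 10% error.
t = np.linspace(0.0, 20.0, 400)
y = np.sin(t)[:, None]
print(simulation_error(y, y))                  # 0.0
print(round(simulation_error(y, 0.9 * y), 1))  # 10.0
```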
6 Conclusion

A new subspace method for learning nonlinear dynamical systems using reproducing kernel Hilbert spaces has been proposed. The approach is based on approximate solutions of two discrete Wiener-Hopf equations by covariance factorization in kernel feature spaces. The algorithm needs no iterative optimization procedures, and hence solutions can be obtained in a fast and reliable manner. The comparative empirical results showed the high performance of our method; however, the parameter tuning involved considerable time and effort. In future work, we will extend the idea to closed-loop systems for the identification of more realistic applications. Moreover, it should be possible to extend other established subspace identification methods to nonlinear frameworks as well.

Acknowledgments

The present research was supported in part through the 21st Century COE Program, "Mechanical Systems Innovation," by the Ministry of Education, Culture, Sports, Science and Technology.

References

[1] Roweis, S. & Ghahramani, Z. (1999) "A Unifying Review of Linear Gaussian Models." Neural Computation, 11 (2): 305-345.
[2] Van Overschee, P. & De Moor, B. (1996) "Subspace Identification for Linear Systems: Theory, Implementation, Applications." Kluwer Academic Publishers, Dordrecht, Netherlands.
[3] Katayama, T. (2005) "Subspace Methods for System Identification: A Realization Approach." Communications and Control Engineering, Springer Verlag.
[4] Moonen, M., De Moor, B., Vandenberghe, L. & Vandewalle, J. (1989) "On- and Off-line Identification of Linear State Space Models." International Journal of Control, 49 (1): 219-232.
[5] Katayama, T. & Picci, G. (1999) "Realization of Stochastic Systems with Exogenous Inputs and Subspace Identification Methods." Automatica, 35 (10): 1635-1652.
[6] Goethals, I., Pelckmans, K., Suykens, J. A. K. & De Moor, B. (2005) "Subspace Identification of Hammerstein Systems Using Least Squares Support Vector Machines." IEEE Trans.
on Automatic Control, 50 (10): 1509-1519.
[7] Ni, X., Verhaegen, M., Krijgsman, A. & Verbruggen, H. B. (1996) "A New Method for Identification and Control of Nonlinear Dynamic Systems." Engineering Application of Artificial Intelligence, 9 (3): 231-243.
[8] Verdult, V., Suykens, J. A. K., Boets, J., Goethals, I. & De Moor, B. (2004) "Least Squares Support Vector Machines for Kernel CCA in Nonlinear State-Space Identification." Proceedings of the 16th International Symposium on Mathematical Theory of Networks and Systems (MTNS 2004).
[9] Schölkopf, B. & Smola, A. (2002) "Learning with Kernels." MIT Press.
[10] Rozanov, N. I. (1963) "Stationary Random Processes." Holden-Day, San Francisco, CA.
[11] Kuss, M. & Graepel, T. (2003) "The Geometry of Kernel Canonical Correlation Analysis." Technical Report 108, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
[12] Bach, F. R. & Jordan, M. I. (2002) "Kernel Independent Component Analysis." Journal of Machine Learning Research (JMLR), 3: 1-48.
[13] Fukumizu, K., Bach, F. R. & Jordan, M. I. (2004) "Dimensionality Reduction for Supervised Learning with Reproducing Kernel Hilbert Spaces." Journal of Machine Learning Research (JMLR), 5: 73-99.
Learning from Multiple Sources

Koby Crammer, Michael Kearns, Jennifer Wortman
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104

Abstract

We consider the problem of learning accurate models from multiple sources of "nearby" data. Given distinct samples from multiple data sources and estimates of the dissimilarities between these sources, we provide a general theory of which samples should be used to learn models for each source. This theory is applicable in a broad decision-theoretic learning framework, and yields results for classification and regression generally, and for density estimation within the exponential family. A key component of our approach is the development of approximate triangle inequalities for expected loss, which may be of independent interest.

1 Introduction

We introduce and analyze a theoretical model for the problem of learning from multiple sources of "nearby" data. As a hypothetical example of where such problems might arise, consider the following scenario: for each web user in a large population, we wish to learn a classifier for what sites that user is likely to find "interesting." Assuming we have at least a small amount of labeled data for each user (as might be obtained either through direct feedback, or via indirect means such as clickthroughs following a search), one approach would be to apply standard learning algorithms to each user's data in isolation. However, if there are natural and accessible measures of similarity between the interests of pairs of users (as might be obtained through their mutual labelings of common web sites), an appealing alternative is to aggregate the data of "nearby" users when learning a classifier for each particular user. This alternative is intuitively subject to a trade-off between the increased sample size and how different the aggregated users are. We treat this problem in some generality and provide a bound addressing the aforementioned trade-off.
In our model there are K unknown data sources, with source i generating a distinct sample Si of ni observations. We assume we are given only the samples Si and a disparity¹ matrix D whose entry D(i, j) bounds the difference between source i and source j. Given these inputs, we wish to decide which subset of the samples Sj will result in the best model for each source i. Our framework includes settings in which the sources produce data for classification, regression, and density estimation (and more generally any additive-loss learning problem obeying certain conditions). Our main result is a general theorem establishing a bound on the expected loss incurred by using all data sources within a given disparity of the target source. Optimization of this bound then yields a recommended subset of the data to be used in learning a model of each source. Our bound clearly expresses a trade-off between three quantities: the sample size used (which increases as we include data from more distant models), a weighted average of the disparities of the sources whose data is used, and a model complexity term. It can be applied to any learning setting in which the underlying loss function obeys an approximate triangle inequality, and in which the class of hypothesis models under consideration obeys uniform convergence of empirical estimates of loss to expectations.

¹We avoid using the term distance since our results include settings in which the underlying loss measures may not be formal distances.

For classification problems, the standard triangle inequality holds. For regression we prove a 2-approximation to the triangle inequality, and for density estimation for members of the exponential family, we apply Bregman divergence techniques to provide approximate triangle inequalities. We believe these approximations may find independent applications within machine learning.
Uniform convergence bounds for the settings we consider may be obtained via standard data-independent model complexity measures such as VC dimension and pseudo-dimension, or via more recent data-dependent approaches such as Rademacher complexity. The research described here grew out of an earlier paper by the same authors [1], which examined the considerably more limited problem of learning a model when all data sources are corrupted versions of a single, fixed source, for instance when each data source provides noisy samples of a fixed binary function, but with varying levels of noise. In the current work, each source may be entirely unrelated to all others except as constrained by the bounds on disparities, requiring us to develop new techniques. Wu and Dietterich studied similar problems experimentally in the context of SVMs [2]. The framework examined here can also be viewed as a type of transfer learning [3, 4].

In Section 2 we introduce a decision-theoretic framework for probabilistic learning that includes classification, regression, density estimation and many other settings as special cases, and then give our multiple-source generalization of this model. In Section 3 we provide our main result, which is a general bound on the expected loss incurred by using all data within a given disparity of a target source. Section 4 then applies this bound to a variety of specific learning problems. In Section 5 we briefly examine data-dependent applications of our general theory using Rademacher complexity.

2 Learning models

Before detailing our multiple-source learning model, we first introduce a standard decision-theoretic learning framework in which our goal is to find a model minimizing a generalized notion of empirical loss [5]. Let the hypothesis class H be a set of models (which might be classifiers, real-valued functions, densities, etc.), and let f be the target model, which may or may not lie in the class H. Let z be a (generalized) data point or observation.
For instance, in (noise-free) classification and regression, z will consist of a pair ⟨x, y⟩ where y = f(x). In density estimation, z is the observed value x. We assume that the target model f induces some underlying distribution $P_f$ over observations z. In the case of classification or regression, $P_f$ is induced by drawing the inputs x according to some underlying distribution P, and then setting y = f(x) (possibly corrupted by noise). In the case of density estimation, f simply defines a distribution $P_f$ over observations x.

Each setting we consider has an associated loss function L(h, z). For example, in classification we typically consider the 0/1 loss: L(h, ⟨x, y⟩) = 0 if h(x) = y, and 1 otherwise. In regression we might consider the squared loss function L(h, ⟨x, y⟩) = (y − h(x))². In density estimation we might consider the log loss L(h, x) = log(1/h(x)). In each case, we are interested in the expected loss of a model $g_2$ on target $g_1$, $e(g_1, g_2) = \mathbb{E}_{z\sim P_{g_1}}[L(g_2, z)]$. Expected loss is not necessarily symmetric.

In our multiple source model, we are presented with K distinct samples or piles of data $S_1, \ldots, S_K$, and a symmetric K × K matrix D. Each pile $S_i$ contains $n_i$ observations that are generated from a fixed and unknown model $f_i$, and D satisfies $e(f_i, f_j), e(f_j, f_i) \le D(i, j)$.² Our goal is to decide which piles $S_j$ to use in order to learn the best approximation (in terms of expected loss) to each $f_i$. While we are interested in accomplishing this goal for each $f_i$, it suffices and is convenient to examine the problem from the perspective of a fixed $f_i$. Thus, without loss of generality, let us suppose that we are given piles $S_1, \ldots, S_K$ of sizes $n_1, \ldots, n_K$ from models $f_1, \ldots, f_K$ such that $\epsilon_1 \equiv D(1, 1) \le \epsilon_2 \equiv D(1, 2) \le \cdots \le \epsilon_K \equiv D(1, K)$, and our goal is to learn $f_1$. Here we have simply taken the problem in the preceding paragraph, focused on the problem for $f_1$, and reordered the other models according to their proximity to $f_1$.
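The three loss functions above, and the Monte Carlo approximation of an expected loss e(f, h), can be sketched as follows; all function names and the toy classifiers are our own illustrative assumptions:

```python
import numpy as np

# Loss functions for the three settings discussed:
def zero_one_loss(h, x, y):
    """Classification: 0 if h(x) == y, else 1."""
    return 0.0 if h(x) == y else 1.0

def squared_loss(h, x, y):
    """Regression: (y - h(x))^2."""
    return (y - h(x)) ** 2

def log_loss(h, x):
    """Density estimation: log(1 / h(x))."""
    return -np.log(h(x))

# Expected loss e(f, h) = E_{z ~ P_f}[L(h, z)], approximated by sampling.
rng = np.random.default_rng(3)
f = lambda x: int(x > 0.5)      # target classifier (threshold at 0.5)
h = lambda x: int(x > 0.6)      # hypothesis (threshold at 0.6)
xs = rng.uniform(0.0, 1.0, 100_000)
e_fh = np.mean([zero_one_loss(h, x, f(x)) for x in xs])
print(round(e_fh, 2))  # ~0.10: f and h disagree exactly on x in (0.5, 0.6]
```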
To highlight the distinguished role of the target $f_1$ we shall denote it f. We denote the observations in $S_j$ by $z^j_1, \ldots, z^j_{n_j}$. In all cases we will analyze, for any $k \le K$, the hypothesis $\hat h_k$ minimizing the empirical loss $\hat e_k(h)$ on the first k piles $S_1, \ldots, S_k$, i.e.,
$$\hat h_k = \arg\min_{h\in H}\hat e_k(h) = \arg\min_{h\in H}\frac{1}{n_{1:k}}\sum_{j=1}^{k}\sum_{i=1}^{n_j}L(h, z^j_i),$$
where $n_{1:k} = n_1 + \cdots + n_k$. We also denote the expected error of function h with respect to the first k piles of data as
$$e_k(h) = \mathbb{E}[\hat e_k(h)] = \sum_{i=1}^{k}\frac{n_i}{n_{1:k}}e(f_i, h).$$

²While it may seem restrictive to assume that D is given, notice that D(i, j) can often be estimated from data, for example in a classification setting in which common instances labeled by both $f_i$ and $f_j$ are available.

3 General theory

In this section we provide the first of our main results: a general bound on the expected loss of the model minimizing the empirical loss on the nearest k piles. Optimization of this bound leads to a recommended number of piles to incorporate when learning $f = f_1$. The key ingredients needed to apply this bound are an approximate triangle inequality and a uniform convergence bound, which we define below. In the subsequent sections we demonstrate that these ingredients can indeed be provided for a variety of natural learning problems.

Definition 1. For $\alpha \ge 1$, we say that the α-triangle inequality holds for a class of models F and expected loss function e if for all $g_1, g_2, g_3 \in F$ we have
$$e(g_1, g_2) \le \alpha\big(e(g_1, g_3) + e(g_3, g_2)\big).$$
The parameter $\alpha \ge 1$ is a constant that depends on F and e.

The choice α = 1 yields the standard triangle inequality. We note that the restriction to models in the class F may in some cases be quite weak (for instance, when F is all possible classifiers or real-valued functions with bounded range) or stronger, as in densities from the exponential family. Our results will require only that the unknown source models f1, . . .
, fK lie in F, even when our hypothesis models are chosen from some possibly much more restricted class H ⊆ F. For now we simply leave F as a parameter of the definition.

Definition 2. A uniform convergence bound for a hypothesis space H and loss function L is a bound stating that for any $0 < \delta < 1$, with probability at least $1-\delta$, for any $h \in H$,
$$|\hat e(h) - e(h)| \le \beta(n, \delta),$$
where $\hat e(h) = \frac{1}{n}\sum_{i=1}^n L(h, z_i)$ for n observations $z_1, \ldots, z_n$ generated independently according to distributions $P_1, \ldots, P_n$, and $e(h) = \mathbb{E}[\hat e(h)]$, where the expectation is taken over $z_1, \ldots, z_n$. Here β is a function of the number of observations n and the confidence δ, and depends on H and L.

This definition simply asserts that for every model in H, its empirical loss on a sample of size n and the expectation of this loss will be "close." In general the function β will incorporate standard measures of the complexity of H, and will be a decreasing function of the sample size n, as in the classical $O(\sqrt{d/n})$ bounds of VC theory. Our bounds will be derived from the rich literature on uniform convergence. The only twist to our setting is the fact that the observations are no longer necessarily identically distributed, since they are generated from multiple sources. However, generalizing the standard uniform convergence results to this setting is straightforward. We are now ready to present our general bound.

Theorem 1. Let e be the expected loss function for loss L, and let F be a class of models for which the α-triangle inequality holds with respect to e. Let H ⊆ F be a class of hypothesis models for which there is a uniform convergence bound β for L. Let $K \in \mathbb{N}$, $f = f_1, f_2, \ldots, f_K \in F$, $\{\epsilon_i\}_{i=1}^K$, $\{n_i\}_{i=1}^K$, and $\hat h_k$ be as defined above. For any δ such that 0 < δ < 1, with probability at least 1 − δ, for any k ∈ {1, . . .
, K},
$$e(f, \hat h_k) \le (\alpha + \alpha^2)\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\epsilon_i + 2\alpha\,\beta(n_{1:k}, \delta/2K) + \alpha^2\min_{h\in H}\{e(f, h)\}.$$

Before providing the proof, let us examine the bound of Theorem 1, which expresses a natural and intuitive trade-off. The first term in the bound is a weighted sum of the disparities of the $k \le K$ models whose data is used with respect to the target model $f = f_1$. We expect this term to increase as we increase k to include more distant piles. The second term is determined by the uniform convergence bound. We expect this term to decrease with added piles due to the increased sample size. The final term is what is typically called the approximation error: the residual loss that we incur simply by limiting our hypothesis model to fall in the restricted class H. All three terms are influenced by the strength of the approximate triangle inequality that we have, as quantified by α.

The bounds given in Theorem 1 can be loose, but provide an upper bound necessary for optimization and suggest a natural choice for the number of piles $k^*$ to use to estimate the target f:
$$k^* = \arg\min_{k}\left((\alpha + \alpha^2)\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\epsilon_i + 2\alpha\,\beta(n_{1:k}, \delta/2K)\right).$$
Theorem 1 and this optimization make the implicit assumption that the best subset of piles to use will be a prefix of the piles, that is, that we should not "skip" a nearby pile in favor of more distant ones. This assumption will generally be true for typical data-independent uniform convergence bounds such as VC dimension bounds, and true on average for data-dependent bounds, where we expect uniform convergence bounds to improve with increased sample size.

We now give the proof of Theorem 1.

Proof (Theorem 1): By Definition 1, for any h ∈ H, any k ∈ {1, . . . , K}, and any i ∈ {1, . . . , k},
$$\frac{n_i}{n_{1:k}}e(f, h) \le \frac{n_i}{n_{1:k}}\big(\alpha e(f, f_i) + \alpha e(f_i, h)\big).$$
Summing over all i ∈ {1, . . .
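The selection of $k^*$ can be sketched as a direct minimization of the first two terms of the bound (the approximation-error term does not depend on k); the VC-style β and all numeric values below are illustrative assumptions of ours, not the paper's:

```python
import numpy as np

def beta(n, delta, d=10):
    """Placeholder VC-style uniform convergence rate (assumed form)."""
    return np.sqrt((d * np.log(2 * np.e * n / d) + np.log(1 / delta)) / n)

def choose_k(eps, ns, alpha=1.0, delta=0.05):
    """eps: sorted disparities eps_1 <= ... <= eps_K; ns: pile sizes n_1..n_K.
    Minimizes (alpha + alpha^2) * weighted disparity + 2 alpha beta(n_{1:k})."""
    K = len(eps)
    best_k, best_val = 1, np.inf
    for k in range(1, K + 1):
        n1k = sum(ns[:k])
        disparity = sum(ns[i] * eps[i] for i in range(k)) / n1k
        val = (alpha + alpha**2) * disparity + 2 * alpha * beta(n1k, delta / (2 * K))
        if val < best_val:
            best_k, best_val = k, val
    return best_k

eps = [0.0, 0.01, 0.02, 0.1, 0.4]   # disparities from the target, sorted
ns = [20, 20, 20, 20, 20]           # pile sizes
print(choose_k(eps, ns))  # 4: the distant fifth pile is not worth including
```

As the bound predicts, making the distant piles very different (large eps) drives the chosen k back toward using only the target's own pile.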
, k}, we find
$$e(f, h) \le \sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\big(\alpha e(f, f_i) + \alpha e(f_i, h)\big) = \alpha\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}e(f, f_i) + \alpha\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}e(f_i, h) \le \alpha\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\epsilon_i + \alpha e_k(h).$$
In the first step above we have used the α-triangle inequality to deliberately introduce a weighted summation involving the $f_i$; in the second, we have broken up the summation. Notice that the first summation is a weighted average of the expected loss of each $f_i$, while the second summation is the expected loss of h on the data. Using the uniform convergence bound, we may assert that with high probability $e_k(h) \le \hat e_k(h) + \beta(n_{1:k}, \delta/2K)$, and with high probability
$$\hat e_k(\hat h_k) = \min_{h\in H}\{\hat e_k(h)\} \le \min_{h\in H}\left\{\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}e(f_i, h) + \beta(n_{1:k}, \delta/2K)\right\}.$$
Putting these pieces together, we find that with high probability
$$e(f, \hat h_k) \le \alpha\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\epsilon_i + 2\alpha\beta(n_{1:k}, \delta/2K) + \alpha\min_{h\in H}\left\{\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}e(f_i, h)\right\}$$
$$\le \alpha\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\epsilon_i + 2\alpha\beta(n_{1:k}, \delta/2K) + \alpha\min_{h\in H}\left\{\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\alpha e(f_i, f) + \sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\alpha e(f, h)\right\}$$
$$= (\alpha + \alpha^2)\sum_{i=1}^{k}\frac{n_i}{n_{1:k}}\epsilon_i + 2\alpha\beta(n_{1:k}, \delta/2K) + \alpha^2\min_{h\in H}\{e(f, h)\}.$$
Figure 1: Visual demonstration of Theorem 2. In this problem there are K = 100 classifiers, each defined by 2 parameters represented by a point $f_i$ in the unit square, such that the expected disagreement rate between two such classifiers equals the L1 distance between their parameters. (It is easy to create simple input distributions and classifiers that generate exactly this geometry.) We chose the 100 parameter vectors $f_i$ uniformly at random from the unit square (the circles in the left panel). To generate varying pile sizes, we let $n_i$ decrease with the distance of $f_i$ from a chosen "central" point at (0.75, 0.75) (marked "MAX DATA" in the left panel); the resulting pile sizes for each model are shown in the bar plot in the right panel, where the origin (0, 0) is in the near corner, (1, 1) in the far corner, and the pile sizes clearly peak near (0.75, 0.75). Given these $f_i$, $n_i$ and the pairwise distances, the undirected graph on the left includes an edge between $f_i$ and $f_j$ if and only if the data from $f_j$ is used to learn $f_i$ and/or the converse when Theorem 2 is used to optimize the distance of the data used. The graph simultaneously displays the geometry implicit in Theorem 2 as well as its adaptivity to local circumstances. Near the central point the graph is quite sparse and the edges quite short, corresponding to the fact that for such models we have enough direct data that it is not advantageous to include data from distant models. Far from the central point the graph becomes dense and the edges long, as we are required to aggregate a larger neighborhood to learn the optimal model. In addition, decisions are affected locally by how many models are "nearby" a given model.

4 Applications to standard learning settings

In this section we demonstrate the applicability of the general theory given by Theorem 1 to several standard learning settings.
We begin with the most straightforward application, classification.

4.1 Binary classification

In binary classification, we assume that our target model is a fixed, unknown and arbitrary function f from some input set X to {0, 1}, and that there is a fixed and unknown distribution P over X. Note that the distribution P over the inputs does not depend on the target function f. The observations are of the form z = ⟨x, y⟩, where y ∈ {0, 1}. The loss function L(h, ⟨x, y⟩) is defined as 0 if y = h(x) and 1 otherwise, and the corresponding expected loss is

$$e(g_1, g_2) = \mathbf{E}_{\langle x, y\rangle \sim P_{g_1}}[L(g_2, \langle x, y\rangle)] = \Pr_{x \sim P}[g_1(x) \neq g_2(x)].$$

For 0/1 loss it is well known and easy to see that the (standard) 1-triangle inequality holds, and classical VC theory [6] provides us with uniform convergence. The conditions of Theorem 1 are thus easily satisfied, yielding the following.

Theorem 2 Let F be the set of all functions from an input set X into {0, 1} and let d be the VC dimension of H ⊆ F. Let e be the expected 0/1 loss. Let K ∈ N, f = f1, f2, . . . , fK ∈ F, {εi}_{i=1}^K, {ni}_{i=1}^K, and ĥk be as defined above in the multi-source learning model. For any δ such that 0 < δ < 1, with probability at least 1 − δ, for any k ∈ {1, . . . , K},

$$e(f, \hat{h}_k) \;\le\; 2\sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \epsilon_i + \min_{h \in H}\{e(f, h)\} + 2\sqrt{\frac{d \log(2en_{1:k}/d) + \log(16K/\delta)}{8n_{1:k}}}$$

In Figure 1 we provide a visual demonstration of the behavior of Theorem 1 applied to a simple classification problem.

4.2 Regression

We now turn to regression with squared loss. Here our target model f is any function from an input class X into some bounded subset of R. (Frequently we will have X ⊆ R^d, but this is not required.) We again assume a fixed but unknown distribution P (that does not depend on f) on the inputs. Our observations are of the form z = ⟨x, y⟩. Our loss function is L(h, ⟨x, y⟩) = (y − h(x))^2, and the expected loss is thus

$$e(g_1, g_2) = \mathbf{E}_{\langle x, y\rangle \sim P_{g_1}}[L(g_2, \langle x, y\rangle)] = \mathbf{E}_{x \sim P}\big[(g_1(x) - g_2(x))^2\big].$$
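Every quantity in the Theorem 2 bound except the approximation term min_h e(f, h) (which is the same for every k) is computable from the problem parameters, so one could in principle pick the prefix length k* by minimizing the computable part. A minimal sketch of this idea; the function names and the example pile sizes below are ours, not the paper's:

```python
import math

def theorem2_bound(eps, ns, d, delta, k):
    # Computable part of the Theorem 2 bound for the first k piles
    # (the unknown approximation term min_h e(f, h) is constant in k).
    K, n = len(ns), sum(ns[:k])
    weighted_divergence = 2 * sum(ns[i] * eps[i] for i in range(k)) / n
    complexity = 2 * math.sqrt(
        (d * math.log(2 * math.e * n / d) + math.log(16 * K / delta)) / (8 * n))
    return weighted_divergence + complexity

def best_prefix(eps, ns, d, delta):
    # Choose k* minimizing the computable part of the bound
    return min(range(1, len(ns) + 1),
               key=lambda k: theorem2_bound(eps, ns, d, delta, k))

# Two small nearby piles, then one huge but distant pile: the bound trades
# the complexity penalty against the weighted divergence term.
k_star = best_prefix([0.0, 0.01, 0.3], [50, 50, 5000], d=10, delta=0.1)
```

In this toy configuration the third pile is large but far from the target (ε3 = 0.3), so minimizing the bound selects only the first two piles.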
For regression it is known that the standard 1-triangle inequality does not hold. However, a 2-triangle inequality does hold and is stated in the following lemma. The proof is given in Appendix A.

Lemma 1 Given any three functions g1, g2, g3 : X → R, a fixed and unknown distribution P on the inputs X, and the expected loss $e(g_1, g_2) = \mathbf{E}_{x \sim P}[(g_1(x) - g_2(x))^2]$,

$$e(g_1, g_2) \le 2\big(e(g_1, g_3) + e(g_3, g_2)\big).$$

The other required ingredient is a uniform convergence bound for regression with squared loss. There is a rich literature on such bounds and their corresponding complexity measures for the model class H, including the fat-shattering generalization of VC dimension [7], ε-nets and entropy [6], and the combinatorial and pseudo-dimension approaches beautifully surveyed in [5]. For concreteness here we adopt the latter approach, since it serves well in the following section on density estimation. While a detailed exposition of the pseudo-dimension dim(H) of a class H of real-valued functions exceeds both our space limitations and scope, it suffices to say that it generalizes the VC dimension for binary functions and plays a similar role in uniform convergence bounds. More precisely, in the same way that the VC dimension measures the largest set of points on which a set of classifiers can exhibit "arbitrary" behavior (by achieving all possible labelings of the points), dim(H) measures the largest set of points on which the output values induced by H are "full" or "space-filling." (Technically we ask whether {⟨h(x1), . . . , h(xd)⟩ : h ∈ H} intersects all orthants of R^d with respect to some chosen origin.) Ignoring constant and logarithmic factors, uniform convergence bounds can be derived in which the complexity penalty is $\sqrt{\dim(H)/n}$. As with the VC dimension, dim(H) is ordinarily closely related to the number of free parameters defining H. Thus for linear functions in R^d it is O(d), for neural networks with W weights it is O(W), and so on.
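The 2-triangle inequality of Lemma 1 holds pointwise, since (a − b)² = ((a − c) + (c − b))² ≤ 2(a − c)² + 2(c − b)², and therefore also in expectation. A quick empirical check on three arbitrary bounded functions of our own choosing:

```python
import random

def sq_loss(g, h, xs):
    # Empirical expected squared loss between two functions on a sample
    return sum((g(x) - h(x)) ** 2 for x in xs) / len(xs)

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(1000)]
# Three arbitrary bounded functions standing in for g1, g2, g3
g1, g2, g3 = (lambda x: x), (lambda x: x * x - 0.5), (lambda x: -x)

lhs = sq_loss(g1, g2, xs)
rhs = 2 * (sq_loss(g1, g3, xs) + sq_loss(g3, g2, xs))
assert lhs <= rhs  # holds pointwise: (a-b)^2 <= 2(a-c)^2 + 2(c-b)^2
```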
Careful application of pseudo-dimension results from [5] along with Lemma 1 and Theorem 1 yields the following. A sketch of the proof appears in Appendix A. (A version of this paper with the appendix included can be found on the authors' websites.)

Theorem 3 Let F be the set of functions from X into [−B, B] and let d be the pseudo-dimension of H ⊆ F under squared loss. Let e be the expected squared loss. Let K ∈ N, f = f1, f2, . . . , fK ∈ F, {εi}_{i=1}^K, {ni}_{i=1}^K, and ĥk be as defined in the multi-source learning model. Assume that n1 ≥ d/16e. For any δ such that 0 < δ < 1, with probability at least 1 − δ, for any k ∈ {1, . . . , K},

$$e(f, \hat{h}_k) \;\le\; 6\sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \epsilon_i + 4\min_{h \in H}\{e(f, h)\} + 128B^2\left(\sqrt{\frac{d}{n_{1:k}}} + \sqrt{\frac{\ln(16K/\delta)}{n_{1:k}}}\sqrt{\ln\frac{16e^2 n_{1:k}}{d}}\right)$$

4.3 Density estimation

We turn to the more complex application to density estimation. Here our models are no longer functions, but densities P. The loss function for an observation x is the log loss L(P, x) = log(1/P(x)). The expected loss is then

$$e(P_1, P_2) = \mathbf{E}_{x \sim P_1}[L(P_2, x)] = \mathbf{E}_{x \sim P_1}[\log(1/P_2(x))].$$

As we are not aware of an α-triangle inequality that holds simultaneously for all density functions, we provide general mathematical tools to derive specialized α-triangle inequalities for specific classes of distributions. We focus on the exponential family of distributions, which is quite general and has nice properties that allow us to derive the necessary machinery to apply Theorem 1. We start by defining the exponential family and explaining some of its properties. We proceed by deriving an α-triangle inequality for the Kullback–Leibler divergence in exponential families that implies an α-triangle inequality for our expected loss function. This inequality and a uniform convergence bound based on pseudo-dimension yield a general method for deriving error bounds in the multiple-source setting, which we illustrate using the example of multinomial distributions. Let x ∈ X be a random variable, in either a continuous space (e.g.
X ⊆ R^d) or a discrete space (e.g. X ⊆ Z^d). We define the exponential family of distributions in terms of the following components. First, we have a vector function of the sufficient statistics needed to compute the distribution, denoted Ψ : R^d → R^{d'}. Associated with Ψ is a vector of expectation parameters µ ∈ R^{d'} which parameterizes a particular distribution. Next we have a convex function F : R^{d'} → R (defined below), which is unique for each family of exponential distributions, and a normalization function P0(x). Using this notation we define a probability distribution (in the expectation parameters) to be

$$P_F(x \mid \mu) = e^{\nabla F(\mu)\cdot(\Psi(x) - \mu) + F(\mu)}\, P_0(x). \quad (1)$$

For all distributions we consider it will hold that $\mathbf{E}_{x \sim P_F(\cdot \mid \mu)}[\Psi(x)] = \mu$. Using this fact and the linearity of expectation, we can derive the Kullback–Leibler (KL) divergence between two distributions of the same family (which use the same functions F and Ψ) and obtain

$$\mathrm{KL}\big(P_F(x \mid \mu_1) \,\|\, P_F(x \mid \mu_2)\big) = F(\mu_1) - \big[F(\mu_2) + \nabla F(\mu_2)\cdot(\mu_1 - \mu_2)\big]. \quad (2)$$

We define the quantity on the right to be the Bregman divergence between the two (parameter) vectors µ1 and µ2, denoted $B_F(\mu_1 \,\|\, \mu_2)$. The Bregman divergence measures the difference between F and its first-order Taylor expansion about µ2, evaluated at µ1. Eq. (2) states that the KL divergence between two members of the exponential family is equal to the Bregman divergence between the two corresponding expectation parameters. We refer the reader to [8] for more details about Bregman divergences and to [9] for more information about exponential families. We will use the above relation between the KL divergence for exponential families and Bregman divergences to derive a triangle inequality as required by our theory. The following lemma shows that if we can provide a triangle inequality for the KL divergence, we can do so for the expected log loss.

Lemma 2 Let e be the expected log loss, i.e., e(P1, P2) = E_{x∼P1}[log(1/P2(x))].
For any three probability distributions P1, P2, and P3, if KL(P1 ∥ P2) ≤ α(KL(P1 ∥ P3) + KL(P3 ∥ P2)) for some α ≥ 1, then e(P1, P2) ≤ α(e(P1, P3) + e(P3, P2)).

The proof is given in Appendix B. The next lemma gives an approximate triangle inequality for the KL divergence. We assume that there exists a closed set P = {µ} which contains all the parameter vectors. The proof (again see Appendix B) uses Taylor's Theorem to derive upper and lower bounds on the Bregman divergence and then uses Eq. (2) to relate these bounds to the KL divergence.

Lemma 3 Let P1, P2, and P3 be distributions from an exponential family with parameters µ and function F. Then KL(P1 ∥ P2) ≤ α(KL(P1 ∥ P3) + KL(P3 ∥ P2)), where

$$\alpha = 2\,\frac{\sup_{\xi \in P} \lambda_1(H(F(\xi)))}{\inf_{\xi \in P} \lambda_{d'}(H(F(\xi)))}.$$

Here λ1(·) and λd'(·) are the highest and lowest eigenvalues of a given matrix, and H(·) is the Hessian matrix. The following theorem, which states bounds for multinomial distributions in the multi-source setting, is provided to illustrate the type of results that can be obtained using the machinery described in this section. More details on the application to the multinomial distribution are given in Appendix B.

Theorem 4 Let F ≡ H be the set of multinomial distributions over N values with the probability of each value bounded from below by γ for some γ > 0, and let α = 2/γ. Let d be the pseudo-dimension of H under log loss, and let e be the expected log loss. Let K ∈ N, f = f1, f2, . . . , fK ∈ F, {εi}_{i=1}^K, {ni}_{i=1}^K, and ĥk be as defined above in the multi-source learning model. (Here we can actually make the weaker assumption that the εi bound the KL divergences rather than the expected log loss, which avoids our needing upper bounds on the entropy of each source distribution.) Assume that n1 ≥ d/16e. For any 0 < δ < 1, with probability at least 1 − δ, for any k ∈ {1, . . . , K},

$$e(f, \hat{h}_k) \;\le\; (\alpha + \alpha^2)\sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \epsilon_i + \alpha \min_{h \in H}\{e(f, h)\} + 128\log^2\frac{\alpha}{2}\left(\sqrt{\frac{d}{n_{1:k}}} + \sqrt{\frac{\ln(16K/\delta)}{n_{1:k}}}\sqrt{\ln\frac{16e^2 n_{1:k}}{d}}\right)$$
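Both ingredients above can be checked numerically on small cases. For the Bernoulli family (a one-parameter exponential family with F the negative entropy), Eq. (2) says the KL divergence equals the Bregman divergence of F; and for γ-lower-bounded multinomials, Theorem 4 uses the α-triangle inequality with α = 2/γ. A sketch of both checks; the specific parameter values are ours:

```python
import math, random

def kl(p, q):
    # KL divergence between two discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def bregman_negentropy(p, q):
    # B_F(p || q) for the Bernoulli family, F(mu) = mu log mu + (1-mu) log(1-mu)
    F = lambda m: m * math.log(m) + (1 - m) * math.log(1 - m)
    dF = lambda m: math.log(m / (1 - m))
    return F(p) - F(q) - dF(q) * (p - q)

# Eq. (2): KL equals the Bregman divergence, checked for Bernoulli parameters
for p, q in [(0.2, 0.7), (0.5, 0.1), (0.9, 0.6)]:
    assert abs(kl([p, 1 - p], [q, 1 - q]) - bregman_negentropy(p, q)) < 1e-12

# Lemma 3 / Theorem 4: alpha-triangle inequality with alpha = 2/gamma
def random_multinomial(n_vals, gamma, rng):
    # Draw a distribution whose entries are all at least gamma
    raw = [rng.random() for _ in range(n_vals)]
    s, free = sum(raw), 1 - n_vals * gamma
    return [gamma + free * r / s for r in raw]

rng, gamma = random.Random(1), 0.05
alpha = 2 / gamma
for _ in range(200):
    p1, p2, p3 = (random_multinomial(5, gamma, rng) for _ in range(3))
    assert kl(p1, p2) <= alpha * (kl(p1, p3) + kl(p3, p2)) + 1e-12
```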
5 Data-dependent bounds

Given the interest in data-dependent convergence methods (such as maximum margin, PAC-Bayes, and others) in recent years, it is natural to ask how our multi-source theory can exploit these modern bounds. We examine one specific case for classification here using Rademacher complexity [10, 11]; analogs can be derived in a similar manner for other learning problems. If H is a class of functions mapping from a set X to R, we define the empirical Rademacher complexity of H on a fixed set of observations x1, . . . , xn as

$$\hat{R}_n(H) = \mathbf{E}\left[\sup_{h \in H} \frac{2}{n}\sum_{i=1}^{n} \sigma_i h(x_i) \,\middle|\, x_1, \ldots, x_n\right]$$

where the expectation is taken over independent uniform {±1}-valued random variables σ1, . . . , σn. The Rademacher complexity for n observations is then defined as $R_n(H) = \mathbf{E}[\hat{R}_n(H)]$, where the expectation is over x1, . . . , xn. We can apply Rademacher-based convergence bounds to obtain a data-dependent multi-source bound for classification. A proof sketch using techniques and theorems of [10] is in Appendix C.

Theorem 5 Let F be the set of all functions from an input set X into {−1, 1} and let $\hat{R}_{n_{1:k}}$ be the empirical Rademacher complexity of H ⊆ F on the first k piles of data. Let e be the expected 0/1 loss. Let K ∈ N, f = f1, f2, . . . , fK ∈ F, {εi}_{i=1}^K, {ni}_{i=1}^K, and ĥk be as defined in the multi-source learning model. Assume that n1 ≥ d/16e. For any δ such that 0 < δ < 1, with probability at least 1 − δ, for any k ∈ {1, . . . , K},

$$e(f, \hat{h}_k) \;\le\; 2\sum_{i=1}^{k} \frac{n_i}{n_{1:k}} \epsilon_i + \min_{h \in H}\{e(f, h)\} + \hat{R}_{n_{1:k}}(H) + 4\sqrt{\frac{2\ln(4K/\delta)}{n_{1:k}}}$$

While the use of data-dependent complexity measures can be expected to yield more accurate bounds, and thus better decisions about the number k∗ of piles to use, it is not without its costs in comparison to the more standard data-independent approaches.
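The empirical Rademacher complexity defined above can be estimated by Monte-Carlo sampling over the sign variables σ. A minimal sketch on a tiny hypothesis class of our own devising (threshold classifiers closed under negation, which makes the supremum non-negative for every σ draw):

```python
import random

def empirical_rademacher(H, xs, n_draws=2000, rng=None):
    # Monte-Carlo estimate of R_hat_n(H) = E_sigma sup_h (2/n) sum_i sigma_i h(x_i)
    rng = rng or random.Random(0)
    n, total = len(xs), 0.0
    for _ in range(n_draws):
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        total += max(2 / n * sum(s * h(x) for s, x in zip(sigma, xs)) for h in H)
    return total / n_draws

# Tiny class of threshold classifiers into {-1, +1}, closed under negation
H = []
for t in (0.25, 0.5, 0.75):
    H.append(lambda x, t=t: 1 if x >= t else -1)
    H.append(lambda x, t=t: -1 if x >= t else 1)

xs = [i / 20 for i in range(20)]
r = empirical_rademacher(H, xs)
assert 0 <= r <= 2  # negation closure makes the sup non-negative for every sigma
```

As the text notes, evaluating such a data-dependent term for every prefix of the piles requires looking at the data of each prefix, unlike the VC-based bounds.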
In particular, in principle the optimization of the bound of Theorem 5 to choose k∗ may actually involve running the learning algorithm on all possible prefixes of the piles, since we cannot know the data-dependent complexity term for each prefix without doing so. In contrast, the data-independent bounds can be computed and optimized for k∗ without examining the data at all, and the learning performed only once on the first k∗ piles.

References

[1] K. Crammer, M. Kearns, and J. Wortman. Learning from data of variable quality. In NIPS 18, 2006.
[2] P. Wu and T. Dietterich. Improving SVM accuracy by training on auxiliary data sources. In ICML, 2004.
[3] J. Baxter. Learning internal representations. In COLT, 1995.
[4] S. Ben-David. Exploiting task relatedness for multiple task learning. In COLT, 2003.
[5] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 1992.
[6] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[7] M. Kearns and R. Schapire. Efficient distribution-free learning of probabilistic concepts. JCSS, 1994.
[8] Y. Censor and S. A. Zenios. Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press, New York, NY, USA, 1997.
[9] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Technical Report 649, Department of Statistics, University of California, Berkeley, 2003.
[10] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 2002.
[11] V. Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Trans. Info. Theory, 2001.
2006
Approximate inference using planar graph decomposition

Amir Globerson, Tommi Jaakkola
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
gamir,tommi@csail.mit.edu

Abstract

A number of exact and approximate methods are available for inference calculations in graphical models. Many recent approximate methods for graphs with cycles are based on tractable algorithms for tree-structured graphs. Here we base the approximation on a different tractable model, planar graphs with binary variables and pure interaction potentials (no external field). The partition function for such models can be calculated exactly using an algorithm introduced by Fisher and Kasteleyn in the 1960s. We show how such tractable planar models can be used in a decomposition to derive upper bounds on the partition function of non-planar models. The resulting algorithm also allows for the estimation of marginals. We compare our planar decomposition to the tree decomposition method of Wainwright et al., showing that it results in a much tighter bound on the partition function, improved pairwise marginals, and comparable singleton marginals.

Graphical models are a powerful tool for modeling multivariate distributions, and have been successfully applied in various fields such as coding theory and image processing. Applications of graphical models typically involve calculating two types of quantities, namely marginal distributions and MAP assignments. The evaluation of the model partition function is closely related to calculating marginals [12]. These three problems can rarely be solved exactly in polynomial time, and are provably computationally hard in the general case [1]. When the model conforms to a tree structure, however, all these problems can be solved in polynomial time. This has prompted extensive research into tree-based methods.
For example, the junction tree method [6] converts a graphical model into a tree by clustering nodes into cliques, such that the graph over cliques is a tree. The resulting maximal clique size (cf. tree width) may nevertheless be prohibitively large. Wainwright et al. [9, 11] proposed an approximate method based on trees known as tree reweighting (TRW). The TRW approach decomposes the potential vector of a graphical model into a mixture over spanning trees of the model, and then uses convexity arguments to bound various quantities, such as the partition function. One key advantage of this approach is that it provides bounds on the partition function value, a property which is not shared by approximations based on Bethe free energies [13]. In this paper we focus on a different class of tractable models: planar graphs. A graph is called planar if it can be drawn in the plane without crossing edges. Works in the 1960s by the physicists Fisher [5] and Kasteleyn [7], among others, have shown that the partition function for planar graphs may be calculated in polynomial time. This, however, is true under two key restrictions. One is that the variables xi are binary. The other is that the interaction potential depends only on xixj (where xi ∈ {±1}), and not on their individual values (i.e., the zero external field case). Here we show how the above method can be used to obtain upper bounds on the partition function for non-planar graphs. As in TRW, we decompose the potential of a non-planar graph into a sum over spanning planar models, and then use a convexity argument to obtain an upper bound on the log partition function. The bound optimization is a convex problem, and can be solved in polynomial time. We compare our method with TRW on a planar graph with an external field, and show that it performs favorably with respect to both pairwise marginals and the bound on the partition function, and that the two methods give similar results for singleton marginals.
1 Definitions and Notations

Given a graph G with n vertices and a set of edges E, we are interested in pairwise Markov Random Fields (MRF) over the graph G. A pairwise MRF [13] is a multivariate distribution over variables x = {x1, . . . , xn} defined as

$$p(x) = \frac{1}{Z} e^{\sum_{ij \in E} f_{ij}(x_i, x_j)} \quad (1)$$

where fij are a set of |E| functions, or interaction potentials, defined over pairs of variables. The partition function is defined as $Z = \sum_x e^{\sum_{ij \in E} f_{ij}(x_i, x_j)}$. Here we will focus on the case where xi ∈ {±1}. Furthermore, we will be interested in interaction potentials which only depend on agreement or disagreement between the signs of their variables. We define those by

$$f_{ij}(x_i, x_j) = \tfrac{1}{2}\theta_{ij}(1 + x_i x_j) = \theta_{ij} I(x_i = x_j) \quad (2)$$

so that fij(xi, xj) is zero if xi ≠ xj and θij if xi = xj. The model is then defined via the set of parameters θij. We use θ to denote the vector of parameters θij, and denote the partition function by Z(θ) to highlight its dependence on these parameters. A graph G is defined as planar if it can be drawn in the plane without any intersection of edges [4]. With some abuse of notation, we define E as the set of line segments in R^2 corresponding to the edges in the graph. The regions of R^2 \ E are defined as the faces of the graph. The face which corresponds to an unbounded region is called the external face. Given a planar graph G, its dual graph G∗ is defined in the following way: the vertices of G∗ correspond to faces of G, and there is an edge between two vertices in G∗ iff the two corresponding faces in G share an edge. If the graph G is weighted, the weight on an edge in G∗ is the weight on the edge shared by the corresponding faces in G. A plane triangulation of a planar graph G is obtained from G by adding edges such that all the faces of the resulting graph have exactly three vertices. Thus a plane triangulated graph has a dual where all vertices have degree three. It can be shown that every plane graph can be plane triangulated [4].
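Equations (1)-(2) can be checked directly by brute-force enumeration on a tiny graph; a sketch, where the 4-cycle example and the function name are ours:

```python
from itertools import product
import math

def partition_function(n, theta):
    """Brute-force Z(theta) = sum_x exp(sum_ij theta_ij I(x_i = x_j))
    for the agreement potentials of Eq. (2)."""
    Z = 0.0
    for x in product((-1, 1), repeat=n):
        energy = sum(t for (i, j), t in theta.items() if x[i] == x[j])
        Z += math.exp(energy)
    return Z

# A 4-cycle with uniform coupling theta = 0.5 on every edge. Around a cycle
# the number of disagreement edges is even, so the agreement count is 4, 2
# or 0, reached by 2, 12 and 2 assignments respectively.
theta = {(0, 1): 0.5, (1, 2): 0.5, (2, 3): 0.5, (3, 0): 0.5}
Z = partition_function(4, theta)
```

This exponential-time enumeration is exactly what the Fisher/Kasteleyn construction reviewed next avoids for planar graphs.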
We shall also need the notion of a perfect matching on a graph. A perfect matching on a graph G is defined as a set of edges H ⊆ E such that every vertex in G has exactly one edge in H incident on it. If the graph is weighted, the weight of the matching is defined as the product of the weights of the edges in the matching. Finally, we recall the definition of the marginal polytope of a graph [12]. Consider an MRF over a graph G where fij are given by Equation 2. Denote the probability of the event I(xi = xj) under p(x) by τij. The marginal polytope of G, denoted by M(G), is defined as the set of values τij that can be obtained under some assignment to the parameters θij. For a general graph G the polytope M(G) cannot be described using a polynomial number of inequalities. However, for planar graphs, it turns out that a set of O(n^3) constraints, commonly referred to as triangle inequalities, suffice to describe M(G) (see [3], page 434). The triangle inequalities are defined by

$$\mathrm{TRI}(n) = \{\tau_{ij} : \tau_{ij} + \tau_{jk} - \tau_{ik} \le 1,\; \tau_{ij} + \tau_{jk} + \tau_{ik} \ge 1,\; \forall i, j, k \in \{1, \ldots, n\}\} \quad (3)$$

(The definition here is slightly different from that in [3], since here we refer to agreement probabilities, whereas [3] refers to disagreement probabilities. This polytope is also referred to as the cut polytope.) Note that the above inequalities actually contain variables τij which do not correspond to edges in the original graph G. Thus the equality M(G) = TRI(n) should be understood as referring only to the values of τij that correspond to edges in the graph. Importantly, the values of τij for edges not in the graph need not be valid marginals for any MRF. In other words, M(G) is a projection of TRI(n) on the set of edges of G. It is well known that the marginal polytope for trees is described via pairwise constraints. It is thus interesting that for planar graphs, it is triplets, rather than pairwise constraints, that characterize the polytope.
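The triangle inequalities of Eq. (3) are necessary for any true agreement probabilities: over binary variables, at least two of any three values coincide, so I(xi = xj) + I(xj = xk) + I(xi = xk) is always 1 or 3. A brute-force check on a small model of our own choosing:

```python
from itertools import product, combinations
import math

def agreement_marginals(n, theta):
    # Exact tau_ij = P(x_i = x_j) for the model of Eqs. (1)-(2), by enumeration
    states = [(x, math.exp(sum(t for (i, j), t in theta.items() if x[i] == x[j])))
              for x in product((-1, 1), repeat=n)]
    Z = sum(w for _, w in states)
    return {(i, j): sum(w for x, w in states if x[i] == x[j]) / Z
            for i, j in combinations(range(n), 2)}

tau = agreement_marginals(4, {(0, 1): 1.0, (1, 2): -0.7, (0, 2): 0.3, (2, 3): 0.5})
for i, j, k in combinations(range(4), 3):
    tij, tjk, tik = tau[(i, j)], tau[(j, k)], tau[(i, k)]
    # all rotations of the first inequality in Eq. (3), plus the second one
    assert tij + tjk - tik <= 1 + 1e-9
    assert tjk + tik - tij <= 1 + 1e-9
    assert tik + tij - tjk <= 1 + 1e-9
    assert tij + tjk + tik >= 1 - 1e-9
```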
In this sense, planar graphs and trees may be viewed as a hierarchy of polytope complexity classes. It remains an interesting problem to characterize other structures in this hierarchy and their related inference algorithms.

2 Exact calculation of partition function using perfect matching

The seminal works of Kasteleyn [7] and Fisher [5] have shown how one can calculate the partition function for a binary MRF over a planar graph with pure interaction potentials. We briefly review Fisher's construction, which we will use in what follows. Our interpretation of the method differs somewhat from that of Fisher, but we believe it is more straightforward. The key idea in calculating the partition function is to convert the summation over values of x to the problem of calculating the sum of weights of all perfect matchings in a graph constructed from G, as shown below. In this section, we consider weighted graphs (graphs with numbers assigned to their edges). For the graph G associated with the pairwise MRF, we assign weights $w_{ij} = e^{2\theta_{ij}}$ to the edges. The first step in the construction is to plane triangulate the graph G. Let us call the resulting graph GT. We define an MRF on GT by assigning a parameter θij = 0 to the edges that have been added to G, and the corresponding weight wij = 1. Thus GT essentially describes the same distribution as G, and therefore has the same partition function. We can thus restrict our attention to calculating the partition function for the MRF on GT. As a first step in calculating a partition function over GT, we introduce the following definition: a set of edges Ê in GT is an agreement edge set (or AES) if for every triangle face F in GT one of the following holds: the edges in F are all in Ê, or exactly one of the edges in F is in Ê. The weight of a set Ê is defined as the product of the weights of the edges in Ê. It can be shown that there exists a bijection between pairs of assignments {x, −x} and agreement edge sets.
The mapping from x to an edge set is simply the set of edges such that xi = xj. It is easy to see that this is an agreement edge set. The reverse mapping is obtained by finding an assignment x such that xi = xj iff the corresponding edge is in the agreement edge set. The existence of this mapping can be shown by induction on the number of (triangle) faces. The contribution of a given assignment x to the partition function is $e^{\sum_{ij \in E} \theta_{ij} I(x_i = x_j)}$. If x corresponds to an AES denoted by Ê, it is easy to see that

$$e^{\sum_{ij \in E} \theta_{ij} I(x_i = x_j)} = e^{-\sum_{ij \in E} \theta_{ij}}\, e^{\sum_{ij \in \hat{E}} 2\theta_{ij}} = c\, e^{\sum_{ij \in \hat{E}} 2\theta_{ij}} = c \prod_{ij \in \hat{E}} w_{ij} \quad (4)$$

where $c = e^{-\sum_{ij \in E} \theta_{ij}}$. Define the superset Λ as the set of agreement edge sets. The above then implies that $Z(\theta) = 2c \sum_{\hat{E} \in \Lambda} \prod_{ij \in \hat{E}} w_{ij}$, and is thus proportional to the sum of AES weights. To sum over agreement edge sets, we use the following elegant trick introduced by Fisher [5]. Construct a new graph GPM from the dual of GT by introducing new vertices and edges according to the following rule: replace each original vertex with three vertices that are connected to each other, and assign a weight of one to the new edges. Next, consider the three neighbors of the original vertex. Connect each of the three new vertices to one of these three neighbors, keeping the original weights on these edges. The transformation is illustrated in Figure 1. The new graph GPM has O(3n) vertices, and is also planar. It can be seen that there is a one to one correspondence between perfect matchings in GPM and agreement edge sets in GT. Define Ω to be the set of perfect matchings in GPM. Then $Z(\theta) = 2c \sum_{M \in \Omega} \prod_{ij \in M} w_{ij}$, where we have used the fact that all the new weights have a value of one. Thus, the partition function is a sum over the weights of perfect matchings in GPM. Finally, we need a way of summing over the weights of the set of perfect matchings in a graph.
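The assignment-to-AES mapping above can be sanity-checked exhaustively: with binary values, the three vertices of any triangle face must produce exactly one or exactly three pairwise agreements, which is precisely the AES condition. A sketch on a small (hypothetical) plane triangulation of our own:

```python
from itertools import product

# Faces of a small plane triangulation used only as an example: the square
# 0-1-2-3 triangulated by the diagonal (0, 2)
faces = [(0, 1, 2), (0, 2, 3)]

for x in product((-1, 1), repeat=4):
    for i, j, k in faces:
        agree = sum(1 for a, b in ((i, j), (j, k), (i, k)) if x[a] == x[b])
        assert agree in (1, 3)  # the AES condition holds for every face
```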
Kasteleyn [7] proved that for a planar graph GPM, this sum may be obtained using the following sequence of steps:

• Direct the edges of the graph GPM such that for every face (except possibly the external face), the number of edges on its perimeter oriented in a clockwise manner is odd. Kasteleyn showed that such a so-called Pfaffian orientation may be constructed in polynomial time for a planar graph (see also [8], page 322). (Note that in the dual of GT all vertices have degree three, since GT is plane triangulated.)

• Define the matrix P(GPM) to be a skew-symmetric matrix such that Pij = 0 if ij is not an edge, Pij = wij if the arrow on edge ij runs from i to j, and Pij = −wij otherwise.

• The sum over weighted matchings can then be shown to equal $\sqrt{|P(G_{PM})|}$.

The partition function is thus given by $Z(\theta) = 2c\sqrt{|P(G_{PM})|}$.

Figure 1: Illustration of the graph transformations in Section 2 for a complete graph with four vertices. Left panel shows the original weighted graph (dotted edges and grey vertices) and its dual (solid edges and black vertices). Right panel shows the dual graph with each vertex replaced by a triangle (the graph GPM in the text). Weights for dual graph edges correspond to the weights on the original graph.

To conclude this section we reiterate the following two key points: the partition function of a binary MRF over a planar graph with interaction potentials as in Equation 2 may be calculated in polynomial time by calculating the determinant of a matrix of size O(3n). An important outcome of this result is that the functional relation between Z(θ) and the parameters θij is known, a fact we shall use in what follows.

3 Partition function bounds via planar decomposition

Given a non-planar graph G over binary variables with a vector of interaction potentials θ, we wish to use the exact planar computation to obtain a bound on the partition function of the MRF on G.
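Kasteleyn's determinant identity can be illustrated on the smallest non-trivial case: a weighted 4-cycle, whose single internal face gets a Pfaffian orientation with three clockwise edges. The graph, weights, and determinant routine below are our own example, not the construction GPM itself:

```python
import math

def det(M):
    # Laplace expansion; fine for the 4x4 matrix used here
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

w01, w12, w23, w03 = 1.2, 0.6, 0.7, 1.5
# Skew-symmetric Kasteleyn matrix for the 4-cycle oriented 0->1, 1->2,
# 2->3, 0->3 (three clockwise edges on the internal face: odd, as required)
P = [[0.0, w01, 0.0, w03],
     [-w01, 0.0, w12, 0.0],
     [0.0, -w12, 0.0, w23],
     [-w03, 0.0, -w23, 0.0]]
# The 4-cycle has exactly two perfect matchings: {01, 23} and {12, 03}
matching_sum = w01 * w23 + w12 * w03
assert abs(math.sqrt(det(P)) - matching_sum) < 1e-9
```

The determinant of a skew-symmetric matrix is the square of its Pfaffian, which is why the square root recovers the (positive) matching sum.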
We assume for simplicity that the potentials on the MRF for G are given in the form of Equation 2. Thus, G violates the assumptions of the previous section only in its non-planarity. Define G(r) as a set of spanning planar subgraphs of G, i.e., each graph G(r) is planar and contains all the vertices of G and some of its edges. Denote by m the number of such graphs. Introduce the following definitions:

• θ(r) is a set of parameters on the edges of G(r), and θ(r)ij is an element in this set. Z(θ(r)) is the partition function of the MRF on G(r) with parameters θ(r).

• θ̂(r) is a set of parameters on the edges of G such that if edge (ij) is in G(r) then θ̂(r)ij = θ(r)ij, and otherwise θ̂(r)ij = 0.

Given a distribution ρ(r) on the graphs G(r) (i.e., ρ(r) ≥ 0 for r = 1, . . . , m and Σr ρ(r) = 1), assume that the parameters for G(r) are such that

$$\theta = \sum_r \rho(r)\,\hat{\theta}^{(r)} \quad (5)$$

Then, by the convexity of the log partition function as a function of the model parameters, we have

$$\log Z(\theta) \le \sum_r \rho(r) \log Z(\theta^{(r)}) \equiv f(\theta, \rho, \theta^{(r)}) \quad (6)$$

Since by assumption the graphs G(r) are planar, this bound can be calculated in polynomial time. Since this bound is true for any set of parameters θ(r) which satisfies the condition in Equation 5 and for any distribution ρ(r), we may optimize over these two variables to obtain the tightest bound possible. Define the optimal bound for a fixed value of ρ(r) by g(ρ, θ) (optimization is w.r.t. θ(r)):

$$g(\rho, \theta) = \min_{\theta^{(r)} :\; \sum_r \rho(r)\hat{\theta}^{(r)} = \theta} f(\theta, \rho, \theta^{(r)}) \quad (7)$$

Also, define the optimum of the above w.r.t. ρ by h(θ):

$$h(\theta) = \min_{\rho(r) \ge 0,\; \sum_r \rho(r) = 1} g(\theta, \rho) \quad (8)$$

Thus, h(θ) is the optimal upper bound for the given parameter vector θ. In the following section we argue that we can in fact find the global optimum of the above problem.

4 Globally Optimal Bound Optimization

First consider calculating g(ρ, θ) from Equation 7. Note that since log Z(θ(r)) is a convex function of θ(r), and the constraints are linear, the overall optimization is convex and can be solved efficiently.
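The bound of Eq. (6) is just Jensen's inequality applied to the convex function log Z, and can be verified by brute force on a tiny graph. In the sketch below (our own example; K4 is of course planar, so it only illustrates the convexity bound, not the need for it) the edges are split into two spanning subgraphs, and choosing θ(r) = θ/ρ(r) on each subgraph's edges satisfies Eq. (5):

```python
from itertools import product
import math

def logZ(n, theta):
    # Brute-force log partition function for the agreement potentials of Eq. (2)
    return math.log(sum(
        math.exp(sum(t for (i, j), t in theta.items() if x[i] == x[j]))
        for x in product((-1, 1), repeat=n)))

# Full model on K4
theta = {(0, 1): 0.8, (0, 2): -0.4, (0, 3): 0.6,
         (1, 2): 0.5, (1, 3): -0.2, (2, 3): 0.9}
# Two spanning subgraphs covering the edge set; scale by 1/rho(r) (Eq. 5)
edges1, edges2 = [(0, 1), (1, 2), (2, 3)], [(0, 2), (0, 3), (1, 3)]
rho = (0.5, 0.5)
theta1 = {e: theta[e] / rho[0] for e in edges1}
theta2 = {e: theta[e] / rho[1] for e in edges2}
bound = rho[0] * logZ(4, theta1) + rho[1] * logZ(4, theta2)
assert logZ(4, theta) <= bound + 1e-9  # Eq. (6)
```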
In the current implementation, we use a projected gradient algorithm [2]. The gradient of f(θ, ρ, θ(r)) w.r.t. θ(r) is given by

$$\frac{\partial f(\theta, \rho, \theta^{(r)})}{\partial \theta^{(r)}_{ij}} = \rho(r)\left(1 + e^{\theta^{(r)}_{ij}}\right)\left[P^{-1}(G^{(r)}_{PM})\right]_{k(i,j)} \mathrm{Sign}\!\left(P_{k(i,j)}(G^{(r)}_{PM})\right) \quad (9)$$

where k(i, j) returns the row and column indices of the element in the upper triangular matrix of $P(G^{(r)}_{PM})$ which contains the element $e^{2\theta^{(r)}_{ij}}$. Since the optimization in Equation 7 is convex, it has an equivalent convex dual. Although we do not use this dual for optimization (because of the difficulty of expressing the entropy of planar models solely in terms of triplet marginals), it nevertheless allows some insight into the structure of the problem. The dual in this case is closely linked to the notion of the marginal polytope defined in Section 1. Using a derivation similar to [11], we arrive at the following characterization of the dual:

$$g(\rho, \theta) = \max_{\tau \in \mathrm{TRI}(n)} \theta \cdot \tau + \sum_r \rho(r)\, H(\theta^{(r)}(\tau)) \quad (10)$$

where θ(r)(τ) denotes the parameters of an MRF on G(r) such that its marginals are given by the restriction of τ to the edges of G(r), and H(θ(r)(τ)) denotes the entropy of the MRF over G(r) with parameters θ(r)(τ). The maximized function in Equation 10 is linear in ρ, and thus g(ρ, θ) is a pointwise maximum over (linear) convex functions in ρ and is thus convex in ρ. It therefore has no local minima. Denote by θ(r)min(ρ) the set of parameters that minimizes Equation 7 for a given value of ρ. Using a derivation similar to that in [11], the gradient of g(ρ, θ) can be shown to be

$$\frac{\partial g(\rho, \theta)}{\partial \rho(r)} = H\!\left(\theta^{(r)}_{\min}(\rho)\right) \quad (11)$$

Since the partition function for G(r) can be calculated efficiently, so can the entropy. We can now summarize the algorithm for calculating h(θ):

• Initialize ρ0. Iterate:
– For ρt, find θ(r) which solves the minimization in Equation 7.
– Calculate the gradient of g(ρ, θ) at ρt using the expression in Equation 11.
– Update ρt+1 = ρt + αv, where v is a feasible search direction calculated from the gradient of g(ρ, θ) and the simplex constraints on ρ. The step size α is calculated via an Armijo line search.
– Halt when the change in g(ρ, θ) is smaller than some threshold.

Note that the minimization w.r.t. θ(r) is not very time consuming, since we can initialize it with the minimum from the previous step, and thus only a few iterations are needed to find the new optimum, provided the change in ρ is not too big. The above algorithm is guaranteed to converge to a global optimum of ρ [2], and thus we obtain the tightest possible upper bound on Z(θ) given our planar graph decomposition. The procedure described here is asymmetric w.r.t. ρ and θ(r). In a symmetric formulation the minimizing gradient steps could be carried out jointly or in an alternating sequence. The symmetric formulation can be obtained by decoupling ρ and θ(r) in the bi-linear constraint $\sum_r \rho(r)\hat{\theta}^{(r)} = \theta$.

Figure 2: Illustration of planar subgraph construction for a rectangular lattice with external field. Original graph is shown on the left. The field vertex is connected to all vertices (edges not shown). The graph on the right results from isolating the 4th and 5th columns of the original graph (shown in grey), and connecting the field vertex to the external vertices of the three disconnected components. Note that the resulting graph is planar.

Specifically, we introduce θ̃(r) = θ(r)ρ(r) and perform the optimization w.r.t. ρ and θ̃(r). It can be shown that a stationary point of f(θ, ρ, θ̃(r)) with the relevant (de-coupled) constraint is equivalent to the procedure described above. The advantage of this approach is that the exact minimization w.r.t. θ(r) is not required before modifying ρ. Our experiments have shown, however, that the methods take comparable times to converge, although this may be a property of the implementation.
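The update ρt+1 = ρt + αv must keep ρ on the probability simplex. The paper uses a feasible-direction step; a common alternative way to enforce the constraint (not necessarily what the authors implemented) is Euclidean projection onto the simplex, sketched here with the standard sort-based algorithm:

```python
def project_to_simplex(v):
    # Euclidean projection of v onto {p : p_i >= 0, sum_i p_i = 1}
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u):
        css += ui
        t = (css - 1) / (i + 1)
        if ui - t > 0:          # i is still in the support
            theta = t
    return [max(x - theta, 0.0) for x in v]

# A gradient step may leave the simplex; project it back
rho = [0.4, 0.3, 0.3]
grad = [1.0, -0.5, 0.2]
alpha = 0.5
rho_next = project_to_simplex([r + alpha * g for r, g in zip(rho, grad)])
assert abs(sum(rho_next) - 1) < 1e-9 and min(rho_next) >= 0
```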
5 Estimating Marginals

The optimization problem as defined above minimizes an upper bound on the partition function. However, it may also be of interest to obtain estimates of the marginals of the MRF over G. To obtain marginal estimates, we follow the approach in [11]. We first characterize the optimum of Equation 7 for a fixed value of ρ. Deriving the Lagrangian of Equation 7 w.r.t. θ^(r), we obtain the following characterization of θ^(r)_min(ρ):

Marginal Optimality Criterion: For any two graphs G^(r), G^(s) such that the edge (ij) is in both graphs, the optimal parameter vector satisfies τ_ij(θ^(r)_min(ρ)) = τ_ij(θ^(s)_min(ρ)).

Thus, the optimal set of parameters for the graphs G^(r) is such that every two graphs agree on the marginals of all the edges they share. This implies that at the optimum there is a well defined set of marginals over all the edges. We use this set as an approximation to the true marginals. A different method for estimating marginals uses the partition function bound directly. We first calculate partition function bounds on the sums

α_i(1) = Σ_{x : x_i = 1} e^{Σ_{ij∈E} f_ij(x_i, x_j)}   and   α_i(−1) = Σ_{x : x_i = −1} e^{Σ_{ij∈E} f_ij(x_i, x_j)}

and then normalize α_i(1) / (α_i(1) + α_i(−1)) to obtain an estimate for p(x_i = 1). This method has the advantage of being more numerically stable (since it does not depend on derivatives of log Z). However, it needs to be calculated separately for each variable, so it may be time consuming if one is interested in marginals for a large set of variables.

6 Experimental Evaluation

We study the application of our Planar Decomposition (PDC) method to a binary MRF on a square lattice with an external field. The MRF is given by

p(x) ∝ e^{Σ_{ij∈E} θ_ij x_i x_j + Σ_{i∈V} θ_i x_i}

where V are the lattice vertices, and θ_i and θ_ij are parameters. Note that this interaction does not satisfy the conditions for exact calculation of the partition function, even though the graph is planar; this problem is in fact NP-hard [1].
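The two-partition-function marginal estimate above is easy to check by brute force on a tiny instance. The sketch below uses exact clamped sums rather than bounds, and the couplings and field are hypothetical values chosen only for illustration.

```python
import itertools
import math

def clamped_sum(pair, field, n, i, val):
    """alpha_i(val): sum, over all configurations with x_i = val, of the
    unnormalized weight exp(sum_ij theta_ij x_i x_j + sum_i theta_i x_i)."""
    total = 0.0
    for x in itertools.product([-1, 1], repeat=n):
        if x[i] != val:
            continue
        e = sum(t * x[a] * x[b] for (a, b), t in pair.items())
        e += sum(h * x[k] for k, h in field.items())
        total += math.exp(e)
    return total

# Tiny 3-cycle MRF with hypothetical couplings and field.
pair = {(0, 1): 0.5, (1, 2): -0.3, (0, 2): 0.2}
field = {0: 0.4, 1: 0.0, 2: -0.1}
n = 3

a_plus = clamped_sum(pair, field, n, 0, 1)
a_minus = clamped_sum(pair, field, n, 0, -1)
p_marginal = a_plus / (a_plus + a_minus)  # estimate of p(x_0 = 1)
```

With exact sums the normalized ratio is the exact marginal; in the method described above, each clamped sum would instead be replaced by its planar-decomposition bound.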
However, it is possible to obtain the desired interaction form by introducing an additional variable x_{n+1} that is connected to all the original variables. Denote the corresponding graph by G_f. Consider the distribution

p(x, x_{n+1}) ∝ e^{Σ_{ij∈E} θ_ij x_i x_j + Σ_{i∈V} θ_{i,n+1} x_i x_{n+1}}

where θ_{i,n+1} = θ_i. It is easy to see that any property of p(x) (e.g., partition function, marginals) may be calculated from the corresponding property of p(x, x_{n+1}). The advantage of the latter distribution is that it has the desired interaction form. We can thus apply PDC by choosing planar subgraphs of the non-planar graph G_f.

Figure 3: Comparison of the TRW and Planar Decomposition (PDC) algorithms on a 7×7 square lattice. TRW results are shown in red squares, and PDC in blue circles. The left column shows the error in the log partition bound, the middle column the mean error for pairwise marginals, and the right column the error for the singleton marginal of the variable at the lattice center. Results in the upper row are for field parameters drawn from U[−0.05, 0.05] and various interaction parameters; results in the lower row are for interaction parameters drawn from U[−0.5, 0.5] and various field parameters. Error bars are standard errors calculated from 40 random trials.

There are clearly many ways to choose spanning planar subgraphs of G_f. Spanning subtrees are one option, and were used in [11]. Since our optimization is polynomial in the number of subgraphs, we preferred to use a number of subgraphs that is linear in √n.
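The field-node reduction can be verified numerically on a small model. One consequence of the construction (by global spin-flip symmetry) is that Z_f = 2Z and that the agreement probability p_f(x_i = x_{n+1}) recovers the original marginal p(x_i = 1). A brute-force sketch with hypothetical parameters:

```python
import itertools
import math

def weight(x, pair, field=None):
    """Unnormalized weight of a +/-1 configuration under pairwise couplings
    and an optional external field."""
    e = sum(t * x[a] * x[b] for (a, b), t in pair.items())
    if field:
        e += sum(h * x[i] for i, h in field.items())
    return math.exp(e)

pair = {(0, 1): 0.4, (1, 2): -0.6}   # hypothetical chain couplings
field = {0: 0.3, 1: -0.2, 2: 0.1}    # hypothetical external field
n = 3

# Original model with field.
Z = sum(weight(x, pair, field) for x in itertools.product([-1, 1], repeat=n))
p_x0 = sum(weight(x, pair, field)
           for x in itertools.product([-1, 1], repeat=n) if x[0] == 1) / Z

# Augmented model: node n connected to every original node with
# theta_{i,n} = theta_i, and no singleton terms.
pair_f = dict(pair)
pair_f.update({(i, n): h for i, h in field.items()})
Zf = 0.0
agree = 0.0  # unnormalized mass of configurations with x_0 == x_n
for x in itertools.product([-1, 1], repeat=n + 1):
    w = weight(x, pair_f)
    Zf += w
    if x[0] == x[n]:
        agree += w
```

Conditioning on x_{n+1} = 1 recovers p(x) exactly, and the x_{n+1} = −1 branch is its spin-flipped copy, which is why the factor of 2 and the agreement identity hold.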
The key idea in generating these planar subgraphs is to generate disconnected components of the lattice and connect x_{n+1} only to the external vertices of these components. Here we obtain three disconnected components by isolating two neighboring columns (or rows) from the rest of the graph. This construction is illustrated in Figure 2. To this set of 2√n graphs, we add the independent-variables graph consisting only of edges from the field node to all the other nodes.

We compared the performance of the PDC and TRW methods on a 7 × 7 lattice (see footnotes 3 and 4). Since the exact partition function and marginals can be calculated for this case, we could compare both algorithms to the true values. The MRF parameters were set according to the two following scenarios: 1) Varying Interaction - the field parameters θ_i were drawn uniformly from U[−0.05, 0.05], and the interactions θ_ij from U[−α, α] where α ∈ {0.2, 0.4, ..., 2}. This is the setting tested in [11]. 2) Varying Field - θ_i was drawn uniformly from U[−α, α], where α ∈ {0.2, 0.4, ..., 2}, and θ_ij from U[−0.5, 0.5].

For each scenario, we calculated the following measures: 1) Normalized log partition error: (1/49)(log Z_alg − log Z_true). 2) Error in pairwise marginals: (1/|E|) Σ_{ij∈E} |p_alg(x_i = 1, x_j = 1) − p_true(x_i = 1, x_j = 1)|. Pairwise marginals were calculated jointly using the marginal optimality criterion of Section 5. 3) Error in singleton marginals. We calculated the singleton marginal for the innermost node in the lattice (i.e., coordinate [3, 3]), which intuitively should be the most difficult for the planar based algorithm. This marginal was calculated using two partition functions, as explained in Section 5 (see footnote 5). The same method was used for TRW. The reported error measure is |p_alg(x_i = 1) − p_true(x_i = 1)|. Results were averaged over 40 random trials. Results for the two scenarios and different evaluation measures are given in Figure 3.
It can be seen that the partition function bound for PDC is significantly better than that of TRW for almost all parameter settings, although the difference becomes smaller for large field values. Errors for the PDC pairwise marginals are smaller than those of TRW for all parameter settings. For the singleton marginals, TRW slightly outperforms PDC. This is not surprising, since the field is modeled by every spanning tree in the TRW decomposition, whereas in PDC not all the structures model a given field.

Footnote 3: TRW and PDC bounds were optimized over both the subgraph parameters and the mixture parameters ρ.
Footnote 4: In terms of running time, PDC optimization for a fixed value of ρ took about 30 seconds, which is still slower than the TRW message passing implementation.
Footnote 5: Results using the marginal optimality criterion were worse for PDC, possibly due to its reduced numerical precision.

7 Discussion

We have presented a method for using planar graphs as the basis for approximating non-planar graphs such as planar graphs with external fields. While the restriction to binary variables limits the applicability of our approach, it remains relevant in many important applications, such as coding theory and combinatorial optimization. Moreover, it is always possible to convert a non-binary graphical model to a binary one by introducing additional variables. The resulting graph will typically not be planar, even when the original graph over k-ary variables is. However, the planar decomposition method can then be applied to this non-planar graph. The optimization of the decomposition is carried out explicitly over the planar subgraphs, thus limiting the number of subgraphs that can be used in the approximation. In the TRW method this problem is circumvented, since it is possible to implicitly optimize over all spanning trees. The reason this can be done for trees is that the entropy of an MRF over a tree may be written as a function of its marginal variables.
We do not know of an equivalent result for planar graphs, and it remains a challenge to find one. It is, however, possible to combine the planar and tree decompositions into one single bound, which is guaranteed to outperform the tree or planar approximations alone.

The planar decomposition idea may in principle be applied to bounding the value of the MAP assignment. However, as in TRW, it can be shown that the solution is not dependent on the decomposition (as long as each edge appears in some structure), and the problem is equivalent to maximizing a linear function over the marginal polytope (which can be done in polynomial time for planar graphs). However, such a decomposition may suggest new message passing algorithms, as in [10].

Acknowledgments

The authors acknowledge support from the Defense Advanced Research Projects Agency (Transfer Learning program). Amir Globerson is also supported by the Rothschild Yad-Hanadiv fellowship. The authors also wish to thank Martin Wainwright for providing his TRW code.

References

[1] F. Barahona. On the computational complexity of Ising spin glass models. J. Phys. A, 15(10):3241–3253, 1982.
[2] D. P. Bertsekas, editor. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.
[3] M. M. Deza and M. Laurent. Geometry of Cuts and Metrics. Springer-Verlag, 1997.
[4] R. Diestel. Graph Theory. Springer-Verlag, 1997.
[5] M. E. Fisher. On the dimer solution of planar Ising models. J. Math. Phys., 7:1776–1781, 1966.
[6] M. I. Jordan, editor. Learning in Graphical Models. MIT Press, Cambridge, MA, 1998.
[7] P. W. Kasteleyn. Dimer statistics and phase transitions. J. Math. Phys., 4:287–293, 1963.
[8] L. Lovasz and M. D. Plummer. Matching Theory, volume 29 of Annals of Discrete Mathematics. North-Holland, New York, 1986.
[9] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. Tree-based reparameterization framework for analysis of sum-product and related algorithms. IEEE Trans. on Information Theory, 49(5):1120–1146, 2003.
[10] M. J.
Wainwright, T. Jaakkola, and A. S. Willsky. MAP estimation via agreement on trees: message-passing and linear programming. IEEE Trans. on Information Theory, 51(11):1120–1146, 2005.
[11] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. IEEE Trans. on Information Theory, 51(7):2313–2335, 2005.
[12] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Technical report, UC Berkeley Dept. of Statistics, 2003.
[13] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. on Information Theory, 51(7):2282–2312, 2005.
Context Effects in Category Learning: An Investigation of Four Probabilistic Models Michael C. Mozer+⋄, Michael Jones⋄†, Michael Shettel+ +Dept. of Computer Science, †Dept. of Psychology, and ⋄Institute of Cognitive Science University of Colorado, Boulder, CO 80309-0430 {mozer,mike.jones,shettel}@colorado.edu Abstract Categorization is a central activity of human cognition. When an individual is asked to categorize a sequence of items, context effects arise: categorization of one item influences category decisions for subsequent items. Specifically, when experimental subjects are shown an exemplar of some target category, the category prototype appears to be pulled toward the exemplar, and the prototypes of all nontarget categories appear to be pushed away. These push and pull effects diminish with experience, and likely reflect long-term learning of category boundaries. We propose and evaluate four principled probabilistic (Bayesian) accounts of context effects in categorization. In all four accounts, the probability of an exemplar given a category is encoded as a Gaussian density in feature space, and categorization involves computing category posteriors given an exemplar. The models differ in how the uncertainty distribution of category prototypes is represented (localist or distributed), and how it is updated following each experience (using a maximum likelihood gradient ascent, or a Kalman filter update). We find that the distributed maximum-likelihood model can explain the key experimental phenomena. Further, the model predicts other phenomena that were confirmed via reanalysis of the experimental data. Categorization is a key cognitive activity. We continually make decisions about characteristics of objects and individuals: Is the fruit ripe? Does your friend seem unhappy? Is your car tire flat? When an individual is asked to categorize a sequence of items, context effects arise: categorization of one item influences category decisions for subsequent items. 
Intuitive naturalistic scenarios in which context effects occur are easy to imagine. For example, if one lifts a medium-weight object after lifting a light-weight or heavy-weight object, the medium weight feels heavier following the light weight than following the heavy weight. Although this object-contrast effect might be due to fatigue of sensory-motor systems, many context effects in categorization are purely cognitive and cannot easily be attributed to neural habituation. For example, if you are reviewing a set of conference papers, and the first three in the set are dreadful, then even a mediocre paper seems like it might be above threshold for acceptance. Another example of a category boundary shift due to context is the following. Suppose you move from San Diego to Pittsburgh and notice that your neighbors repeatedly describe muggy, somewhat overcast days as "lovely." Eventually, your notion of what constitutes a lovely day accommodates to your new surroundings. As we describe shortly, experimental studies have shown a fundamental link between context effects in categorization and long-term learning of category boundaries. We believe that context effects can be viewed as a reflection of trial-to-trial learning, and the cumulative effect of these trial-to-trial modulations corresponds to what we classically consider to be category learning. Consequently, any compelling model of category learning should also be capable of explaining context effects.

1 Experimental Studies of Context Effects in Categorization

Consider a set of stimuli that vary along a single continuous dimension. Throughout this paper, we use as an illustration circles of varying diameters, and assume four categories of circles defined by ranges of diameters; call them A, B, C, and D, in order from smallest to largest diameter.
In a classification paradigm, experimental subjects are given an exemplar drawn from one category and are asked to respond with the correct category label (Zotov, Jones, & Mewhort, 2003). After making their response, subjects receive feedback as to the correct label, which we'll refer to as the target. In a production paradigm, subjects are given a target category label and asked to produce an exemplar of that category, e.g., using a computer mouse to indicate the circle diameter (Jones & Mewhort, 2003). Once a response is made, subjects receive feedback as to the correct or true category label for the exemplar they produced. Neither the classification nor the production task has sequential structure, because the order of trials is random in both experiments. The production task provides direct information about the subjects' internal representations, because subjects are producing exemplars that they consider to be prototypes of a category, whereas the categorization task requires indirect inferences to be made about internal representations from reaction time and accuracy data. Nonetheless, the findings in the production and classification tasks mirror one another nicely, providing converging evidence as to the nature of learning. The production task reveals how mental representations shift as a function of trial-to-trial sequences, and these shifts cause the sequential pattern of errors and response times typically observed in the classification task. We focus on the production task in this paper because it provides a richer source of data. However, we address the categorization task with our models as well. Figure 1 provides a schematic depiction of the key sequential effects in categorization. The horizontal line represents the stimulus dimension, e.g., circle diameter. The dimension is cut into four regions labeled with the corresponding category. The category center, which we'll refer to as the prototype, is indicated by a vertical dashed line.
The long solid vertical line marks the current exemplar, whether it is an exemplar presented to subjects in the classification task or an exemplar generated by subjects in the production task. Following an experimental trial with this exemplar, category prototypes appear to shift: the target-category prototype moves toward the exemplar, which we refer to as a pull effect, and all nontarget-category prototypes move away from the exemplar, which we refer to as a push effect. Push and pull effects are assessed in the production task by examining the exemplar produced on the following trial, and in the categorization task by examining the likelihood of an error response near category boundaries. The set of phenomena to be explained are as follows, described in terms of the production task. All numerical results referred to are from Jones and Mewhort (2003). This experiment consisted of 12 blocks of 40 trials, with each category label given as target 10 times within a block.
• Within-category pull: When a target category is repeated on successive trials, the exemplar generated on the second trial moves toward the exemplar generated on the first trial, with respect to the true category prototype. Across the experiment, a correlation coefficient of 0.524 is obtained, and remains fairly constant over trials.
• Between-category push: When the target category changes from one trial to the next, the exemplar generated on the second trial moves away from the exemplar generated on the first trial (or equivalently, from the prototype of the target category on the first trial). Figure 2a summarizes the sequential push effects from Jones and Mewhort. The diameter of the circle produced on trial t is plotted as a function of the target category on trial t−1, with one line for each of the four trial t targets. The mean diameter for each target category is subtracted out, so the absolute vertical offset of each line is unimportant.
The main feature of the data to note is that all four curves have a negative slope, which has the following meaning: the smaller that target t−1 is (i.e., the further to the left on the x axis in Figure 1), the larger the response to target t is (further to the right in Figure 1), and vice versa, reflecting a push away from target t−1. Interestingly and importantly, the magnitude of the push increases with the ordinal distance between targets t−1 and t. Figure 2a is based on data from only eight subjects and is therefore noisy, though the effect is statistically reliable. As further evidence, Figure 2b shows data from a categorization task (Zotov et al., 2003), where the y-axis is a different dependent measure, but the negative slope has the same interpretation as in Figure 2a.

Figure 1: Schematic depiction of sequential effects in categorization (stimulus dimension divided into regions A-D, with the current example marked).

Figure 2: Push effect data from (a) the production task of Jones and Mewhort (2003), (b) the classification task of Zotov et al. (2003), and (c)-(f) the models proposed in this paper. The y axis is the deviation of the response from the mean, as a proportion of the total category width. The response to category A is solid red, B is dashed magenta, C is dash-dotted blue, and D is dotted green.
• Push and pull effects are not solely a consequence of errors or experimenter feedback. In quantitative estimation of push and pull effects, trial t is included in the data only if the response on trial t−1 is correct. Thus, the effects follow trials in which no error feedback is given to the subjects, and therefore the adjustments are not due to explicit error correction.
• Push and pull effects diminish over the course of the experiment. The magnitude of push effects can be measured by the slope of the regression lines fit to the data in Figure 2a; the slopes get shallower over successive trial blocks. The magnitude of pull effects can be measured by the standard deviation (SD) of the produced exemplars, which also decreases over successive trial blocks.
• Accuracy increases steadily over the course of the experiment, from 78% correct responses in the first block to 91% in the final block. This improvement occurs despite the fact that error feedback is relatively infrequent and becomes even less frequent as performance improves.

2 Four Models

In this paper, we explore four probabilistic (Bayesian) models to explain the data described in the previous section. The key phenomenon to explain turns out to be the push effect, for which three of the four models fail to account.
Modelers typically discard the models that they reject and present only their pet model. In this work, we find it useful to report on the rejected models for three reasons. First, they help to set up and motivate the one successful model. Second, they include several obvious candidates, and we therefore have the imperative to address them. Third, in order to evaluate a model that can explain certain data, one needs to know the degree to which the data constrain the space of models. If many models exist that are consistent with the data, one has little reason to prefer our pet candidate.

Underlying all of the models is a generative probabilistic framework in which a category i is represented by a prototype value, d_i, on the dimension that discriminates among the categories. In the example used throughout this paper, the dimension is the diameter of a circle (hence the notation d for the prototype). An exemplar, E, of category i is drawn from a Gaussian distribution with mean d_i and variance v_i, denoted E ∼ N(d_i, v_i). Category learning involves determining d ≡ {d_i}. In this work, we assume that the {v_i} are fixed and given. Because d is unknown at the start of the experiment, it is treated as the value of a random vector, D ≡ {D_i}. Figure 3a shows a simple graphical model representing the generative framework, in which E is the exemplar and C the category label. To formalize our discussion so far, we adopt the following notation:

P(E | C = c, D = d) ∼ N(h_c d, v_c),   (1)

where, for the time being, h_c is a unary column vector all of whose elements are zero except for element c, which has value 1. (Subscripts may indicate either an index over elements of a vector or an index over vectors. Boldface is used for vectors and matrices.)
Figure 3: (a) Graphical model depicting selection of an exemplar, E, of a category, C, based on the prototype vector, D; (b) dynamic version of the model indexed by trials, t.

We assume that the prototype representation, D, is multivariate Gaussian, D ∼ N(Ψ, Σ), where Ψ and Σ encode knowledge (and uncertainty in that knowledge) of the category prototype structure. Given this formulation, the uncertainty in D can be integrated out:

P(E | C) ∼ N(h_c Ψ, h_c Σ h_c^T + v_c).   (2)

For the categorization task, a category label can be assigned by evaluating the category posterior, P(C|E), via Bayes' rule, Equation 1, and the category priors, P(C). In this framework, learning takes place via trial-to-trial adaptation of the category prototype distribution, D. In Figure 3b, we add the subscript t to each random variable to denote the trial, yielding a dynamic graphical model for the sequential updating of the prototype vector, D_t. (The reader should be attentive to the fact that we use subscripted indices to denote both trials and category labels. We generally use the index t to denote trial, and c or i to denote a category label.) The goal of our modeling work is to show that the sequential updating process leads to context effects, such as the push and pull effects discussed earlier. We propose four alternative models to explore within this framework. The four models are obtained via the Cartesian product of two binary choices: the learning rule and the prototype representation.

2.1 Learning rule

The first learning rule, maximum likelihood gradient ascent (MLGA), attempts to adjust the prototype representation so as to maximize the log posterior of the category given the exemplar. (The category, C = c, is the true label associated with the exemplar, i.e., either the target label the subject was asked to produce, or, if an error was made, the actual category label the subject did produce.)
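For the localist representation, Equation 2 and Bayes' rule reduce to a few lines of code. A minimal sketch, with a uniform category prior and hypothetical parameter values (four categories centered at the simulation's diameters 1.5-4.5):

```python
import math

def normal_pdf(x, mean, var):
    """Univariate Gaussian density."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def category_posterior(e, psi, sigma, v, prior=None):
    """P(C | E = e) with prototype uncertainty integrated out (Equation 2).
    With one-hot h_c, h_c psi = psi[c] and h_c Sigma h_c^T = sigma[c][c]."""
    n = len(psi)
    prior = prior or [1.0 / n] * n
    like = [normal_pdf(e, psi[c], sigma[c][c] + v[c]) * prior[c] for c in range(n)]
    z = sum(like)
    return [l / z for l in like]

# Hypothetical four-category setup: prototypes at 1.5, 2.5, 3.5, 4.5.
psi = [1.5, 2.5, 3.5, 4.5]
sigma = [[0.01 if i == j else 0.0 for j in range(4)] for i in range(4)]
v = [0.16] * 4
post = category_posterior(3.4, psi, sigma, v)  # exemplar closest to category C
```

Note that the prototype uncertainty sigma[c][c] simply adds to the intrinsic category variance v_c in the predictive density, which is all Equation 2 states for the localist case.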
Gradient ascent is performed in all parameters of Ψ and Σ:

Δψ_i = ε_ψ ∂/∂ψ_i log P(c|e)   and   Δσ_ij = ε_σ ∂/∂σ_ij log P(c|e),   (3)

where ε_ψ and ε_σ are step sizes. To ensure that Σ remains a covariance matrix, constrained gradient steps are applied. The constraints are: (1) diagonal terms are nonnegative, i.e., σ_i^2 ≥ 0; (2) off-diagonal terms are symmetric, i.e., σ_ij = σ_ji; and (3) the matrix remains positive definite, ensured by −1 ≤ σ_ij/(σ_i σ_j) ≤ 1.

The second learning rule, a Kalman filter update (KFU), reestimates the uncertainty distribution of the prototypes given evidence provided by the current exemplar and category label. To draw the correspondence between our framework and a Kalman filter: the exemplar is a scalar measurement that pops out of the filter, the category prototypes are the hidden state of the filter, the measurement noise is v_c, and the linear mapping from state to measurement is achieved by h_c. Technically, the model is a measurement-switched Kalman filter, where the switching is determined by the category label c, i.e., the measurement function, h_c, and noise, v_c, are conditioned on c. The Kalman filter also allows temporal dynamics via the update equation d_t = A d_{t−1}, as well as internal process noise, whose covariance matrix is often denoted Q in standard Kalman filter notation. We investigated the choice of A and Q, but because they did not impact the qualitative outcome of the simulations, we used A = I and Q = 0. Given the correspondence we've established, the KFU equations, which specify Ψ_{t+1} and Σ_{t+1} as a function of c_t, e_t, Ψ_t, and Σ_t, can be found in an introductory text (e.g., Maybeck, 1979).

Figure 4: Change to a category prototype for each category following a trial of a given category. Solid (open) bars indicate trials in which the exemplar is larger (smaller) than the prototype.
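A single KFU measurement update (with identity dynamics and zero process noise) can be sketched as follows; the numeric demonstration values are hypothetical.

```python
def kfu_step(psi, sigma, c, e, v):
    """One measurement update of the switched Kalman filter.
    With one-hot h_c, the innovation is e - psi[c] and the gain uses
    column c of sigma."""
    n = len(psi)
    s = sigma[c][c] + v                        # innovation variance h Sigma h^T + v_c
    k = [sigma[i][c] / s for i in range(n)]    # Kalman gain
    innov = e - psi[c]
    psi_new = [psi[i] + k[i] * innov for i in range(n)]
    sigma_new = [[sigma[i][j] - k[i] * sigma[c][j] for j in range(n)]
                 for i in range(n)]
    return psi_new, sigma_new

# Hypothetical two-category demo: observe e = 2.0 for category 0.
psi_new, sigma_new = kfu_step([1.0, 2.0], [[0.5, 0.0], [0.0, 0.5]],
                              c=0, e=2.0, v=0.5)
```

With a diagonal Σ and one-hot h_c, the gain is zero everywhere except entry c, so only the target prototype moves, which is consistent with the flat sequential-dependency function reported for KFU-Local below.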
2.2 Representation of the prototype

The prototype representation that we described is localist: there is a one-to-one correspondence between the prototype for each category i and the random variable D_i. To select the appropriate prototype given a current category c, we defined the unary vector h_c and applied h_c as a linear transform on D. The identical operations can be performed in conjunction with a distributed representation of the prototype. But we step back momentarily to motivate the distributed representation.

The localist representation suffers from a key weakness: it does not exploit interrelatedness constraints on category structure. The task given to experimental subjects specifies that there are four categories, and they have an ordering; the circle diameters associated with category A are smaller than the diameters associated with B, etc. Consequently, d_A < d_B < d_C < d_D. One might make a further assumption that the category prototypes are equally spaced. Exploiting these two sources of domain knowledge leads to the distributed representation of category structure.

A simple sort of distributed representation involves defining the prototype for category i not as d_i but as a linear function of an underlying two-dimensional state-space representation of structure. In this state space, d_1 indicates the distance between categories and d_2 an offset for all categories. This representation of state can be achieved by applying Equation 1 and defining h_c = (n_c, 1), where n_c is the ordinal position of the category (n_A = 1, n_B = 2, etc.). We augment this representation with a bit of redundancy by incorporating not only the ordinal positions but also the reverse ordinal positions; this addition yields a symmetry in the representation between the two ends of the ordinal category scale.
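The simple distributed code just described, h_c = (n_c, 1), computes each prototype as a linear function of a two-dimensional state. A minimal sketch; the spacing and offset values are hypothetical, chosen to reproduce equally spaced prototypes:

```python
def prototype(c, d1, d2):
    """Prototype of category c (0-indexed) under the distributed code
    h_c = (n_c, 1): prototype = n_c * d1 + d2, where n_c = c + 1 is the
    ordinal position, d1 the inter-category spacing, d2 a shared offset."""
    n_c = c + 1
    return n_c * d1 + d2

# Hypothetical state (d1 = 1.0, d2 = 0.5) yields prototypes 1.5, 2.5, 3.5, 4.5.
protos = [prototype(c, 1.0, 0.5) for c in range(4)]
```

Because all prototypes share d1 and d2, any learning update to the state moves every prototype at once, which is the interrelatedness the localist representation lacks.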
As a result of this augmentation, d becomes a three-dimensional state space, and h_c = (n_c, N + 1 − n_c, 1), where N is the number of categories. To summarize, both the localist and distributed representations posit the existence of a hidden-state space, unknown at the start of learning, that specifies category prototypes. The localist model assumes one dimension in the state space per prototype, whereas the distributed model assumes fewer dimensions in the state space (three, in our proposal) than there are prototypes, and computes the prototype location as a function of the state. Both localist and distributed representations assume a fixed, known {h_c} that specify the interpretation of the state space, or, in the case of the distributed model, the subject's domain knowledge about category structure.

3 Simulation Methodology

We defined a one-dimensional feature space in which categories A-D corresponded to the ranges [1, 2), [2, 3), [3, 4), and [4, 5), respectively. In the human experiment, responses were considered incorrect if they were smaller than A or larger than D; we call these two cases out-of-bounds-low (OOBL) and out-of-bounds-high (OOBH). OOBL and OOBH were treated as two additional categories, resulting in 6 categories altogether for the simulation. Subjects and the model were never asked to produce exemplars of OOBL or OOBH, but feedback was given if a response fell into these categories. As in the human experiment, our simulation involved 480 trials. We performed 100 replications of each simulation with identical initial conditions but different trial sequences, and averaged results over replications. All prototypes were initialized to have the same mean, 3.0, at the start of the simulation. Because subjects had some initial practice on the task before the start of the experimental trials, we provided the models with 12 initial trials of a categorization (not production) task, two for each of the 6 categories.
(For the MLGA models, it was necessary to use a large step size on these trials to move the prototypes to roughly the correct neighborhood.) To perform the production task, the models must generate an exemplar given a category. It seems natural to draw an exemplar from the distribution in Equation 2 for P(E|C). However, this distribution reflects the full range of exemplars that lie within the category boundaries, and presumably in the production task subjects attempt to produce a prototypical exemplar. Consequently, we exclude the intrinsic category variance, v_c, from Equation 2 in generating exemplars, leaving variance only via uncertainty about the prototype. Each model involved selection of various parameters and initial conditions. We searched the parameter space by hand, attempting to find parameters that satisfied basic properties of the data: the accuracy and response variance in the first and second halves of the experiment. We report only the parameters for the one model that was successful, MLGA-Distrib: ε_ψ = 0.0075, ε_σ = 1.5 × 10^−6 for off-diagonal terms and 1.5 × 10^−7 for diagonal terms (the gradient for the diagonal terms was relatively steep), Σ_0 = 0.01I, and for all categories c, v_c = 0.42.

4 Results

4.1 Push effect

The phenomenon that most clearly distinguishes the models is the push effect. The push effect is manifested in sequential-dependency functions, which plot the (relative) response on trial t as a function of trial t−1. As we explained using Figures 2a,b, the signature of the push effect is a negatively sloped line for each of the different trial t target categories. The sequential-dependency functions for the four models are presented in Figures 2c-f. KFU-Local (Figure 2c) produces a flat line, indicating no push whatsoever. The explanation for this result is straightforward: the Kalman filter update alters only the variable that is responsible for the measurement (exemplar) obtained on that trial.
That variable is the prototype of the target class c, $D_c$. We thought the lack of an interaction among the category prototypes might be overcome with KFU-Distrib, because with a distributed prototype representation, all of the state variables jointly determine the target category prototype. However, our intuition turned out to be incorrect. We experimented with many different representations and parameter settings, but KFU-Distrib consistently obtained flat or shallow positively sloping lines (Figure 2d). MLGA-Local (Figure 2e) obtains a push effect for neighboring classes, but not distant classes. For example, examining the dashed magenta line, note that B is pushed away by A and C, but is not affected by D. MLGA-Local maximizes the likelihood of the target category both by pulling the class-conditional density of the target category toward the exemplar and by pushing the class-conditional densities of the other categories away from the exemplar. However, if a category has little probability mass at the location of the exemplar, the increase in likelihood that results from pushing it further away is negligible, and consequently, so is the push effect. MLGA-Distrib obtains a lovely result (Figure 2f)—a negatively sloped line, diagnostic of the push effect. The effect magnitude matches that in the human data (Figure 2a), and captures the key property that the push effect increases with the ordinal distance of the categories. We did not build a mechanism into MLGA-Distrib to produce the push effect; it is somewhat of an emergent property of the model. The state representation of MLGA-Distrib has three components: $d_1$, the weight of the ordinal position of a category prototype; $d_2$, the weight of the reverse ordinal position; and $d_3$, an offset. The last term, $d_3$, cannot be responsible for a push effect, because it shifts all prototypes equally, and therefore can only produce a flat sequential-dependency function.
Figure 4 helps provide an intuition for how $d_1$ and $d_2$ work together to produce the push effect. Each graph shows the average movement of the category prototype (units on the y-axis are arbitrary) observed on trial t, for each of the four categories, following presentation of a given category on trial t−1. Positive values on the y-axis indicate increases in the prototype (movement to the right in Figure 1), and negative values decreases. Each solid vertical bar represents the movement of a given category prototype following a trial in which the exemplar is larger than its current prototype; each open vertical bar represents movement when the exemplar is to the left of its prototype. Notice that all category prototypes get larger or smaller on a given trial. But over the course of the experiment, the exemplar should be larger than the prototype as often as it is smaller, and the two shifts should sum together and partially cancel out. The result is the value indicated by the small horizontal bar along each line. The balance between the shifts in the two directions exactly corresponds to the push effect. Thus, the model produces a push-effect graph, but it is not truly producing a push effect as was originally conceived by the experimentalists. We are currently considering empirical consequences of this simulation result. Figure 5 shows a trial-by-trial trace from MLGA-Distrib. [Figure 5 comprises six panels plotted over trials 50-450: (a) example; (b) class prototype; (c) log(class variance); (d) P(correct); (e) shift (+ = toward, − = away); (f) posterior.] Figure 5: Trial-by-trial trace of MLGA-Distrib.
(a) exemplars generated on one run of the simulation; (b) the mean and (c) variance of the class prototype distribution for the 6 classes on one run; (d) mean proportion correct over 100 replications of the simulation; (e) push and pull effects, as measured by changes to the prototype means: the upper (green) curve is the pull of the target prototype mean toward the exemplar, and the lower (red) curve is the push of the nontarget prototype means away from the exemplar, over 100 replications; (f) category posterior of the generated exemplar over 100 replications, reflecting gradient ascent in the posterior. 4.2 Other phenomena accounted for MLGA-Distrib captures the other phenomena we listed at the outset of this paper. Like all of the other models, MLGA-Distrib readily produces a pull effect, which is shown in the movement of category prototypes in Figure 5e. More observably, a pull effect is manifested when two successive trials of the same category are positively correlated: when trial t−1 is to the left of the true category prototype, trial t is likely to be to the left as well. In the human data, the correlation coefficient over the experiment is 0.524; in the model, the coefficient is 0.496. The explanation for the pull effect is apparent: moving the category prototype to the exemplar increases the category likelihood. Although many learning effects in humans are based on error feedback, the experimental studies showed that push and pull effects occur even in the absence of errors, as they do in MLGA-Distrib. The model simply assumes that the target category it used to generate an exemplar is the correct category when no feedback to the contrary is provided. As long as the likelihood gradient is nonzero, category prototypes will be shifted. Pull and push effects shrink over the course of the experiment in human studies, as they do in the simulation. 
Figure 5e shows a reduction in both pull and push, as measured by the shift of the prototype means toward or away from the exemplar. We measured the slope of MLGA-Distrib's push function (Figure 2f) for trials in the first and second half of the simulation. The slope dropped from −0.042 to −0.025, as one would expect from Figure 5e. (These slopes are obtained by combining responses from 100 replications of the simulation. Consequently, each point on the push function was an average over 6000 trials, and therefore the regression slopes are highly reliable.) A quantitative, observable measure of pull is the standard deviation (SD) of responses. As push and pull effects diminish, SDs should decrease. In human subjects, the response SDs in the first and second half of the experiment are 0.43 and 0.33, respectively. In the simulation, the response SDs are 0.51 and 0.38. Shrink reflects the fact that the model is approaching a local optimum in log likelihood, causing gradients—and learning steps—to become smaller. Not all model parameter settings lead to shrink; as in any gradient-based algorithm, step sizes that are too large do not lead to convergence. However, such parameter settings make little sense in the context of the learning objective. 4.3 Model predictions MLGA-Distrib produces greater pull of the target category toward the exemplar than push of the neighboring categories away from the exemplar. In the simulation, the magnitude of the target pull—measured by the movement of the prototype mean—is 0.105, contrasted with the neighbor push, which is 0.017. After observing this robust result in the simulation, we found pertinent experimental data. Using the categorization paradigm, Zotov et al.
(2003) found that if the exemplar on trial t is near a category border, subjects are more likely to produce an error if the category on trial t −1 is repeated (i.e., a pull effect just took place) than if the previous trial is of the neighboring category (i.e., a push effect), even when the distance between exemplars on t −1 and t is matched. The greater probability of error translates to a greater magnitude of pull than push. The experimental studies noted a phenomenon termed snap back. If the same target category is presented on successive trials, and an error is made on the first trial, subjects perform very accurately on the second trial, i.e., they generate an exemplar near the true category prototype. It appears as if subjects, realizing they have been slacking, reawaken and snap the category prototype back to where it belongs. We tested the model, but observed a sort of anti snap back. If the model made an error on the first trial, the mean deviation was larger—not smaller—on the second trial: 0.40 versus 0.32. Thus, MLGA-Distrib fails to explain this phenomenon. However, the phenomenon is not inconsistent with the model. One might suppose that on an error trial, subjects become more attentive, and increased attention might correspond to a larger learning rate on an error trial, which should yield a more accurate response on the following trial. McLaren et al. (1995) studied a phenomenon in humans known as peak shift, in which subjects are trained to categorize unidimensional stimuli into one of two categories. Subjects are faster and more accurate when presented with exemplars far from the category boundary than those near the boundary. In fact, they respond more efficiently to far exemplars than they do to the category prototype. The results are characterized in terms of the prototype of one category being pushed away from the prototype of the other category. It seems straightforward to explain these data in MLGA-Distrib as a type of long-term push effect. 
5 Related Work and Conclusions Stewart, Brown, and Chater (2002) proposed an account of categorization context effects in which responses are based solely on the relative difference between the previous and present exemplars. No representation of the category prototype is maintained. However, classification based solely on relative difference cannot account for diminished bias effects as a function of experience. A long-term stable prototype representation, of the sort incorporated into our models, seems necessary. We considered four models in our investigation, and the fact that only one accounts for the experimental data suggests that the data are nontrivial. All four models have principled theoretical underpinnings, and the space they define may suggest other elegant frameworks for understanding mechanisms of category learning. The successful model, MLGA-Distrib, offers a deep insight into understanding multiple-category domains: category structure must be considered. MLGA-Distrib exploits knowledge available to subjects performing the task concerning the ordinal relationships among categories. A model without this knowledge, MLGA-Local, fails to explain the data. Thus, the interrelatedness of categories appears to provide a source of constraint that individuals use in learning about the structure of the world. Acknowledgments This research was supported by NSF BCS 0339103 and NSF CSE-SMA 0509521. Support for the second author comes from an NSERC fellowship. References Jones, M. N., & Mewhort, D. J. K. (2003). Sequential contrast and assimilation effects in categorization of perceptual stimuli. Poster presented at the 44th Meeting of the Psychonomic Society, Vancouver, B.C. Maybeck, P. S. (1979). Stochastic models, estimation, and control, Volume I. Academic Press. McLaren, I. P. L., et al. (1995). Prototype effects and peak shift in categorization. JEP:LMC, 21, 662–673. Stewart, N., Brown, G. D. A., & Chater, N. (2002).
Sequence effects in categorization of simple perceptual stimuli. JEP:LMC, 28, 3–11. Zotov, V., Jones, M. N., & Mewhort, D. J. K. (2003). Trial-to-trial representation shifts in categorization. Poster presented at the 13th Meeting of the Canadian Society for Brain, Behaviour, and Cognitive Science: Hamilton, Ontario.
An Application of Reinforcement Learning to Aerobatic Helicopter Flight Pieter Abbeel, Adam Coates, Morgan Quigley, Andrew Y. Ng Computer Science Dept. Stanford University Stanford, CA 94305 Abstract Autonomous helicopter flight is widely regarded to be a highly challenging control problem. This paper presents the first successful autonomous completion on a real RC helicopter of the following four aerobatic maneuvers: forward flip and sideways roll at low speed, tail-in funnel, and nose-in funnel. Our experimental results significantly extend the state of the art in autonomous helicopter flight. We used the following approach: First we had a pilot fly the helicopter to help us find a helicopter dynamics model and a reward (cost) function. Then we used a reinforcement learning (optimal control) algorithm to find a controller that is optimized for the resulting model and reward function. More specifically, we used differential dynamic programming (DDP), an extension of the linear quadratic regulator (LQR). 1 Introduction Autonomous helicopter flight represents a challenging control problem with high-dimensional, asymmetric, noisy, nonlinear, non-minimum phase dynamics. Helicopters are widely regarded to be significantly harder to control than fixed-wing aircraft. (See, e.g., [14, 20].) At the same time, helicopters provide unique capabilities, such as in-place hover and low-speed flight, important for many applications. The control of autonomous helicopters thus provides a challenging and important testbed for learning and control algorithms. In the “upright flight regime” there has recently been considerable progress in autonomous helicopter flight. For example, Bagnell and Schneider [6] achieved sustained autonomous hover. Both LaCivita et al. [13] and Ng et al. [17] achieved sustained autonomous hover and accurate flight in regimes where the helicopter’s orientation is fairly close to upright. Roberts et al. [18] and Saripalli et al. 
[19] achieved vision-based autonomous hover and landing. In contrast, autonomous flight achievements in other flight regimes have been very limited. Gavrilets et al. [9] achieved a split-S, a stall turn and a roll in forward flight. Ng et al. [16] achieved sustained autonomous inverted hover. The results presented in this paper significantly expand the limited set of successfully completed aerobatic maneuvers. In particular, we present the first successful autonomous completion of the following four maneuvers: forward flip and axial roll at low speed, tail-in funnel, and nose-in funnel. Not only are we the first to autonomously complete such a single flip and roll, our controllers are also able to continuously repeat the flips and rolls without any pauses in between. Thus the controller has to provide continuous feedback during the maneuvers, and cannot, for example, use a period of hovering to correct errors of the first flip before performing the next flip. The number of flips and rolls and the duration of the funnel trajectories were chosen to be sufficiently large to demonstrate that the helicopter could continue the maneuvers indefinitely (assuming unlimited fuel and battery endurance). The completed maneuvers are significantly more challenging than previously completed maneuvers. In the (forward) flip, the helicopter rotates 360 degrees forward around its lateral axis (the axis going from the right to the left of the helicopter). To prevent altitude loss during the maneuver, the helicopter pushes itself back up using the (inverted) main rotor thrust halfway through the flip. In the (right) axial roll the helicopter rotates 360 degrees around its longitudinal axis (the axis going from the back to the front of the helicopter). Similarly to the flip, the helicopter prevents altitude loss by pushing itself back up using the (inverted) main rotor thrust halfway through the roll.
In the tail-in funnel, the helicopter repeatedly flies a circle sideways with the tail pointing to the center of the circle. For the trajectory to be a funnel maneuver, the helicopter speed and the circle radius are chosen such that the helicopter must pitch up steeply to stay in the circle. The nose-in funnel is similar to the tail-in funnel, the difference being that the nose points to the center of the circle throughout the maneuver. The remainder of this paper is organized as follows: Section 2 explains how we learn a model from flight data. The section considers both the problem of data collection, for which we use an apprenticeship learning approach, and the problem of estimating the model from data. Section 3 explains our control design. We explain differential dynamic programming as applied to our helicopter, and discuss our apprenticeship learning approach to choosing the reward function, as well as other design decisions and lessons learned. Section 4 describes our helicopter platform and our experimental results. Section 5 concludes the paper. Movies of our autonomous helicopter flights are available at the following webpage: http://www.cs.stanford.edu/~pabbeel/heli-nips2006. 2 Learning a Helicopter Model from Flight Data 2.1 Data Collection The E3-family of algorithms [12] and its extensions [11, 7, 10] are the state-of-the-art RL algorithms for autonomous data collection. They proceed by generating “exploration” policies, which try to visit inaccurately modeled parts of the state space. Unfortunately, such exploration policies do not even try to fly the helicopter well, and thus would invariably lead to crashes. Thus, instead, we use the apprenticeship learning algorithm proposed in [3], which proceeds as follows: 1. Collect data from a human pilot flying the desired maneuvers with the helicopter. Learn a model from the data. 2. Find a controller that works in simulation based on the current model. 3. Test the controller on the helicopter.
If it works, we are done. Otherwise, use the data from the test flight to learn a new (improved) model and go back to Step 2. This procedure has similarities with model-based RL and with the common approach in control of first performing system identification and then finding a controller using the resulting model. However, the key insight from [3] is that this procedure is guaranteed to converge to expert performance in a polynomial number of iterations. In practice we have needed at most three iterations. Importantly, unlike the E3 family of algorithms, this procedure never uses explicit exploration policies. We only have to test controllers that try to fly as well as possible (according to the current simulator). 2.2 Model Learning The helicopter state s comprises its position (x, y, z), orientation (expressed as a unit quaternion), velocity $(\dot{x}, \dot{y}, \dot{z})$ and angular velocity $(\omega_x, \omega_y, \omega_z)$. The helicopter is controlled by a 4-dimensional action space $(u_1, u_2, u_3, u_4)$. By using the cyclic pitch ($u_1, u_2$) and tail rotor ($u_3$) controls, the pilot can rotate the helicopter around each of its main axes and bring the helicopter to any orientation. This allows the pilot to direct the thrust of the main rotor in any particular direction (and thus fly in any particular direction). By adjusting the collective pitch angle (control input $u_4$), the pilot can adjust the thrust generated by the main rotor. For a positive collective pitch angle the main rotor will blow air downward relative to the helicopter. For a negative collective pitch angle the main rotor will blow air upward relative to the helicopter. The latter allows for inverted flight. Following [1] we learn a model from flight data that predicts accelerations as a function of the current state and inputs. Accelerations are then integrated to obtain the helicopter states over time.
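Returning to the data-collection loop of Section 2.1: the iterate-until-it-flies procedure can be sketched on a toy 1-D linear system standing in for the helicopter. Everything below (the scalar dynamics, the least-squares "model learning", the deadbeat "controller design") is our own illustrative analogy, not the authors' code; it does, however, reproduce the key behavior that a model fit only to demonstration data can be inadequate, and that data from a failed test flight repairs it without any explicit exploration policy.

```python
import numpy as np

TRUE_A = 0.8  # toy "true dynamics": x' = a*x + u, with a unknown to the learner

def learn_model(data):
    # Fit x' = a*x + b*u by least squares on all (x, u, x') triples so far.
    X = np.array([[x, u] for x, u, _ in data])
    y = np.array([xn for _, _, xn in data])
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    return a, b

def design_controller(model):
    a, b = model
    return -a / b          # gain K: u = K*x drives the *estimated* model to zero

def fly(K, x0=1.0, steps=10):
    # "Test flight" on the true system; succeed if the state is regulated to zero.
    log, x = [], x0
    for _ in range(steps):
        u = K * x
        xn = TRUE_A * x + u          # true dynamics (true b = 1)
        log.append((x, u, xn))
        x = xn
    return log, abs(x) < 1e-6

def apprenticeship_learning(pilot_data, max_iters=3):
    data = list(pilot_data)          # Step 1: human-pilot demonstrations
    for _ in range(max_iters):
        model = learn_model(data)    # fit model to all data so far
        K = design_controller(model)  # Step 2: controller for the current model
        log, ok = fly(K)             # Step 3: test on the "helicopter"
        if ok:
            return K
        data.extend(log)             # otherwise learn an improved model
    return K

# Pilot demonstrations with a suboptimal policy u = -0.5*x; because the pilot's
# inputs are perfectly correlated with the state, the first fitted model is
# wrong, the first controller fails, and the second iteration succeeds.
pilot_data = []
x = 1.0
for _ in range(5):
    u = -0.5 * x
    xn = TRUE_A * x + u
    pilot_data.append((x, u, xn))
    x = xn

K = apprenticeship_learning(pilot_data)
```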
The key idea from [1] is that, after subtracting out the effects of gravity, the forces and moments acting on the helicopter are independent of position and orientation of the helicopter, when expressed in a “body coordinate frame”, a coordinate frame attached to the body of the helicopter. This observation allows us to significantly reduce the dimensionality of the model learning problem. In particular, we use the following model:
$\ddot{x}^b = A_x \dot{x}^b + g^b_x + w_x$,
$\ddot{y}^b = A_y \dot{y}^b + g^b_y + D_0 + w_y$,
$\ddot{z}^b = A_z \dot{z}^b + g^b_z + C_4 u_4 + E_0 \|(\dot{x}^b, \dot{y}^b, \dot{z}^b)\|_2 + D_4 + w_z$,
$\dot{\omega}^b_x = B_x \omega^b_x + C_1 u_1 + D_1 + w_{\omega_x}$,
$\dot{\omega}^b_y = B_y \omega^b_y + C_2 u_2 + C_{24} u_4 + D_2 + w_{\omega_y}$,
$\dot{\omega}^b_z = B_z \omega^b_z + C_3 u_3 + C_{34} u_4 + D_3 + w_{\omega_z}$.
By our convention, the superscripts b indicate that we are using a body coordinate frame with the x-axis pointing forwards, the y-axis pointing to the right and the z-axis pointing down with respect to the helicopter. We note our model explicitly encodes the dependence on the gravity vector $(g^b_x, g^b_y, g^b_z)$ and has a sparse dependence of the accelerations on the current velocities, angular rates and inputs. This sparse dependence was obtained by scoring different models by their simulation accuracy over time intervals of two seconds (similar to [4]). We estimate the coefficients $A_\cdot$, $B_\cdot$, $C_\cdot$, $D_\cdot$ and $E_\cdot$ from helicopter flight data. First we obtain state and acceleration estimates using a highly optimized extended Kalman filter, then we use linear regression to estimate the coefficients. The terms $w_x, w_y, w_z, w_{\omega_x}, w_{\omega_y}, w_{\omega_z}$ are zero-mean Gaussian random variables, which represent the perturbations to the accelerations due to noise (or unmodeled effects). Their variances are estimated as the average squared prediction error on the flight data we collected. The coefficient $D_0$ captures sideways acceleration of the helicopter due to thrust generated by the tail rotor. The term $E_0 \|(\dot{x}^b, \dot{y}^b, \dot{z}^b)\|_2$ models translational lift: the additional lift the helicopter gets when flying at higher speed.
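In code, the noise-free part of this acceleration model might look like the sketch below. The coefficient names mirror the equations, but the structure is illustrative only: in the paper the coefficients are estimated from flight data by linear regression, and the Gaussian noise terms w are omitted here.

```python
import numpy as np

# Illustrative sketch of the body-frame acceleration model; coefficients in
# the dict c are placeholders for the regression-estimated values.
def accelerations(v, omega, u, g_b, c):
    """Noise-free body-frame linear and angular accelerations."""
    speed = np.linalg.norm(v)                  # ||(xdot^b, ydot^b, zdot^b)||_2
    ax = c['Ax'] * v[0] + g_b[0]
    ay = c['Ay'] * v[1] + g_b[1] + c['D0']     # D0: tail-rotor side thrust
    az = (c['Az'] * v[2] + g_b[2] + c['C4'] * u[3]
          + c['E0'] * speed + c['D4'])         # E0 term: translational lift
    dwx = c['Bx'] * omega[0] + c['C1'] * u[0] + c['D1']
    dwy = c['By'] * omega[1] + c['C2'] * u[1] + c['C24'] * u[3] + c['D2']
    dwz = c['Bz'] * omega[2] + c['C3'] * u[2] + c['C34'] * u[3] + c['D3']
    return np.array([ax, ay, az]), np.array([dwx, dwy, dwz])
```

Integrating these accelerations forward (together with the quaternion kinematics) would reproduce the simulator used for control design.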
Specifically, during hover, the helicopter’s rotor imparts a downward velocity on the air above and below it. This downward velocity reduces the effective pitch (angle of attack) of the rotor blades, causing less lift to be produced [14, 20]. As the helicopter transitions into faster flight, this region of altered airflow is left behind and the blades enter “clean” air. Thus, the angle of attack is higher and more lift is produced for a given choice of the collective control (u4). The translational lift term was important for modeling the helicopter dynamics during the funnels. The coefficient C24 captures the pitch acceleration due to main rotor thrust. This coefficient is nonzero since (after equipping our helicopter with our sensor packages) the center of gravity is further backward than the center of main rotor thrust. There are two notable differences between our model and the most common previously proposed models (e.g., [15, 8]): (1) Our model does not include the inertial coupling between different axes of rotation. (2) Our model’s state does not include the blade-flapping angles, which are the angles the rotor blades make with the helicopter body while sweeping through the air. Both inertial coupling and blade flapping have previously been shown to improve accuracy of helicopter models for other RC helicopters. However, extensive attempts to incorporate them into our model have not led to improved simulation accuracy. We believe the effects of inertial coupling to be very limited since the flight regimes considered do not include fast rotation around more than one main axis simultaneously. We believe that—at the 0.1s time scale used for control—the blade flapping angles’ effects are sufficiently well captured by using a first order model from cyclic inputs to roll and pitch rates. 
Such a first order model maps cyclic inputs to angular accelerations (rather than the steady-state angular rate), effectively capturing the delay introduced by the blades reacting (moving) first before the helicopter body follows. 3 Controller Design 3.1 Reinforcement Learning Formalism and Differential Dynamic Programming (DDP) A reinforcement learning problem (or optimal control problem) can be described by a Markov decision process (MDP), which comprises a sextuple $(S, A, T, H, s(0), R)$. Here S is the set of states; A is the set of actions or inputs; T is the dynamics model, which is a set of probability distributions $\{P^t_{su}\}$ ($P^t_{su}(s'|s, u)$ is the probability of being in state $s'$ at time t+1 given the state and action at time t are s and u); H is the horizon or number of time steps of interest; $s(0) \in S$ is the initial state; $R : S \times A \to \mathbb{R}$ is the reward function. A policy $\pi = (\mu_0, \mu_1, \dots, \mu_H)$ is a tuple of mappings from the set of states S to the set of actions A, one mapping for each time $t = 0, \dots, H$. The expected sum of rewards when acting according to a policy $\pi$ is given by $E[\sum_{t=0}^{H} R(s(t), u(t)) \mid \pi]$. The optimal policy $\pi^*$ for an MDP $(S, A, T, H, s(0), R)$ is the policy that maximizes the expected sum of rewards. In particular, the optimal policy is given by $\pi^* = \arg\max_\pi E[\sum_{t=0}^{H} R(s(t), u(t)) \mid \pi]$. The linear quadratic regulator (LQR) control problem is a special class of MDPs, for which the optimal policy can be computed efficiently. In LQR the set of states is given by $S = \mathbb{R}^n$, the set of actions/inputs is given by $A = \mathbb{R}^p$, and the dynamics model is given by $s(t+1) = A(t)s(t) + B(t)u(t) + w(t)$, where for all $t = 0, \dots, H$ we have that $A(t) \in \mathbb{R}^{n \times n}$, $B(t) \in \mathbb{R}^{n \times p}$ and $w(t)$ is a zero-mean random variable (with finite variance). The reward for being in state $s(t)$ and taking action/input $u(t)$ is given by $-s(t)^\top Q(t) s(t) - u(t)^\top R(t) u(t)$. Here $Q(t), R(t)$ are positive semi-definite matrices which parameterize the reward function.
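The finite-horizon LQR problem just defined admits a standard dynamic-programming solution, sketched below for time-invariant A, B, Q, R for brevity. This is generic textbook material, not the authors' code; it returns the linear feedback gains that the text refers to.

```python
import numpy as np

def lqr_backward(A, B, Q, R, H):
    """Finite-horizon LQR: return gains K_t with u(t) = K_t s(t) optimal for
    reward -s'Qs - u'Ru under dynamics s(t+1) = A s(t) + B u(t) + w(t)."""
    P = Q.copy()                          # cost-to-go matrix at the final step
    gains = []
    for _ in range(H):
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A + B @ K)     # Riccati recursion, one step back
        gains.append(K)
    return gains[::-1]                    # gains[t] applies at time step t
```

Because the noise w(t) is additive and zero-mean, it does not change the optimal gains (certainty equivalence), and for long horizons the gains converge to the stationary LQR gain.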
It is well known that the optimal policy for the LQR control problem is a linear feedback controller, which can be efficiently computed using dynamic programming. Although the standard formulation presented above assumes the all-zeros state is the most desirable state, the formalism is easily extended to the task of tracking a desired trajectory $s^*_0, \dots, s^*_H$. The standard extension (which we use) expresses the dynamics and reward function as a function of the error state $e(t) = s(t) - s^*(t)$ rather than the actual state $s(t)$. (See, e.g., [5], for more details on linear quadratic methods.) Differential dynamic programming (DDP) approximately solves general continuous state-space MDPs by iterating the following two steps: 1. Compute a linear approximation to the dynamics and a quadratic approximation to the reward function around the trajectory obtained when using the current policy. 2. Compute the optimal policy for the LQR problem obtained in Step 1 and set the current policy equal to the optimal policy for the LQR problem. In our experiments, we have a quadratic reward function, thus the only approximation made in the first step is the linearization of the dynamics. To bootstrap the process, we linearized around the target trajectory in the first iteration.1 3.2 DDP Design Choices Error state. We use the following error state: $e = (\dot{x}^b - (\dot{x}^b)^*, \dot{y}^b - (\dot{y}^b)^*, \dot{z}^b - (\dot{z}^b)^*, x - x^*, y - y^*, z - z^*, \omega^b_x - (\omega^b_x)^*, \omega^b_y - (\omega^b_y)^*, \omega^b_z - (\omega^b_z)^*, \Delta q)$. Here $\Delta q$ is the axis-angle representation of the rotation that transforms the coordinate frame of the target orientation into the coordinate frame of the actual state. This axis-angle representation results in the linearizations being more accurate approximations of the non-linear model, since the axis-angle representation maps more directly to the angular rates than naively differencing the quaternions or Euler angles. Cost for change in inputs.
Using DDP as explained thus far resulted in unstable controllers on the real helicopter: the controllers tended to switch rapidly between low and high input values, which resulted in poor flight performance. Similar to frequency shaping for LQR controllers (see, e.g., [5]), we added a term to the reward function that penalizes the change in inputs over consecutive time steps. Controller design in two phases. Adding the cost term for the change in inputs worked well for the funnels. However, flips and rolls do require some fast changes in inputs. To still allow aggressive maneuvering, we split our controller design into two phases. In the first phase, we used DDP to find the open-loop input sequence that would be optimal in the noise-free setting. (This can be seen as a planning phase and is similar to designing a feedforward controller in classical control.) In the second phase, we used DDP to design our actual flight controller, but we now redefine the inputs as the deviation from the nominal open-loop input sequence. Penalizing for changes in the new inputs penalizes only unplanned changes in the control inputs. Integral control. Due to modeling error and wind, the controllers described so far have non-zero steady-state error. Each controller generated by DDP is designed using linearized dynamics, and the orientation used for linearization greatly affects the resulting linear model. As a consequence, the linear model becomes a significantly worse approximation with increasing orientation error. This in turn results in the control inputs being less suited for the current state, which in turn results in larger orientation error, and so on. To reduce the steady-state orientation errors—similar to the I term in PID control—we augment the state vector with integral terms for the orientation errors. More specifically, the state vector at time t is augmented with $\sum_{\tau=0}^{t-1} 0.99^{t-\tau} \Delta q(\tau)$. Our funnel controllers performed significantly better with integral control. For the flips and rolls the integral control seemed to matter less.2
Footnote 1: For the flips and rolls this simple initialization did not work: the target trajectory was too far from feasible, so the control policy obtained in the first iteration of DDP ended up following a trajectory for which the linearization is inaccurate. As a consequence, the first iteration's control policy (designed for the time-varying linearized models along the target trajectory) was unstable in the non-linear model and DDP failed to converge. To get DDP to converge to good policies, we slowly changed the model from a model in which control is trivial to the actual model. In particular, we change the model such that the next state is $\alpha$ times the target state plus $1-\alpha$ times the next state according to the true model. By slowly varying $\alpha$ from 0.999 to zero throughout the DDP iterations, the linearizations obtained throughout are good approximations and DDP converges to a good policy.
Factors affecting control performance. Our simulator included process noise (Gaussian noise on the accelerations, as estimated when learning the model from data), measurement noise (Gaussian noise on the measurements, as estimated from the Kalman filter residuals), as well as the Kalman filter and the low-pass filter, which is designed to remove the high-frequency noise from the IMU measurements.3 Simulator tests showed that the low-pass filter's latency and the noise in the state estimates affect the performance of our controllers most. Process noise, on the other hand, did not seem to affect performance very much. 3.3 Trade-offs in the reward function Our reward function contained 24 features, consisting of the squared error state variables, the squared inputs, the squared change in inputs between consecutive timesteps, and the squared integral of the error state variables.
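The discounted integral term used for the integral control of Section 3.2 can be maintained recursively rather than re-summed each step. The sketch below is our own notation, not the authors' code; it shows the O(1) update and checks it against the explicit sum.

```python
import numpy as np

# The augmented state I(t) = sum_{tau=0}^{t-1} 0.99^(t-tau) * dq(tau)
# satisfies I(t+1) = 0.99 * (I(t) + dq(t)), so one multiply-add per step
# suffices. Here dq stands for the axis-angle orientation error.
def update_integral(I, dq, decay=0.99):
    return decay * (I + dq)

I = np.zeros(3)                               # integral of orientation error
for _ in range(3):                            # three steps of constant error
    I = update_integral(I, np.array([0.1, 0.0, 0.0]))
```

The decay factor 0.99 keeps the integral bounded and makes old orientation errors fade, much like a leaky integrator in the I term of a PID loop.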
For the reinforcement learning algorithm to find a controller that flies “well,” it is critical that the correct trade-off between these features is specified. To find the correct trade-off between the 24 features, we first recorded a pilot's flight. Then we used the apprenticeship learning via inverse reinforcement learning algorithm [2]. The inverse RL algorithm iteratively provides us with reward weights that result in policies that bring us closer to the expert. Unfortunately the reward weights generated throughout the iterations of the algorithm are often unsafe to fly on the helicopter. Thus, rather than strictly following the inverse RL algorithm, we hand-chose reward weights that (iteratively) bring us closer to the expert human pilot by increasing/decreasing the weights for those features that stood out as most different from the expert (following the philosophy, but not the strict formulation, of the inverse RL algorithm). The algorithm still converged in a small number of iterations. 4 Experiments Videos of all of our maneuvers are available at the URL provided in the introduction. 4.1 Experimental Platform The helicopter used is an XCell Tempest, a competition-class aerobatic helicopter (length 54”, height 19”, weight 13 lbs), powered by a 0.91-size, two-stroke engine. Figure 2 (c) shows a close-up of the helicopter. We instrumented the helicopter with a Microstrain 3DM-GX1 orientation sensor and a Novatel RT2 GPS receiver. The Microstrain package contains triaxial accelerometers, rate gyros, and magnetometers. The Novatel RT2 GPS receiver uses carrier-phase differential GPS to provide real-time position estimates with approximately 2 cm accuracy as long as its antenna is pointing at the sky. To maintain position estimates throughout the flips and rolls, we have used two different setups. Originally, we used a purpose-built cluster of four U-Blox LEA-4T GPS receivers/antennas for velocity sensing.
The system provides velocity estimates with a standard deviation of approximately 1 cm/s when stationary and 10 cm/s during our aerobatic maneuvers. Later, we used three PointGrey DragonFly2 cameras that track the helicopter from the ground. This setup gives us position measurements accurate to approximately 25 cm. For extrinsic camera calibration we collect data from the Novatel RT2 GPS receiver while in view of the cameras. A computer on the ground uses a Kalman filter to estimate the state from the sensor readings. Our controllers generate control commands at 10 Hz.

4.2 Experimental Results

For each of the maneuvers, the initial model is learned by collecting data from a human pilot flying the helicopter. Our sensing setup is significantly less accurate when flying upside-down, so all data for model learning is collected from upright flight. The model used to design the flip and roll controllers is estimated from 5 minutes of flight data during which the pilot performs frequency sweeps on each of the four control inputs (which covers as similar a flight regime as possible without having to invert the helicopter). For the funnel controllers, we learn a model from the same frequency sweeps and from our pilot flying the funnels. For the rolls and flips the initial model was sufficiently accurate for control. For the funnels, our initial controllers did not perform as well, and we performed two iterations of the apprenticeship learning algorithm described in Section 2.1.

² When adding the integrated position error to the cost we did not experience any benefits. Even worse, when increasing its weight in the cost function, the resulting controllers were often unstable.

³ The high-frequency noise on the IMU measurements is caused by the vibration of the helicopter, which in turn is mostly caused by the blades spinning at 25 Hz.
4.2.1 Flip

In the ideal forward flip, the helicopter rotates 360 degrees forward around its lateral axis (the axis going from the right to the left of the helicopter) while staying in place. The top row of Figure 1 (a) shows a series of snapshots of our helicopter during an autonomous flip. In the first frame, the helicopter is hovering upright autonomously. Subsequently, it pitches forward, eventually becoming vertical. At this point, the helicopter does not have the ability to counter its descent, since it can only produce thrust in the direction of the main rotor. The flip continues until the helicopter is completely inverted. At this moment, the controller must apply negative collective to regain altitude lost during the half-flip, while continuing the flip and returning to the upright position. We chose the entries of the cost matrices Q and R by hand, spending about an hour to get a controller that could flip indefinitely in our simulator. The initial controller oscillated in reality, whereas our human-piloted flips do not have any oscillation, so (in accordance with the inverse RL procedure, see Section 3.3) we increased the penalty for changes in inputs over consecutive time steps, resulting in our final controller.

4.2.2 Roll

In the ideal axial roll, the helicopter rotates 360 degrees around its longitudinal axis (the axis going from the back to the front of the helicopter) while staying in place. The bottom row of Figure 1 (b) shows a series of snapshots of our helicopter during an autonomous roll. In the first frame, the helicopter is hovering upright autonomously. Subsequently it rolls to the right, eventually becoming inverted. When inverted, the helicopter applies negative collective to regain altitude lost during the first half of the roll, while continuing the roll and returning to the upright position. We used the same cost matrices as for the flips.
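One standard way to realize a penalty on changes in inputs over consecutive time steps (a sketch of the general technique, not the authors' code) is to augment the state with the previous input, so that (u_t − u_{t−1})ᵀS(u_t − u_{t−1}) becomes an ordinary quadratic cost in the LQR/DDP formulation; all matrices below are illustrative.

```python
import numpy as np

def augment_for_input_change_penalty(A, B, Q, R, S):
    """Augment the state with the previous input u_prev so that a cost
    (u - u_prev)^T S (u - u_prev) becomes an ordinary quadratic cost.
    Augmented state x_aug = [x; u_prev]; dynamics x' = A x + B u and
    u_prev' = u. Expanding the penalty gives S on u_prev (in Q_aug),
    S added to R, and a cross term -S between u_prev and u."""
    n, m = B.shape
    A_aug = np.block([[A, np.zeros((n, m))],
                      [np.zeros((m, n)), np.zeros((m, m))]])
    B_aug = np.vstack([B, np.eye(m)])
    Q_aug = np.block([[Q, np.zeros((n, m))],
                      [np.zeros((m, n)), S]])
    R_aug = R + S
    N_aug = np.vstack([np.zeros((n, m)), -S])  # cost has 2 x_aug^T N_aug u
    return A_aug, B_aug, Q_aug, R_aug, N_aug

# Check on a small hypothetical system that the augmented quadratic cost
# reproduces x^T Q x + u^T R u + (u - u_prev)^T S (u - u_prev).
A = np.array([[1.0, 0.1], [0.0, 1.0]]); B = np.array([[0.0], [0.1]])
Q, R, S = np.eye(2), np.array([[0.5]]), np.array([[2.0]])
A_aug, B_aug, Q_aug, R_aug, N_aug = augment_for_input_change_penalty(A, B, Q, R, S)
x, u_prev, u = np.array([1.0, -2.0]), np.array([0.3]), np.array([-0.7])
xa = np.concatenate([x, u_prev])
lhs = xa @ Q_aug @ xa + u @ R_aug @ u + 2.0 * xa @ N_aug @ u
rhs = x @ Q @ x + u @ R @ u + (u - u_prev) @ S @ (u - u_prev)
```

The augmented problem is then a standard LQR problem with a cross term, solvable with the usual Riccati machinery.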
4.2.3 Tail-In Funnel

The tail-in funnel maneuver is essentially a medium- to high-speed circle flown sideways, with the tail of the helicopter pointed towards the center of the circle. Throughout, the helicopter is pitched backwards such that the main rotor thrust not only compensates for gravity, but also provides the centripetal acceleration needed to stay in the circle. For a funnel of radius r at velocity v the centripetal acceleration is v²/r, so, assuming the main rotor thrust provides only the centripetal acceleration and compensation for gravity, we obtain a pitch angle θ = arctan(v²/(rg)). The maneuver is named after the path followed by the length of the helicopter, which sweeps out a surface similar to that of an inverted cone (or funnel).⁴ For the funnel reported in this paper, we had H = 80 s, r = 5 m, and v = 5.3 m/s (which yields a 30 degree pitch angle during the funnel). Figure 1 (c) shows an overlay of snapshots of the helicopter throughout a tail-in funnel. The defining characteristic of the funnel is repeatability: the ability to pass consistently through the same points in space after multiple circuits. Our autonomous funnels are significantly more accurate than funnels flown by expert human pilots. Figure 2 (a) shows a complete trajectory in (North, East) coordinates. In Figure 2 (b) we superimposed the heading of the helicopter on a partial trajectory (showing the entire trajectory with heading superimposed gives a cluttered plot). Our autonomous funnels have an RMS position error of 1.5 m and an RMS heading error of 15 degrees throughout the twelve circuits flown. Expert human pilots can maintain this performance through at most one or two circuits.⁵

4.2.4 Nose-In Funnel

The nose-in funnel maneuver is very similar to the tail-in funnel maneuver, except that the nose points to the center of the circle, rather than the tail.
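The pitch-angle relation θ = arctan(v²/(rg)) quoted for the tail-in funnel can be checked numerically against the reported parameters (r = 5 m, v = 5.3 m/s); the helper below is an illustrative sketch.

```python
import math

def funnel_pitch_deg(v, r, g=9.81):
    """Pitch angle (in degrees) such that the main-rotor thrust supplies
    both gravity compensation and the centripetal acceleration v^2/r."""
    return math.degrees(math.atan(v * v / (r * g)))

theta = funnel_pitch_deg(5.3, 5.0)  # close to the 30 degrees quoted in the text
```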
Our autonomous nose-in funnel controller results in highly repeatable trajectories (similar to the tail-in funnel), and it achieves a level of performance that is difficult for a human pilot to match. Figure 1 (d) shows an overlay of snapshots throughout a nose-in funnel.

5 Conclusion

To summarize, we presented our successful DDP-based control design for four new aerobatic maneuvers: forward flip, sideways roll (at low speed), tail-in funnel, and nose-in funnel. The key design decisions for the DDP-based controller to fly our helicopter successfully are the following:

⁴ The maneuver is actually broken into three parts: an accelerating leg, the funnel leg, and a decelerating leg. During the accelerating and decelerating legs, the helicopter accelerates at a_max (= 0.8 m/s²) along the circle.

⁵ Without the integral of the heading error in the cost function we observed significantly larger heading errors of 20-40 degrees, which resulted in the linearization being so inaccurate that controllers often failed entirely.

Figure 1: (Best viewed in color.) (a) Series of snapshots throughout an autonomous flip. (b) Series of snapshots throughout an autonomous roll. (c) Overlay of snapshots of the helicopter throughout a tail-in funnel. (d) Overlay of snapshots of the helicopter throughout a nose-in funnel. (See text for details.)

Figure 2: (a) Trajectory followed by the helicopter during the tail-in funnel (North vs. East, in meters). (b) Partial tail-in funnel trajectory with heading marked (same axes). (c) Close-up of our helicopter. (See text for details.)

We penalized rapid changes in actions/inputs over consecutive time steps. We used apprenticeship learning algorithms, which take advantage of an expert demonstration, to determine the reward function and to learn the model.
We used a two-phase control design: the first phase plans a feasible trajectory, and the second phase designs the actual controller. Integral penalty terms were included to reduce steady-state error. To the best of our knowledge, these are the most challenging autonomous flight maneuvers achieved to date.

Acknowledgments

We thank Ben Tse for piloting our helicopter and working on its electronics. We thank Mark Woodward for helping us with the vision system.

References

[1] P. Abbeel, Varun Ganapathi, and Andrew Y. Ng. Learning vehicular dynamics with application to modeling helicopters. In NIPS 18, 2006.
[2] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proc. ICML, 2004.
[3] P. Abbeel and A. Y. Ng. Exploration and apprenticeship learning in reinforcement learning. In Proc. ICML, 2005.
[4] P. Abbeel and A. Y. Ng. Learning first order Markov models for control. In NIPS 18, 2005.
[5] B. Anderson and J. Moore. Optimal Control: Linear Quadratic Methods. Prentice-Hall, 1989.
[6] J. Bagnell and J. Schneider. Autonomous helicopter control using reinforcement learning policy search methods. In International Conference on Robotics and Automation. IEEE, 2001.
[7] Ronen I. Brafman and Moshe Tennenholtz. R-max, a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 2002.
[8] V. Gavrilets, I. Martinos, B. Mettler, and E. Feron. Flight test and simulation results for an autonomous aerobatic helicopter. In AIAA/IEEE Digital Avionics Systems Conference, 2002.
[9] V. Gavrilets, B. Mettler, and E. Feron. Human-inspired control logic for automated maneuvering of miniature helicopter. Journal of Guidance, Control, and Dynamics, 27(5):752-759, 2004.
[10] S. Kakade, M. Kearns, and J. Langford. Exploration in metric state spaces. In Proc. ICML, 2003.
[11] M. Kearns and D. Koller. Efficient reinforcement learning in factored MDPs. In Proc. IJCAI, 1999.
[12] M. Kearns and S.
Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning Journal, 2002.
[13] M. La Civita, G. Papageorgiou, W. C. Messner, and T. Kanade. Design and flight testing of a high-bandwidth H∞ loop shaping controller for a robotic helicopter. Journal of Guidance, Control, and Dynamics, 29(2):485-494, March-April 2006.
[14] J. Leishman. Principles of Helicopter Aerodynamics. Cambridge University Press, 2000.
[15] B. Mettler, M. Tischler, and T. Kanade. System identification of small-size unmanned helicopter dynamics. In American Helicopter Society, 55th Forum, 1999.
[16] A. Y. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger, and E. Liang. Autonomous inverted helicopter flight via reinforcement learning. In Int'l Symposium on Experimental Robotics, 2004.
[17] Andrew Y. Ng, H. Jin Kim, Michael Jordan, and Shankar Sastry. Autonomous helicopter flight via reinforcement learning. In NIPS 16, 2004.
[18] Jonathan M. Roberts, Peter I. Corke, and Gregg Buskey. Low-cost flight control system for a small autonomous helicopter. In IEEE Int'l Conf. on Robotics and Automation, 2003.
[19] S. Saripalli, J. F. Montgomery, and G. S. Sukhatme. Visually-guided landing of an unmanned aerial vehicle. IEEE Transactions on Robotics and Autonomous Systems, 2003.
[20] J. Seddon. Basic Helicopter Aerodynamics. AIAA Education Series. American Institute of Aeronautics and Astronautics, 1990.
Parameter Expanded Variational Bayesian Methods

Yuan (Alan) Qi, MIT CSAIL, 32 Vassar Street, Cambridge, MA 02139, alanqi@csail.mit.edu
Tommi S. Jaakkola, MIT CSAIL, 32 Vassar Street, Cambridge, MA 02139, tommi@csail.mit.edu

Abstract

Bayesian inference has become increasingly important in statistical machine learning. Exact Bayesian calculations are often not feasible in practice, however. A number of approximate Bayesian methods have been proposed to make such calculations practical, among them the variational Bayesian (VB) approach. The VB approach, while useful, can nevertheless suffer from slow convergence to the approximate solution. To address this problem, we propose Parameter-eXpanded Variational Bayesian (PX-VB) methods to speed up VB. The new algorithm is inspired by parameter-expanded expectation maximization (PX-EM) and parameter-expanded data augmentation (PX-DA). Similar to PX-EM and PX-DA, PX-VB expands a model with auxiliary variables to reduce the coupling between variables in the original model. We analyze the convergence rates of VB and PX-VB and demonstrate the superior convergence rates of PX-VB in variational probit regression and automatic relevance determination.

1 Introduction

A number of approximate Bayesian methods have been proposed to offset the high computational cost of exact Bayesian calculations. Variational Bayes (VB) is one popular method of approximation. Given a target probability distribution, variational Bayesian methods approximate the target distribution with a factored distribution. While factoring omits dependencies present in the target distribution, the parameters of the factored approximation can be adjusted to improve the match. Specifically, the approximation is optimized by minimizing the KL-divergence between the factored distribution and the target.
This minimization can often be carried out iteratively, one component update at a time, despite the fact that the target distribution may not lend itself to exact Bayesian calculations. Variational Bayesian approximations have been widely used in Bayesian learning (e.g., (Jordan et al., 1998; Beal, 2003; Bishop & Tipping, 2000)). Variational Bayesian methods nevertheless suffer from slow convergence when the variables in the factored approximation are actually strongly coupled in the original model. The same problem arises in the popular Gibbs sampling algorithm: the sampling process converges slowly in cases where the variables are strongly correlated. The slow convergence can be alleviated by data augmentation (van Dyk & Meng, 2001; Liu & Wu, 1999), where the idea is to identify an optimal reparameterization (within a family of possible reparameterizations) so as to remove coupling. Similarly, in a deterministic context, Liu et al. (1998) proposed over-parameterization of the model to speed up EM convergence. Our work here is inspired by DA sampling and PX-EM. Our approach uses auxiliary parameters to speed up the deterministic approximation of the target distribution. Specifically, we propose the Parameter-eXpanded Variational Bayesian (PX-VB) method. The original model is modified by auxiliary parameters that are optimized in conjunction with the variational approximation. The optimization of the auxiliary parameters corresponds to a parameterized joint optimization of the variational components; the role of the new updates is precisely to remove otherwise strong functional couplings between the components, thereby facilitating fast convergence.

2 An illustrative example

Consider a toy Bayesian model, which was considered by Liu and Wu (1999) for sampling:

p(y|w, z) = N(y | w + z, 1),   p(z) = N(z | 0, D)   (1)

where D is a known hyperparameter and p(w) ∝ 1. The task is to compute the posterior distribution of w.
Suppose we use a VB method to approximate p(w|y), p(z|y) and p(w, z|y) by q(w), q(z) and q(w, z) = q(w)q(z), respectively. The approximation is optimized by minimizing KL(q(w)q(z) ∥ p(y|w, z)p(z)) (the second argument need not be normalized). The general forms of the component updates are

q(w) ∝ exp(⟨ln p(y|w, z)p(z)⟩_q(z))   (2)
q(z) ∝ exp(⟨ln p(y|w, z)p(z)⟩_q(w))   (3)

It is easy to derive the updates in this case:

q(w) = N(w | y − ⟨z⟩, 1),   q(z) = N(z | (y − ⟨w⟩)/(1 + D⁻¹), 1/(1 + D⁻¹))   (4)

Now let us analyze the convergence of the mean parameter of q(w), ⟨w⟩ = y − ⟨z⟩. Iterating,

⟨w⟩ = (D⁻¹/(1 + D⁻¹)) y + ⟨w⟩/(1 + D⁻¹) = D⁻¹ [(1 + D⁻¹)⁻¹ + (1 + D⁻¹)⁻² + · · ·] y = y.

The variational estimate ⟨w⟩ converges to y, which is in fact the true posterior mean (for this toy problem, p(w|y) = N(w | y, 1 + D)). Furthermore, if D is large, ⟨w⟩ converges slowly. Note that the variance parameter of q(w) converges to 1 in one iteration, though it underestimates the true posterior variance 1 + D. Intuitively, the convergence speed of ⟨w⟩ and q(w) suffers from the strong coupling between the updates of w and z. In other words, the update information has to go through a feedback loop w → z → w → · · · . To alleviate the coupling, we expand the original model with an additional parameter α:

p(y|w, z) = N(y | w + z, 1),   p(z|α) = N(z | α, D)   (5)

The expanded model reduces to the original one when α equals the null value α₀ = 0. Now, having computed q(z) given α = 0, we minimize KL(q(w)q(z) ∥ p(y|w, z)p(z|α)) over α and obtain the minimizer α = ⟨z⟩. Then, we reduce the expanded model to the original one by applying the reduction rule z_new = z − α = z − ⟨z⟩, w_new = w + α = w + ⟨z⟩. Correspondingly, we change the measures of q(w) and q(z):

q(w + ⟨z⟩) → q(w_new) = N(w_new | y, 1),   q(z − ⟨z⟩) → q(z_new) = N(z_new | 0, 1/(1 + D⁻¹))   (6)

Thus, the PX-VB method converges in a single iteration.
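The toy updates (4) and the PX-VB correction can be simulated directly. The sketch below (illustrative, not the authors' code) shows that plain VB contracts the error in ⟨w⟩ by a factor 1/(1 + D⁻¹) per sweep, so it is slow for large D, while one PX-VB sweep lands exactly on the fixed point ⟨w⟩ = y.

```python
def vb_toy(y, D, iters):
    """Plain VB mean updates for y = w + z, z ~ N(0, D): update <z>
    from <w>, then <w> from <z>, starting from <w> = 0."""
    w = 0.0
    for _ in range(iters):
        z = (y - w) / (1.0 + 1.0 / D)   # mean of q(z), eq (4)
        w = y - z                        # mean of q(w), eq (4)
    return w

def px_vb_toy(y, D):
    """One VB sweep followed by the PX-VB reduction: alpha = <z>,
    w_new = w + alpha, z_new = z - alpha."""
    w = 0.0
    z = (y - w) / (1.0 + 1.0 / D)
    w = y - z
    alpha = z                            # minimizer of the KL over alpha
    w, z = w + alpha, z - alpha          # reduction rule
    return w
```

For y = 2 and D = 100, ten plain VB sweeps still leave ⟨w⟩ far from y (the per-sweep contraction factor is D/(D+1) ≈ 0.99), whereas the PX-VB version returns y after a single sweep.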
Here α breaks the update loop between q(w) and q(z) and plays the role of a correction force; it corrects the update trajectories of q(w) and q(z) and makes them point directly to the convergence point.

3 The PX-VB Algorithm

In the general PX-VB formulation, we over-parameterize the model p(x̂, D) to get p_α(x, D), where the original model is recovered for some default values of the auxiliary parameters, α = α₀. The algorithm consists of the typical VB updates relative to p_α(x, D), the optimization of the auxiliary parameters α, as well as a reduction step to turn the model back to the original form where α = α₀. This last reduction step has the effect of jointly modifying the components of the factored variational approximation. Put another way, we push the change in p_α(x, D), due to the optimization of α, into the variational approximation instead. Changing the variational approximation in this manner permits us to return the model to its original form and set α = α₀. Specifically, we first expand p(x̂, D) to obtain p_α(x, D). Then, at the tth iteration:

1. The q(x_s) are updated sequentially. Note that the approximate distribution is q(x) = ∏_s q(x_s).

2. We minimize KL(q(x) ∥ p_α(x, D)) over the auxiliary parameters α. This optimization can be done jointly with some components of the variational distribution, if feasible.

3. The expanded model is reduced to the original model through reparameterization. Accordingly, we change q^(t+1)(x) to q^(t+1)(x̂) such that

KL(q^(t+1)(x̂) ∥ p_α₀(x̂, D)) = KL(q(x) ∥ p_α^(t+1)(x, D))   (7)

where the q^(t+1)(x̂) are the modified components of the variational approximation.

4. Set α = α₀.

Since each update of PX-VB decreases or maintains the KL divergence KL(q(x) ∥ p(x, D)), which is lower bounded, PX-VB reaches a stationary point of KL(q(x) ∥ p(x, D)). Empirically, PX-VB often achieves solutions similar to those of VB, with faster convergence. A simple strategy for implementing PX-VB is to use a mapping S_α, parameterized by α, over the variables x̂.
After sequentially optimizing over the components {q(x_s)}, we maximize ⟨ln p_α(x)⟩_q(x) over α. Then, we reduce p_α(x, D) to p(x̂, D) and q(x) to q(x̂) through the inverse mapping of S_α, M_α ≡ S_α⁻¹. Since we optimize α after optimizing the {q(x̂_s)}, the mapping S_α should change at least two components of x. Otherwise, the optimization over α will accomplish nothing, since we have already optimized over each q(x̂_s). If we jointly optimize α and one component q(x_s), it suffices (albeit need not be optimal) for the mapping S_α to change only q(x_s). Algorithmically, PX-VB bears a strong similarity to PX-EM (Liu et al., 1998). Both expand the original model and both are based on lower-bounding the KL-divergence. However, the key difference is that the reduction step in PX-VB changes the lower-bounding distributions {q(x_s)}, while in PX-EM the reduction step is performed only for the parameters in p(x, D). We also note that the PX-VB reduction step via M_α leaves the KL-divergence (lower bound on the likelihood) invariant, while in PX-EM the likelihood of the observed data remains the same after the reduction. Because of these differences, general EM acceleration methods (e.g., (Salakhutdinov et al., 2003)) cannot be directly applied to speed up VB convergence. In the following sections, we present PX-VB methods for two popular Bayesian models: probit regression for data classification and automatic relevance determination (ARD) for feature selection and sparse learning.

3.1 Bayesian Probit regression

Probit regression is a standard classification technique (see, e.g., (Liu et al., 1998) for maximum likelihood estimation). Here we demonstrate the use of variational Bayesian methods to train probit models. The data likelihood for probit regression is p(t|X, w) = ∏_n σ(t_n wᵀx_n), where X = [x₁, . . . , x_N] and σ is the standard normal cumulative distribution function.
We can rewrite the likelihood in an equivalent form:

p(t_n|z_n) = sign(t_n z_n),   p(z_n|w, x_n) = N(z_n | wᵀx_n, 1)   (8)

Given a Gaussian prior over the parameters, p(w) = N(w | 0, v₀I), we wish to approximate the posterior distribution p(w, z|X, t) by q(w, z) = q(w) ∏_n q(z_n). Minimizing KL(q(w) ∏_n q(z_n) ∥ p(w, z, t|X)), we obtain the following VB updates:

q(z_n) = TN(z_n | ⟨w⟩ᵀx_n, 1, t_n z_n)   (9)
q(w) = N(w | (XXᵀ + v₀⁻¹I)⁻¹X⟨z⟩, (XXᵀ + v₀⁻¹I)⁻¹)   (10)

where TN(z_n | ⟨w⟩ᵀx_n, 1, t_n z_n) stands for a truncated Gaussian such that TN(z_n | ⟨w⟩ᵀx_n, 1, t_n z_n) = N(z_n | ⟨w⟩ᵀx_n, 1) when t_n z_n > 0, and equals 0 otherwise. To speed up the convergence of the above iterative updates, we apply the PX-VB method. First, we expand the original model p(ŵ, ẑ, t|X) to p_c(w, z, t|X) with the mapping

w = ŵc,   z = ẑc   (11)

such that

p_c(z_n|w, x_n) = N(z_n | wᵀx_n, c²),   p(w) = N(w | 0, c²v₀I)   (12)

Setting c = c₀ = 1 in the expanded model, we update q(z_n) and q(w) as before, via (9) and (10). Then, we minimize KL(q(z)q(w) ∥ p_c(w, z, t|X)) over c, yielding

c² = [Σ_n (⟨z_n²⟩ − 2⟨z_n⟩⟨w⟩ᵀx_n + x_nᵀ⟨wwᵀ⟩x_n) + v₀⁻¹⟨wwᵀ⟩] / (N + M)   (13)

where M is the dimension of w. In the degenerate case where v₀ = ∞, the denominator becomes N instead of N + M. Since this equation can be calculated efficiently, the extra computational cost induced by the auxiliary variable is small. We omit the details. The transformation back to p_c₀ can be made via the inverse map

ŵ = w/c,   ẑ = z/c.   (14)

Accordingly, we change q(w) to obtain a new posterior approximation q_c(ŵ):

q_c(ŵ) = N(ŵ | (XXᵀ + v₀⁻¹I)⁻¹X⟨z⟩/c, (XXᵀ + v₀⁻¹I)⁻¹/c²)   (15)

We do not actually need to compute q_c(z_n) if this component will be optimized next. By changing variables from w to ŵ through (14), the KL divergence between the approximate and exact posteriors remains the same. After obtaining the new approximations q_c(ŵ) and q(ẑ_n), we reset c = c₀ = 1 for the next iteration.
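A minimal numerical sketch of updates (9), (10), and (13) with v₀ = ∞ (as in the experiments below) follows; the toy data, the small constant guarding the normal cdf, and the iteration count are our own illustrative choices, not the authors'.

```python
import math
import numpy as np

def phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def trunc_moments(m, t):
    """First and second moments of N(z|m,1) truncated to t*z > 0;
    for a unit-variance truncated Gaussian, E[z^2] = 1 + m E[z]."""
    mean = m + t * phi(m) / max(Phi(t * m), 1e-12)
    return mean, 1.0 + m * mean

def px_vb_probit(X, t, iters=50):
    """X: M x N inputs, t: labels in {-1,+1}. With v0 = inf the q(w)
    covariance is G = (X X^T)^{-1} and the c^2 denominator is N."""
    M, N = X.shape
    G = np.linalg.inv(X @ X.T)
    w = np.zeros(M)
    for _ in range(iters):
        m = X.T @ w                                    # <w>^T x_n
        mom = [trunc_moments(m[n], t[n]) for n in range(N)]
        z = np.array([a for a, _ in mom])              # <z_n>, eq (9)
        z2 = np.array([b for _, b in mom])             # <z_n^2>
        w = G @ (X @ z)                                # mean of q(w), eq (10)
        mw = X.T @ w
        quad = np.einsum('in,ij,jn->n', X, G, X) + mw ** 2   # x_n^T <w w^T> x_n
        c2 = (z2 - 2.0 * z * mw + quad).sum() / N      # eq (13), v0 = inf
        w = w / math.sqrt(c2)                          # reduction (14)-(15)
    return w

# Tiny hypothetical example: a bias row plus one feature whose sign
# mostly agrees with the labels (deliberately non-separable).
X = np.array([[1.0] * 6, [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]])
t = np.array([-1, -1, 1, -1, 1, 1])
w = px_vb_probit(X, t)
```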
Though similar to the PX-EM updates for the probit regression problem (Liu et al., 1998), the PX-VB updates are geared towards providing an approximate posterior distribution. We use both synthetic data and kidney biopsy data (van Dyk & Meng, 2001) as numerical examples for probit regression. We set v₀ = ∞ in the experiment. The comparison of convergence speeds for VB and PX-VB is illustrated in Figure 1.

Figure 1: Comparison between VB and PX-VB for probit regression on synthetic (a) and kidney biopsy (b) data sets. PX-VB converges significantly faster than VB. Note that the Y axis shows log‖w^(t+1) − w^(t)‖, the difference between two consecutive estimates of the posterior mean of the parameter w.

For the synthetic data, we randomly sample a classifier and use it to define the data labels for sampled inputs. We have 100 training and 500 test data points, each with 20 features. The kidney data set has 55 data points, each of which is a 3-dimensional vector. On the synthetic data, PX-VB converges immediately while the VB updates are slow to converge. Both PX-VB and VB trained classifiers achieve zero test error. On the kidney biopsy data set, PX-VB converges in 507 iterations, while VB converges in 7518 iterations. In other words, PX-VB requires 15 times fewer iterations than VB. In terms of CPU time, which reflects the extra computational cost induced by the auxiliary variables, PX-VB is 14 times more efficient. Across all these runs, PX-VB and VB achieve very similar estimates of the model parameters and the same prediction results. In sum, with a simple modification of the VB updates, we significantly improve the convergence speed of variational Bayesian estimation for the probit model.
3.2 Automatic Relevance Determination

Automatic relevance determination (ARD) is a powerful Bayesian sparse learning technique (MacKay, 1992; Tipping, 2000; Bishop & Tipping, 2000). Here, we focus on the variational ARD proposed by Bishop and Tipping (2000) for sparse Bayesian regression and classification. The likelihood for ARD regression is

p(t|X, w, τ) = ∏_n N(t_n | wᵀφ_n, τ⁻¹)

where φ_n is a feature vector based on x_n, such as [k(x₁, x_n), . . . , k(x_N, x_n)]ᵀ, where k(x_i, x_j) is a nonlinear basis function. For example, we can choose a radial basis function k(x_i, x_j) = exp(−‖x_i − x_j‖²/(2λ²)), where λ is the kernel width. In ARD, we assign a Gaussian prior to the model parameters w: p(w|α) = ∏_{m=0}^{M} N(w_m | 0, α_m⁻¹), where the inverse variances diag(α) follow a factorized Gamma distribution:

p(α) = ∏_m Gamma(α_m | a, b) = ∏_m b^a α_m^{a−1} e^{−b α_m} / Γ(a)   (16)

where a and b are hyperparameters of the model. The posterior does not have a closed form. Let us approximate p(w, α, τ|X, t) by a factorized distribution q(w, α, τ) = q(w)q(α)q(τ). The sequential VB updates for q(τ), q(w) and q(α) are described by Bishop and Tipping (2000). The variational RVM achieves good generalization performance, as demonstrated by Bishop and Tipping (2000). However, its training based on the VB updates can be quite slow. We apply PX-VB to address this issue. First, we expand the original model p(ŵ, α̂, τ̂|X, t) via

w = ŵ/r   (17)

while keeping α̂ and τ̂ unchanged. Consequently, the data likelihood and the prior on w become

p_r(t|w, X, τ) = ∏_n N(t_n | r wᵀφ_n, τ⁻¹),   p_r(w|α) = ∏_{m=0}^{M} N(w_m | 0, r⁻²α_m⁻¹)   (18)

Setting r = r₀ = 1, we update q(τ) and q(α) as in regular VB. Then, we want to jointly optimize over q(w) and r. Instead of performing a fully joint optimization, we optimize q(w) and r separately at the same time. This gives

r = (g + √(g² + 16Mf)) / (4f)   (19)

where f = ⟨τ⟩ Σ_n x_nᵀ⟨wwᵀ⟩x_n + Σ_m ⟨w_m²⟩⟨α_m⟩ and g = 2⟨τ⟩ Σ_n ⟨w⟩ᵀx_n t_n, with ⟨w⟩ and ⟨wwᵀ⟩ the first and second order moments of the previous q(w).
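Assuming the reconstructed form of (19), r is the positive root of the quadratic 2fr² − gr − 2M = 0 (move g across, square 4fr − g, and rearrange), which gives a quick numerical sanity check; the values of f, g, and M below are arbitrary illustrative choices.

```python
import math

def r_update(f, g, M):
    """Closed-form auxiliary-parameter update, reconstructed eq. (19):
    r = (g + sqrt(g^2 + 16*M*f)) / (4*f), valid for f > 0."""
    return (g + math.sqrt(g * g + 16.0 * M * f)) / (4.0 * f)

f, g, M = 3.7, 1.2, 5          # arbitrary positive test values
r = r_update(f, g, M)
residual = 2.0 * f * r * r - g * r - 2.0 * M   # vanishes at the root
```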
Since both f and Xᵀ⟨w⟩ have been computed previously in the VB updates, the added computational cost for r is negligible overall. The separate optimization over q(w) and r often decreases the KL divergence, but it is not guaranteed to achieve a smaller KL divergence than optimizing over q(w) alone would achieve. If the regular update over q(w) achieves a smaller KL divergence, we reset r = 1. Given r and q(w), we use ŵ = rw to reduce the expanded model to the original one. Correspondingly, we change q(w) = N(w | μ_w, Σ_w) via this reduction rule to obtain q_r(ŵ) = N(ŵ | rμ_w, r²Σ_w). We can also introduce another auxiliary variable s such that α = α̂/s. Similar to the above procedure, we optimize over s the expected log joint probability of the expanded model and, at the same time, update q(α). Then we change q(α) back to q_s(α̂) using the inverse mapping α̂ = sα. Due to space limitations, we skip the details here. The auxiliary variables r and s change the individual approximate posteriors q(w) and q(α) separately. We can also combine these two variables into one and use it to adjust q(w) and q(α) jointly. Specifically, we introduce the variable c:

w = ŵ/c,   α = c²α̂.

Figure 2: Convergence comparison between VB and PX-VB for ARD regression on synthetic data (a,b) and gene expression data (c). The PX-VB results in (a) and (c) are based on independent auxiliary variables on w and α. The PX-VB result in (b) is based on the auxiliary variable that correlates both w and α. The added computational cost for PX-VB in each iteration is negligible overall.

Setting c = c₀ = 1, we perform the regular updates over q(τ), q(w) and q(α). Then we optimize over c the expected log joint probability of the expanded model.
We cannot find a closed-form solution for this maximization, but we can efficiently compute its gradient and Hessian. Therefore, we perform a few steps of Newton updates to partially optimize c. Again, the additional computational cost for calculating c is small. Then, using the inverse mapping, we reduce the expanded model to the original one and adjust both q(w) and q(α) accordingly. Empirically, this approach can achieve faster convergence than using auxiliary variables on q(w) and q(α) separately, as demonstrated in Figure 2 (a) and (b). We compare the convergence speed of VB and PX-VB for the ARD model on both synthetic data and gene expression data. The synthetic data are sampled from the function sinc(x) = sin(x)/x for x ∈ (−10, 10) with added Gaussian noise. We use RBF kernels for the feature expansion φ_n, with kernel width 3. VB and PX-VB provide essentially identical predictions. For the gene expression data, we apply ARD to analyze the relationship between binding motifs and the expression of their target genes. For this task, we use third-order polynomial kernels. The results of the convergence comparison are shown in Figure 2. With a small modification of the VB updates, we increase the convergence speed significantly. Though we demonstrate the PX-VB improvement only for ARD regression, the same technique can be used to speed up ARD classification.

4 Convergence properties of VB and PX-VB

In this section, we analyze the convergence of VB and PX-VB, and their convergence rates. Define the mapping q^(t+1) = M(q^(t)) as one VB update of all the approximate distributions. Define an objective function as the unnormalized KL divergence:

Q(q) = ∫ ∏_i q_i(x) log (∏_i q_i(x) / p(x)) dx + (∫ p(x) dx − ∫ ∏_i q_i(x) dx).   (20)

It is easy to check that minimizing Q(q) gives the same updates as VB, which minimizes the KL divergence. Based on Theorem 2.1 of Luo and Tseng (1992), an iterative application of this mapping to minimize Q(q) results in at least linear convergence to an element q⋆ of the solution set.
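The linear convergence can be observed directly on the toy model of Section 2, where the per-sweep contraction factor of the ⟨w⟩ iteration is exactly 1/(1 + D⁻¹); the sketch below is illustrative, not the authors' code.

```python
def vb_sweep(w, y, D):
    """One VB sweep of the toy model: update <z> from <w>, then <w>."""
    z = (y - w) / (1.0 + 1.0 / D)
    return y - z

y, D = 2.0, 50.0
w, w_star = 0.0, y                    # y is the fixed point of the sweep
ratio = None
for _ in range(20):
    w_next = vb_sweep(w, y, D)
    ratio = abs(w_next - w_star) / abs(w - w_star)   # observed contraction
    w = w_next

predicted = 1.0 / (1.0 + 1.0 / D)     # linear rate for this model
```

The observed per-sweep error ratio matches the predicted rate D/(D+1), which approaches 1 (slow convergence) as D grows.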
Define the mapping q^(t+1) = M_x(q^(t)) as one PX-VB update of all the approximate distributions. The convergence of PX-VB follows from similar arguments; i.e., β = [qᵀ αᵀ]ᵀ converges to [q⋆ᵀ α₀ᵀ]ᵀ, where α ∈ Λ are the expanded model parameters and α₀ is their null value in the original model.

4.1 Convergence rate of VB and PX-VB

The matrix rate of convergence DM(q) satisfies

q^(t+1) − q⋆ = DM(q)ᵀ (q^(t) − q⋆)   (21)

where DM(q) = (∂M_j(q)/∂q_i). Define the global rate of convergence for q as r = lim_{t→∞} ‖q^(t+1) − q⋆‖ / ‖q^(t) − q⋆‖. Under certain regularity conditions, r equals the largest eigenvalue of DM(q). The smaller r is, the faster the algorithm converges. Define the constraint set g_s as the constraints for the sth update. Then the following theorem holds:

Theorem 4.1 The matrix convergence rate for VB is

DM(q⋆) = ∏_{s=1}^{S} P_s   (22)

where P_s = B_s [B_sᵀ (D²Q(q⋆))⁻¹ B_s]⁻¹ B_sᵀ (D²Q(q⋆))⁻¹ and B_s = ∇g_s(q⋆).

Proof: Define ξ as the current approximation q. Let G_s(ξ) be the q that maximizes the objective function Q(q) under the constraint g_s(q) = g_s(ξ) = [ξ_\s]. Let M₀(q) = q and

M_s(q) = G_s(M_{s−1}(q)) for all 1 ≤ s ≤ S.   (23)

Then, by construction of VB, we have q^(t+s/S) = M_s(q^(t)), s = 1, . . . , S, and DM(q⋆) = DM_S(q⋆). At the stationary points, q⋆ = M_s(q⋆) for all s. We differentiate both sides of equation (23) and evaluate them at q = q⋆:

DM_s(q⋆) = DM_{s−1}(q⋆) DG_s(M_{s−1}(q⋆)) = DM_{s−1}(q⋆) DG_s(q⋆)   (24)

It follows that DM(q⋆) = ∏_{s=1}^{S} DG_s(q⋆). To calculate DG_s(q⋆), we differentiate the constraint g_s(G_s(ξ)) = g_s(ξ) and evaluate both sides at ξ = q⋆, so that

DG_s(q⋆) B_s = B_s.   (25)

Similarly, we differentiate the Lagrange equation DQ(G_s(ξ)) − ∇g_s(G_s(ξ)) λ_s(ξ) = 0 and evaluate both sides at ξ = q⋆. This yields

DG_s(q⋆) D²Q(q⋆) − Dλ_s(q⋆) B_sᵀ = 0   (26)

Equation (26) holds because ∂²g_s/∂q_i∂q_j = 0. Combining (25) and (26) yields

DG_s(q⋆) = B_s [B_sᵀ (D²Q(q⋆))⁻¹ B_s]⁻¹ B_sᵀ (D²Q(q⋆))⁻¹. □   (27)

In the sth update we fix q_\s, i.e., g_s(q) = q_\s.
Therefore, $B_s$ is the identity matrix with its s-th column removed, $B_s = I_{:, \backslash s}$, where I is the identity matrix and $\backslash s$ means without the s-th column. Denote $C = D^2Q(q^\star)^{-1}$. Without loss of generality, we set s = S. It is easy to obtain

$$B_S^T C B_S = C_{\backslash S, \backslash S}, \quad (28)$$

where $\backslash S, \backslash S$ means without row S and column S. Inserting (28) into (27) yields

$$P_S = DG_S(q^\star) = \begin{pmatrix} I_{d-1} & C_{\backslash S,\backslash S}^{-1} C_{\backslash S,S} \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} I_{d-1} & -D^2Q_{\backslash S,S} (D^2Q_{S,S})^{-1} \\ 0 & 0 \end{pmatrix}, \quad (29)$$

where $I_{d-1}$ is a $(d-1) \times (d-1)$ identity matrix, $D^2Q_{\backslash S,S} = \frac{\partial^2 Q}{\partial q_{\backslash S}^T \partial q_S}$ and $D^2Q_{S,S} = \frac{\partial^2 Q}{\partial q_S^T \partial q_S}$. Notice that we use Schur complements to obtain (29). Similar to the calculation of $P_S$ via (29), we can derive $P_s$ for $s = 1, \ldots, S-1$ with structures similar to $P_S$. The above results help us understand the convergence speed of VB. For example, we have

$$q^{(t+1)} - q^\star = P_S^T \cdots P_1^T (q^{(t)} - q^\star). \quad (30)$$

For $q_S$, $q_S^{(t+1)} - q_S^\star = \left( -(D^2Q_{S,S})^{-1} D^2Q_{S,\backslash S} \;\; 0 \right) (q^{(t+(S-1)/S)} - q^\star)$. Clearly, if we view $D^2Q_{S,\backslash S}$ as the correlation between $q_S$ and $q_{\backslash S}$, then the smaller this "correlation", the faster the convergence. In the extreme case, if there is no correlation between $q_S$ and $q_{\backslash S}$, then $q_S^{(t+1)} - q_S^\star = 0$ after the first iteration. The global convergence rate is bounded by the maximal component convergence rate, and generally many components converge at the global rate; therefore, the instant convergence of $q_S$ can help increase the global convergence rate. For PX-VB, we can compute the matrix rate of convergence similarly. In the toy example in Section 2, PX-VB introduces an auxiliary variable α which has zero correlation with w, leading to instant convergence of the algorithm. This suggests that PX-VB improves the convergence by reducing the correlation among $\{q_s\}$. Rigorously speaking, the reduction step in PX-VB implicitly defines a mapping from q to $q_{\alpha_0}$ through the auxiliary variables α: $(q, p_{\alpha_0}) \to (q, p_\alpha) \to (q_\alpha, p_{\alpha_0})$. Denote this mapping as $M_\alpha$, such that $q_\alpha = M_\alpha(q)$.
Then we have $DM_x(q^\star) = DG_1(q^\star) \cdots DG_\alpha(q^\star) \cdots DG_S(q^\star)$. It is known that the spectral norm has the submultiplicative property $\|EF\| \le \|E\| \|F\|$, where E and F are two matrices. Thus, as long as the largest eigenvalue of $DG_\alpha(q^\star)$ is smaller than 1, PX-VB converges faster than VB. The choice of α affects the convergence rate by controlling this eigenvalue: the smaller the largest eigenvalue of $DG_\alpha(q^\star)$, the faster PX-VB converges. In practice, we can check this eigenvalue to make sure the constructed PX-VB algorithm enjoys a fast convergence rate.

5 Discussion

We have provided a general approach to speeding up the convergence of variational Bayesian learning. Faster convergence is guaranteed theoretically provided that the Jacobian of the transformation from auxiliary parameters to variational components has spectral norm bounded by one. This property can be verified in each case separately. Our empirical results show that the performance gain due to the auxiliary method is substantial.

Acknowledgments

T. S. Jaakkola was supported by the DARPA Transfer Learning program.

References

Beal, M. (2003). Variational algorithms for approximate Bayesian inference. Doctoral dissertation, Gatsby Computational Neuroscience Unit, University College London.
Bishop, C., & Tipping, M. E. (2000). Variational relevance vector machines. 16th UAI.
Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., & Saul, L. K. (1998). An introduction to variational methods in graphical models. Learning in Graphical Models. http://www.ai.mit.edu/~tommi/papers.html.
Liu, C., Rubin, D. B., & Wu, Y. N. (1998). Parameter expansion to accelerate EM: the PX-EM algorithm. Biometrika, 85, 755–770.
Liu, J. S., & Wu, Y. N. (1999). Parameter expansion for data augmentation. Journal of the American Statistical Association, 94, 1264–1274.
Luo, Z. Q., & Tseng, P. (1992). On the convergence of the coordinate descent method for convex differentiable minimization.
Journal of Optimization Theory and Applications, 72, 7–35. MacKay, D. J. (1992). Bayesian interpolation. Neural Computation, 4, 415–447. Salakhutdinov, R., Roweis, S. T., & Ghahramani, Z. (2003). Optimization with EM and Expectation-ConjugateGradient. Proceedings of International Conference on Machine Learning. Tipping, M. E. (2000). The relevance vector machine. NIPS (pp. 652–658). The MIT Press. van Dyk, D. A., & Meng, X. L. (2001). The art of data augmentation (with discussion). Journal of Computational and Graphical Statistics, 10, 1–111.
2006
An EM Algorithm for Localizing Multiple Sound Sources in Reverberant Environments Michael I. Mandel, Daniel P. W. Ellis LabROSA, Dept. of Electrical Engineering Columbia University New York, NY {mim,dpwe}@ee.columbia.edu Tony Jebara Dept. of Computer Science Columbia University New York, NY jebara@cs.columbia.edu Abstract We present a method for localizing and separating sound sources in stereo recordings that is robust to reverberation and does not make any assumptions about the source statistics. The method consists of a probabilistic model of binaural multisource recordings and an expectation maximization algorithm for finding the maximum likelihood parameters of that model. These parameters include distributions over delays and assignments of time-frequency regions to sources. We evaluate this method against two comparable algorithms on simulations of simultaneous speech from two or three sources. Our method outperforms the others in anechoic conditions and performs as well as the better of the two in the presence of reverberation. 1 Introduction Determining the direction from which a sound originated using only two microphones is a difficult problem. It is exacerbated by the presence of sounds from other sources and by realistic reverberations, as would be found in a classroom. A related and equally difficult problem is determining in which regions of a spectrogram a sound is observable, the so-called time-frequency mask, useful for source separation [1]. While humans can solve these problems well enough to carry on conversations in the canonical “cocktail party”, current computational solutions are less robust. Either they assume sound sources are statistically stationary, they assume anechoic conditions, or they require an array with at least as many microphones as there are sources to be localized. The method proposed in this paper takes a probabilistic approach to localization, using the psychoacoustic cue of interaural phase difference (IPD). 
Unlike previous approaches, this EM algorithm estimates true probability distributions over both the direction from which sounds originate and the regions of the time-frequency plane associated with each sound source. The basic assumptions that make this possible are that a single source dominates each time-frequency point and that a single delay and amplification cause the difference in the ears' signals at a particular point. By modelling the observed IPD in this way, this method overcomes many of the limitations of other systems. It is able to localize more sources than it has observations, even in reverberant environments. It makes no assumptions about the statistics of the source signal, making it well suited to localizing speech, a highly non-Gaussian and non-stationary signal. Its probabilistic nature also facilitates the incorporation of other probabilistic cues for source separation such as those obtained from single-microphone computational auditory scene analysis. Many comparable methods are also based on IPD, but they first convert it into interaural time difference. Because of the inherent 2π ambiguity in phase differences, this mapping is one-to-one only up to a certain frequency. Our system, however, is able to use observations across the entire frequency range, because even though the same phase difference can correspond to multiple delays, a particular delay corresponds unambiguously to a specific phase difference at every frequency. We evaluate our system on the localization and separation of two and three simultaneous speakers in simulated anechoic and reverberant environments. The speech comes from the TIMIT acoustic-phonetic continuous speech corpus [2], the anechoic simulations use the head-related transfer functions described in [3], and the reverberant simulations use the binaural classroom impulse responses described in [4].
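The wrapping ambiguity described above is easy to see numerically. In the sketch below (the delay and frequency values are illustrative assumptions, not from the paper), two candidate interaural delays produce the same wrapped phase difference at one frequency, yet remain distinguishable when the phase profile over the whole band is considered:

```python
import math

def ipd(f_hz, tau_s):
    """Interaural phase difference 2*pi*f*tau, wrapped to (-pi, pi]."""
    phi = 2.0 * math.pi * f_hz * tau_s
    return math.atan2(math.sin(phi), math.cos(phi))

tau1, tau2 = 0.25e-3, 0.75e-3     # two hypothetical delays, in seconds

# At 2 kHz both delays wrap to the same phase (pi), so the IPD-to-delay
# mapping is ambiguous at that single frequency...
ambiguous = abs(ipd(2000.0, tau1) - ipd(2000.0, tau2)) < 1e-9

# ...but the phase profile over many frequencies separates the delays,
# which is why the model can use observations across the entire band.
band = [250.0, 500.0, 1000.0, 3000.0]
separable = any(abs(ipd(f, tau1) - ipd(f, tau2)) > 0.1 for f in band)
```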
We use four metrics to evaluate our system: the root mean square localization error, the mutual information between the estimated mask and a ground truth mask, the signal to noise ratio of separated speech from [5], and the W-disjoint orthogonality metric of [1]. Our EM approach outperformed Yilmaz and Rickard's DUET algorithm [1] and Aarabi's PHAT-histogram [6] in anechoic situations, and performed comparably to PHAT-histogram in reverberation.

1.1 Previous work

Many systems exist for localizing sounds using a microphone array, e.g. [7]. These systems can be quite accurate, but this accuracy requires physical bulk, special hardware to synchronize many recordings, and tedious calibration procedures. They isolate signals in reverberation using directional filtering, which becomes more selective only through the addition of further microphones. Because of the structure and abilities of the human auditory system, researchers have paid particular attention to the two-microphone case. Roman et al. [5] make empirical models of the timing and level differences for combinations of two sources in known positions synthesized with anechoic head-related transfer functions (HRTFs). They then classify each time-frequency cell in their auditory-based representation, creating a binary time-frequency mask which contains the cells that appear to be dominated by the target source. Yilmaz and Rickard [1] studied the interaction of speech signals in the time-frequency plane, concluding that multiple speech signals generally do not overlap much in both time and frequency. They also conclude, as in [5], that the best ground truth mask includes only points in which the signal to noise ratio is 0 dB or greater. They propose a method for localization that maps IPD to delay before aggregating information and thus cannot use information at higher frequencies. It is designed for anechoic and noise-free situations, and consequently its accuracy suffers in more realistic settings.
Aarabi [6] and Rennie [8] focus on localizing sounds. Aarabi's method, while quite simple, is still one of the most accurate methods for localizing many simultaneous sound sources, even in reverberation. Rennie refined this approach with an EM algorithm for performing the same process probabilistically. A limitation of both algorithms, however, is the assumption that a single source dominates each analysis window, as compared to time-frequency masking algorithms which allow different sources to dominate different frequencies in the same analysis window.

2 Framework

For the purposes of deriving this model we will examine the situation where one sound source arrives at two spatially distinct microphones or ears. This will generalize to the assumption that only a single source arrives at each time-frequency point in a spectrogram, but that different points can contain different sources. Denote the sound source as s(t), and the signals received at the left and right ears as ℓ(t) and r(t), respectively. The two received signals will have some delay and some gain relative to the source, in addition to a disruption due to noise. For this model, we assume a convolutive noise process, because it fits our empirical observations, it is easy to analyze, and in general it is very similar to the additive noise processes that other authors assume. The various signals are then related by

$$\ell(t) = a_\ell \, s(t - \tau_\ell) * n_\ell(t), \qquad r(t) = a_r \, s(t - \tau_r) * n_r(t). \quad (1)$$

The ratio of the short-time Fourier transforms, $\mathcal{F}\{\cdot\}$, of both equations is the interaural spectrogram,

$$X^{IS}(\omega, t) \equiv \frac{L(\omega, t)}{R(\omega, t)} = \alpha(\omega, t)\, e^{j\phi(\omega,t)} = e^{a - j\omega\tau} N(\omega, t), \quad (2)$$

where $\tau = \tau_\ell - \tau_r$, $N(\omega, t) = \frac{N_\ell(\omega,t)}{N_r(\omega,t)} = \frac{\mathcal{F}\{n_\ell(t)\}}{\mathcal{F}\{n_r(t)\}}$, and $a = \log \frac{a_\ell}{a_r}$. This equivalence assumes that τ is much smaller than the length of the window over which the Fourier transform is taken, a condition easily met for dummy head recordings with moderately sized Fourier transform windows.
For example, in our experiments the maximum delay was 0.75 ms, and the window length was 64 ms. As observed in [9], N(ω, t), the noise in the interaural spectrogram of a single source, is unimodal and approximately identically distributed for all frequencies and times. Using the standard rectangular-to-polar change of coordinates, the noise can be separated into independent magnitude and phase components. The magnitude noise is approximately log-normal, while the phase noise has a circular distribution with tails heavier than the von Mises distribution. In this work, we ignore the magnitude noise and approximate the phase noise with a mixture of Gaussians, all with the same mean. This approximation includes the distribution's heavy-tailed characteristic, but ignores the circularity, meaning that the variance of the noise is generally underestimated. A true circular distribution is avoided because its maximum likelihood parameters cannot be found in closed form.

3 Derivation of EM algorithm

The only observed variable in our model is φ(ω, t), the phase difference between the left and right channels at frequency ω and time t. While 2π ambiguities complicate the calculation of this quantity, we use $\phi(\omega, t) = \arg \frac{L(\omega,t)}{R(\omega,t)}$ so that it stays within (−π, π]. For similar reasons, we define $\hat{\phi}(\omega, t; \tau) = \arg \frac{L(\omega,t)}{R(\omega,t)} e^{j\omega\tau}$ as a function of the observation. Our model of the interaural phase difference is a mixture over sources, delays, and Gaussians. In particular, we have I sources, indexed by i, each of which has a distribution over delays, τ. For this model, the delays are discretized to a grid and probabilities over them are computed as a multinomial. This discretization gives the most flexible possible distribution over τ for each source, but since we expect sources to be compact in τ, a unimodal parametric distribution could work.
Experiments approximating Laplacian and Gaussian distributions over τ, however, did not perform as well at localizing sources or creating masks as the more flexible multinomial. For a particular source, then, the probability of an observed delay is

$$p(\phi(\omega, t) \mid i, \tau) = p_N(\hat{\phi}(\omega, t; \tau)), \quad (3)$$

where $p_N(\cdot)$ is the probability density function of the phase noise, N(ω, t), described above. We approximate this distribution as a mixture of J Gaussians, indexed by j and centered at 0,

$$p(\phi(\omega, t) \mid i, \tau) = \sum_{j=1}^{J} p(j \mid i) \, \mathcal{N}(\hat{\phi}(\omega, t; \tau) \mid 0, \sigma^2_{ij}). \quad (4)$$

In order to allow parameter estimation, we define hidden indicator variables $z^{\omega t}_{ij\tau}$ such that $z^{\omega t}_{ij\tau} = 1$ if φ(ω, t) comes from Gaussian j in source i at delay τ, and 0 otherwise. There is one indicator for each observation, so $\sum_{ij\tau} z^{\omega t}_{ij\tau} = 1$ and $z^{\omega t}_{ij\tau} \ge 0$. The estimated parameters of our model are thus $\psi_{ij\tau} \equiv p(i, j, \tau)$, a third-order tensor of discrete probabilities, and $\sigma_{ij}$, the variances of the various Gaussians. For convenience, we define $\theta \equiv \{\psi_{ij\tau}, \sigma_{ij} \; \forall i, j, \tau\}$. Thus, the total log-likelihood of our data, including marginalization over the hidden variables, is:

$$\log p(\phi(\omega, t) \mid \theta) = \sum_{\omega t} \log \sum_{ij\tau} \psi_{ij\tau} \, \mathcal{N}(\hat{\phi}(\omega, t; \tau) \mid 0, \sigma^2_{ij}). \quad (5)$$

This log likelihood allows us to derive the E and M steps of our algorithm. For the E step, we compute the expected value of $z^{\omega t}_{ij\tau}$ given the data and our current parameter estimates,

$$\nu_{ij\tau}(\omega, t) \equiv E\{z^{\omega t}_{ij\tau} \mid \phi(\omega, t), \theta\} = p(z^{\omega t}_{ij\tau} = 1 \mid \phi(\omega, t), \theta) = \frac{p(z^{\omega t}_{ij\tau} = 1, \phi(\omega, t) \mid \theta)}{p(\phi(\omega, t) \mid \theta)} \quad (6)$$

$$= \frac{\psi_{ij\tau} \, \mathcal{N}(\hat{\phi}(\omega, t; \tau) \mid 0, \sigma^2_{ij})}{\sum_{ij\tau} \psi_{ij\tau} \, \mathcal{N}(\hat{\phi}(\omega, t; \tau) \mid 0, \sigma^2_{ij})}. \quad (7)$$
Figure 1: Example parameters estimated for two speakers located at 0° and 45° in a reverberant classroom. (a) Ground truth mask for speaker 1, (b) mask estimated by Yilmaz's algorithm, (c) mask estimated by Aarabi's algorithm, (d) mask estimated by EM algorithm, (e) probability distribution over τ for each speaker, p(τ | i), estimated by EM algorithm, (f) probability distribution over phase error for each speaker, p(φ(ω, t) | i, τ), estimated by EM algorithm.

For the M step, we first compute the auxiliary function $Q(\theta \mid \theta^s)$, where θ is the set of parameters over which we wish to maximize the likelihood, and $\theta^s$ is the estimate of the parameters after s iterations of the algorithm:

$$Q(\theta \mid \theta^s) = c + \sum_{\omega t} \sum_{ij\tau} \nu_{ij\tau}(\omega, t) \log p(\phi(\omega, t), z^{\omega t}_{ij\tau} \mid \theta), \quad (8)$$

where c does not depend on θ. Since Q is concave in θ, we can maximize it by taking derivatives with respect to θ and setting them equal to zero, while also including a Lagrange multiplier to enforce the constraint that $\sum_{ij\tau} \psi_{ij\tau} = 1$. This results in the update rules

$$\psi_{ij\tau} = \frac{1}{\Omega T} \sum_{\omega t} \nu_{ij\tau}(\omega, t), \qquad \sigma^2_{ij} = \frac{\sum_{\omega t} \sum_{\tau} \nu_{ij\tau}(\omega, t) \, \hat{\phi}(\omega, t; \tau)^2}{\sum_{\omega t} \sum_{\tau} \nu_{ij\tau}(\omega, t)}. \quad (9)$$

Note that we are less interested in the joint distribution $\psi_{ij\tau} = p(i, j, \tau)$ than in other distributions derived from it. Specifically, we are interested in the marginal probability of a point's coming from source i, p(i); the distributions over delays and Gaussians conditioned on the source, p(τ | i) and p(j | i); and the probability of each time-frequency point's coming from each source, $M_i(\omega, t)$. To calculate these masks, we marginalize $p(z^{\omega t}_{ij\tau} \mid \phi(\omega, t), \theta)$ over τ and j to get

$$M_i(\omega, t) \equiv p(z^{\omega t}_i \mid \phi(\omega, t), \theta) = \sum_{j\tau} p(z^{\omega t}_{ij\tau} \mid \phi(\omega, t), \theta) = \sum_{j\tau} \nu_{ij\tau}(\omega, t). \quad (10)$$

See Figure 1(d)-(f) for an example of the parameters estimated for two speakers located at 0° and 45° in a reverberant classroom.
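A minimal end-to-end sketch of the E step (7), M step (9), and mask computation (10) on synthetic IPDs is given below. Everything about the data (grid sizes, the two true delays, the noise level, and the biased initialization standing in for the paper's cross-correlation initialization) is an illustrative assumption, not a detail from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def wrap(x):
    """Wrap angles to (-pi, pi]."""
    return np.angle(np.exp(1j * x))

# Toy IPD observations phi(w, t): two sources with delays -0.5 and +0.5
# (arbitrary units), each dominating randomly chosen frames.
W, T, I, J = 16, 40, 2, 2
omega = np.linspace(0.1, np.pi, W)
taus = np.linspace(-1.0, 1.0, 9)
true_tau = np.where(rng.random(T) < 0.5, -0.5, 0.5)
phi = wrap(-omega[:, None] * true_tau[None, :]
           + 0.1 * rng.standard_normal((W, T)))

# phihat(w, t; tau) = wrap(phi + omega * tau); near zero at the true delay.
phihat = wrap(phi[None, :, :] + taus[:, None, None] * omega[None, :, None])

# Initialize psi = p(i, j, tau) biased toward opposite delay signs, and
# give the two Gaussians per source different variances (heavy tails).
psi = np.ones((I, J, taus.size))
psi[0, :, taus < 0] *= 3.0
psi[1, :, taus > 0] *= 3.0
psi /= psi.sum()
sigma2 = np.tile([0.3, 1.0], (I, 1))

for _ in range(10):
    # E step, eq (7): responsibilities nu[i, j, tau, w, t].
    dens = (np.exp(-phihat[None, None] ** 2 / (2 * sigma2[..., None, None, None]))
            / np.sqrt(2 * np.pi * sigma2)[..., None, None, None])
    num = psi[..., None, None] * dens
    nu = num / num.sum(axis=(0, 1, 2), keepdims=True)
    # M step, eq (9).
    psi = nu.mean(axis=(3, 4))
    sigma2 = ((nu * phihat[None, None] ** 2).sum(axis=(2, 3, 4))
              / nu.sum(axis=(2, 3, 4)))

# Probabilistic masks, eq (10), and the delay posteriors p(tau | i).
masks = nu.sum(axis=(1, 2))
p_tau_given_i = psi.sum(axis=1)
p_tau_given_i /= p_tau_given_i.sum(axis=1, keepdims=True)
```

In this toy run the two delay posteriors p(τ | i) concentrate near the two true delays, and masks gives soft per-point source assignments that sum to one.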
4 Experiments

In order to evaluate our system, we simulated speech in anechoic and reverberant noise situations by convolving anechoic speech samples with binaural impulse responses. We used speech from the TIMIT acoustic-phonetic continuous speech corpus [2], a dataset of utterances spoken by 630 native speakers of American English. Of the 6300 utterances in the database, we chose 15 at random to use in our evaluation. To allow the speakers to be equally represented in each mixture, we normalized all of the signals by their average energies before convolving them with the binaural impulse responses. The anechoic binaural impulse responses came from Algazi et al. [3], a large effort to record head-related transfer functions for many different individuals. Impulse response measurements were taken over the sphere surrounding subjects' heads at 25 different azimuths and 50 different elevations. The measurements we used were for the KEMAR dummy head with small ears, although the dataset contains impulse responses for around 50 individuals. The reverberant binaural impulse responses we used were recorded by Shinn-Cunningham et al. in a real classroom [4]. These measurements were also made with a KEMAR dummy head, although a different actual unit was used. Measurements were taken at four different positions in the classroom, three distances from the subject, seven directions, and three repetitions of each measurement. We used the measurements taken in the middle of the classroom with the sources at a distance of 1 m from the subject. Our method has a number of parameters that need to be set. Perhaps the most important part of running the algorithm is the initialization. We initialized it by setting p(τ | i) to discrete approximations to Gaussians centered at the I largest peaks in the average cross-correlation. The other parameters are numerical. Following [1], we use a 1024-point window, which corresponds to 64 ms at 16 kHz.
We chose J, the number of Gaussians in the noise GMM, to be 2, striking a balance between model flexibility and computational cost. Since the log likelihood increases monotonically with each EM iteration, we chose to stop after 10 iterations, when improvements in log likelihood generally became insignificant. Finally, we discretized τ to 31 values linearly spaced between −0.9375 ms and 0.9375 ms.

4.1 Comparison algorithms

We compare the performance of the time-frequency masks and the localization accuracy of our algorithm with those of two other algorithms. The first is Yilmaz and Rickard's DUET algorithm from [1], although it had to be modified slightly to accommodate our recordings. In order to estimate the interaural time and level differences of the signals in a mixture, DUET creates a two-dimensional histogram of them at every point in the interaural spectrogram. It then smooths the histogram and finds the I largest peaks, which should correspond to the I sources. The interaural parameter calculation of DUET requires that the interaural phase of a measurement unambiguously translates to a delay. The maximum frequency at which this is possible is c/2d, where c is the speed of sound and d is the distance between the two microphones. The authors in [1] choose a fixed sampling rate and adjust the distance between their free-standing microphones to prevent ambiguity. In the case of our KEMAR recordings, however, the distance between the two ears is fixed at approximately 0.15 m, and since the speed of sound is approximately 340 m/s, we must lower the maximum frequency from 8000 to 1150 Hz. Even though the frequencies used to estimate the interaural parameters are limited, a time-frequency mask can still be computed for all frequencies. See Figure 1(b) for an example of such a mask estimated by DUET. We also implemented Aarabi's PHAT-histogram technique from [6], augmented to create time-frequency masks.
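The numerical choices above are easy to verify with back-of-the-envelope arithmetic (pure arithmetic, using only the values quoted in the text):

```python
# DUET aliasing limit: the IPD-to-delay mapping is unambiguous below c/2d.
c = 340.0                  # speed of sound, m/s
d = 0.15                   # approximate KEMAR inter-ear distance, m
f_max = c / (2 * d)        # ~1133 Hz, close to the ~1150 Hz figure quoted

# STFT window: a 1024-point window at 16 kHz spans 64 ms.
window_s = 1024 / 16000.0

# EM delay grid: 31 values linearly spaced over [-0.9375 ms, 0.9375 ms].
taus = [-0.9375e-3 + k * (1.875e-3 / 30) for k in range(31)]
```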
The algorithm localizes multiple simultaneous sources by cross-correlating the left and right channels using the Phase Transform (PHAT) for each frame of the interaural spectrogram. This gives point estimates of the delay at each frame, which are pooled over all of the frames of the signal into a histogram. The I largest peaks in this histogram are assumed to be the interaural delays of the I sources. While not designed to create time-frequency masks, one can be constructed by simply assigning an entire frame to the source from which its delay originates. See Figure 1(c) for an example mask estimated by PHAT-histogram. As discussed in the next section, we compare these algorithms using a number of metrics, some of which admit baseline masks. For power-based metrics, we include ground truth and random masks in the comparison as baselines. The ground truth, or 0 dB, mask is the collection of all time-frequency points in which a particular source is louder than the mixture of all other sources; it is included to measure the maximum improvement achievable by an algorithmically created mask. The random mask is created by assigning each time-frequency point to one of the I sources at random; it is included to measure the performance of the simplest possible masking algorithm.

4.2 Performance measurement

Measuring the performance of localization results is straightforward: we use the root-mean-square error. Measuring the performance of time-frequency masks is more complicated, but the problem has been well studied in other papers [5, 1, 10]. There are two extremes possible in these evaluations. The first is to place equal value on every time-frequency point, regardless of the power it contains, e.g. the mutual information metric. The second is to measure performance in terms of the amount of energy allowed through the mask or blocked by it, e.g. the SNR and WDO metrics.
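The two baseline masks described above can be written down directly. In this sketch the per-source power spectrograms are random placeholders (an assumption for illustration; in the actual evaluation they come from the known pre-mix signals):

```python
import numpy as np

rng = np.random.default_rng(2)
I, W, T = 3, 64, 50
# Placeholder per-source power spectrograms |S_i(w, t)|^2.
power = rng.random((I, W, T))

# Ground-truth (0 dB) mask for source i: points where that source is
# louder than the mixture of all the others, i.e. per-point SNR >= 0 dB.
others = power.sum(axis=0, keepdims=True) - power
gt_masks = power > others

# Random baseline mask: each point assigned to one source at random.
assign = rng.integers(0, I, size=(W, T))
rand_masks = np.stack([assign == i for i in range(I)])
```

Note that at most one source can exceed the sum of all the others at a given point, so the ground-truth masks never overlap, though some points may belong to no mask.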
To measure performance valuing all points equally, we compute the mutual information between the ground truth mask and the predicted mask. Each mask point is treated as a binary random variable, so the mutual information can easily be calculated from their individual and joint entropies. In order to avoid including results with very little energy, the points in the lowest energy decile in each band are thrown out before calculating the mutual information. One potential drawback of using the mutual information as a performance metric is that it has no fixed maximum: it is bounded below by 0, but above by the entropy of the ground truth mask, which varies with each particular mixture. Fortunately, the entropy of the ground truth mask was close to 1 for almost all of the mixtures in this evaluation. To measure the signal-to-noise ratio (SNR), we follow [5] and take the ratio of the amount of energy in the original signal that is passed through the mask to the amount of energy in the mixed signal minus the original signal that is passed through the mask. Since the experimental mixtures are simulated, we have access to the original signal. This metric penalizes masks that eliminate signal as well as masks that pass noise. A similar metric, described in [1], is the W-disjoint orthogonality (WDO). This is the signal-to-noise ratio of the mixture the mask passes through, multiplied by a (possibly negative) penalty term for eliminating signal energy. When evaluated on speech, energy-based metrics tend to favor systems with better performance at frequencies below 500 Hz, where the energy is concentrated. Frequencies up to 3000 Hz, however, are still important for the intelligibility of speech. In order to distribute the energy more evenly across frequencies, and thus include the higher frequencies more equally in the energy-based metrics, we apply a mild high-pass pre-emphasis filter to all of the speech segments.
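The mask mutual information described above reduces to entropies of the marginal and joint mask distributions. A minimal pure-Python version follows (the example masks are synthetic, and the energy-decile filtering step is omitted for brevity):

```python
import math

def entropy(p):
    """Entropy in bits of a discrete distribution given as probabilities."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def mask_mutual_information(m1, m2):
    """MI (bits) between two binary masks given as flat 0/1 sequences."""
    n = len(m1)
    joint = [[0.0] * 2 for _ in range(2)]       # joint distribution p(a, b)
    for a, b in zip(m1, m2):
        joint[a][b] += 1.0 / n
    p1 = [joint[0][0] + joint[0][1], joint[1][0] + joint[1][1]]
    p2 = [joint[0][0] + joint[1][0], joint[0][1] + joint[1][1]]
    return entropy(p1) + entropy(p2) - entropy([v for row in joint for v in row])

gt  = [0, 1, 1, 0, 1, 0, 0, 1] * 4    # balanced toy ground-truth mask
est = gt[:]                            # a perfect estimated mask
```

A perfect estimate of a balanced ground-truth mask attains the upper bound (the ground-truth entropy, 1 bit here), while a constant mask carries no information about it.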
The experimental results were quite similar without this filtering, but the pre-emphasis provides more informative scoring. 4.3 Results We evaluated the performance of these algorithms in four different conditions, using two and three simultaneous speakers in reverberant and anechoic conditions. In the two source experiments, the target source was held at 0◦, while the distracter was moved from 5◦to 90◦. In the three source experiments, the target source was held at 0◦and distracters were located symmetrically on either side of the target from 5◦to 90◦. The experiment was repeated five times for each separation, using different utterances each time to average over any interaction peculiarities. See Figure 2 for plots of the results of all of the experiments. Our EM algorithm performs quite well at localization. Its root mean square error is particularly low for two-speaker and anechoic tests, and only slightly higher for three speakers in reverberation. It does not localize well when the sources are very close together, i.e. within 5◦, most likely because of problems with its automatic initialization. At such a separation, two cross-correlation peaks are difficult to discern. Performance also suffers slightly for larger separations, most likely a result of greater head shadowing. Head shadowing causes interaural intensity differences at high frequencies which change the distribution of IPDs, and violate our model’s assumption that phase noise is identically distributed across frequencies. It also performs well at time-frequency masking, more so for anechoic simulations than reverberant. See Figure 1(d) for an example time-frequency mask in reverberation. Notice that the major features follow the ground truth, but much detail is lost. Notice also the lower contrast bands in this figure at 0, 2.7, and 5.4 kHz corresponding to the frequencies at which the sources have the same IPD, modulo 2π. 
For any particular relative delay between sources, there are frequencies which provide no information to distinguish one from the other. Our EM algorithm, however, can distinguish between the two because the soft assignment in τ uses information from many relative delays.

Figure 2: Experimental results for four conditions (rows) compared using four metrics (columns): mean-square localization error, mutual information, W-disjoint orthogonality, and SNR. First row: two sources, anechoic; second row: three sources, anechoic; third row: two sources, reverberant; fourth row: three sources, reverberant.

The EM approach always performs as well as the better of the other two algorithms and outperforms them both in many situations. Its localization performance is comparable to PHAT-histogram in two-speaker conditions and slightly worse in three-speaker conditions.
DUET suffers even in anechoic, two-source situations, possibly because it was designed for free-standing microphones as opposed to dummy head recordings. Its performance decreases further as the tasks become more difficult. The advantage of our method for masking, however, is particularly clear in anechoic conditions, where it has the highest mutual information at all angles and the highest SNR and WDO at lower angles. In reverberant conditions, the mutual information between estimated masks and ground truth masks becomes quite low, but PHAT-histogram comes out slightly ahead. Comparing SNR measurements in reverberation, PHAT-histogram and the EM approach perform similarly, with DUET trailing. In WDO, however, PHAT-histogram performs best, with EM and DUET performing similarly to the random mask. 5 Conclusions and Future Work We have derived and demonstrated an expectation-maximization algorithm for probabilistic source separation and time-frequency masking. Using the interaural phase delay, it is able to localize more sources than microphones, even in the reverberation found in a typical classroom. It does not depend on any assumptions about sound source statistics, making it well suited for such non-stationary signals as speech and music. Because it is probabilistic, it is straightforward to augment the feature representation with other monaural or binaural cues. There are many directions to take this project in the future. Perhaps the largest gain in signal separation accuracy could come from the combination of this method with other computational auditory scene analysis techniques [11, 12]. A system using both monaural and binaural cues should surpass the performance of either approach alone. Another binaural cue that would be easy to add is IID caused by head shadowing and pinna filtering, allowing localization in both azimuth and elevation. This EM algorithm could also be expanded in a number of ways by itself. 
A minimum entropy prior [13] could be included to keep the distributions of the various sources separate from one another. In addition, a parametric, heavy tailed model could be used instead of the current discrete model to ensure unimodality of the distributions and enforce the separation of different sources. Along the same lines, a variational Bayes model could be used with a slightly different parameterization to treat all of the parameters probabilistically, as in [14]. Finally, we could relax the independence constraints between adjacent time-frequency points, making a Markov random field. Since sources tend to dominate regions of adjacent points in both time and frequency, the information at its neighbors could help a particular point localize itself. Acknowledgments The authors would like to thank Barbara Shinn-Cunningham for sharing her lab’s binaural room impulse response data with us and Richard Duda for making his lab’s head-related transfer functions available on the web. This work is supported in part by the National Science Foundation (NSF) under Grants No. IIS-0238301, IIS-05-35168, CCR-0312690, and IIS-0347499. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF. References [1] Ozgur Yilmaz and Scott Rickard. Blind separation of speech mixtures via time-frequency masking. IEEE Transactions on signal processing, 52(7):1830–1847, July 2004. [2] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, and N. L. Dahlgren. DARPA TIMIT acoustic phonetic continuous speech corpus CDROM, 1993. [3] V. R. Algazi, R. O. Duda, D. M. Thompson, and C. Avendano. The CIPIC HRTF database. In Proc 2001 IEEE Workshop on Applications of Signal Processing to Audio and Electroacoustics, pages 99–102, Oct 2001. [4] Barbara Shinn-Cunningham, Norbert Kopco, and Tara J. Martin. 
Localizing nearby sound sources in a classroom: Binaural room impulse responses. Journal of the Acoustical Society of America, 117:3100– 3115, 2005. [5] Nicoleta Roman, DeLiang Wang, and Guy J. Brown. A classification-based cocktail party processor. In Proceedings of Neural Information Processing Systems, 2003. [6] Parham Aarabi. Self-localizing dynamic microphone arrays. IEEE transactions on systems, man, and cybernetics, 32(4), November 2002. [7] M. Brandstein and H. Silverman. A practical methodology for speech source localization with microphone arrays. Computer, Speech, and Language, 11(2):91–126, April 1997. [8] Steven J. Rennie. Robust probabilistic TDOA estimation in reverberant environments. Technical Report PS1-TR-2005-011, University of Toronto, February 2005. [9] Michael I. Mandel and Daniel P. W. Ellis. A probability model for interaural phase difference. Workshop on Statistical and Perceptual Audio Processing (SAPA), 2006. [10] Ron Weiss and Daniel P. W. Ellis. Estimating single-channel source separation masks: relevance vector machine classifiers vs pitch-based masking. Workshop on Statistical and Perceptual Audio Processing (SAPA), 2006. [11] Martin Cooke and Daniel P. W. Ellis. The auditory organization of speech and other sources in listeners and computational models. Speech Communication, 35(3–4):141–177, 2001. [12] Sam Roweis. One microphone source separation. In Proceedings of Neural Information Processing Systems 13, pages 793–799, 2000. [13] Matthew Brand. Pattern discovery via entropy minimization. In Proceedings of Artificial Intelligence and Statistics, 1999. [14] Matthew J. Beal, Hagai Attias, and Nebojsa Jojic. Audio-video sensor fusion with probabilistic graphical models. In ECCV (1), pages 736–752, 2002.
|
2006
|
114
|
2,938
|
Recursive ICA Honghao Shan, Lingyun Zhang, Garrison W. Cottrell Department of Computer Science and Engineering University of California, San Diego La Jolla, CA 92093-0404 {hshan,lingyun,gary}@cs.ucsd.edu Abstract Independent Component Analysis (ICA) is a popular method for extracting independent features from visual data. However, as a fundamentally linear technique, there is always nonlinear residual redundancy that is not captured by ICA. Hence there have been many attempts to create a hierarchical version of ICA, but so far none of the approaches has a natural way to be applied more than once. Here we show that there is a relatively simple technique that transforms the absolute values of the outputs of a previous application of ICA into a normal distribution, to which ICA may be applied again. This results in a recursive ICA algorithm that may be applied any number of times in order to extract higher-order structure from previous layers. 1 Introduction Linear implementations of Barlow’s efficient encoding hypothesis1, such as ICA [1] and sparse coding [2], have been used to explain the very first layers of auditory and visual information processing in the cerebral cortex [1, 2, 3]. Nevertheless, many interesting structures are nonlinear functions of the stimulus inputs, which are unlikely to be captured by a linear model. For example, for natural images, it has been observed that there is still significant statistical dependency between the variances of the filter outputs [4]. Several extensions of the linear ICA algorithm [5, 6, 7, 8] have been proposed to reduce such residual nonlinear redundancy, with an explicit or implicit aim of explaining higher perceptual layers, such as complex cells in V1. However, none of these extensions is obviously recursive, so it is unclear how to generalize them to multi-layer models in order to account for even higher perceptual layers. 
In this paper, we propose a hierarchical redundancy reduction model in which the problem of modeling the residual nonlinear dependency is transformed into another LEE problem, as illustrated in Figure 1. There are at least two reasons why we want to do this. First, this transforms a new and hard problem into an easier and previously solved problem. Second, different parts of the brain share similar anatomical structures and it is likely that they are also working under similar computational principles. For example, fMRI studies have shown that removal of one sensory modality leads to neural reorganization of the remaining modalities [9], suggesting that the same principles must be at work across modalities. Since the LEE model has been so successful in explaining the very first layer of perceptual information processing in the cerebral cortex, it seems reasonable to hypothesize that higher layers might also be explained by a LEE model. The problem at hand is then how to transform the problem of modeling the residual nonlinear dependency into a LEE problem. To achieve this goal, we need to first make clear what the input constraints are that are imposed by the LEE model. This is done in Section 2. After that, we will derive the transformation function that “prepares” the output of ICA for its recursive application, and then test this model on natural images. 1 We refer to such algorithms as linear efficient encoding (LEE) algorithms throughout this paper. Figure 1: The RICA (Recursive ICA) model. After the first layer of linear efficient encoding, sensory inputs X are now represented by S. The signs of S are discarded. Then coordinate-wise nonlinear activation functions gi are applied to each dimension of S, so that the input of the next layer X′ = g(|S|) satisfies the input constraints imposed by the LEE model. The statistical structure among dimensions of X′ is then extracted by the next layer of linear efficient encoding. 
2 Bayesian Explanation of Linear Efficient Encoding It has long been hypothesized that the functional role of perception is to capture the statistical structure of the sensory stimuli so that appropriate action decisions can be made to maximize the chance of survival (see [10] for a brief review). Barlow provided the insight that the statistical structure is measured by the redundancy of the stimuli and that completely independent stimuli cannot be distinguished from random noise [11]. He also hypothesized that one way for the neural system to capture the statistical structure is to remove the redundancy in the sensory outputs. This so-called redundancy reduction principle forms the foundation of ICA algorithms. Algorithms following the sparse coding principle are also able to find interesting structures when applied to natural image patches [2]. Later it was realized that although ICA and sparse coding algorithms started out from different principles and goals, their implementations can be summarized in the same Bayesian framework [12]. In this framework, the observed data X is assumed to be generated by some underlying signal sources S: X = AS + ϵ, where A is a linear mixing matrix and ϵ is additive Gaussian noise. Also, it is assumed that the features Sj are independent of each other, and that the marginal distribution of Sj is sparse. For the sparse coding algorithm described in [2], although it started from the goal of finding sparse features, the algorithm’s implementation implicitly assumes the independence of the Sj’s. For the infomax ICA algorithm [1], although it aimed at finding independent features, the algorithm’s implementation assumes a sparse marginal prior (p(Sj) ∝ sech(Sj)). The energy-based ICA algorithm using a student-t prior [13] can also be placed in this framework for complete representations. 
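This generative model is straightforward to simulate. The sketch below is ours, not the paper's code; the function name and the choice of a Laplacian for the sparse, symmetric source prior are illustrative assumptions:

```python
import numpy as np

def sample_lee(A, n_samples, noise_std=0.1, seed=0):
    """Draw data from X = A S + eps, where the sources S_j are independent,
    sparse (here Laplacian) and symmetric, and eps is Gaussian noise."""
    rng = np.random.default_rng(seed)
    S = rng.laplace(size=(A.shape[1], n_samples))            # sparse sources
    eps = noise_std * rng.standard_normal((A.shape[0], n_samples))
    return A @ S + eps, S
```

With a 3x5 mixing matrix, `sample_lee(A, 1000)` returns a 3x1000 data matrix together with the 5x1000 source matrix that generated it.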
The moral here, though, is that in practice, the samples available are always insufficient to allow any efficient inference without making some assumptions about the data distribution. A sparseness and independence assumption about the data distribution is appropriate because: (1) independence allows the system to capture the statistical structure of the stimuli, as described above, and (2) a sparse distribution of the sensory outputs is energy-economic. This is important for the survival of the biological system, considering the fact that the human brain constitutes 2% of the body weight but accounts for 20% of its resting metabolism [14]. The linear efficient encoding model captures the important characteristics of sensory coding: capturing the statistical structure (independence) of sensory stimuli with minimum cost (sparseness). This generative model describes our assumption about the data. How well the algorithms perform depends on how well this assumption matches the real data. Hence, it is very important to check what kind of data the model generates. If the input data strongly deviate from what can be generated by the model (in other words, the observed data strongly deviate from our assumption), the results could be erroneous no matter how much effort we put into the model parameter estimation. As to the LEE model, there is a clear constraint on the marginal distribution of Xi. Here we limit our study to those ICA algorithms that produce basis functions resembling the simple cells’ receptive fields when applied to natural image patches. Such algorithms [1, 13, 15] typically adopt a symmetric2 and sparse marginal prior for the Sj’s that can be well approximated by a generalized Gaussian distribution. In fact, if we apply linear filters resembling the receptive fields of simple cells to natural images, the distribution of the filter responses can be well approximated by a generalized Gaussian distribution. 
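Fitting the generalized Gaussian shape parameter need not use the iterative method of [17]; a simple moment-matching alternative (our sketch, not the paper's estimator) solves E[x^2]/(E|x|)^2 = Γ(1/θ)Γ(3/θ)/Γ(2/θ)^2 for θ:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def ggd_shape(x):
    """Moment-matching estimate of the generalized Gaussian shape theta."""
    r = np.mean(x ** 2) / np.mean(np.abs(x)) ** 2   # observed moment ratio
    f = lambda t: gamma(1.0 / t) * gamma(3.0 / t) / gamma(2.0 / t) ** 2 - r
    return brentq(f, 0.1, 10.0)                      # ratio is monotone in t
```

For Gaussian data this returns θ ≈ 2 and for Laplacian data θ ≈ 1, bracketing the θ ≈ 1.094 reported below for whitened pixel values.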
Here we show that such a prior suggests that the Xi’s should also be symmetric. A random variable X is symmetric if and only if its characteristic function is real valued. In the above Bayesian framework, we assume that the Sj’s are independent and the marginal distribution of each Sj is symmetric about zero. The characteristic function is then given by:

E[e^{√−1 t Xi}] = E[e^{√−1 t Σj Ai,j Sj}]   (Xi = Σj Ai,j Sj)   (1)
= E[ Πj e^{√−1 t Ai,j Sj} ]   (2)
= Πj E[e^{√−1 t Ai,j Sj}]   (the Sj’s are independent of each other)   (3)

Since each Ai,j Sj is symmetric, it is easy to see that Xi must also be symmetric. A surprising fact about our perceptual system is that there does exist such a process that regularizes the marginal distribution of the sensory inputs. In the visual system, for example, the data is whitened in the retina and the LGN before transmission to V1. The functional role of this process is generally described as removing pairwise redundancy, as natural images (as well as natural sounds) obey the 1/f power law in the frequency domain [16]. However, as shown in Figure 2, it also regulates the marginal distribution of the input to follow a generalized-Gaussian-like distribution3. This phenomenon has long been observed. We believe that besides the functional role of removing second-order redundancy, whitening might also serve the role of formatting the sensory input for the cortex. For example, it has been observed [1] that without pre-whitening the images, the basis functions learned by ICA do not cover a broad range of spatial frequencies. Figure 2: The distribution of pixel values of whitened images follows a generalized Gaussian distribution (see Section 2). The shape parameter of the distribution is about 1.094, which means that the marginal distribution of the inputs to the LEE model is already very sparse. 2 p(X) is symmetric if X and −X have the same distribution. 
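This symmetry argument is easy to check numerically. The following toy simulation (ours, not from the paper) mixes independent symmetric Laplacian sources with an arbitrary matrix and verifies that every mixture has vanishing skewness, i.e., is symmetric:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
S = rng.laplace(size=(4, 200000))    # independent, symmetric sources S_j
A = rng.standard_normal((3, 4))      # arbitrary linear mixing matrix A
X = A @ S                            # mixtures X_i = sum_j A_ij S_j
skews = skew(X, axis=1)              # all close to 0 -> mixtures symmetric
```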
3 For all the image patches we tried, the distribution of pixel values on whitened image patches can be well fitted by a generalized Gaussian distribution. This is true even for small image patches. The only exception we have discovered occurs when the original image contains only binomial noise. In this work, we will make the assumption that the marginal distribution of the inputs to the LEE model is a generalized Gaussian distribution, as this enables the LEE model to work more efficiently. Also, as just discussed, at least for sound and image processing, there is an effective way to achieve this neurally. 3 Reducing Residual Redundancy For the filter outputs S of a layer of LEE, we will first discard information that provides no interesting structure (i.e., redundancy), and find an activation function such that the marginal distribution obeys the input requirements of the next layer. 3.1 Discarding the Signs It has been argued that the signs of the filter outputs do not carry any redundancy [5]. The models proposed in [6, 7, 8] also implicitly or explicitly discard the signs. We have observed the usefulness of this process in a study of natural image statistics. We applied the FastICA algorithm [15] to 20x20 natural image patches, and studied the joint distribution of the filter outputs. As shown in the left plot of Figure 3, p(si|sj) = p(si|−sj), i.e., the conditional probability of si given sj depends only on the absolute value of sj. In other words, the signs of S do not provide any dependency among the dimensions. By removing the sign and applying our transformation (described in the next section), the nonlinear dependency between the si’s is exposed (see Figure 3, right). Figure 3: Left: s1 and s2 are ICA filter responses on natural image patches. The red dashed lines plot the linear regression between them. Right: After the coordinate-wise nonlinear transformation, the two features are no longer uncorrelated. 
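The residual dependency that the signs do not carry can be reproduced with a toy scale-mixture model (our illustration in the spirit of the variance-dependency account [4, 6], not the paper's image data): two variables driven by a shared variance are linearly uncorrelated, yet their absolute values are clearly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.gamma(2.0, 1.0, size=100000)           # shared variance ("volatility")
s1 = np.sqrt(v) * rng.standard_normal(100000)  # two conditionally independent,
s2 = np.sqrt(v) * rng.standard_normal(100000)  # filter-output-like variables

corr = np.corrcoef(s1, s2)[0, 1]                      # ~0: uncorrelated
abs_corr = np.corrcoef(np.abs(s1), np.abs(s2))[0, 1]  # clearly positive
```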
3.2 Nonlinear Activation Function The only problem left is to find the coordinate-wise activation function gi for each dimension of S such that X′i = gi(|Si|) follows a generalized Gaussian distribution, as required by the next layer of LEE. In this work, we make the transformed features have a normal distribution. By doing so, we force the LEE model of the higher layer to set more A′j,i to nonzero values (so that the Central Limit Theorem takes effect to make X′i a Gaussian distribution), which leads to more global structures at the higher layer. We used two methods to find this activation function in our experiments. Parametric Activation Function Assume s approximately follows a generalized Gaussian distribution (GGD). The probability density function of a GGD is given by:

f(s; σ, θ) = θ / (2σ Γ(1/θ)) · exp{ −(|s|/σ)^θ }   (4)

where σ > 0 is a scale parameter, θ > 0 is a shape parameter, and Γ denotes the gamma function. These two parameters can be estimated efficiently by an iterative algorithm developed by [17]. s is then transformed into a normally distributed N(0, 1) random variable by the function g:

u = g(|s|) = F^{-1}( γ(|s|^θ/σ^θ, 1/θ) / Γ(1/θ) )   (5)

where F denotes the cumulative density function (cdf) of the standard normal distribution and γ denotes the incomplete gamma function. This transformation can be seen as three consecutive steps:
• Discard the sign: u ← |s|; now u has pdf g(u; σ, θ) = θ / (σ Γ(1/θ)) · exp{ −(u/σ)^θ }, 0 ≤ u < ∞, and cdf γ(u^θ/σ^θ, 1/θ) / Γ(1/θ), 0 ≤ u < ∞.
• Transform to a uniform distribution U[0, 1] by applying its own cdf: u ← γ(u^θ/σ^θ, 1/θ) / Γ(1/θ).
• Transform to a Gaussian distribution by applying the inverse cdf of N(0, 1): u ← F^{-1}(u).
Nonparametric Activation Function When the number of samples N is sufficiently large, a non-parametric activation function works more efficiently. In this approach, all the samples |Si| are sorted in ascending order. 
For each sample s, cdf(|s|) is approximated by the ratio of its rank in the sorted list to N. Then u = F^{-1}(cdf(|s|)) will approximately follow the standard normal distribution. Note that since ui depends only on the rank order of |si|, the results would be the same if the signs are discarded by taking si^2. 4 Experiments on Natural Images To test the behavior of our model, we applied it to small patches taken from digitized natural images. The image dataset is available on the World Wide Web from Bruno Olshausen 4. It contains ten 512x512 pre-whitened images. We took 151,290 evenly distributed 20x20 image patches. We ran the FastICA algorithm [15] and obtained 397 basis functions. As reported in other models, the basis functions are Gabor-like filters (Figure 4). The nonparametric method was used to transform the marginal distribution of the outputs’ absolute values to a standard normal distribution. Then the FastICA algorithm was applied again to retrieve 100 basis functions5. We adopted the visualization method employed by [12] to investigate what kind of structures the second layer units are fond of. The basis functions are fitted to Gabor filter functions using a gradient descent algorithm [12]. The connection weights from a layer-2 unit to layer-1 units are shown in Figure 5, arranged by either the center or frequency/orientation of the fitted Gabor filters. The layer-2 units are qualitatively similar to those found in [18]. Some units welcome strong activation of layer-1 units within a certain orientation range but have no preference for locations, while others have a location preference but welcome activation of layer-1 units of all frequencies and orientations, and some develop a picky appetite for both. Again, the nonparametric method was used to transform the marginal distribution of the absolute values of the outputs from the second layer to a standard normal distribution, and FastICA was applied to retrieve 20 basis functions. 
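Both activation functions are a few lines each in practice. The sketch below is our own, not the authors' code; SciPy's `gammainc` is the regularized lower incomplete gamma, i.e., γ(x, a)/Γ(a), so it implements the cdf in Eq. (5) directly, and `norm.ppf` plays the role of F^{-1}:

```python
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma
from scipy.stats import norm

def parametric_activation(s, sigma, theta):
    """Eq. (5): map s ~ generalized Gaussian(sigma, theta) to u ~ N(0, 1)
    by applying the cdf of |s| and then the inverse normal cdf."""
    cdf = gammainc(1.0 / theta, (np.abs(s) / sigma) ** theta)
    return norm.ppf(cdf)

def nonparametric_activation(S):
    """Rank-based variant: Gaussianize each row of |S| via the empirical
    cdf rank/(N + 1), which avoids cdf values of exactly 0 or 1."""
    A = np.abs(S)
    ranks = A.argsort(axis=1).argsort(axis=1) + 1   # ranks 1..N within a row
    return norm.ppf(ranks / (A.shape[1] + 1.0))
```

Since the nonparametric map depends only on the rank order of |si|, discarding the sign via |si| or via si^2 gives identical results, as noted above.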
We had no initial guess of what kind of statistical structure these third-layer units might capture. The activation map of a couple of these units, however, seemed to suggest that they might be tuned to respond to complicated textures. In particular, one unit seems more activated by seemingly blank background, while another seems to like textures of leaves (Figure 6). We think that a larger database than merely 10 images, and larger image patches, would probably be helpful for producing cleaner high-level units. The same procedure can be repeated for multiple layers. However, at this point, until we develop better methods for analyzing the representation developed by these deeply embedded units, we will leave this for future work. 4 http://redwood.berkeley.edu/bruno/sparsenet/ 5 This reduction in the number of units follows the example of [18]. In general, there appears to be less information in later layers (as assessed by eigenvalue analysis), most likely due to the discarding of the sign. Figure 4: A subset of the 397 ICA image basis functions. Each basis function is 20x20 pixels. They are 2D Gabor-like filters. Figure 5: Sample units from the second layer. The upper panel arranges the connection weights from layer-2 units to layer-1 units by the centers of the fitted Gabor filters. Every point corresponds to one basis function of the first layer, located at the center of the fitted Gabor filter. Warm colors represent strong positive connections; cold colors represent negative connections. For example, the leftmost unit prefers strong activation of layer-1 units located on the right and weak activation of layer-1 units on the left. The lower panel arranges the connection weights by the frequencies and the orientations of the fitted Gabor filters. Now every point corresponds to the Gabor filter’s frequency and orientation (in polar coordinates). 
The third leftmost unit welcomes strong activation of Gabor filters whose orientations are around 3π/4 but prefers no/little activation from those whose orientations are around π/4. 5 Discussion The key idea of our model is to transform the high-order residual redundancy into linear dependency that can be easily exploited again by the LEE model. By using activation functions that depend on the marginal distribution of the outputs, a normal Gaussian interface is provided at every layer. This procedure can then repeat itself, and a hierarchical model with the same structure at every level can thus be constructed. As the redundancy is reduced progressively along the layers, statistical structures are also captured to progressively higher orders. Our simulation of a three-layer Recursive ICA shows the effectiveness of our model. The first layer, not surprisingly, produces the Gabor-like basis functions as linear ICA always does. The second layer, however, produces basis functions that qualitatively resemble those produced by a previous hierarchical generative model [7]. This is remarkable given that our model is essentially a filtering model with no assumptions of underlying independent variables, but merely targeting redundancy reduction. The advantage of our model is the theoretical simplicity of generalization to a third layer or more. For the Karklin and Lewicki model, the assumption that the ultimate independent causal variables are two layers away from the images has to be reworked for a three-layer system. It is not clear how the variables at every layer should affect the next when an extra layer is added. Osindero et al. [8] employed an energy-based model. The energy function used at the first layer made it essentially a linear ICA algorithm, thus it also produces Gabor-like filters. The first-layer outputs are squared to discard the signs and then fed to the next layer. 
The inputs for the second layer are thus all positive and bear a very different marginal distribution from those for the first layer. The energy function is changed accordingly and the second layer is essentially doing nonnegative ICA. The output of this layer, however, will all be positive, which makes discarding the signs no longer an effective way of exposing higher-order dependence. Thus, to extend to another layer, new activation functions and a new energy function must be derived. The third layer of our model produces some interesting results in that some units seem to have preferences for complicated textures (Figure 6). However, as the statistical structure represented here must be of very high order, we are still looking for an effective visualization method. Also, as units at the second layer have larger receptive fields than those at the first layer, it is reasonable to expect the third layer to bear even larger ones. We believe that a wider range of visual structure will be picked up by the third layer units with a larger patch size on a larger training set. Figure 6: Activation maps on two images (upper and lower panel respectively) for two units per layer. The leftmost two images are the raw images. The second left column to the rightmost column are activation maps of two units from the first layer to the third respectively. The first layer units respond to small local edges, the second layer units respond to larger borders, and the third layer units seem to respond to large areas of textures. Acknowledgments We thank Eric Wiewiora, Lei Zhang and members from GURU for helpful discussions. This work was supported by NIH grant MH57075 to GWC. References [1] Anthony J. Bell and Terrence J. Sejnowski. The ‘independent components’ of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997. [2] Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. 
Nature, 381:607–609, 1996. [3] Michael S. Lewicki. Efficient coding of natural sounds. Nature Neuroscience, 5(4):356–363, 2002. [4] Odelia Schwartz and Eero P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819–825, 2001. [5] Aapo Hyvarinen and Patrik O. Hoyer. A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research, 41(18):2413– 2423, 2001. [6] Martin J. Wainwright and Eero P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In Advances in Neural Information Processing Systems, volume 12, pages 855–861, Cambridge, MA, May 2000. MIT Press. [7] Yan Karklin and Michael S. Lewicki. A hierarchical bayesian model for learning non-linear statistical regularities in non-stationary natural signals. Neural Computation, 17(2):397–423, 2005. [8] Simon Osindero, Max Welling, and Geoffrey E. Hinton. Topographic Product Models Applied to Natural Scene Statistics. Neural Computation, 18:381–414, 2005. [9] Eva M. Finney, Ione Fine, and Karen R. Dobkins. Visual stimuli activate auditory cortex in the deaf. Nature Neuroscience, 4:1171–1173, 2001. [10] Horace B. Barlow. Redundancy reduction revisited. Network: Computation in Neural Systems, 12:241–253, 2001. [11] Horace B. Barlow. Possible principles underlying the transformation of sensory messages. In Walter A. Rosenblith, editor, Sensory Communication, pages 217–234. MIT Press, Cambridge, MA, USA, 1961. [12] Michael S. Lewicki and Bruno A. Olshausen. A probabilistic framework for the adaptation and comparison of image codes. Journal of the Optical Society of America A, 16(7):1587–1601, 1999. [13] Yee Whye Teh, Max Welling, Simon Osindero, and Geoffrey E. Hinton. Energy-based models for sparse overcomplete representations. Journal of Machine Learning Research, 4:1235– 1260, 2003. [14] David Attwell and Simon B. Laughlin. An energy budget for signaling in the grey matter of the brain. 
Journal of Cerebral Blood Flow and Metabolism, 21(10):1133–1145, 2001. [15] Aapo Hyvarinen and Erkki Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7):1483–1492, 1997. [16] David J. Field. What is the goal of sensory coding? Neural Computation, 6(4):559–601, 1994. [17] Kai-Sheng Song. A globally convergent and consistent method for estimating the shape parameter of a generalized Gaussian distribution. IEEE Transactions on Information Theory, 52(2):510–527, 2006. [18] Yan Karklin and Michael S. Lewicki. Learning higher-order structures in natural images. Network: Computation in Neural Systems, 14:483–499, 2003.
|
2006
|
115
|
2,939
|
The Neurodynamics of Belief Propagation on Binary Markov Random Fields Thomas Ott Institute of Neuroinformatics ETH/UNIZH Zurich Switzerland tott@ini.phys.ethz.ch Ruedi Stoop Institute of Neuroinformatics ETH/UNIZH Zurich Switzerland ruedi@ini.phys.ethz.ch Abstract We rigorously establish a close relationship between message passing algorithms and models of neurodynamics by showing that the equations of a continuous Hopfield network can be derived from the equations of belief propagation on a binary Markov random field. As Hopfield networks are equipped with a Lyapunov function, convergence is guaranteed. As a consequence, in the limit of many weak connections per neuron, Hopfield networks exactly implement a continuous-time variant of belief propagation starting from message initialisations that prevent it from running into convergence problems. Our results lead to a better understanding of the role of message passing algorithms in real biological neural networks. 1 Introduction Real brain structures employ inference algorithms as a basis of decision making. Belief Propagation (BeP) is a popular, widely applicable inference algorithm that seems particularly suited for a neural implementation. The algorithm is based on message passing between distributed elements that resembles the signal transduction within a neural network. The analogy between BeP and neural networks is emphasised if BeP is formulated within the framework of Markov random fields (MRF). MRF are related to spin models [1] that are often used as abstract models of neural networks with symmetric synaptic weights. If a neural implementation of BeP can be realised on the basis of MRF, each neuron corresponds to a message passing element (hidden node of a MRF) and the synaptic weights reflect their pairwise dependencies. The neural activity would then encode the messages that are passed between connected nodes. 
Due to the highly recurrent nature of biological neural networks, MRF obtained in this correspondence to a neural network are naturally very “loopy”. Convergence of BeP on loopy structures is, however, a delicate matter [1]-[2] . Here, we show that BeP on binary MRF can be reformulated as continuous Hopfield networks along the lines of the sketched correspondence. More precisely, the equations of a continuous Hopfield network are derived from the equations of BeP on a binary MRF, if there are many, but weak connections per neuron. As a central result in this case, attractive fixed points of the Hopfield network provide very good approximations of BeP fixed points of the corresponding MRF. In the Hopfield case a Lyapunov function guarantees the convergence towards these fixed points. As a consequence, Hopfield networks implement BeP with guaranteed convergence. The result of the inference is directly represented by the activity of the neurons in the steady state. To illustrate this mechanism, we compare the magnetisations obtained in the original BeP framework to that from the Hopfield network framework, for a symmetric ferromagnetic model. Hopfield networks may also serve as a guideline for the implementation or the detection of BeP in more realistic, e.g., spiking, neural networks. By giving up the symmetric synaptic weights constraints, we may generalise the original BeP inference algorithm towards capturing neurally inspired message passing. 2 A Quick Review on Belief Propagation in Markov Random Fields MRF have been used to formulate inference problems, e.g. in Boltzmann machines (which actually are MRF [3]) or in the field of computer vision [4] and are related to Bayesian networks. In fact, both concepts are equivalent variants of graphical models [1]. Typically, from a given set of observations , we want to infer some hidden quantities that, in our case, take on either of the two values
. For instance, the pixel values of a grey-scaled image may be represented by , whereas a particular variable describes whether pixel belongs to an object ( ) or to the background ( ). The natural question that emerges in this context is: Given the observations , what is the probability for ? The relation between and is usually given by a joint probability, written in the factorised form ! "$# % '& (*)!+ ,( -*( # /. -*
01 (1) where the functions + describe the pairwise dependencies of the hidden variables and the functions . give the evidences from . " is the normalisation constant [1]. (1) can directly be reformulated as an Ising system with the Energy 2 03 4/5 % '& (*)76 8( 93 3 (:;5 =< 93 91 (2) where the Boltzmann distribution provides the probability 03 of a spin configuration 3 , 03 > "@?AB %DC )0E*F>G (3) A comparison with (1) yields 3 := , 6 8( 03 - 3 (IHJKKLDM + 8( *-N( and < 03 9*HJKKLDM . **
0 . In many cases, it is reasonable to assume that 6 ,( 03 - 3 (O 6 8( 3 3 (P 6 (* 3 3 ( and that < 93 QR < 3 , where 6 8( and < are real-valued constants, so that (2) transforms into the familiar Ising Hamiltonian [5]. For convenience, we set JS . The inference task inherent to MRF amounts to extracting marginal probabilities 4 5 TU & V
W X ! G (4) An exact evaluation of according to Eq. (4) is generally very time-consuming. BeP provides us with approximated marginals within a reasonable time. This approach is based on the idea that connected elements (where a connection is given by 6 ,(ZY \[ ) interchange messages that contain a recommendation about what state the other elements should be in [1]. Given the set of messages ]_^ ,( (!` at time a , the messages at time ab are determined by ] ^'cd 8( ( > 5 Te . * + 8( - ( # Vf
g % D)ihj( ] ^ Vk G (5) Here, ]l,( denotes the message sent from the hidden variable (or node) to node m . n *o m denotes the set of all neighbouring nodes of without m . Usually, the messages are normalised at every time step, i.e., ]_^ ,( b ]_^ ,( pq . After (5) has converged, the marginals are approximated by the so called beliefs r that are calculated according to r 4=s . * # (1f
g % t) ] (* 1 (6) where s is a normalisation constant. In particular in connection with Ising systems, one is primarily interested in the quantity ] r r , the so-called local magnetisation. For a detailed introduction of BeP on MRF we refer to [1]. 3 BeP and the Neurodynamics of Hopfield Networks The goal of this section is to establish the relationship between the update rule (5) and the dynamical equation of a continuous Hopfield network, uv a u a w v a byx{z 5 V}| V1 v V a ~ b a G (7) Here v is some quantity describing the activity of neuron (e.g., the membrane potential) and x is the activation function, typically implemented in a sigmoid form, such as x =IM
The $w_{ij} = w_{ji}$ are the connection (synaptic) weights, which need to be symmetric in the Hopfield model. The connectivity might be all-to-all or sparse. $I_i(t)$ is an external signal or bias (see, e.g., [6] for a general introduction to Hopfield networks). According to the sketched picture, each neuron represents a node $i$, whereas the messages are encoded in the variables $u_i$ and $w_{ij}$. The exact nature of this encoding will be worked out below. The Hopfield architecture implements the point attractor paradigm, i.e., by means of the dynamics the network is driven into a fixed point. At the fixed point, the beliefs $b_i$ can be read out. In the MRF picture, this corresponds to (5) and (6). We will now realise the translation from MRF into Hopfield networks as follows: (1) Reduction of the number of messages per connection from $m_{ij}$ and $m_{ji}$ to one reparameterised variable $\nu_{ij}$. (2) Translation into a continuous system. (3) Translation of the obtained equations into the equations of a Hopfield network, where we find the encoding of the variables $\nu_{ij}$ in terms of $u_i$ and $w_{ij}$. This will establish the exact relationship between Hopfield and BeP.

3.1 Reparametrisation of the messages

In the case of binary variables $x_i = \pm 1$, the messages $m_{ij}(x_j)$ can be reparameterised [2] according to

$\nu_{ij} = \tanh^{-1}\big(m_{ij}(+1) - m_{ij}(-1)\big).$  (8)

By this, the update rules (5) transform into update rules for the new “messages” $\nu_{ij}$:

$\nu^{t+1}_{ij} = f_{ij}(\nu^t) = \tanh^{-1}\Big[\tanh(J_{ij})\, \tanh\Big(\sum_{k \in N(i) \setminus j} \nu^t_{ki} + h_i\Big)\Big].$  (9)

For each connection $(ij)$ we obtain one single message $\nu_{ij}$. We can now directly calculate the local magnetisation according to $m_i = \tanh\big(\sum_{k \in N(i)} \nu_{ki} + h_i\big)$ [2]. The Jacobian of (9) in a point $\nu$ is denoted by $f'(\nu) = \big(\partial f_{ij}(\nu)/\partial \nu_{kl}\big)$. The used reparametrisation translates the update rules into an additive form (“log domain”), which is a basic assumption of most models of neural networks.

3.2 Translation into a time-continuous system

Eq. (9) can be translated into the equivalent time-continuous system

$\frac{d\nu_{ij}(t)}{dt} = F_{ij}(\nu(t)) = -\nu_{ij}(t) + \tanh^{-1}\Big[\tanh(J_{ij})\, \tanh\Big(\sum_{k \in N(i) \setminus j} \nu_{ki}(t) + h_i(t)\Big)\Big],$  (10)

where $h_i(t) \equiv h_i$ is time-independent. The corresponding Jacobian in a point $\nu$ is denoted by $F'(\nu) = -\mathrm{Id} + f'(\nu)$, where $\mathrm{Id}$ is the identity matrix whose dimension equals the number of messages $\nu_{ij}$. Obviously, (9) and (10) have the same fixed points $\nu^*$, which are given by

$\nu^*_{ij} = \tanh^{-1}\Big[\tanh(J_{ij})\, \tanh\Big(\sum_{k \in N(i) \setminus j} \nu^*_{ki} + h_i\Big)\Big],$  (11)

with identical stability properties in both frameworks: For stability of (9) it is required that the real part of the largest eigenvalue of the Jacobian $f'(\nu^*)$ be smaller than 1, whereas for the stability of (10) the condition is that the real part of the largest eigenvalue of $F'(\nu^*) = -\mathrm{Id} + f'(\nu^*)$ must be smaller than 0. It is obvious that both conditions are identically satisfied.

3.3 Translation into a Hopfield network

The comparison between Eq. (7) and Eq. (10) does not lead to a direct identification of $u_i$ with $\nu_{ij}$. Rather, under certain conditions, we can identify $\nu_{ij}$ with $w_{ij} u_i$. That is, a message corresponds to the presynaptic neural activity weighted by the synaptic strength. Formally, we may define variables $u^{(j)}_i$ by $\nu_{ij} = w_{ij}\, u^{(j)}_i$ and rewrite Eq. (10) in terms of these variables.
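The scalar update rule (9) together with the read-out $m_i = \tanh(\sum_k \nu_{ki} + h_i)$ can be sketched directly; the small chain and its parameters below are invented, and on a tree the resulting magnetisations agree with brute-force enumeration.

```python
import itertools, math

# Toy binary MRF (values invented): 3-node chain, couplings J_ij, fields h_i, T = 1.
J = {(0, 1): 0.4, (1, 2): -0.3}
h = [0.2, 0.0, -0.1]
Jsym = {**J, **{(j, i): v for (i, j), v in J.items()}}
nbr = {i: [j for (a, j) in Jsym if a == i] for i in range(3)}

# One scalar message nu_ij per directed connection; update rule (9):
#   nu_ij <- atanh( tanh(J_ij) * tanh( sum_{k in N(i)\j} nu_ki + h_i ) )
nu = {e: 0.0 for e in Jsym}
for _ in range(50):
    nu = {(i, j): math.atanh(math.tanh(Jsym[(i, j)]) *
                             math.tanh(sum(nu[(k, i)] for k in nbr[i] if k != j) + h[i]))
          for (i, j) in Jsym}

# Read-out: local magnetisation m_i = tanh( sum_{k in N(i)} nu_ki + h_i ).
mag = [math.tanh(sum(nu[(k, i)] for k in nbr[i]) + h[i]) for i in range(3)]

# Exact magnetisation by enumerating all spin configurations.
def exact_mag(i):
    num = den = 0.0
    for x in itertools.product((+1, -1), repeat=3):
        w = math.exp(sum(v * x[a] * x[b] for (a, b), v in J.items()) +
                     sum(h[k] * x[k] for k in range(3)))
        num += x[i] * w
        den += w
    return num / den
```

Note the argument of `atanh` is always inside $(-1, 1)$ here, since it is a product of two `tanh` values.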
This yields

$\frac{d}{dt}\big(w_{ij}\, u^{(j)}_i\big) = -w_{ij}\, u^{(j)}_i + \tanh^{-1}\Big[w_{ij} \tanh\Big(\sum_{k \in N(i)} w_{ki}\, u^{(i)}_k - w_{ji}\, u^{(i)}_j + h(t)\Big)\Big],$  (12)

where we set $w_{ij} = \tanh(J_{ij})$.¹ In the following, we assume that the synaptic weights $w_{ij}$ are relatively small, i.e., $|w_{ij}| \ll 1$. Hence $\tanh^{-1}(x)$ can be approximated by $\tanh^{-1}(x) \approx x$. Moreover, if a neuron receives many inputs (number of connections $q \gg 1$), then the single contribution $w_{ji}\, u^{(i)}_j$ can be neglected. Thus (12) simplifies to

$\frac{d}{dt}\big(w_{ij}\, u^{(j)}_i\big) = -w_{ij}\, u^{(j)}_i + w_{ij} \tanh\Big(\sum_{k \in N(i)} w_{ki}\, u_k + h(t)\Big).$  (13)

Upon a division by $w_{ij}$, we arrive at the equation

$\frac{du^{(j)}_i}{dt} = -u^{(j)}_i + \tanh\Big(\sum_{k \in N(i)} w_{ki}\, u_k + h(t)\Big),$  (14)

which for a uniform initialisation $u^{(j_1)}_i(0) = u^{(j_2)}_i(0) = \ldots$ for all $i$ preserves this uniformity through time, i.e., $u^{(j_1)}_i(t) = u^{(j_2)}_i(t) = \ldots$ In other words, the subset defined by $u^{(j_1)}_i = u^{(j_2)}_i = \ldots$ is invariant under the dynamics of (14). For such an initialisation we can therefore replace, for all $t$, the $u^{(j)}_i$ by a single variable $u_i$, which leads to the equation

$\frac{du_i}{dt} = -u_i + \tanh\Big(\sum_{k \in N(i)} w_{ki}\, u_k + h(t)\Big).$  (15)

With the external signal $I_i(t)$ identified with $h(t)$, we end up with the postulated equation (7). After the convergence to an attractor fixed point, the local magnetisation is simply the activity $u_i$. This is because the fixed point and the read-out equations collapse under the approximation $\tanh^{-1}\big[w_{ij} \tanh\big(\sum_k w_{ki} u_k + h\big)\big] \approx w_{ij} \tanh\big(\sum_k w_{ki} u_k + h\big)$, i.e., $u_i(t \to \infty) = m_i$. In summary, we can emulate the original BeP procedure by a continuous Hopfield network provided that (I) the single weights $w_{ij}$ and the external fields $h_i(t)$ are relatively weak, (II) each neuron receives many inputs, and (III) the original messages have been initialised according to $u_{k_1}(0) = \nu_{k_1 i}/w_{k_1 i},\; u_{k_2}(0) = \nu_{k_2 i}/w_{k_2 i},\; \ldots$, i.e., uniformly across the connections of each node. From a biological point of view, the first two points seem reasonable. The effect of a single synapse is typically small compared to the totality of the numerous synaptic inputs of a cell [7]-[8]. In this sense, single weights are considered weak. In order to establish a firm biological correspondence, particular consideration will be required for the last point. In the next section, we show that Hopfield networks are guaranteed to converge and thus, the required initialisation can be considered a natural choice for BeP on MRF with the properties (I) and (II).

3.4 Guarantee of convergence

A basic Hopfield model of the form

$\frac{du_i(t)}{dt} = -u_i(t) + \sum_j w_{ij}\, g(u_j(t)) + I_i,$  (16)

with $g = \tanh$, has the same attractor structure as the model (7) described above (see [6] and references therein). For the former model, an explicit Lyapunov function has been constructed [9], which assures that these networks, and with them the networks considered by us, are globally asymptotically stable [6]. Moreover, the time-continuous model (7) can be translated back into a time-discrete model.

¹ Hence the synaptic weights $w_{ij}$ are automatically restricted to the interval $(-1, 1)$.

Figure 1: The magnetisation $m$ as a function of $T$ (panel a) and of $w$ (panel b) for the symmetric ferromagnetic model. The results for the original BeP (grey stars) and for the Hopfield network (black circles) are compared.
The discrete dynamics reads

$u_i(t+1) = \tanh\Big(\sum_j w_{ij}\, u_j(t) + I_i(t)\Big).$  (17)

This equation is the proper analogue of Eq. (9).

4 Results for the Ferromagnetic Model

In this section, we evaluate the Hopfield-based inference solution $m_i = u_i(t \to \infty)$ for networks with a simple connectivity structure: We assume constant positive synaptic weights $w = w_{ij}$ (ferromagnetic couplings) and a constant number of connections per neuron $q$. We furthermore abstain from an external field and set $I = 0$. To realise this symmetric model, we may either think of an infinitely extended network or of a network with some spatial periodicity, e.g., a network on a torus. According to the last section, $w$ is related to $J$ in a spin model via $w = \tanh(J) = \tanh(1/T)$, where, for convenience, we reintroduced a quasi-temperature $T$ as a scaling parameter. From Eq. (7), it is clear that $u^* = (u^{fp}, u^{fp}, \ldots, u^{fp})$ is a fixed point of the system if

$u^{fp} = \tanh(q\, w\, u^{fp}).$

This equation always has the solution $u^{fp} = u_0 = 0$. However, the stability of $u_0$ is restricted to $T > T^{Hop}_{crit}$, where the bifurcation point is given by

$T^{Hop}_{crit} = \big[\tanh^{-1}(1/q)\big]^{-1}.$  (18)

This follows from the critical condition $q\, w = q \tanh(1/T) = 1$. For $T < T^{Hop}_{crit}$, two additional and stable fixed points $u_\pm$ emerge, which are symmetric with respect to the origin. After the convergence to a stable fixed point, $u_\pm$ for $T < T^{Hop}_{crit}$ and $u_0$ for $T > T^{Hop}_{crit}$, the obtained magnetisation $m = \tanh(q\, w\, u^{fp})$, equal to $u^{fp}$, is shown in dependence of $T$ in Fig. 1a (black circles), for $q = 20$. The critical point is found at a temperature $T^{Hop}_{crit} = [\tanh^{-1}(1/20)]^{-1} \approx 19.98$. The result is compared to the result obtained on the basis of the original BeP equations (5) (grey stars in Fig. 1a). We see that the critical point is slightly lower in the original BeP case. This can be understood from Eq. (9), for which the point given by the messages $\nu = (0, 0, \ldots, 0)$ loses stability at the critical temperature

$T^{BeP}_{crit} = \big[\tanh^{-1}\big(1/(q-1)\big)\big]^{-1}.$  (19)

For the value $q = 20$, this yields $T^{BeP}_{crit} \approx 18.98$. $T^{BeP}_{crit}$ is in fact the critical temperature for Ising grids obtained in the Bethe-Peierls approximation (for $q = 4$, we get $T^{BeP}_{crit} = 2.88539$ [5]). In this way, we casually come across the deep relationship of BeP and Bethe-Peierls, which has been established by the theorem stating that stable BeP fixed points are local minima of the Bethe free energy functional [1],[10]. In the limit of small weights, i.e. large $T$, the results for Hopfield nets and BeP must be identical. This, in fact, is certainly true for $T > T^{Hop}_{crit}$, where $m = 0$ in both cases. For very large weights, i.e., small $T$, the results are also identical in the case of the ferromagnetic couplings studied here, as $m \to 1$. It is only around the critical values where the two results seem to differ. A comparison of the results against the synaptic weight $w$ (Fig. 1b), however, shows an almost perfect agreement for all $w$. The differences can be made arbitrarily small for larger $q$.

5 Discussion and Outlook

In this report, we outlined the general structural affinity between belief propagation on binary Markov random fields and continuous Hopfield networks. According to this analogy, synaptic weights correspond to the pairwise dependencies in the MRF and the neuronal signal transduction corresponds to the message exchange. In the limit of many synaptic connections per neuron, but comparatively small individual synaptic weights, the dynamics of the Hopfield network is an exact mirror of the BeP dynamics in its time-continuous form. To achieve the agreement, the choice of initial messages needs to be confined. From this we can conclude that Hopfield network attractors are also BeP attractors (whereas the opposite does not necessarily hold). Unlike BeP, Hopfield networks are guaranteed to converge to a fixed point. We may thus argue that Hopfield networks naturally implement useful message initialisations that prevent trapping into a limit cycle. As a further benefit, the local magnetisations, as the result of the inference process, are just reflected in the asymptotic neural activity.
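The bifurcation predicted by (18) can be checked numerically by iterating the time-discrete dynamics (17) for the symmetric ferromagnet, where the update collapses to a single scalar equation $u \leftarrow \tanh(q\,w\,u)$ with $w = \tanh(1/T)$. The value $q = 20$ matches Fig. 1a; the step count and test temperatures are chosen for illustration.

```python
import math

q = 20                                    # connections per neuron (as in Fig. 1a)
T_crit = 1.0 / math.atanh(1.0 / q)        # Eq. (18): approx. 19.98 for q = 20

def magnetisation(T, steps=2000, u0=1e-3):
    # Symmetric ferromagnet: every neuron sees q identical inputs, so the
    # discrete Hopfield update (17) collapses to u <- tanh(q * w * u).
    w = math.tanh(1.0 / T)
    u = u0
    for _ in range(steps):
        u = math.tanh(q * w * u)
    return u

m_below = magnetisation(0.8 * T_crit)     # ordered phase: |m| > 0
m_above = magnetisation(1.2 * T_crit)     # disordered phase: m ~ 0
```

Below the critical temperature the iteration escapes the unstable fixed point $u_0 = 0$; above it, any small perturbation decays back to zero.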
The binary basis of the implementation is not necessarily a drawback, but could simply reflect the fact that many decisions have a yes-or-no character. Our work so far is preliminary in character. The Hopfield network model is still a crude simplification of biological neural networks, and the relevance of our results for such real-world structures remains somewhat open. However, the search for a possible neural implementation of BeP is appealing, and different concepts have already been outlined [11]. This approach shares our guiding idea that the neural activity should directly be interpreted as a message passing process. Whereas our approach is a mathematically rigorous intermediate step towards more realistic models, the approach chosen in [11] tries to directly implement BeP with spiking neurons. In accordance with the guiding idea, our future work will comprise three major steps. First, we take the step from Hopfield networks to networks with spiking elements. Here, the question is to what extent the concepts of message passing can be adapted or reinterpreted so that a BeP implementation is possible. Second, we will give up the artificial requirement of symmetric synaptic weights. To do this, we might have to modify the original BeP concept, while we still may want to stick to the message passing idea. After all, there is no obvious reason why the brain should implement exactly the BeP algorithm. It rather seems plausible that the brain employs inference algorithms that might be conceptually close to BeP. Third, the context and the tasks for which such algorithms can actually be used must be elaborated. Furthermore, we need to explore how the underlying structure could actually be learnt by a neural system. Message passing-based inference algorithms offer an attractive alternative to traditional notions of computation inspired by computer science, paving the way towards a more profound understanding of natural computation [12].
To judge its eligibility there is, ultimately, one question: How can the usefulness (or inappropriateness) of the message passing concept in connection with biological networks be verified or challenged experimentally?

Acknowledgements This research has been supported by a ZNZ grant (Neuroscience Center Zurich).

References
[1] Yedidia, J.S., Freeman, W.T., Weiss, Y. (2003) Understanding belief propagation and its generalizations. In G. Lakemeyer and B. Nebel (eds.) Exploring Artificial Intelligence in the New Millennium, Morgan Kaufmann, San Francisco.
[2] Mooij, J.M., Kappen, H.J. (2005) On the properties of the Bethe approximation and loopy belief propagation on binary networks. J. Stat. Mech., doi:10.1088/1742-5468/2005/11/P11012.
[3] Welling, M., Teh, Y.W. (2003) Approximate inference in Boltzmann machines. Artificial Intelligence 143:19-50.
[4] Geman, S., Geman, D. (1984) Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE-PAMI 6(6):721-741.
[5] Huang, K. (1987) Statistical mechanics. Second edition, John Wiley & Sons, New York, Chapter 13.
[6] Haykin, S. (1999) Neural networks - a comprehensive foundation. Second edition, Prentice-Hall, Inc., Chapter 14.
[7] Koch, C. (1999) Biophysics of computation. Oxford University Press, Inc., New York.
[8] Douglas, R.J., Mahowald, M., Martin, K.A.C., Stratford, K.J. (1996) The role of synapses in cortical computation. Journal of Neurocytology 25:893-911.
[9] Hopfield, J.J. (1984) Neurons with graded response have collective computational properties like those of two-state neurons. PNAS 81:3088-3092.
[10] Heskes, T. (2004) On the uniqueness of loopy belief propagation fixed points. Neural Comput. 16:2379-2413.
[11] Shon, A.P., Rao, R.P.N. (2005) Implementing belief propagation in neural circuits. Neurocomputing 65-66:877-884.
[12] Stoop, R., Stoop, N. (2004) Natural computation measured as a reduction of complexity. Chaos 14(3):675-679.
|
2006
|
116
|
2,940
|
Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing Long (Leo) Zhu Department of Statistics University of California at Los Angeles Los Angeles, CA 90095 lzhu@stat.ucla.edu Yuanhao Chen Department of Automation University of Science and Technology of China Hefei, Anhui 230026 P.R.China yhchen4@ustc.edu Alan Yuille Department of Statistics University of California at Los Angeles Los Angeles, CA 90095 yuille@stat.ucla.edu Abstract We describe an unsupervised method for learning a probabilistic grammar of an object from a set of training examples. Our approach is invariant to the scale and rotation of the objects. We illustrate our approach using thirteen objects from the Caltech 101 database. In addition, we learn the model of a hybrid object class where we do not know the specific object or its position, scale or pose. This is illustrated by learning a hybrid class consisting of faces, motorbikes, and airplanes. The individual objects can be recovered as different aspects of the grammar for the object class. In all cases, we validate our results by learning the probability grammars from training datasets and evaluating them on the test datasets. We compare our method to alternative approaches. The advantages of our approach are the speed of inference (under one second), the parsing of the object, and increased accuracy of performance. Moreover, our approach is very general and can be applied to a large range of objects and structures. 1 Introduction Remarkable progress in the mathematics and computer science of probability is leading to a revolution in the scope of probabilistic models. In particular, there are exciting new probability models [1, 3, 4, 5, 6, 11] defined on structured relational systems such as graphs or grammars. These new formulations subsume more traditional models, such as Markov Random Fields (MRF’s) [2], and have growing applications to natural languages, machine learning, and computer vision.
Although these models have enormous representational power, there are many practical drawbacks which must be overcome before using them. In particular, we need efficient algorithms to learn the models from training data and to perform inference on new examples. This problem is particularly difficult when the structure of the representation is unknown and needs to be induced from the data. In this paper we develop an algorithm called “structure induction” (or “structure pursuit”) which we use to learn the probability model in an unsupervised manner from a set of training data. This algorithm proceeds by building an AND-OR graph [5] in an iterative way. The form of the resulting graph structure ensures that inference can be performed rapidly for new data. Chair 90.9% Cougar 90.9% Piano 96.3% Scissors 94.9% Panda 90.0% Rooster 92.1% Stapler 90.5% Wheelchair 92.4% Windsor Chair 92.4% Wrench 84.6% Figure 1: We have learnt probability grammars for these ten objects in the Caltech 101 database, obtaining scores over 90% for most objects. A score of 90.0% means that we have a detection rate of 90% and a false positive rate of 10% (10% = (100 − 90)%). The numbers of data examples are 62, 69, 90, 39, 38, 49, 45, 59, 56, 39, ordered left-to-right and top-to-bottom. Our application is to the detection, recognition, and parsing of objects in images. The training data consists of a set of images where the target object is present but at an unknown location. This topic has been much studied [16] (see the technical report, Zhu, Chen and Yuille 2006, for additional references). Our approach has the following four properties. Firstly, a wide range of applicability, which we demonstrate by learning models for 13 object categories from the Caltech-101 [16], Figures (1, 5). Secondly, the approach is invariant to rotation and to a large range of scales of the objects.
Thirdly, the approach is able to deal with object classes, which we illustrate by learning a hybrid class consisting of faces, motorbikes and airplanes. Fourthly, the inference is performed rapidly in under a second. 2 Background 2.1 Representation, Inference and Learning Structured models define a probability distribution on structured relational systems such as graphs or grammars. This includes many standard models of probability distributions defined on graphs – for example, graphs with fixed structure, such as MRF’s [2] or Conditional Random Fields [3], or Probabilistic Context Free Grammars (PCFG’s) [4] where the structure is variable. Attempts have been made to unify these approaches under a common formulation. For example, Case-Factor Diagrams [1] have recently been proposed as a framework which subsumes both MRF’s and PCFG’s. In this paper, we will be concerned with models that combine probabilistic grammars with MRF’s. The grammars are based on AND-OR graphs [1, 5, 6], which relate to mixtures of trees [7]. This merging of MRF’s with probabilistic grammars results in structured models which have great representational power. There has been considerable interest in inference algorithms for these structured models; for example, McAllester et al. [1] describe how dynamic programming algorithms (e.g. Viterbi and inside-outside) can be used to rapidly compute properties of interest. Our paper is concerned with the task of unsupervised learning of structured models for applications to detecting, recognizing, and representing visual objects. In this paper, we restrict ourselves to a special case of Probabilistic Grammars with OR nodes, and MRF’s. This is simpler than the full cases studied by McAllester but is more complex than the MRF models standardly used for this problem. For MRF models, the number of graph nodes is fixed and structure induction consists of determining the connections between the nodes and the forms of the corresponding potentials.
For these graphs, an effective strategy is feature induction [8], which is also known as feature pursuit [9]. A similar strategy is also used to learn CRF’s [10]. In both cases, the learning is fully supervised. For Bayesian networks, there is work on learning the structure using the EM algorithm [12]. Learning the structure of grammars in an unsupervised way is more difficult. Klein and Manning [4] have developed unsupervised learning of PCFG’s for parsing natural language, but here the structure of the grammar is specified. Zettlemoyer and Collins [11] perform similar work based on lexical learning with a lambda-calculus language. In short, to our knowledge, there is no unsupervised learning algorithm for structure induction for a Probabilistic Grammar-MRF model. Moreover, our vision application requires the ability to learn the model of the target object in the presence of unknown background structure. Methods exist in the computer vision literature for achieving this for an MRF model [16], but not for Probabilistic Grammars. 2.2 Our Model: High-Level Description Figure 2: Graphical Models. In this paper, we consider a combination of PCFG and MRF. The leaf nodes of the graph will be image features that are described by MRF’s. Instead of using the full PCFG, we restrict the grammar to containing one OR-node. Our model contains a restricted set of grammatical rules, see figure (2). The top, triangular node is an OR node. It can have an arbitrary number of child nodes. The simplest type of child node is a histogram model (far left panel of figure (2)). We can obtain more complex models by adding MRF models in the form of triples, see figure (2) left to right. Combinations of triples can be expressed in a junction tree representation, see the sixth and seventh panels of figure (2). This representation enables rapid inference. The computational complexity of inference is bounded by the width and height of the subtrees.
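The claim that inference cost is bounded by the width and height of the subtrees can be illustrated with a small max-sum (Viterbi-style) pass over a toy rooted tree; the tree shape, state count and potentials are all invented for the example.

```python
import itertools, random
from functools import lru_cache

random.seed(0)
K = 4                                          # states per node (invented)
children = {0: [1, 2], 1: [3], 2: [], 3: []}   # toy rooted tree (invented)
unary = {v: [random.random() for _ in range(K)] for v in children}
pair = {(p, c): [[random.random() for _ in range(K)] for _ in range(K)]
        for p in children for c in children[p]}

@lru_cache(maxsize=None)
def best(v, xv):
    # Best total score of the subtree rooted at v, given that v is in state xv.
    # With memoisation, each edge is processed once at O(K^2) cost, so the work
    # scales with the tree's size, not with the K**n joint state space.
    return unary[v][xv] + sum(
        max(pair[(v, c)][xv][xc] + best(c, xc) for xc in range(K))
        for c in children[v])

dp_best = max(best(0, x) for x in range(K))

# Brute force over all K^4 joint assignments, for comparison.
def joint(xs):
    return (sum(unary[v][xs[v]] for v in children) +
            sum(pair[(p, c)][xs[p]][xs[c]] for (p, c) in pair))

brute_best = max(joint(xs) for xs in itertools.product(range(K), repeat=4))
```

The same dynamic-programming idea underlies junction-tree inference on the triplet models, with cliques playing the role of the nodes here.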
In more abstract terms, we define a set of rules R(x, y) for allowable parses of input x to a parse tree y. These rules have potentials φ(x, r, t) for a production rule r ∈ R(x, y) and ψ(x, wM, t) for the MRF models (see details in the technical report), where t are nuisance parameters (e.g. geometric transformations and missing data) and w = (wG, wM) are model parameters. The wG are the grammar parameters and the wM are the MRF parameters. We define a set W of model parameters that are allowed to be non-zero (w = 0 if w ∉ W). The structure of the model is determined by the set W. The model is defined by: P(x, y, w, t) = P(t)P(w)P(y)P(x|y, w, t), (1) where P(x|y, w, t) = (1/Z) exp( Σ_{r∈R(x,y)} wG·φ(x, r, t) + Σ_{MRF} Ψ_MRF(x, t, wM) ), (2) where MRF denotes the cliques of the MRF. Z is the normalization constant. We now face three tasks: (I) structure learning, (II) parameter learning to estimate w, and (III) inference to estimate y. Inference requires estimating the parse tree y from input x. The model parameters w are fixed. The nuisance parameters are integrated out. This requires solving y* = arg max_y Σ_t P(y, t|x, w) by the EM algorithm, using dynamic programming to estimate y* efficiently. During the E step, we approximate the sum over t by a saddle point approximation. For parameter learning, we specify a set W of parameters w which we estimate by MAP (the other w’s are constrained to be zero). Hence we estimate w* = arg max_{w∈W} Σ_{y,t} P(w, t, y|x). This is performed by an EM algorithm, where the summation over y can be performed by dynamic programming; the summation over t is again performed by a saddle point. The w can be calculated by sufficient statistics. Structure Learning corresponds to increasing the set of parameters w that can be non-zero. For each structure we define a score given by its fit to the data. Formally we extend W to W′ where W ⊂ W′. (The allowed extensions are defined in the next section).
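The Gibbs form of Eq. (2) can be made concrete on a toy discrete space; the feature map and weights below are invented for illustration, and the normalisation constant Z is computed by brute-force enumeration (which is exactly what the EM/dynamic-programming machinery avoids at scale).

```python
import itertools, math

# Toy instance of the Gibbs form in Eq. (2): P(x) proportional to exp(w . phi(x))
# over binary triples x. Feature map and weights are invented for illustration.
w = [0.5, -0.2, 0.3]

def phi(x):
    # Two "grammar-like" unary features plus one "MRF-like" pairwise feature.
    return [x[0], x[2], x[0] * x[1] + x[1] * x[2]]

def log_score(x):
    return sum(wi * fi for wi, fi in zip(w, phi(x)))

space = list(itertools.product((0, 1), repeat=3))
Z = sum(math.exp(log_score(x)) for x in space)     # normalisation constant
P = {x: math.exp(log_score(x)) / Z for x in space}
map_x = max(space, key=log_score)                  # MAP state = highest log-score
```

Setting a weight to zero removes its feature from the model, which is the sense in which the set W of non-zero parameters encodes the structure.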
We now compute P(x|w ∈ W) = Σ_{w∈W,t,y} P(x, y, t|w) and P(x|w ∈ W′) = Σ_{w∈W′,t,y} P(x, y, t|w). This requires EM, dynamic programming, and a saddle point approximation. We refer to the model fits, P(x|w ∈ W) and P(x|w ∈ W′), as the scores for structure W and W′ respectively. 3 Brief Details of Our Model We now give a brief description of our model. A detailed description is given in our technical report (Zhu, Chen, and Yuille 2006). Figure 3: Triplets without Orientation (left two panels). Triplets with Orientation (right two panels). 3.1 The setup of the Model We represent the images by features {xi : i = 1, .., N(τ)}, where N(τ) is the number of features in image τ. Each feature is represented by a pair xi = (zi, Ai), where zi is the location of the feature in the image and Ai is an appearance vector. The image features are detected by the Kadir-Brady operator [13], and their appearance is calculated by the SIFT operator [14]. These operators ensure that the features are invariant to scale, rotation, and some appearance variations. The default background model for the image is to define a histogram model over the positions and appearance of the image features, see the first panel of figure (2). Next we use triples of image features as the basic building blocks to construct a model. Our model will be constructed by adding new triplets to the existing model, as shown in the first few panels of figure (2). Each triplet will be represented by a triplet model which is given by Gaussian distributions on spatial position and on appearance, P(x⃗ | M⃗ = 1⃗, T) = G(z⃗ | T(µG, ΣG)) G(A⃗ | µA, ΣA), where µG, µA, ΣG, ΣA are the means and covariances of the positions and appearances. The {Mi} are missing data index variables [15], and T denotes transformations due to rotation and scaling. The major advantage of using triplets is that they have geometrical properties which are independent of the scale and rotation of the triplet.
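The rotation- and scale-invariance of triplet geometry is easy to verify numerically: the interior angles of the triangle spanned by the three feature positions are unchanged under any similarity transform. The coordinates, rotation angle and scale factor below are invented for illustration.

```python
import math, random

random.seed(1)
# Toy triplet of feature positions (coordinates invented for illustration).
tri = [(0.0, 0.0), (2.0, 0.5), (1.0, 2.0)]

def angles(pts):
    # Interior angles of the triangle formed by the three feature positions.
    out = []
    for i in range(3):
        a, b, c = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n = math.hypot(*v1) * math.hypot(*v2)
        out.append(math.acos(dot / n))
    return out

def transform(pts, theta, s):
    # Similarity transform: rotate by theta, scale by s.
    ct, st = math.cos(theta), math.sin(theta)
    return [(s * (ct * x - st * y), s * (st * x + ct * y)) for (x, y) in pts]

orig = angles(tri)
moved = angles(transform(tri, theta=random.uniform(0, 2 * math.pi), s=1.7))
```

Because the angles are invariant, they can index a triplet vocabulary without knowing the object's pose in advance.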
These properties include the angles between the vertices, see figure (3). Thus we can decompose the representation of the triplet into two types of properties: (i) those which are independent of scale and rotation, (ii) those that depend explicitly on scale and rotation. By using the invariant properties, we can perform rapid search over triplets when position, scale, and rotation are unknown. In addition, two triplets can be easily combined by a common edge to form a more complex model – see sixth panel of figure (2). This representation is suitable for the junction tree algorithm [2], which enables rapid inference. For structure learning, we face the task of how to expand the set W of non-zero parameters to a new set W ′. The problem is that there are many ways to expand the set, and it is computationally impossible to evaluate all of them. Our strategy is to use a clustering method, see below, to make proposals for expanding the structure. These proposals are then evaluated by model selection. Our clustering method exploits the invariance properties of triplets. We perform clustering on both the appearance and on the geometrical invariants of the triplets. This gives rise to a triplet vocabulary consisting of triplets that frequently occur in the dataset. These are used to make proposals for which triplets to include in the model, and hence for how to expand the set W of non-zero parameters. Input: Training Image τ = 1, .., M and the triplet vocabulary {Ta : a ∈Ω}. Initialize G to be the root node with the background model, and let G∗= G. 
Algorithm for Structure Induction:
• STEP 1:
  – OR-NODE EXTENSION
    For T ∈ {Ta : a ∈ Ω}
      * G′ = G ∪ T (ORing)
      * Update parameters of G′ by the EM algorithm
      * If Score(G′) > Score(G∗) Then G∗ = G′
  – AND-NODE EXTENSION
    For Image τ = 1, .., M
      * P = the highest-probability parse for Image τ by G
      * For each Triple T in Image τ, if T ∩ P ≠ ∅
        · G′ = G ∪ T (ANDing)
        · Update parameters of G′ by the EM algorithm
        · If Score(G′) > Score(G∗) Then G∗ = G′
• STEP 2: G = G∗. Go to STEP 1 until Score(G) − Score(G∗) < Threshold
Output: G
Figure 4: Structure Induction Algorithm
3.2 Structure Induction: Learning the Probabilistic Grammar MRF We now have the necessary background to describe our structure induction algorithm. The full procedure is described in the pseudo-code in figure (4). Figure (2) shows an example of the structure being induced sequentially. Initially we assume that all the data is generated by the background model. In the terminology of section (2.2), this is equivalent to setting all of the model parameters w to be zero (except those for the background model). We can estimate the parameters of this model and score the model as described in section (2.2). Next we seek to expand the structure of this model. To do this, we use the triplet vocabularies to make proposals. Since the current model is the background model, the only structure change allowed is to add a triplet model as one child of the root node (i.e. to create the background plus triplet model described in the previous section, see figure (2)). We consider all members of the triplet vocabulary as candidates, using their cluster means and covariances as prior probabilities on their geometry and attribute properties. Then, for all these triples, we construct the background plus triplet model, estimate their parameters and score them. We accept the one with the highest score as the new structure. As the graph structure grows, we now have more ways to expand the graph. We can add a new triplet as a child of the root node.
This proceeds as in the previous paragraph. Or we can take two members of an existing triplet, and use them to construct a new triplet. In this case, we first parse the data using the current model. Then we use the triplet vocabulary to propose possible triplets, which partially overlap with the current model (and give them prior probabilities on their parameters as before). Then, for all possible extensions, we use the methods in section (2.2) to score the models. We select the one with highest score as the new graph model. If the score increase is not sufficient, we cease building the graph model. See the structured models in figure (5).
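The greedy loop of Figure 4 can be sketched abstractly as score-driven structure pursuit; the candidate set, score function and threshold below are toy stand-ins (invented), not the paper's actual EM-based model-selection score.

```python
# Greedy structure pursuit in the style of Figure 4: repeatedly try extensions,
# keep the best-scoring one, and stop when the improvement falls below a threshold.

def structure_pursuit(candidates, score, threshold=1e-3):
    model = frozenset()                    # start from the background model
    best = score(model)
    while True:
        ext = [(score(model | {c}), model | {c}) for c in candidates - model]
        if not ext:
            return model
        new_best, new_model = max(ext)
        if new_best - best < threshold:    # stop: no sufficient score increase
            return model
        best, model = new_best, new_model

# Toy score with diminishing returns (invented): per-triplet gains minus a
# complexity penalty, so extension eventually stops paying off.
gains = {"T1": 3.0, "T2": 2.0, "T3": 0.0005}
toy_score = lambda m: sum(gains[c] for c in m) - 0.1 * len(m) ** 2

learned = structure_pursuit(frozenset(gains), toy_score)
```

In the paper's setting, `score` would be the data fit P(x|w ∈ W) computed via EM, and the candidates would be vocabulary triplets proposed by clustering.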
Figure 5: Individual Models learnt for Faces, Motorbikes and Airplanes.

Table 1: Performance Comparisons

Dataset                  Size   Single Model   Hybrid Model   Constellation [16]
Faces                    435    98.0           84.0           96.4
Motorbikes               800    92.6           82.7           92.5
Airplanes                800    90.9           87.3           90.2
Faces (Rotated)          435    94.8           –              –
Faces (Rotated+Scaled)   435    92.3           –              –

4 Experimental Results 4.1 Learning Individual Object Models In this section, we demonstrate the performance of our models for thirteen objects chosen from the Caltech-101 dataset. Each dataset was randomly split into two sets of equal size (one for training and the other for testing). K-means clustering (typically, K is set to 150) was used to learn the triplet vocabularies (see Zhu, Chen, Yuille 2006 for details). Each row in figure 3 corresponds to some triples in the same group. In this experiment, we did not use orientation information from the feature detector. We illustrate our results in figure (1) and Table (1). A score of 90% means that we get a true positive rate of 90% and a false positive rate of 10%. For comparison, we show the performance of the Constellation Model [16]. (Further comparisons to alternative methods are reported in the technical report.) The models for individual object classes, learnt by the proposed algorithm, are illustrated in figure (5). Observe that the generative models have different tree-width and depth. Each subtree of the root node defines a Markov Random Field to describe one configuration of the object. The computational cost of the inference, using dynamic programming, is proportional to the height of the subtree and exponential in the maximum width (only three in our case). The detection time is
4.2 Invariance to Rotation and Scale

This section shows that we can learn and detect objects even when the rotation (in the image plane) and the scale are unknown (within a range). In this experiment, orientation information output by the feature detector is used to model the geometry distributions of the triplets. The relative angle between the orientation of each feature and the orientation of the corresponding triangle edge is calculated to make the model invariant to rotation. See Figure (3). We ran the comparison experiment on the face dataset. A face model is learnt from training images with normalized scale and orientation. We tested this model on test data with 360-degree in-plane rotation, and on another test set with rotation and scaling together. The scaling range is from 60% of the original size to 150% (i.e., 180×120 to 450×300). Table (1) shows the comparison results. The parsing results (rotation + scale) are illustrated in Figure (6).

4.3 Learning Classes of Models

In this section, we show that we can learn a model for an object class. We use a hybrid class which consists of faces, airplanes, and motorbikes. In other words, we know that one object is present in each image but we do not know which. In the training stage, we randomly select images from the datasets of faces, airplanes, and motorbikes. Similarly, we test the hybrid model on examples selected randomly from these three datasets. The learnt hybrid model is illustrated in Figure (7). It breaks down nicely into "or"s of the models for each object. Table (1) shows the performance of the hybrid model. This demonstrates that the proposed method can learn a model for a class with extremely large variation. The parsed results are shown in Figure (8).

5 Discussion

This paper showed that it is possible to perform unsupervised learning to determine a probabilistic grammar combined with Markov Random Fields.
Our approach is based on structure pursuit, where the object model is built up in an iterative manner (similar to feature pursuit used for MRFs and CRFs). The building blocks of our model are triplets of features, whose invariance properties can be exploited for rapid computation. Our application is the detection and parsing of objects. We demonstrated: (a) that we can learn probabilistic models for a variety of different objects, (b) that our approach is invariant to scale and rotation, (c) that we can learn models for hybrid classes, and (d) that we can perform inference rapidly, in under one second. Our approach can also be extended. By using a richer vocabulary of features we can learn a more sophisticated generative grammar which will be able to represent objects in greater detail and deal with significant variations in viewpoint and appearance.

Figure 8: Parsed Results by Hybrid Model (left three panels). Parsed by Standard Model (right three panels).

Acknowledgements We gratefully acknowledge support from the W.M. Keck Foundation, NSF grant number 0413214, and NIH grant RO1 EY015261.

References
[1] D. McAllester, M. Collins, and F. Pereira. Case-factor diagrams for structured probabilistic modeling. In UAI, 2004.
[2] B.D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, 1996.
[3] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[4] D. Klein and C. Manning. Natural language grammar induction using a constituent-context model. In Advances in Neural Information Processing Systems 14 (NIPS), 2001.
[5] R. Dechter and R. Mateescu. AND/OR search spaces for graphical models. Artificial Intelligence, 2006.
[6] H. Chen, Z.J. Xu, Z.Q. Liu, and S.C. Zhu. Composite templates for cloth modeling and sketching. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, New York, June 2006.
[7] M. Meila and M. I. Jordan.
Learning with mixtures of trees. Journal of Machine Learning Research, 1:1-48, 2000.
[8] S. Della Pietra, V. Della Pietra, and J. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393, April 1997.
[9] S.C. Zhu, Y.N. Wu, and D. Mumford. Minimax entropy principle and its application to texture modeling. Neural Computation, 9(8):1627-1660, 1997.
[10] A. McCallum. Efficiently inducing features of conditional random fields. In Conference on Uncertainty in Artificial Intelligence (UAI), 2003.
[11] L.S. Zettlemoyer and M. Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Conference on Uncertainty in Artificial Intelligence (UAI), 2005.
[12] N. Friedman. The Bayesian structural EM algorithm. In Fourteenth Conf. on Uncertainty in Artificial Intelligence (UAI), 1998.
[13] T. Kadir and M. Brady. Scale, saliency and image description. International Journal of Computer Vision, 45(2):83-105, November 2001.
[14] D.G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[15] R.J.A. Little and D.B. Rubin. Statistical Analysis with Missing Data. Wiley, Hoboken, New Jersey, 2002.
[16] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 2003.
|
2006
|
117
|
2,941
|
A Humanlike Predictor of Facial Attractiveness Amit Kagian*1, Gideon Dror‡2, Tommer Leyvand*3, Daniel Cohen-Or*4, Eytan Ruppin*5 * School of Computer Sciences, Tel-Aviv University, Tel-Aviv, 69978, Israel. ‡ The Academic College of Tel-Aviv-Yaffo, Tel-Aviv, 64044, Israel. Email: {1kagianam, 3tommer, 4dcor, 5ruppin}@post.tau.ac.il, 2gideon@mta.ac.il Abstract This work presents a method for estimating human facial attractiveness, based on supervised learning techniques. Numerous facial features that describe facial geometry, color and texture, combined with an average human attractiveness score for each facial image, are used to train various predictors. Facial attractiveness ratings produced by the final predictor are found to be highly correlated with human ratings, markedly improving previous machine learning achievements. Simulated psychophysical experiments with virtually manipulated images reveal preferences in the machine's judgments which are remarkably similar to those of humans. These experiments shed new light on existing theories of facial attractiveness such as the averageness, smoothness and symmetry hypotheses. It is intriguing to find that a machine trained explicitly to capture an operational performance criterion such as attractiveness rating implicitly captures basic human psychophysical biases characterizing the perception of facial attractiveness in general. 1 Introduction Philosophers, artists and scientists have been trying to capture the nature of beauty since the early days of philosophy. Although in modern days a common layman's notion is that judgments of beauty are a matter of subjective opinion, recent findings suggest that people might share a common taste for facial attractiveness and that their preferences may be an innate part of the primary constitution of our nature. Several experiments have shown that 2- to 8-month-old infants prefer looking at faces which adults rate as being more attractive [1].
In addition, attractiveness ratings show very high agreement between groups of raters belonging to the same culture and even across cultures [2]. Such findings give rise to the quest for common factors which determine human facial attractiveness. Accordingly, various hypotheses, from cognitive, evolutionary and social perspectives, have been put forward to describe the common preferences for facial beauty. Inspired by Sir Francis Galton’s photographic method of composing faces [3], Rubenstein, Langlois and Roggman created averaged faces by morphing multiple images together and proposed that averageness is the key to facial attractiveness [4, 5]. Human judges found these averaged faces to be attractive and rated them with attractiveness ratings higher than the mean rating of the component faces composing them. Grammer and Thornhill have investigated symmetry and averageness of faces and concluded that symmetry was more important than averageness in facial attractiveness [6]. Little and colleagues have agreed that average faces are attractive but claim that faces with certain extreme features, such as extreme sexually dimorphic traits, may be more attractive than average faces [7]. Other researchers have suggested various conditions which may contribute to facial attractiveness, such as neonate features, pleasant expressions and familiarity. Cunningham and his associates suggest a multiple fitness model in which there is no single constructing line that determines attractiveness. Instead, different categories of features signal different desirable qualities of the perceived target [8]. Even so, the multiple fitness model agrees that some facial qualities are universally physically attractive to people. Apart from identifying the facial characteristics which account for attractiveness, modern researchers try to describe underlying mechanisms for these preferences. Many contributors refer to the evolutionary origins of attractiveness preferences [9]-[11].
According to this view, facial traits signal mate quality and imply chances for reproductive success and parasite resistance. Some evolutionary theorists suggest that preferred features might not signal mate quality but that the “good taste” by itself is an evolutionary adaptation (individuals with a preference for attractiveness will have attractive offspring that will be favored as mates) [9]. Another mechanism explains attractiveness preferences through a cognitive theory: a preference for attractive faces might be induced as a by-product of general perception or recognition mechanisms [5, 12]. Attractive faces might be pleasant to look at since they are closer to the cognitive representation of the face category in the mind. These cognitive representations are described as a part of a cognitive mechanism that abstracts prototypes from distinct classes of objects. These prototypes relate to average faces when considering the averageness hypothesis. A third view has suggested that facial attractiveness originates in a social mechanism, where preferences may depend on the learning history of the individual and even on his social goals [12]. Different studies have tried to use computational methods to analyze facial attractiveness. Averaging faces with morph tools was done in several cases (e.g. [5, 13]). In [14], laser scans of faces were put into complete correspondence with the average face in order to examine the relationship between facial attractiveness, age, and averageness. Another approach was used in [15], where a genetic algorithm, guided by interactive user selections, was programmed to evolve a “most beautiful” female face. [16] used machine learning methods to investigate whether a machine can predict attractiveness ratings by learning a mapping from facial images to their attractiveness scores.
Their predictor achieved a significant correlation of 0.6 with average human ratings, demonstrating that facial beauty can be learned by a machine, at least to some degree. However, as human raters still significantly outperform the predictor of [16], the challenge of constructing a facial attractiveness machine with human-level evaluation accuracy has remained open. A primary goal of this study is to surpass these results by developing a machine which obtains human-level performance in predicting facial attractiveness. Having accomplished this, our second main goal is to conduct a series of simulated psychophysical experiments and study the resemblance between human and machine judgments. This latter task carries two potential rewards: A. To determine whether the machine can aid in understanding the psychophysics of human facial attractiveness, capitalizing on the ready accessibility of the analysis of its inner workings, and B. To study whether learning an explicit operational ratings prediction task also entails learning implicit humanlike biases, at least for the case of facial attractiveness. 2 The facial training database: Acquisition, preprocessing and representation 2.1 Rating facial attractiveness The chosen database was composed of 91 facial images of American females, taken by the Japanese photographer Akira Gomi. All 91 samples were frontal color photographs of young Caucasian females with a neutral expression. All samples were of similar age, skin color and gender. The subjects’ portraits had no accessories or other distracting items such as jewelry. All 91 facial images in the dataset were rated for attractiveness by 28 human raters (15 males, 13 females) on a 7-point Likert scale (1 = very unattractive, 7 = very attractive). Ratings were collected with a specifically designed HTML interface. Each rater was asked to view the entire set before rating in order to acquire a notion of the attractiveness scale.
There was no time limit for judging the attractiveness of each sample, and raters could go back and adjust the ratings of already rated samples. The images were presented to each rater in a random order and each image was presented on a separate page. The final attractiveness rating of each sample was its mean rating across all raters. To validate that the number of ratings collected adequately represented the "collective attractiveness rating", we randomly divided the raters into two disjoint groups of equal size. For each facial image, we calculated the mean rating in each group, and calculated the Pearson correlation between the mean ratings of the two groups. This process was repeated 1,000 times. The mean correlation between two groups was 0.92 (SD = 0.01). This corresponds well to the known level of consistency among groups of raters reported in the literature (e.g. [2]). Hence, the mean ratings collected are stable indicators of attractiveness that can be used for the learning task. The facial set contained faces in all ranges of attractiveness. Final attractiveness ratings ranged from 1.42 to 5.75 and the mean rating was 3.33 (SD = 0.94). 2.2 Data preprocessing and representation Preliminary experimentation with various ways of representing a facial image has systematically shown that features based on measured proportions, distances and angles of faces are most effective in capturing the notion of facial attractiveness (e.g. [16]). To extract facial features we developed an automatic engine that is capable of identifying eyes, nose, lips, eyebrows, and head contour. In total, we measured 84 coordinates describing the locations of those facial features (Figure 1). Several regions are suggested for extracting mean hair color, mean skin color and skin texture. The feature extraction process was basically automatic but some coordinates needed to be manually adjusted in some of the images.
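The split-half consistency check described in section 2.1 can be sketched as follows. The rating matrix here is synthetic (a shared per-image signal plus independent rater noise) rather than the actual 28 × 91 data, so the numbers only illustrate the procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 28 x 91 rating matrix: a shared "true"
# attractiveness value per image plus independent per-rater noise.
truth = rng.uniform(1, 7, size=91)
ratings = truth + rng.normal(0.0, 1.0, size=(28, 91))

def split_half_correlation(ratings, rng, n_rep=200):
    """Mean and SD of the Pearson correlation between the mean ratings
    of two random disjoint halves of the raters, over n_rep splits."""
    n_raters = ratings.shape[0]
    half = n_raters // 2
    rs = []
    for _ in range(n_rep):
        perm = rng.permutation(n_raters)
        a = ratings[perm[:half]].mean(axis=0)
        b = ratings[perm[half:]].mean(axis=0)
        rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs)), float(np.std(rs))

mean_r, sd_r = split_half_correlation(ratings, rng)
```

A high mean correlation with a small spread, as reported in the paper (0.92, SD = 0.01), indicates that the averaged ratings are a stable target for learning.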
The facial coordinates are used to create a distances-vector of all 3,486 distances between all pairs of coordinates in the complete graph created by all coordinates. For each image, all distances are normalized by face length. In a similar manner, a slopes-vector of all the 3,486 slopes of the lines connecting the facial coordinates is computed. Central fluctuating asymmetry (CFA), which is described in [6], is calculated from the coordinates as well. The application also provides, for each face, Hue, Saturation and Value (HSV) values of hair color and skin color, and a measurement of skin smoothness.

Figure 1: Facial coordinates with hair and skin sample regions as represented by the facial feature extractor. Coordinates are used for calculating geometric features and asymmetry. Sample regions are used for calculating color values and smoothness. The sample image, used for illustration only, is of T.G. and is presented with her full consent.

Combining the distances-vector and the slopes-vector yields a vector representation of 6,972 geometric features for each image. Since strong correlations are expected among the features in such a representation, principal component analysis (PCA) was applied to these geometric features, producing 90 principal components which span the sub-space defined by the 91 image vector representations. The geometric features are projected on those 90 principal components and supply 90 orthogonal eigenfeatures representing the geometric features. Eight measured features were not included in the PCA analysis, including CFA, smoothness, hair color coordinates (HSV) and skin color coordinates. These features are assumed to be directly connected to human perception of facial attractiveness and are hence kept at their original values. These 8 features were added to the 90 geometric eigenfeatures, resulting in a total of 98 image-features representing each facial image in the dataset.
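A sketch of the geometric representation described above: the 84 landmark coordinates yield C(84, 2) = 3,486 pairwise distances, which are normalized and reduced by PCA to 90 eigenfeatures. The coordinates here are synthetic, and face length is approximated by the largest pairwise distance, which is an assumption of this sketch rather than the paper's definition.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_images, n_points = 91, 84

# Synthetic stand-in for the 84 facial landmarks of each of 91 images.
coords = rng.normal(size=(n_images, n_points, 2))
idx = np.array(list(combinations(range(n_points), 2)))  # C(84,2) = 3486

def distance_vector(pts):
    """All pairwise landmark distances, normalized by a proxy for
    face length (here: the largest pairwise distance)."""
    d = np.linalg.norm(pts[idx[:, 0]] - pts[idx[:, 1]], axis=1)
    return d / d.max()

feats = np.stack([distance_vector(c) for c in coords])  # (91, 3486)

# PCA via SVD on the centered features: with 91 samples there are at
# most 90 informative principal components (the "eigenfeatures").
centered = feats - feats.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfeats = centered @ Vt[:90].T  # (91, 90)
```

The slopes-vector and the eight appearance features (CFA, smoothness, color) would be computed and appended analogously to reach the 98 image-features.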
3 Experiments and results 3.1 Predictor construction and validation We experimented with several induction algorithms including simple Linear Regression, Least Squares Support Vector Machine (LS-SVM) (both linear as well as non-linear) and Gaussian Processes (GP). However, as the LS-SVM and GP showed no substantial advantage over Linear Regression, the latter was used and is presented in the sequel. A key ingredient in our methods is to use a proper image-features selection strategy. To this end we used subset feature selection, implemented by ranking the image-features by their Pearson correlation with the target. Other ranking functions produced no substantial gain. To measure the performance of our method we removed one sample from the whole dataset. This sample served as a test set. We found, for each left-out sample, the optimal number of image-features by performing leave-one-out cross-validation (LOOCV) on the remaining samples and selecting the number of features that minimizes the absolute difference between the algorithm's output and the targets of the training set. In other words, the score for a test example was predicted using a single model based on the training set only. This process was repeated n = 91 times, once for each image sample. The vector of attractiveness predictions of all images is then compared with the true targets. These scores are found to be in a high Pearson correlation of 0.82 with the mean ratings of humans (P-value < 10^-23), which corresponds to a normalized Mean Squared Error of 0.39. This accuracy is a marked improvement over the recently published performance results of a Pearson correlation of 0.6 on a similar dataset [16]. The average correlation of an individual human rater to the mean ratings of all other raters in our dataset is 0.67 and the average correlation between the mean ratings of groups of raters is 0.92 (section 2.1).
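A simplified sketch of the evaluation protocol above, on synthetic data: correlation-ranked feature selection plus linear regression, tested leave-one-out. Unlike the paper, the number of selected features is fixed here rather than tuned by an inner cross-validation, and the data are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 91, 98
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:10] = rng.normal(size=10)          # only 10 informative features
y = X @ w_true + 0.1 * rng.normal(size=n)  # synthetic "ratings"

def rank_features(X, y, k):
    """Indices of the k features with highest |Pearson r| to the target."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(-np.abs(r))[:k]

def loo_predictions(X, y, k=20):
    """Leave-one-out: rank and fit on the training fold only, so the
    held-out sample never influences feature selection or weights."""
    preds = np.empty(len(y))
    for i in range(len(y)):
        train = np.delete(np.arange(len(y)), i)
        sel = rank_features(X[train], y[train], k)
        A = np.column_stack([X[train][:, sel], np.ones(len(train))])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        preds[i] = np.append(X[i, sel], 1.0) @ coef
    return preds

r = np.corrcoef(loo_predictions(X, y), y)[0, 1]
```

The point mirrored from the paper is that both the feature ranking and the regression fit are recomputed inside each fold, so the reported correlation is an honest out-of-sample estimate.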
It should be noted that we tried to use this feature selection and training procedure with the original geometric features instead of the eigenfeatures, ranking them by their correlation to the targets and selecting up to 300 best-ranked features. This, however, failed to produce good predictors due to strong correlations between the original geometric features (the maximal Pearson correlation obtained was 0.26). 3.2 Similarity of machine and human judgments Each rater (human and machine) has a 91-dimensional rating vector describing its attractiveness ratings of all 91 images. These vectors can be embedded in a 91-dimensional ratings space. The Euclidean distance between all raters (human and machine) in this space was computed. Compared with each of the human raters, the ratings of the machine were the closest, on average, to the ratings of all other human raters (Figure 2).

Figure 2: Distribution of the mean Euclidean distance from each human rater to all other raters in the ratings space. The machine’s average distance from all other raters (left bar) is smaller than the average distance of each of the human raters to all others.

To verify that the machine ratings are not outliers that fall out of clusters of human raters (even though their mean distance from the other ratings is small), we surrounded each of the rating vectors in the ratings space with multidimensional spheres of several radii. The machine had more human neighbors than the mean number of neighbors of human raters, testifying that it does not fall between clusters. Finally, for a graphic display of machine ratings among human ratings, we applied PCA to machine and human ratings in the rating space and projected all ratings onto the resulting first 2 and 3 principal components. Indeed, the machine is well placed in a mid-zone of human raters (Figure 3).
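The mean-distance comparison in the ratings space can be sketched as follows, with synthetic rating vectors; placing the "machine" at the human centroid is purely an illustrative assumption, not the paper's result.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ratings space: 28 human raters, each a 91-dimensional
# rating vector, plus a "machine" rater placed near the human centroid.
human = rng.normal(size=(28, 91))
machine = human.mean(axis=0)
raters = np.vstack([human, machine])   # machine is the last row

def mean_distance_to_others(raters):
    """For each rater, the mean Euclidean distance to all other raters."""
    diff = raters[:, None, :] - raters[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return dist.sum(axis=1) / (len(raters) - 1)

md = mean_distance_to_others(raters)
# A centroid-like rater ends up with the smallest mean distance,
# which is the pattern Figure 2 reports for the machine.
```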
Figure 3: Location of machine ratings among the 28 human ratings: ratings were projected into 2 dimensions (a) and 3 dimensions (b) by performing PCA on all ratings and projecting them onto the first principal components. The projected data explain 29.8% of the variance in (a) and 36.6% in (b).

3.3 Psychophysical experiments in silico

A number of simulated psychophysical experiments reveal humanlike biases in the machine's performance. Rubenstein et al. discuss a morphing technique to create mathematically averaged faces from multiple face images [5]. They reported that averaged faces made of 16 and 32 original component images were rated higher in attractiveness than the mean attractiveness ratings of their component faces, and higher than composites consisting of fewer faces. In their experiment, 32-component composites were found to be the most attractive. We used a similar technique to create averaged virtually-morphed faces with various numbers of components, nc, and let the machine predict their attractiveness. To this end, coordinate values of the original component faces were averaged to create a new set of coordinates for the composite. These coordinates were used to calculate the geometrical features and CFA of the averaged face. Smoothness and HSV values for the composite faces were calculated by averaging the corresponding values of the component faces. To study the effect of nc on the attractiveness score we produced 1,000 virtual morph images for each value of nc between 2 and 50, and used our attractiveness predictor (section 3.1) to compute the attractiveness scores of the resulting composites. In accordance with the experimental results of [5], the machine manifests a humanlike bias for higher scores of averaged composites over their components’ mean score.
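The virtual-morph construction described above amounts to averaging per-face quantities. A minimal sketch with synthetic landmark and color arrays follows; note that in the paper HSV values are first converted to RGB before averaging, a step omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)

def make_composite(component_coords, component_colors):
    """Average the landmark coordinates and appearance values of the
    component faces to form a virtual composite.

    component_coords:  (nc, 84, 2) landmark arrays
    component_colors:  (nc, 3) per-face color values (the paper's
                       HSV-to-RGB conversion is omitted in this sketch)
    """
    return component_coords.mean(axis=0), component_colors.mean(axis=0)

# Example: a 12-component composite from synthetic faces.
coords = rng.normal(size=(12, 84, 2))
colors = rng.uniform(size=(12, 3))
comp_coords, comp_colors = make_composite(coords, colors)
```

The composite's geometric features and CFA would then be computed from `comp_coords` exactly as for a real face, which is what lets the trained predictor score it.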
Figure 4a, presenting these results, shows the percent of components which were rated as less attractive than their corresponding composite, for each number of components nc. As evident, the attractiveness rating of a composite surpasses a larger percent of its components’ ratings as nc increases. Figure 4a also shows the mean scores of 1,000 composites and the mean scores of their components, for each nc (scores are normalized to the range [0, 1]). Their actual attractiveness scores are reported in Table 1. As expected, the mean scores of the component images are independent of nc, while composites’ scores increase with nc. Mean values of smoothness and asymmetry of the composites are presented in Figure 4b. Note: HSV values are converted to RGB before averaging.

Figure 4: Mean results over 1,000 composites made of varying numbers of image components: (a) Percent of components which were rated as less attractive than their corresponding composite, accompanied by mean scores of composites and the mean scores of their components (scores are normalized to the range [0, 1]; actual attractiveness scores are reported in Table 1). (b) Mean values of smoothness and asymmetry of 1,000 composites for each number of components, nc.

Table 1: Mean results over 1,000 composites made of varying numbers of component images

Components (nc)   Composite score   Components' mean score   Components rated lower than composite
2                 3.46              3.34                     55%
4                 3.66              3.33                     64%
12                3.74              3.32                     70%
25                3.82              3.32                     75%
50                3.94              3.33                     81%

Recent studies have provided evidence that skin texture influences judgments of facial attractiveness [17].
Since blurring and smoothing of faces occur when faces are averaged together [5], the smooth complexion of composites may underlie the attractiveness of averaged composites. In our experiment, a preference for averageness is found even though our method of virtual morphing does not produce this smoothing effect, and the mean smoothness value of composites corresponds to the mean smoothness value in the original dataset for all nc (see Figure 4b). Researchers have also suggested that averaged faces are attractive since they are exceptionally symmetric [18]. Figure 4b shows that the mean level of asymmetry is indeed highly correlated with the mean scores of the morphs (Pearson correlation of -0.91, P-value < 10^-19). However, examining the correlation between the rest of the features and the composites' scores reveals that this high correlation is not at all unique to asymmetry. In fact, 45 of the 98 features are strongly correlated with attractiveness scores (|Pearson correlation| > 0.9). The high correlation between these numerous features and the attractiveness scores of averaged faces indicates that symmetry level is not an exceptional factor in the machine’s preference for averaged faces. Instead, it suggests that averaging causes many features, including both geometric features and symmetry, to change in a direction which causes an increase in attractiveness. It has been argued that although averaged faces are found to be attractive, very attractive faces are not average [18]. A virtual composite made of the 12 most attractive faces in the set (as rated by humans) was rated by the machine with a high score of 5.6, while 1,000 composites made of 50 faces got a maximum score of only 5.3. This type of preference resembles the findings of an experiment by Perrett et al. in which a highly attractive composite, morphed from only attractive faces, was preferred by humans over a composite made of 60 images of all levels of attractiveness [13]. Another study by Zaidel et al.
examined the asymmetry of attractiveness perception and offered a relationship between facial attractiveness and hemispheric specialization [19]. In this research, right-right and left-left chimeric composites were created by attaching each half of the face to its mirror image. Subjects were asked to look at left-left and right-right composites of the same image and judge which one is more attractive. For women’s faces, right-right composites got twice as many ‘more attractive’ responses than left-left composites. Interestingly, similar results were found when simulating the same experiment with the machine: right-right and left-left chimeric composites were created from the extracted coordinates of each image, and the machine was used to predict their attractiveness ratings (taking care to exclude the original image used for the chimeric composition from the training set, as it contains many features which are identical to those of the composite). The machine gave 63 out of 91 right-right composites a higher rating than their matching left-left composite, while only 28 left-left composites were judged as more attractive. A paired t-test shows these results to be statistically significant, with P-value < 10^-7 (scores of chimeric composites are normally distributed). It is interesting to see that the machine manifests the same kind of asymmetry bias reported by Zaidel et al., though it has never been explicitly trained for that.

4 Discussion

In this work we produced a high-quality training set for learning facial attractiveness of human faces. Using supervised learning methodologies we were able to construct the first predictor that achieves accurate, humanlike performance for this task. Our results add the task of facial attractiveness prediction to a collection of abstract tasks that have been successfully accomplished with current machine learning techniques.
Examining the machine and human raters' representations in the ratings space places the machine's ratings at the center of the human raters, and closest, on average, to the other human raters. The similarity between human and machine preferences has prompted us to further study the machine’s operation in order to capitalize on the accessibility of its inner workings and learn more about human perception of facial attractiveness. To this end, we have found that the machine favors averaged faces made of several component faces. While this preference is known to be common to humans as well, researchers have previously offered different reasons for favoring averageness. Our analysis has revealed that symmetry is strongly related to the attractiveness of averaged faces, but is definitely not the only factor in the equation, since about half of the image-features relate to the ratings of averaged composites in a similar manner as the symmetry measure. This suggests that a general movement of features toward attractiveness, rather than a simple increase in symmetry, is responsible for the attractiveness of averaged faces. Obviously, strictly speaking this can be held true only for the machine but, given the machine's remarkably "humanlike" behavior, it also brings important support to the idea that this finding may well extend to human perception of facial attractiveness. Overall, it is quite surprising and pleasing to see that a machine trained explicitly to capture an operational performance criterion such as rating implicitly captures basic human psychophysical biases related to facial attractiveness. It is likely that while the machine learns the ratings in an explicit supervised manner, it also concomitantly and implicitly learns other basic characteristics of human facial ratings, as revealed by studying its "psychophysics". Acknowledgments We thank Dr.
Bernhard Fink and the Ludwig-Boltzmann Institute for Urban Ethology at the Institute for Anthropology, University of Vienna, Austria, and Prof. Alice J. O'Toole from the University of Texas at Dallas, for kindly letting us use their face databases.

References
[1] Langlois, J.H., Roggman, L.A., Casey, R.J., Ritter, J.M., Rieser-Danner, L.A. & Jenkins, V.Y. (1987) Infant preferences for attractive faces: Rudiments of a stereotype? Developmental Psychology, 23, 363-369.
[2] Cunningham, M.R., Roberts, A.R., Wu, C.-H., Barbee, A.P. & Druen, P.B. (1995) Their ideas of beauty are, on the whole, the same as ours: Consistency and variability in the cross-cultural perception of female physical attractiveness. Journal of Personality and Social Psychology, 68, 261-279.
[3] Galton, F. (1878) Composite portraits. Journal of the Anthropological Institute of Great Britain and Ireland, 8, 132-142.
[4] Langlois, J.H. & Roggman, L.A. (1990) Attractive faces are only average. Psychological Science, 1, 115-121.
[5] Rubenstein, A.J., Langlois, J.H. & Roggman, L.A. (2002) What makes a face attractive and why: The role of averageness in defining facial beauty. In Rhodes, G. & Zebrowitz, L.A. (eds.), Advances in Visual Cognition, Vol. 1: Facial Attractiveness, pp. 1-33. Westport, CT: Ablex.
[6] Grammer, K. & Thornhill, R. (1994) Human (Homo sapiens) facial attractiveness and sexual selection: The role of symmetry and averageness. Journal of Comparative Psychology, 108, 233-242.
[7] Little, A.C., Penton-Voak, I.S., Burt, D.M. & Perrett, D.I. (2002) Evolution and individual differences in the perception of attractiveness: How cyclic hormonal changes and self-perceived attractiveness influence female preferences for male faces. In Rhodes, G. & Zebrowitz, L.A. (eds.), Advances in Visual Cognition, Vol. 1: Facial Attractiveness, pp. 59-90. Westport, CT: Ablex.
[8] Cunningham, M.R., Barbee, A.P. & Philhower, C.L.
(2002) Dimensions of facial physical attractiveness: The intersection of biology and culture. In Rhodes, G. & Zebrowitz, L.A. (eds.), Advances in Visual Cognition, Vol. 1: Facial Attractiveness, pp. 193-238. Westport, CT: Ablex. [9] Thornhill, R. & Gangsted, S.W. (1999) Facial Attractiveness. Trends in Cognitive Sciences, 3, 452-460. [10] Andersson, M. (1994) Sexual Selection. Princeton, NJ: Princeton University Press. [11] Møller, A.P. & Swaddle, J.P. (1997) Asymmetry, developmental stability, and evolution. Oxford: Oxford University Press. [12] Zebrowitz, L.A. & Rhodes, G. (2002) Nature let a hundred flowers bloom: The multiple ways and wherefores of attractiveness. In Rhodes, G. & Zebrowitz, L.A. (eds.), Advances in Visual Cognition, Vol. 1: Facial Attractiveness, pp. 261-293. Westport, CT: Ablex. [13] Perrett, D.I., May, K.A. & Yoshikawa, S. (1994) facial shape and judgments of female attractiveness. Nature, 368, 239-242. [14] O´Toole, A.J., Price, T., Vetter, T., Bartlett, J.C. & Blanz, V. (1999) 3D shape and 2D surface textures of human faces: the role of "averages" in attractiveness and age. Image and Vision Computing, 18, 9-19. [15] Johnston, V. S. & Franklin, M. (1993) Is beauty in the eye of the beholder? Ethology and Sociobiology, 14, 183-199. [16] Eisenthal, Y., Dror, G. & Ruppin, E. (2006) Facial attractiveness: Beauty and the Machine. Neural Computation, 18, 119-142. [17] Fink, B., Grammer, K. & Thornhill, R. (2001) Human (Homo sapiens) Facial Attractiveness in Relation to Skin Texture and Color. Journal of Comparative Psychology, 115, 92–99. [18] Alley, T.R. & Cunningham, M.R. (1991) Averaged faces are attractive but very attractive faces are not average. Psychological Science, 2, 123-125. [19] Zaidel, D.W., Chen, A.C. & German, C. (1995) She is not a beauty even when she smiles: possible evolutionary basis for a relationship between facial attractiveness and hemispheric specialization. Neuropsychologia, 33(5), 649-655
Mutagenetic tree Fisher kernel improves prediction of HIV drug resistance from viral genotype Tobias Sing Department of Computational Biology Max Planck Institute for Informatics Saarbrücken, Germany tobias.sing@mpi-sb.mpg.de Niko Beerenwinkel∗ Department of Mathematics University of California Berkeley, CA 94720 Abstract Starting with the work of Jaakkola and Haussler, a variety of approaches have been proposed for coupling domain-specific generative models with statistical learning methods. The link is established by a kernel function which provides a similarity measure based inherently on the underlying model. In computational biology, the full promise of this framework has rarely ever been exploited, as most kernels are derived from very generic models, such as sequence profiles or hidden Markov models. Here, we introduce the MTreeMix kernel, which is based on a generative model tailored to the underlying biological mechanism. Specifically, the kernel quantifies the similarity of evolutionary escape from antiviral drug pressure between two viral sequence samples. We compare this novel kernel to a standard, evolution-agnostic amino acid encoding in the prediction of HIV drug resistance from genotype, using support vector regression. The results show significant improvements in predictive performance across 17 anti-HIV drugs. Thus, in our study, the generative-discriminative paradigm is key to bridging the gap between population genetic modeling and clinical decision making. 1 Introduction Kernels provide a general framework of statistical learning that allows for integrating problem-specific background knowledge via the geometry of a feature space. Owing to this unifying characteristic, kernel methods enjoy increasing popularity in many application domains, particularly in computational biology [1]. Unfortunately, despite some basic results on the derivation of novel kernels from existing kernels or from more general similarity measures (e.g.
via the empirical kernel map [1]), the field suffers from a lack of well-characterized design principles. As a consequence, most novel kernels are still developed in an ad hoc manner. One of the most promising developments in the recent search for a systematic kernel design methodology is the generative-discriminative paradigm [2], also known under the more general term of model-dependent feature extraction (MDFE) [3]. The central idea of MDFE is to derive kernels from generative probabilistic models of a given process or phenomenon. Starting with Jaakkola and Haussler [2] and the seminal work of Amari [4] on the differential geometric structure of probabilistic models, a number of studies have contributed to an emerging theoretical foundation of MDFE. However, the paradigm is also of immediate intuitive appeal, because mechanistic models of a process that are consistent with observed data and that provide falsifiable predictions often allow for more profound insights than purely discriminative approaches. Moreover, entities that are similar according to a mechanistic model should be expected to exhibit similar behavior in any related properties. (∗Current address: Program for Evolutionary Dynamics, Harvard University, Cambridge, MA 02138, beerenw@fas.harvard.edu.) From this perspective, MDFE provides a natural bridge between mathematical modeling and statistical learning. To date, a variety of generic MDFE procedures have been proposed, including the Fisher kernel [2] and, more generally, marginalized kernels [5], as well as the TOP [3], heat [6], and probability product kernels [7], along with a number of variations. Surprisingly, however, instantiations of these procedures in bioinformatics have been confined to a very limited number of classical problems, namely protein fold recognition, DNA splice site prediction, exon detection, and phylogenetics.
Furthermore, most approaches are based on standard graphical models, such as amino acid sequence profiles or hidden Markov models, that are not adapted in any specific way to the process at hand. For example, a first-order Markov chain along the primary structure of a protein is hardly related to the causal mechanisms underlying polypeptide evolution. Thus, the potential of combining biological modeling with kernelization in the framework of MDFE remains vastly unexplored. This paper is motivated by a regression problem from clinical bioinformatics that has recently attracted substantial attention due to its pivotal role in anti-HIV therapy: the prediction of phenotypic drug resistance from viral genotype (reviewed in [8]). Drug resistant viruses present a major cause of treatment failure and their occurrence renders many of the available drugs ineffective. Therefore, knowing the precise patterns of drug resistance is an important prerequisite for the choice of optimal drug combinations [9, 10]. Drug resistance arises as a virus population evolves under partially suppressive antiviral therapy. The extreme evolutionary dynamics of HIV quickly generate viral genetic variants that are selected for their ability to replicate in the presence of the applied drug cocktail. These advantageous mutants eventually outgrow the wild type population and lead to therapy failure. Thus, the resistance phenotype is determined by the viral genotype. The genotype-phenotype prediction problem is of considerable clinical relevance, because genotyping is much faster and cheaper, while treatment decisions are ultimately based on the viral phenotype (i.e. the level of resistance). From the perspective of MDFE, the interesting feature of HIV drug resistance lies in the structure of the underlying generative process. The development of resistance involves the stochastic accumulation of mutations in the viral genome along certain mutational pathways. 
Here, we demonstrate how to exploit this evolutionary structure in genotype-phenotype prediction by deriving a Fisher kernel for mixtures of mutagenetic trees, a family of graphical models designed to represent such genetic accumulation processes. The remainder of this paper is organized as follows. In the next section, we briefly summarize the mutagenetic trees mixture (MTreeMix) model, originally introduced in [11]. The Fisher kernel is derived in Section 3. In Section 4, the kernel is applied to the genotype-phenotype prediction problem introduced above. We conclude with some of the broader implications of our study, including directions for future work. 2 Mixture models of mutagenetic trees Consider n genetic events {1, . . . , n}. With each event v, we associate the binary random variable $X_v$, such that $\{X_v = 1\}$ indicates the occurrence of v. In our applications, the set {1, . . . , n} will denote the mutations conferring resistance to a specific anti-HIV drug. Syntactically, a mutagenetic tree for n genetic events is a connected branching T = (V, E) on the vertices V = {0, 1, . . . , n} and rooted at 0, where E ⊆ V × V denotes the edge set of T. Semantically, the mutagenetic tree model induced by T and the parameter vector $\theta = (\theta_1, \dots, \theta_n) \in (0, 1)^n$ is the Bayesian network on T with constrained conditional probability tables of the form $\vartheta_v = \begin{pmatrix} 1 & 0 \\ 1 - \theta_v & \theta_v \end{pmatrix}$, $v = 1, \dots, n$, with rows indexed by the parent state $x_{pa(v)} \in \{0, 1\}$ and columns by $x_v \in \{0, 1\}$. Thus, a mutagenetic tree model is the family of distributions of $X = (X_1, \dots, X_n)$ that factor as $\Pr(X = x \mid \theta) = \prod_{v=1}^{n} \vartheta_{v,(x_{pa(v)}, x_v)}$. Here, $x_0 := 1$ (indicating the wild type state without any resistance mutations), and pa(v) denotes the parent of vertex v in T. Figure 1 shows a mutagenetic tree for the development of resistance to the protease inhibitor nelfinavir. [Figure 1: Mutagenetic tree for the development of resistance to the HIV protease inhibitor nelfinavir (NFV); vertices include the wild type root and the mutations 30N, 36I, 77I, 10FI, 82AFTS, 46IL, 71VT, 88DS, and 84V, with edge probabilities between 0.10 and 0.99.]
Vertices of the tree are labeled with amino acid changes in the protease enzyme. Edges are labeled with conditional probabilities. The tree represents one component of the 6-trees mixture model estimated for this evolutionary process. The probability tables impose the constraint that a mutation can only be present if its predecessor in the topology is also present. This restriction sets mutagenetic trees apart from standard Bayesian networks in that it allows for an evolutionary interpretation of the tree topology. In particular, the model implies the existence of certain mutational pathways with distinct probabilities. Each pathway is required to respect the order of mutation accumulation that is encoded in the tree. Mutational patterns which do not respect these order constraints have probability zero in the model. We shall exclude these genotypes from the state space of the model. The state space then becomes the following subset of $\{0, 1\}^n$, $C = \{x \in \{0, 1\}^n \mid (x_{pa(v)}, x_v) \neq (0, 1) \text{ for all } v \in V\}$, and the factorization of the joint distribution simplifies to $\Pr(X = x \mid \theta) = \prod_{\{v \mid x_{pa(v)} = 1\}} \theta_v^{x_v} (1 - \theta_v)^{1 - x_v}$. The mutational pathway metaphor, originating in the virological literature, is generally considered to be a reasonable approximation to HIV evolution under drug pressure. However, sets of mutational patterns that support different tree topologies are commonly seen in clinical HIV databases. Thus, in order to allow for increased flexibility in modeling evolutionary pathways and to account for noise in the observed data, we consider the larger model class of mixtures of mutagenetic trees. Intuitively, these mixture models correspond to the assumption that a variety of evolutionary forces contribute additively in shaping HIV genetic variability in vivo. Consider K mutagenetic trees $T_1, \dots, T_K$ with weights $\lambda_1, \dots, \lambda_{K-1}$, and $\lambda_K = 1 - \sum_{k=1}^{K-1} \lambda_k$, respectively, such that $0 \leq \lambda_k \leq 1$ for all $k = 1, \dots, K$. Each tree $T_k$ has parameters $\theta_k = (\theta_{k,v})_{v=1,\dots,n}$.
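To make the factorization concrete, here is a minimal Python sketch (our own illustrative code with a made-up toy tree, not part of the MTreeMix package) that evaluates Pr(X = x | θ) for a single mutagenetic tree:

```python
# Sketch: likelihood of a mutational pattern under a single mutagenetic tree,
# following the simplified factorization over edges with occurred parents.
# parent[v] gives pa(v) for v = 1..n; the root 0 is the wild type event.

def tree_likelihood(x, parent, theta):
    """x: dict {v: 0/1}; theta: dict {v: edge probability}; x[0] is forced to 1."""
    x = dict(x)
    x[0] = 1  # root event (wild type) always present
    p = 1.0
    for v in theta:
        if (x[parent[v]], x[v]) == (0, 1):
            return 0.0  # pattern violates the order constraint: outside C
        if x[parent[v]] == 1:  # only edges whose parent event occurred contribute
            p *= theta[v] if x[v] == 1 else (1.0 - theta[v])
    return p

# Toy chain 0 -> 1 -> 2 with theta_1 = 0.5, theta_2 = 0.8
parent = {1: 0, 2: 1}
theta = {1: 0.5, 2: 0.8}
print(tree_likelihood({1: 1, 2: 1}, parent, theta))  # 0.5 * 0.8 = 0.4
print(tree_likelihood({1: 0, 2: 1}, parent, theta))  # 0.0 (order constraint violated)
```

Patterns outside the state space C, such as mutation 2 without its predecessor 1, get probability zero, exactly as in the definition above.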
The mutagenetic trees mixture model is the family of distributions of X of the form $\Pr(X = x \mid \lambda, \theta) = \sum_{k=1}^{K} \lambda_k \Pr(X = x \mid \theta_k)$. The state space C of this model is the union of the state spaces of the single tree models induced by $T_1, \dots, T_K$. In our applications, we will always fix the first tree to be a star, such that $C = \{0, 1\}^n$ (i.e., all mutational patterns have non-zero probability). The star accounts for the spontaneous and independent occurrence of genetic events. 3 The MTreeMix Fisher kernel We now derive a Fisher kernel for the mutagenetic trees mixture models introduced in the previous section. In this paper, our primary motivation is to improve the prediction of drug resistance from viral genotype. [Table 1: Mutagenetic tree Fisher kernels for the three trees on the vertices {0, 1, 2}. The value of the kernel K(x, x′) is displayed for all possible pairs of mutational patterns (x, x′); empty cells are indexed with genotypes that are not compatible with the tree. Each entry is a sum of terms of the form $\theta_v^{-2}$, $(\theta_v - 1)^{-2}$, and $\theta_v^{-1}(\theta_v - 1)^{-1}$ over the edges on which the two patterns agree or disagree.] However, we defer application-specific details to Section 4, to emphasize the broader applicability of the kernel itself, for example in kernelized principal components analysis or multidimensional scaling. As Jaakkola and Haussler [2] have suggested, the gradient of the log-likelihood function induced by a generative probabilistic model provides a natural comparison between samples.
This is because the partial derivatives in the direction of the model parameters describe how each parameter contributes to the generation of that particular sample. Intuitively, two samples should be considered similar from this perspective if they influence the likelihood surface in a similar way. The natural inner product for the statistical manifold induced by the log-likelihood gradient is given by the Fisher information matrix [4]. The computation of this matrix is straightforward, but for practical purposes, the Euclidean dot product $\langle \cdot, \cdot \rangle$ provides a suitable substitute for the Fisher metric [2]. We first derive the Fisher kernel for the single mutagenetic tree model. The log-likelihood of observing a mutational pattern $x \in \{0, 1\}^n$ under this model is $\ell_x(\theta) = \sum_{\{v \mid x_{pa(v)} = 1\}} x_v \log \theta_v + (1 - x_v) \log(1 - \theta_v)$. Hence, the feature mapping of binary mutational patterns into Euclidean n-space, $\varphi : C \to \mathbb{R}^n$, $x \mapsto \nabla \ell_x(\theta) = \left( \frac{\partial \ell_x(\theta)}{\partial \theta_1}, \dots, \frac{\partial \ell_x(\theta)}{\partial \theta_n} \right)$, is given by the Fisher score consisting of the partial derivatives $\frac{\partial \ell_x(\theta)}{\partial \theta_w} = \theta_w^{-x_w} (\theta_w - 1)^{x_w - 1}\, 0^{1 - x_{pa(w)}}$, which equals $\theta_w^{-1}$ if $(x_{pa(w)}, x_w) = (1, 1)$; $(\theta_w - 1)^{-1}$ if $(x_{pa(w)}, x_w) = (1, 0)$; and $0$ if $(x_{pa(w)}, x_w) = (0, 0)$. Thus, we can define the mutagenetic tree Fisher kernel as $K(x, x') = \langle \nabla \ell_x(\theta), \nabla \ell_{x'}(\theta) \rangle = \sum_{v=1}^{n} \theta_v^{-(x_v + x'_v)} (\theta_v - 1)^{(x_v + x'_v) - 2}\, 0^{2 - (x_{pa(v)} + x'_{pa(v)})}$. For example, the Fisher kernels for the three mutagenetic trees on n = 2 genetic events are displayed in Table 1. To better understand the operation of the novel kernel, we rewrite the kernel function K as follows: $K(x, x') = \sum_{v=1}^{n} \kappa(\theta_v)_{(x_{pa(v)}, x_v), (x'_{pa(v)}, x'_v)}$. [Figure 2: Non-zero entries of the matrix κ(t) that defines the mutagenetic tree Fisher kernel, plotted as functions of $t \in (0, 1)$. The three graphs are indexed in the same way as the matrix, namely by pairs $((x_{pa(v)}, x_v), (x'_{pa(v)}, x'_v))$ denoting the value of two genotypes x and x′ at an edge (pa(v), v) of the mutagenetic tree.]
The graphs illustrate that the largest contributions stem from shared, unlikely mutations (positive effect, solid and dashed line) and from differing, likely or unlikely mutations (negative effect, dash-dot line). Here, κ is defined as $\kappa(t) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & (t - 1)^{-2} & t^{-1}(t - 1)^{-1} \\ 0 & t^{-1}(t - 1)^{-1} & t^{-2} \end{pmatrix}$, with rows and columns indexed by the edge states $(0, 0)$, $(1, 0)$, and $(1, 1)$. The matrix κ(t) is thus indexed by pairs of pairs $((x_{pa(v)}, x_v), (x'_{pa(v)}, x'_v))$. The non-zero entries of κ are displayed in Figure 2 as functions of the parameter t. An edge contributes strongly to the kernel value if the two genotypes agree on it, but the common event (occurrence or non-occurrence of the mutation) was unlikely (Figure 2, solid and dashed line). If the two genotypes disagree, the edge contributes negatively, especially for extreme parameters $\theta_v$ close to zero or one (Figure 2, dash-dot line), which make one of the events very likely and the other very unlikely. Thus, the application of the Fisher kernel idea to mutagenetic trees leads to a kernel that measures similarity of evolutionary escape in a way that corresponds well to virological intuition. Due to the linear mixing process, extending the Fisher kernel from a single mutagenetic tree to a mixture model is straightforward. Let $\ell_x(\lambda, \theta) = \log \Pr(x \mid \lambda, \theta)$ be the log-likelihood function, and denote by $\gamma_l(x \mid \lambda, \theta) = \frac{\lambda_l \Pr(x \mid \theta_l)}{\Pr(x \mid \lambda, \theta)}$ the responsibility of tree component $T_l$ for the observation x. Then the partial derivatives with respect to θ can be expressed in terms of the partials obtained for the single tree models, weighted by the responsibilities of the trees, $\frac{\partial \ell_x(\lambda, \theta)}{\partial \theta_{l,w}} = \gamma_l(x \mid \lambda, \theta)\, \frac{\partial \ell_x(\theta_l)}{\partial \theta_{l,w}}$. Differentiation with respect to λ yields $\frac{\partial \ell_x(\lambda, \theta)}{\partial \lambda_l} = \frac{\Pr(x \mid \theta_l) - \Pr(x \mid \theta_K)}{\Pr(x \mid \lambda, \theta)}$.
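The case analysis behind the single-tree Fisher score, and the kernel it induces, can be sketched as follows (function names and the toy parameters are our own, not from the paper or the MTreeMix software):

```python
# Sketch: Fisher score of a single mutagenetic tree and the resulting kernel,
# K(x, x') = <grad l_x(theta), grad l_x'(theta)> (Euclidean dot product).

def fisher_score(x, parent, theta):
    """Gradient of the log-likelihood w.r.t. theta at pattern x (root x[0] = 1)."""
    x = dict(x)
    x[0] = 1
    score = {}
    for w, t in theta.items():
        pair = (x[parent[w]], x[w])
        if pair == (1, 1):
            score[w] = 1.0 / t            # mutation and parent both present
        elif pair == (1, 0):
            score[w] = 1.0 / (t - 1.0)    # parent present, mutation absent
        else:
            score[w] = 0.0                # parent event absent: no contribution
    return score

def fisher_kernel(x, x2, parent, theta):
    s, s2 = fisher_score(x, parent, theta), fisher_score(x2, parent, theta)
    return sum(s[w] * s2[w] for w in theta)

# Chain 0 -> 1 -> 2: two genotypes that share the unlikely mutation 1
parent, theta = {1: 0, 2: 1}, {1: 0.1, 2: 0.5}
print(fisher_kernel({1: 1, 2: 0}, {1: 1, 2: 1}, parent, theta))  # 10*10 + (-2)*2 = 96.0
```

The shared, unlikely mutation 1 (θ₁ = 0.1) dominates the kernel value, while the disagreement on mutation 2 contributes negatively, matching the behavior of κ described above.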
We obtain the mutagenetic trees mixture (MTreeMix) Fisher kernel $K(x, x') = \langle \nabla \ell_x(\lambda, \theta), \nabla \ell_{x'}(\lambda, \theta) \rangle = \sum_{l=1}^{K-1} \frac{[\Pr(x \mid \theta_l) - \Pr(x \mid \theta_K)]\, [\Pr(x' \mid \theta_l) - \Pr(x' \mid \theta_K)]}{\Pr(x \mid \lambda, \theta)\, \Pr(x' \mid \lambda, \theta)} + \sum_{l=1}^{K} \sum_{w=1}^{n} \gamma_l(x \mid \lambda, \theta)\, \gamma_l(x' \mid \lambda, \theta)\, \kappa(\theta_{l,w})_{(x_{pa(w)}, x_w), (x'_{pa(w)}, x'_w)}$. 4 Experimental results In this section, we use the Fisher kernel derived from mutagenetic tree mixtures for predicting HIV drug resistance from viral genotype. Briefly, resistance is the ability of a virus to replicate in the presence of drug. The degree of resistance is usually communicated as a non-negative number. This number indicates the fold-change increase in drug concentration that is necessary to inhibit viral replication by 50%, as compared to a fully susceptible reference virus. Thus, higher fold-changes correspond to increasing levels of resistance. We consider all fold-change values on a log10 scale. Information on phenotypic resistance strongly affects treatment decisions, but the experimental procedures are too expensive and time-consuming for routine clinical diagnostics. Instead, at the time of therapy failure, the genotypic makeup of the viral population is determined using standard sequencing methods, leaving the challenge of inferring the phenotypic implications from the observed genotypic alterations. It is also desirable to minimize the number of sequence positions required for reliable determination of drug resistance. With a small number of positions, sequencing could be replaced by the much cheaper line-probe assay (LiPA) technology [12], which focuses on the determination of mutations at a limited number of pre-selected sites. This method could bring resistance testing to resource-poor settings in which DNA sequencing is not affordable. All approaches to this problem described to date are based on a direct correlation between genotype and phenotype, without any further modelling involved.
Application of the Fisher kernel to this task is motivated by the hypothesis that the traces of evolution present in the data and modelled by mutagenetic trees mixture models can provide additional information, leading to improved predictive performance. In a recent comparison of several statistical learning methods, support vector regression attained the highest average predictive performance across all drugs [13]. Accordingly, we have chosen this best-performing method to compare to the novel kernel. Specifically, our experimental setup is as follows. For each drug, we start with a genotype-phenotype data set [14] of size 305 to 858 (Table 2, column 3). Based on a list of resistance mutations maintained by the International AIDS Society [15], we extract the residues listed in column 2. The number indicates the position in the viral enzyme (reverse transcriptase for the first two groups of drugs, and protease for the third group), and the amino acids following the number denote the mutations at the respective site that are considered resistance-associated. For example, the feature vector for the drug zidovudine (ZDV) consists of six variables representing the reverse transcriptase mutations 41L, 67N, 70R, 210W, 215F or Y, and 219E or Q. In the naive indicator representation, a mutational pattern within these six mutations is transformed to a binary vector of length six, each entry encoding the presence or absence of the respective mutation. The Fisher kernel requires a mutagenetic trees mixture model for each of the evaluated drugs. Using the MTreeMix software package (see footnote 1), these models were estimated from an independent set of sequences derived from patients failing a therapy that contained the specific drug of interest. In 100 replicates of ten-fold cross-validation for each drug model, we then recorded the squared correlation coefficient (r²) of indicator variable-based versus Fisher kernel-based support vector regression.
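The naive indicator representation can be illustrated with a short sketch (the observed genotype below is hypothetical; the mutation list for ZDV comes from the text above):

```python
# Sketch: naive indicator encoding for zidovudine (ZDV), whose resistance set
# comprises six reverse transcriptase mutations.
ZDV_MUTATIONS = ["41L", "67N", "70R", "210W", "215FY", "219EQ"]

def encode(observed):
    """Map an observed set of mutations to a binary feature vector."""
    return [1 if m in observed else 0 for m in ZDV_MUTATIONS]

# Hypothetical viral genotype carrying three of the six mutations
print(encode({"41L", "215FY", "219EQ"}))  # [1, 0, 0, 0, 1, 1]
```

These binary vectors are what the linear-kernel SVR baseline operates on, whereas the Fisher-kernel variant maps each pattern through the gradient of the MTreeMix log-likelihood instead.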
Avoiding both costly double cross-validation with the limited amount of data and overfitting with single cross-validation, we fixed standard parameters for both SVMs. As suggested by Jaakkola and Haussler [2], the Fisher kernel may be combined with additional transformations. Thus, we evaluated the standard kernels for both setups. For the indicator representation, the linear kernel performed best, whereas the Fisher scores performed best when combined with a Gaussian RBF kernel. We used these two kernels in the final comparison reported in Table 2. (Footnote 1: http://mtreemix.bioinf.mpi-sb.mpg.de) The results displayed in columns 5 and 6 of Table 2 show the improvements attained via the Fisher kernel method as estimated by the squared correlation coefficient, r². After correction for multiple comparisons, the null hypothesis of equal means was rejected (P < 0.01, Wilcoxon test) in 15 out of 17 cases, a ratio that is highly unlikely to occur by chance (P < 0.0025, binomial test). The most drastic improvements were obtained for the drugs 3TC, NVP and NFV. Slight decreases were observed for ddC and APV. Interestingly, when we combined both feature vectors, the cross-validated performance of the combined predictor was consistently at least as good as the best individual predictor (data not shown). We obtained similar results when evaluating performance by the mean squared error instead of the correlation coefficient (data not shown). Table 2: Comparison of support vector regression performance for the MTreeMix Fisher kernel (F) versus a naive amino acid indicator (I) representation. The drugs (first column) are grouped into the three classes of nucleoside/nucleotide reverse transcriptase inhibitors (rows 1–7), non-nucleoside reverse transcriptase inhibitors (rows 8–10), and protease inhibitors (rows 11–17). MTreeMix models were estimated based on the mutations listed in the second column.
The third column indicates the number N of available genotype-phenotype pairs, and the number K of trees in the mixture model is shown in column 4. Columns 5 and 6 indicate the squared correlation coefficients, averaged across 100 replicates of 10-fold cross-validation. P-values (last column) are obtained from Wilcoxon rank sum tests, correcting for multiple testing using the Benjamini-Hochberg method.

DRUG | MUTATIONS | N | K | r²(F) | r²(I) | log10 P
ZDV | 41L, 67N, 70R, 210W, 215FY, 219EQ | 856 | 5 | 0.61 | 0.57 | < −15.0
3TC | 44D, 118I, 184IV | 817 | 5 | 0.71 | 0.64 | < −15.0
ddI | 65R, 67N, 70R, 74V, 184V, 210W, 215FY, 219EQ | 858 | 4 | 0.28 | 0.24 | < −15.0
ddC | 41L, 65R, 67N, 70R, 74V, 184V | 536 | 2 | 0.25 | 0.26 | −0.3
d4T | 41L, 67N, 70R, 75TMSA, 210W, 215YF, 219QE | 857 | 4 | 0.22 | 0.21 | −2.7
ABC | 41L, 65R, 67N, 70R, 74V, 115F, 184V, 210W, 215YF | 846 | 7 | 0.57 | 0.55 | −9.0
TDF | 41L, 65R, 67N, 70R, 210W, 215YF, 219QE | 527 | 3 | 0.45 | 0.43 | −7.0
NVP | 100I, 103N, 106A, 108I, 181CI, 188CLH, 190A | 857 | 5 | 0.58 | 0.49 | < −15.0
EFV | 100I, 103N, 108I, 181CI, 188L, 190SA | 843 | 4 | 0.60 | 0.56 | < −15.0
DLV | 103N, 181C | 856 | 2 | 0.49 | 0.48 | −1.7
IDV | 10IRV, 20MR, 24I, 32I, 36I, 46IL, 54V, 71VT, 73SA, 77I, 82AFT, 84V, 90M | 851 | 4 | 0.65 | 0.63 | −14.3
SQV | 10IRV, 48V, 54VL, 71VT, 73S, 77I, 82A, 84V, 90M | 854 | 4 | 0.68 | 0.66 | −8.6
RTV | 10FIRV, 20MR, 24I, 32I, 33F, 36I, 46IL, 54VL, 71VT, 77I, 82AFTS, 84V, 90M | 855 | 4 | 0.77 | 0.75 | −12.0
NFV | 10FI, 30N, 36I, 46IL, 71VT, 77I, 82AFTS, 84V, 88DS | 853 | 6 | 0.62 | 0.55 | < −15.0
APV | 10FIRV, 32I, 46IL, 47V, 50V, 54LVM, 73S, 84V, 90M | 665 | 3 | 0.58 | 0.59 | −2.0
LPV | 10FIRV, 20MR, 24I, 32I, 33F, 46IL, 47V, 50V, 53L, 54LV, 63P, 71VT, 73S, 82AFTS, 84V, 90M | 507 | 5 | 0.73 | 0.69 | < −15.0
ATV | 32I, 46I, 50L, 54L, 71V, 73S, 82A, 84V, 88S, 90M | 305 | 2 | 0.54 | 0.52 | −2.4

5 Conclusions The Fisher kernel derived in this paper allows for leveraging stochastic models of HIV evolution in many kernel-based scenarios.
To our knowledge, this is the first study in which a probabilistic model tailored to a specific biological mechanism (namely, the evolution of drug resistance) is exploited in a discriminative context. Using the example of inferring drug resistance from viral genotype, we showed that significant improvements in predictive performance can be obtained for almost all currently available antiretroviral drugs. These results provide strong incentive for further exploitation of evolutionary models in clinical decision making. Moreover, they also underline the potential benefits from integrating several sources of data (genotype-phenotype, evolutionary). The high correlation that can be observed with a relatively small number of mutations was unexpected and suggests that reliable resistance predictions can also be obtained on the basis of LiPA assays which are much cheaper than standard sequencing technologies. While our choice of mutations was based on a selection from the literature, an interesting problem would be to design dedicated LiPA assays containing a set of mutations that allow for optimal prediction performance in this generative-discriminative setting. Finally, mixtures of mutagenetic trees have already been applied in other contexts, for example to model progressive chromosomal alterations in cancer [16], and we expect kernel methods to play an important role in this context, too. Acknowledgments N.B. was supported by the Deutsche Forschungsgemeinschaft (BE 3217/1-1), and T.S. by the German Academic Exchange Service (D/06/41866). T.S. would like to thank Thomas Lengauer for his support and advice. References [1] B. Schölkopf, K. Tsuda, and J.-P. Vert, editors. Kernel methods in computational biology. MIT Press, Cambridge, MA, 2004. [2] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In M. J. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 487–493.
MIT Press, Cambridge, MA, 1999. [3] K. Tsuda, M. Kawanabe, G. Rätsch, S. Sonnenburg, and K. Müller. A new discriminative kernel from probabilistic models. In T.G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 977–984. MIT Press, Cambridge, MA, 2002. [4] S. Amari and H. Nagaoka. Methods of Information Geometry. American Mathematical Society, Oxford University Press, 2000. [5] K. Tsuda, T. Kin, and K. Asai. Marginalized kernels for biological sequences. Bioinformatics, 18 Suppl 1:S268–S275, 2002. [6] J. Lafferty and G. Lebanon. Information diffusion kernels. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 375–382. MIT Press, Cambridge, MA, 2003. [7] T. Jebara, R. Kondor, and A. Howard. Probability product kernels. Journal of Machine Learning Research, 5:819–844, July 2004. [8] N. Beerenwinkel, T. Sing, T. Lengauer, J. Rahnenführer, K. Roomp, I. Savenkov, R. Fischer, D. Hoffmann, J. Selbig, K. Korn, H. Walter, T. Berg, P. Braun, G. Fätkenheuer, M. Oette, J. Rockstroh, B. Kupfer, R. Kaiser, and M. Däumer. Computational methods for the design of effective therapies against drug resistant HIV strains. Bioinformatics, 21(21):3943–3950, Sep 2005. [9] F. Clavel and A. J. Hance. HIV drug resistance. N Engl J Med, 350(10):1023–1035, Mar 2004. [10] R. W. Shafer and J. M. Schapiro. Drug resistance and antiretroviral drug development. J Antimicrob Chemother, 55(6):817–820, Jun 2005. [11] N. Beerenwinkel, J. Rahnenführer, M. Däumer, D. Hoffmann, R. Kaiser, J. Selbig, and T. Lengauer. Learning multiple evolutionary pathways from cross-sectional data. J Comput Biol, 12(6):584–598, 2005. [12] J. C. Schmit, L. Ruiz, L. Stuyver, K. Van Laethem, I. Vanderlinden, T. Puig, R. Rossau, J. Desmyter, E. De Clercq, B. Clotet, and A. M. Vandamme.
Comparison of the LiPA HIV-1 RT test, selective PCR and direct solid phase sequencing for the detection of HIV-1 drug resistance mutations. J Virol Methods, 73(1):77–82, Jul 1998. [13] M. Rabinowitz, L. Myers, M. Banjevic, A. Chan, J. Sweetkind-Singer, J. Haberer, K. McCann, and R. Wolkowicz. Accurate prediction of HIV-1 drug response from the reverse transcriptase and protease amino acid sequences using sparse models created by convex optimization. Bioinformatics, 22(5):541–549, Mar 2006. [14] H. Walter, B. Schmidt, K. Korn, A. M. Vandamme, T. Harrer, and K. Überla. Rapid, phenotypic HIV-1 drug sensitivity assay for protease and reverse transcriptase inhibitors. J. Clin. Virol., 13:71–80, 1999. [15] V. A. Johnson, F. Brun-Vezinet, B. Clotet, B. Conway, D. R. Kuritzkes, D. Pillay, J. M. Schapiro, A. Telenti, and D. D. Richman. Update of the drug resistance mutations in HIV-1: Fall 2005. Topics in HIV Medicine, 13(4):125–131, 2005. [16] J. Rahnenführer, N. Beerenwinkel, W. A. Schulz, C. Hartmann, A. von Deimling, B. Wullich, and T. Lengauer. Estimating cancer survival and clinical outcome based on genetic tumor progression scores. Bioinformatics, 21(10):2438–2446, May 2005.
Modeling Human Motion Using Binary Latent Variables Graham W. Taylor, Geoffrey E. Hinton and Sam Roweis Dept. of Computer Science University of Toronto Toronto, M5S 2Z9 Canada {gwtaylor,hinton,roweis}@cs.toronto.edu Abstract We propose a non-linear generative model for human motion data that uses an undirected model with binary latent variables and real-valued "visible" variables that represent joint angles. The latent and visible variables at each time step receive directed connections from the visible variables at the last few time-steps. Such an architecture makes on-line inference efficient and allows us to use a simple approximate learning procedure. After training, the model finds a single set of parameters that simultaneously capture several different kinds of motion. We demonstrate the power of our approach by synthesizing various motion sequences and by performing on-line filling in of data lost during motion capture. Website: http://www.cs.toronto.edu/∼gwtaylor/publications/nips2006mhmublv/ 1 Introduction Recent advances in motion capture technology have fueled interest in the analysis and synthesis of complex human motion for animation and tracking. Models based on the physics of masses and springs have produced some impressive results by using sophisticated "energy-based" learning methods [1] to estimate physical parameters from motion capture data [2]. But if we want to generate realistic human motion, we need to model all the complexities of the real dynamics and this is so difficult to do analytically that learning is likely to be essential. The simplest way to generate new motion sequences based on data is to concatenate parts of training sequences [3]. Another method is to transform motion in the training data to new sequences by learning to adjust its style or other characteristics [4, 5, 6].
In this paper we focus on model driven analysis and synthesis but avoid the complexities involved in imposing physics-based constraints, relying instead on a "pure" learning approach in which all the knowledge in the model comes from the data. Data from modern motion capture systems is high-dimensional and contains complex non-linear relationships between the components of the observation vector, which usually represent joint angles with respect to some skeletal structure. Hidden Markov models cannot model such data efficiently because they rely on a single, discrete K-state multinomial to represent the history of the time series. To model N bits of information about the past history they require $2^N$ hidden states. To avoid this exponential explosion, we need a model with distributed (i.e. componential) hidden state that has a representational capacity which is linear in the number of components. Linear dynamical systems satisfy this requirement, but they cannot model the complex non-linear dynamics created by the non-linear properties of muscles, contact forces of the foot on the ground and myriad other factors.
For any setting of the hidden units, the distribution of each visible unit is defined by a parabolic log likelihood function that makes extreme values very improbable:¹

−log p(v, h) = Σ_i (v_i − c_i)² / (2σ_i²) − Σ_j b_j h_j − Σ_{i,j} (v_i / σ_i) h_j w_{ij} + const, (1)

where σ_i is the standard deviation of the Gaussian noise for visible unit i. (In practice, we rescale our data to have zero mean and unit variance. We have found that fixing σ_i at 1 makes the learning work well even though we would expect a good model to predict the data with much higher precision). The main advantage of using this undirected, “energy-based” model rather than a directed “belief net” is that inference is very easy because the hidden units become conditionally independent when the states of the visible units are observed. The conditional distributions (assuming σ_i = 1) are:

p(h_j = 1 | v) = f(b_j + Σ_i v_i w_{ij}), (2)
p(v_i | h) = N(c_i + Σ_j h_j w_{ij}, 1), (3)

where f(·) is the logistic function, N(µ, V) is a Gaussian, b_j and c_i are the “biases” of hidden unit j and visible unit i respectively, and w_{ij} is the symmetric weight between them. Maximum likelihood learning is slow in an RBM but learning still works well if we approximately follow the gradient of another function called the contrastive divergence [9]. The learning rule is:

Δw_{ij} ∝ ⟨v_i h_j⟩_data − ⟨v_i h_j⟩_recon, (4)

where the first expectation (over hidden unit activations) is with respect to the data distribution and the second expectation is with respect to the distribution of “reconstructed” data. The reconstructions are generated by starting a Markov chain at the data distribution, updating all the hidden units in parallel by sampling (Eq. 2) and then updating all the visible units in parallel by sampling (Eq. 3). For both expectations, the states of the hidden units are conditional on the states of the visible units, not vice versa. The learning rule for the hidden biases is just a simplified version of Eq. 4: Δb_j ∝ ⟨h_j⟩_data − ⟨h_j⟩_recon.
(5)

2.1 The conditional RBM model

The RBM we have described above models static frames of data, but does not incorporate any temporal information. We can model temporal dependencies by treating the visible variables in the previous time slice as additional fixed inputs [10]. Fortunately, this does not complicate inference. We add two types of directed connections (Figure 2): autoregressive connections from the past n configurations (time steps) of the visible units to the current visible configuration, and connections from the past m visibles to the current hidden configuration. The addition of these directed connections turns the RBM into a conditional RBM (CRBM). In our experiments, we have chosen n = m = 3. These are, however, tunable parameters and need not be the same for both types of directed connections. To simplify discussion, we will assume n = m and refer to n as the order of the model.

¹For any setting of the parameters, the gradient of the quadratic log likelihood will always overwhelm the gradient due to the weighted input from the binary hidden units provided the value v_i of a visible unit is far enough from its bias, c_i.

Figure 1: In a trained model, probabilities of each feature being “on” conditional on the data at the visible units. Shown is a 100-hidden unit model, and a sequence which contains (in order) walking, sitting/standing (three times), walking, crouching, and running. Rows represent features, columns represent sequential frames.

Figure 2: Architecture of our model (in our experiments, we use three previous time steps).

Inference in the CRBM is no more difficult than in the standard RBM. Given the data at time t, t−1, . . . , t−n, the hidden units at time t are conditionally independent. We can still use contrastive divergence for training the CRBM.
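As a concrete illustration of the static building block, here is a minimal NumPy sketch of the Gaussian-visible RBM and one contrastive divergence step (Eqs. 2–4); the class and variable names are our own, and σ_i is fixed at 1 as the paper does after normalizing the data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianRBM:
    """RBM with unit-variance Gaussian visible units and binary hiddens."""
    def __init__(self, n_vis, n_hid, lr=0.01):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))  # symmetric weights w_ij
        self.b = np.zeros(n_hid)                             # hidden biases b_j
        self.c = np.zeros(n_vis)                             # visible biases c_i
        self.lr = lr

    def hid_probs(self, v):
        # Eq. 2: p(h_j = 1 | v) = f(b_j + sum_i v_i w_ij)
        return sigmoid(self.b + v @ self.W)

    def vis_means(self, h):
        # Eq. 3: the mean of p(v_i | h) is c_i + sum_j h_j w_ij
        return self.c + h @ self.W.T

    def cd1(self, v_data):
        """One contrastive divergence step (Eq. 4) on a mini-batch."""
        h_p = self.hid_probs(v_data)
        h_s = (rng.random(h_p.shape) < h_p).astype(float)  # sampled hiddens
        v_recon = self.vis_means(h_s)                      # one-step reconstruction
        h_recon = self.hid_probs(v_recon)
        n = len(v_data)
        self.W += self.lr * (v_data.T @ h_p - v_recon.T @ h_recon) / n
        self.b += self.lr * (h_p - h_recon).mean(axis=0)
        self.c += self.lr * (v_data - v_recon).mean(axis=0)
        return v_recon
```

Note that, following the approximations discussed in Sec. 2.2, the reconstruction uses sampled binary hiddens but a mean-field visible update.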
The only change is that when we update the visible and hidden units, we implement the directed connections by treating data from previous time steps as a dynamically changing bias. The contrastive divergence learning rule for hidden biases is given in Eq. 5, and the equivalent learning rule for the temporal connections that determine the dynamically changing hidden unit biases is:

Δd_{ij}^{(t−q)} ∝ v_i^{t−q} (⟨h_j^t⟩_data − ⟨h_j^t⟩_recon), (6)

where d_{ij}^{(t−q)} is the log-linear parameter (weight) connecting visible unit i at time t−q to hidden unit j, for q = 1..n. Similarly, the learning rule for the autoregressive connections that determine the dynamically changing visible unit biases is:

Δa_{ki}^{(t−q)} ∝ v_k^{t−q} (v_i^t − ⟨v_i^t⟩_recon), (7)

where a_{ki}^{(t−q)} is the weight from visible unit k at time t−q to visible unit i. The autoregressive weights can model short-term temporal structure very well, leaving the hidden units to model longer-term, higher level structure. During training, the states of the hidden units are determined by both the input they receive from the observed data and the input they receive from the previous time slices. The learning rule for W remains the same as a standard RBM, but has a different effect because the states of the hidden units are now influenced by the previous visible units. We do not attempt to model the first n frames of each sequence. While learning a model of motion, we do not need to proceed sequentially through the training data sequences. The updates are only conditional on the past n time steps, not the entire sequence. As long as we isolate “chunks” of frames (the size depending on the order of the directed connections), these can be mixed and formed into mini-batches. To speed up the learning, we assemble these chunks of frames into “balanced” mini-batches of size 100. We randomly assign chunks to different mini-batches so that the chunks in each mini-batch are as uncorrelated as possible.
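The effect of the directed connections can be made concrete with a small sketch (variable names are ours): the past n visible frames are folded into dynamically changing biases, after which the RBM update itself proceeds unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid, order = 6, 10, 3      # order n = m = 3, as in the paper

W = 0.01 * rng.standard_normal((n_vis, n_hid))                          # undirected weights
A = [0.01 * rng.standard_normal((n_vis, n_vis)) for _ in range(order)]  # a^(t-q): vis -> vis
B = [0.01 * rng.standard_normal((n_vis, n_hid)) for _ in range(order)]  # d^(t-q): vis -> hid
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)

def dynamic_biases(history):
    """history[q] holds the visible frame at time t-1-q (most recent first).
    Past frames enter only through these effective biases; Eqs. 6 and 7
    are the learning rules for B and A respectively."""
    bv = b_vis + sum(history[q] @ A[q] for q in range(order))
    bh = b_hid + sum(history[q] @ B[q] for q in range(order))
    return bv, bh
```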
To save computer memory, time frames are not actually replicated in mini-batches; we simply use indexing to simulate the “chunking” of frames.

2.2 Approximations

Our training procedure relies on several approximations, most of which are chosen based on experience training similar networks. While training the CRBM, we replaced v_i in Eq. 4 and Eq. 7 by its expected value and we also used the expected value of v_i when computing the probability of activation of the hidden units. However, to compute the one-step reconstructions of the data, we used stochastically chosen binary values of the hidden units. This prevents the hidden activities from transmitting an unbounded amount of information from the data to the reconstruction [11]. While updating the directed visible-to-hidden connections (Eq. 6) and the symmetric undirected connections (Eq. 4) we used the stochastically chosen binary values of the hidden units in the first term (under the data), but replaced h_j by its expected value in the second term (under the reconstruction). We took this approach because the reconstruction of the data depends on the binary choices made when selecting hidden state. Thus when we infer the hiddens from the reconstructed data, the probabilities are highly correlated with the binary hidden states inferred from the data. On the other hand, we stop after one reconstruction, so the binary choice of hiddens from the reconstruction doesn’t correlate with any other terms, and there is no point including this extra noise. Lastly, we note that the fine-tuning procedure as a whole is making a crude approximation in addition to the one made by contrastive divergence. The inference step, conditional on past visible states, is approximate because it ignores the future (it does not do smoothing). Because of the directed connections, exact inference within the model should include both a forward and backward pass through each sequence (we currently perform only a forward pass).
We have avoided a backward pass because missing values create problems in undirected models, so it is hard to perform learning efficiently using the full posterior. Compared with an HMM, the lack of smoothing is a loss, but this is more than offset by the exponential gain in representational power.

3 Data gathering and preprocessing

We used data from the CMU Graphics Lab Motion Capture Database as well as from [12] (see acknowledgments). The processed data consists of 3D joint angles derived from 30 (CMU) or 17 (MIT) markers plus a root (coccyx, near the base of the back) orientation and displacement. For both datasets, the original data was captured at 120Hz; we have downsampled it to 30Hz. Six of the joint angle dimensions in the original CMU data had constant values, so they were eliminated. Each of the remaining joint angles had between one and three degrees of freedom. All of the joint angles and the root orientation were converted from Euler angles to the “exponential map” parameterization [13]. This was done to avoid “gimbal lock” and discontinuities. (The MIT data was already expressed in exponential map form and did not need to be converted.) We treated the root specially because it encodes a transformation with respect to a fixed global coordinate system. In order to respect physics, we wanted our final representation to be invariant to ground-plane translation and to rotation about the gravitational vertical. We represented each ground-plane translation by an incremental “forwards” vector and an incremental “sideways” vector relative to the direction the person was currently facing, but we represented height non-incrementally by the distance above the ground plane. We represented orientation around the gravitational vertical by the incremental change, but we represented the other two rotational degrees of freedom by the absolute pitch and roll relative to the direction the person was currently facing.
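As an illustration of the root encoding just described, restricted to the ground plane (function and variable names are ours), a global trajectory and facing direction can be converted to incremental body-relative quantities like this:

```python
import numpy as np

def root_to_relative(xy, yaw):
    """Convert a ground-plane trajectory xy (T x 2) and facing angle yaw (T,)
    into incremental "forwards"/"sideways" translations in the body frame and
    the incremental rotation about the gravitational vertical. Height, pitch
    and roll (not shown here) would be kept absolute, as in the paper."""
    d = np.diff(xy, axis=0)                  # per-frame step in the global frame
    c, s = np.cos(yaw[:-1]), np.sin(yaw[:-1])
    forwards = c * d[:, 0] + s * d[:, 1]     # projection onto the facing direction
    sideways = -s * d[:, 0] + c * d[:, 1]    # projection onto the lateral direction
    dyaw = np.diff(yaw)                      # incremental change in heading
    return forwards, sideways, dyaw
```

For example, a skeleton walking straight along the direction it faces should produce pure "forwards" increments and zero "sideways" increments, whatever the global heading.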
The final dimensionality of our data vectors was 62 (for the CMU data) and 49 (for the MIT data). Note that we eliminated exponential map dimensions that were constant zero (corresponding to joints with a single degree of freedom). As mentioned in Sec. 2, each component of the data was normalized to have zero mean and unit variance. One advantage of our model is the fact that the data does not need to be heavily preprocessed or dimensionality reduced. Brand and Hertzmann [4] apply PCA to reduce noise and dimensionality. The autoregressive connections in our model can be thought of as doing a kind of “whitening” of the data. Urtasun et al. [6] manually segment data into cycles and sample at regular time intervals using quaternion spherical interpolation. Dimensionality reduction becomes problematic when a wider range of motions is to be modeled.

4 Experiments

After training our model using the updates described above, we can demonstrate in several ways what it has learned about the structure of human motion. Perhaps the most direct demonstration, which exploits the fact that it is a probability density model of sequences, is to use the model to generate de novo a number of synthetic motion sequences. Video files of these sequences are available on the website mentioned in the abstract; these motions have not been retouched by hand in any motion editing software. Note that we also do not have to keep a reservoir of training data sequences around for generation - we only need the weights of the model and a few valid frames for initialization. Causal generation from a learned model can be done on-line with no smoothing, just like the learning procedure. The visible units at the last few time steps determine the effective biases of the visible and hidden units at the current time step. We always keep the previous visible states fixed and perform alternating Gibbs sampling to obtain a joint sample from the conditional RBM.
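A minimal sketch of this causal generation loop (the `model` container and its field names are our own; we use a mean-field visible update for simplicity rather than sampling from the Gaussian):

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(model, init_frames, n_frames, n_gibbs=30):
    """Causal generation: the past `model.order` frames stay clamped and set
    the effective biases; we then alternate hidden/visible Gibbs updates on
    the current frame only."""
    frames = list(init_frames)
    for _ in range(n_frames):
        hist = frames[-model.order:][::-1]             # most recent frame first
        bv = model.b_vis + sum(h @ model.A[q] for q, h in enumerate(hist))
        bh = model.b_hid + sum(h @ model.B[q] for q, h in enumerate(hist))
        v = frames[-1].copy()                          # start the chain at the last frame
        for _ in range(n_gibbs):
            hp = sigmoid(bh + v @ model.W)
            hs = (rng.random(hp.shape) < hp).astype(float)  # sample hiddens
            v = bv + hs @ model.W.T                    # mean-field visible update
        frames.append(v)
    return np.array(frames[len(init_frames):])
```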
This picks new hidden and visible states that are compatible with each other and with the recent (visible) history. Generation requires initialization with n time steps of the visible units, which implicitly determine the “mode” of motion in which the synthetic sequence will start. We used randomly drawn consecutive frames from the training data as an initial configuration.

4.1 Generation of walking and running sequences from a single model

In our first demonstration, we train a single model on data containing both walking and running motions; we then use the learned model to generate both types of motion, depending on how it is initialized. We trained² on 23 sequences of walking and 10 sequences of jogging (from subject 35 in the CMU database). After downsampling to 30Hz, the training data consisted of 2813 frames.

²A 200 hidden-unit CRBM was trained for 4000 passes through the training data, using a third-order model (for directed connections). Weight updates were made after each mini-batch of size 100. The order of the sequences was randomly permuted such that walking and running sequences were distributed throughout the training data.

Figure 3: After training, the same model can generate walking (top) and running (bottom) motion (see videos on the website). Each skeleton is 4 frames apart.

Figure 3 shows a walking sequence and a running sequence generated by the same model, using alternating Gibbs sampling (with the probability of hidden units being “on” conditional on the current and previous three visible vectors). Since the training data does not contain any transitions between walking and running (and vice-versa), the model will continue to generate walking or running motions depending on where it is initialized.

4.2 Learning transitions between various styles

In our second demonstration, we show that our model is capable of learning not only several homogeneous motion styles but also the transitions between them, when the training data itself contains
examples of such transitions. We trained on 9 sequences (from the MIT database, file Jog1 M) containing long examples of running and jogging, as well as a few transitions between the two styles. After downsampling to 30Hz, this provided us with 2515 frames. Training was done as before, except that after the model was trained, an identical 200 hidden-unit model was trained on top of the first model (see Sec. 5). The resulting two-level model was used to generate data. A video available on the website demonstrates our model’s ability to stochastically transition between various motion styles during a single generated sequence.

4.3 Introducing transitions using noise

In our third demonstration, we show how transitions between motion styles can be generated even when such transitions are absent in the data. We use the same model and data as described in Sec. 4.1, where we have learned on separate sequences of walking and running. To generate, we use the same sampling procedure as before, except that each time we stochastically choose the hidden states (given the current and previous three visible vectors), we add a small amount of Gaussian noise to the hidden state biases. This encourages the model to explore more of the hidden state space without deviating too far from the current motion. Applying this “noisy” sampling approach, we see that the generated motion occasionally transitions between learned styles. These transitions appear natural (see the video on the website).

4.4 Filling in missing data

Due to the nature of the motion capture process, which can be adversely affected by lighting and environmental effects, as well as noise during recording, motion capture data often contains missing or unusable data. Some markers may disappear (“dropout”) for long periods of time due to sensor failure or occlusion. The majority of motion editing software packages contain interpolation methods to fill in missing data, but this leaves the data unnaturally smooth.
These methods also rely on the starting and end points of the missing data, so if a marker goes missing until the end of a sequence, naïve interpolation will not work. Such methods often only use the past and future data from the single missing marker to fill in that marker’s missing values, but since joint angles are highly correlated, substantial information about the placement of one marker could be gained from the others. Our trained model has the ability to easily fill in such missing data, regardless of where the dropouts occur in a sequence. Due to its approximate inference method, which does not rely on a backward pass through the sequence, it also has the ability to fill in such missing data on-line. Filling in missing data with our model is very similar to generation. We simply clamp the known data to the visible units, initialize the missing data to something reasonable (for example, the value at the previous frame), and alternate between stochastically updating the hidden and visible units, with the known visible states held fixed. To demonstrate filling in, we trained a model exactly as described in Sec. 4.1 except that one walking and one running sequence were left out of the training data to be used as test data. For each of these walking and running test sequences, we erased two different sets of joint angles, starting halfway through the test sequence. These sets were the joints in (1) the left leg, and (2) the entire upper body. As seen in the video files on the website, the quality of the filled-in data is excellent and is hardly distinguishable from the original ground truth of the test sequence. Figure 4 demonstrates the model’s ability to predict the three angles of rotation of the left hip.
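The clamped-Gibbs fill-in for a single frame, together with a nearest-neighbor interpolation baseline of the kind used for comparison, can be sketched as follows (function names and the mean-field visible update are our own simplifications):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fill_in(W, bv, bh, v_init, known_mask, n_gibbs=30):
    """Model-based fill-in for one frame: clamp the known dimensions and
    alternate hidden/visible updates on the rest. `bv` and `bh` are the
    dynamically changing biases already computed from the past n frames."""
    v = v_init.copy()
    for _ in range(n_gibbs):
        hp = sigmoid(bh + v @ W)
        hs = (rng.random(hp.shape) < hp).astype(float)
        v_new = bv + hs @ W.T                    # mean-field visible update
        v = np.where(known_mask, v_init, v_new)  # known values stay clamped
    return v

def nn_fill(train, query, known_mask):
    """Nearest-neighbor interpolation baseline: match the known dimensions
    against every training frame (Euclidean distance in the normalized
    angle space) and copy the unknown dimensions across from the match."""
    d = np.linalg.norm(train[:, known_mask] - query[known_mask], axis=1)
    best = train[np.argmin(d)]
    out = query.copy()
    out[~known_mask] = best[~known_mask]
    return out
```

Unlike the single-marker interpolation schemes criticized above, both routines exploit the other joints at the current frame; only `fill_in` also uses the learned temporal structure.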
For the walking sequence (of length 124 frames), we compared our model’s performance to nearest neighbor interpolation, a simple method where for each frame, the values on known dimensions are compared to each example in the training set to find the closest match (measured by Euclidean distance in the normalized angle space). The unknown dimensions are then filled in using the matched example. As reconstruction from our model is stochastic, we repeated the experiment 100 times and report the mean. For the missing leg, mean squared reconstruction error per joint using our model was 8.78, measured in normalized joint angle space, and summed over the 62 frames of interest. Using nearest neighbor interpolation, the error was greater: 11.68. For the missing upper body, mean squared reconstruction error per joint using our model was 20.52. Using nearest neighbor interpolation, again the error was greater: 22.20.

Figure 4: The model successfully fills in missing data using only the previous values of the joint angles (through the temporal connections) and the current angles of other joints (through the RBM connections). Shown are two of the three angles of rotation for the left hip joint (the plot of the third is similar to the first). The original data is shown on a solid line, the model’s prediction is shown on a dashed line, and the results of nearest neighbor interpolation are shown on a dotted line (see a video on the website).

5 Higher level models

Figure 5: Higher-level models.

Once we have trained the model, we can add layers like in a Deep Belief Network [14]. The previous layer CRBM is kept, and the sequence of hidden state vectors, while driven by the data, is treated as a new kind of “fully observed” data.
The next level CRBM has the same architecture as the first (though we can alter the number of its units) and is trained in the exact same way. Upper levels of the network can then model higher-order structure. This greedy procedure is justified using a variational bound [14]. A two-level model is shown in Figure 5. We can also consider two special cases of the higher-level model. If we keep only the visible layer, and its n-th order directed connections, we have a standard AR(n) model with Gaussian noise. If we take the two-hidden-layer model and delete the first-level autoregressive connections, as well as both sets of visible-to-hidden directed connections, we have a simplified model that can be trained in 2 stages: first learning a static (iid) model of pairs or triples of time frames, then using the inferred hidden states to train a “fully-observed” sigmoid belief net that captures the temporal structure of the hidden states.

6 Discussion

We have introduced a generative model for human motion based on the idea that local constraints and global dynamics can be learned efficiently by a conditional Restricted Boltzmann Machine. Once trained, our models are able to efficiently capture complex non-linearities in the data without sophisticated pre-processing or dimensionality reduction. The model has been designed with human motion in mind, but should lend itself well to other high-dimensional time series. In relatively low-dimensional or unstructured data (for example if we were to model a single isolated joint) a single-layer model might be expected to have difficulty since such cyclic time series contain several subsequences which are locally very similar but occur in different phases of the overall cycle.
It would be possible to preserve the global phase information by using a much higher order model, but for higher dimensional data such as full body motion capture this is unnecessary because the whole configuration of joint angles and angular velocities never has any phase ambiguity. So the single-layer version of our model actually performs much better on higher-dimensional data. Models with more hidden layers are able to implicitly model longer-term temporal information, and thus will mitigate this effect. We have demonstrated that our model can effectively learn different styles of motion, as well as the transitions between these styles. This differentiates our approach from PCA-based approaches which only accurately model cyclic motion, and additionally must build separate models for each type of motion. The ability of the model to transition smoothly, however, is dependent on having sufficient examples of such transitions in the training data. We plan to train on larger datasets encompassing such transitions between various styles of motion. If we augment the data with some static skeletal and identity parameters (in essence mapping a person’s unique identity to a set of features), we should be able to use the same generative model for many different people, and generalize individual characteristics from one type of motion to another. Finally, our model is not limited to a single source of data. In the future, we hope to integrate low-level vision data captured at the same time as motion; we could then learn the correlations between the vision stream and the joint angles.

Acknowledgments

The first data set used in this project was obtained from mocap.cs.cmu.edu. This database was created with funding from NSF EIA-0196217. The second data set used in this project was obtained from http://people.csail.mit.edu/ehsu/work/sig05stf/.
For Matlab playback of motion and generation of videos, we have used Neil Lawrence’s motion capture toolbox (http://www.dcs.shef.ac.uk/∼neil/mocap/).

References

[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, pp. 2278–2324, November 1998.
[2] C. K. Liu, A. Hertzmann, and Z. Popović, “Learning physics-based motion style with nonlinear inverse optimization,” ACM Trans. Graph., vol. 24, no. 3, pp. 1071–1081, 2005.
[3] O. Arikan, D. A. Forsyth, and J. F. O’Brien, “Motion synthesis from annotations,” in Proc. SIGGRAPH, 2002.
[4] M. Brand and A. Hertzmann, “Style machines,” in Proc. SIGGRAPH, pp. 183–192, 2000.
[5] Y. Li, T. Wang, and H.-Y. Shum, “Motion texture: a two-level statistical model for character motion synthesis,” in Proc. SIGGRAPH, pp. 465–472, 2002.
[6] R. Urtasun, P. Glardon, R. Boulic, D. Thalmann, and P. Fua, “Style-based motion synthesis,” Computer Graphics Forum, vol. 23, no. 4, pp. 1–14, 2004.
[7] M. Welling, M. Rosen-Zvi, and G. E. Hinton, “Exponential family harmoniums with an application to information retrieval,” in Proc. NIPS 17, 2005.
[8] Y. Freund and D. Haussler, “Unsupervised learning of distributions of binary vectors using 2-layer networks,” in Proc. NIPS 4, 1992.
[9] G. E. Hinton, “Training products of experts by minimizing contrastive divergence,” Neural Comput., vol. 14, pp. 1771–1800, Aug 2002.
[10] I. Sutskever and G. E. Hinton, “Learning multilevel distributed representations for high-dimensional sequences,” Tech. Rep. UTML TR 2006-003, University of Toronto, 2006.
[11] Y. W. Teh and G. E. Hinton, “Rate-coded restricted Boltzmann machines for face recognition,” in Proc. NIPS 13, 2001.
[12] E. Hsu, K. Pulli, and J. Popović, “Style translation for human motion,” ACM Trans. Graph., vol. 24, no. 3, pp. 1082–1089, 2005.
[13] F. S. Grassia, “Practical parameterization of rotations using the exponential map,” J. Graph. Tools, vol. 3, no. 3, pp. 29–48, 1998.
[14] G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comp., vol. 18, no. 7, pp. 1527–1554, 2006.
Image Retrieval and Classification Using Local Distance Functions Andrea Frome Department of Computer Science UC Berkeley Berkeley, CA 94720 andrea.frome@gmail.com Yoram Singer Google, Inc. Mountain View, CA 94043 singer@google.com Jitendra Malik Department of Computer Science UC Berkeley malik@cs.berkeley.edu

Abstract

In this paper we introduce and experiment with a framework for learning local perceptual distance functions for visual recognition. We learn a distance function for each training image as a combination of elementary distances between patch-based visual features. We apply these combined local distance functions to the tasks of image retrieval and classification of novel images. On the Caltech 101 object recognition benchmark, we achieve 60.3% mean recognition across classes using 15 training images per class, which is better than the best published performance by Zhang et al.

1 Introduction

Visual categorization is a difficult task in large part due to the large variation seen between images belonging to the same class. Within one semantic class, there can be large differences in shape, color, and texture, and objects can be scaled or translated within an image. For some rigid-body objects, appearance changes greatly with viewing angle, and for articulated objects, such as animals, the number of possible configurations can grow exponentially with the degrees of freedom. Furthermore, there is a large number of categories in the world between which humans are able to distinguish. One oft-cited, conservative estimate puts the total at about 30,000 categories [1], and this does not consider the identification problem (e.g. telling faces apart). One of the more successful tools used in visual classification is a class of patch-based shape and texture features that are invariant or robust to changes in scale, translation, and affine deformations.
These include the Gaussian-derivative jet descriptors of [2], SIFT descriptors [3], shape contexts [4], and geometric blur [5]. The basic outline of most discriminative approaches which use these types of features is as follows: (1) given a training image, select a subset of locations or “interest points”; (2) for each location, select a patch surrounding it, often elliptical or rectangular in shape; (3) compute a fixed-length feature vector from each patch, usually a summary of edge responses or image gradients. This gives a set of fixed-length feature vectors for each training image. (4) Define a function which, given the two sets from two images, returns a value for the distance (or similarity) between the images. Then, (5) use distances between pairs of images as input to a learning algorithm, for example an SVM or nearest neighbor classifier. When given a test image, patches and features are extracted, distances between the test image and training images are computed, and a classification is made.

Figure 1: These exemplars are all drawn from the cougar face category of the Caltech 101 dataset, but we can see a great deal of variation. The image on the left is a clear, color image of a cougar face. As with most cougar face exemplars, the locations and appearances of the eyes and ears are a strong signal for class membership, as well as the color pattern of the face. Now consider the grayscale center image, where the appearance of the eyes has changed, the ears are no longer visible, and hue is useless. For this image, the markings around the mouth and the texture of the fur become a better signal. The image on the right shows the ears, eyes, and mouth, but due to articulation, the appearances of all have changed again, perhaps representing a common visual subcategory.
If we were to limit ourselves to learning one model of relative importance across these features for all images, or even for each category, it could reduce our ability to determine similarity to these exemplars. In most approaches, machine learning only comes into play in step (5), after the distances or similarities between training images are computed. In this work, we learn the function in step (4) from the training data. This is similar in spirit to the recent body of metric learning work in the machine learning community [6][7][8][9][10]. While these methods have been successfully applied to recognizing digits, there are a couple of drawbacks in applying these methods to the general image classification problem. First, they would require representing each image as a fixed-length feature vector. We prefer to use sets of patch-based features, considering both the strong empirical evidence in their favor and the difficulties in capturing invariances in fixed-length feature vectors. Second, these metric-learning algorithms learn one deformation for the entire space of exemplars. To gain an intuition as to why this is a problem, consider Figure 1. The goal of this paper is to demonstrate that in the setting of visual categorization, it can be useful to determine the relative importance of visual features on a finer scale. In this work, we attack the problem from the other extreme, choosing to learn a distance function for each exemplar, where each function gives a distance value between its training image, or focal image, and any other image. These functions can be learned from either multi-way class labels or relative similarity information in the training data. The distance functions are built on top of elementary distance measures between patch-based features, and our problem is formulated such that we are learning a weighting over the features in each of our training images.
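In skeleton form, the generic five-step pipeline from the discriminative approaches surveyed earlier might look like this (our own sketch; `extract` and `set_distance` are placeholders for the feature and distance choices, and step (5) here is a simple 1-nearest-neighbor classifier):

```python
import numpy as np

def classify(test_img, train_imgs, train_labels, extract, set_distance):
    """Steps (1)-(3): `extract` turns an image into a set of fixed-length
    patch feature vectors. Step (4): `set_distance` compares two such sets.
    Step (5): nearest neighbor over the training images."""
    test_feats = extract(test_img)
    dists = [set_distance(test_feats, extract(im)) for im in train_imgs]
    return train_labels[int(np.argmin(dists))]
```

The present paper's departure is that the step-(4) function is not fixed by hand but learned per focal image, as the next section describes.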
This approach has two nice properties: (1) the output of the learning is a quantitative measure of the relative importance of the parts of an image; and (2) the framework allows us to naturally combine and select features of different types. We learn the weights using a generalization of the constrained optimization formulation proposed by Schultz and Joachims [7] for relative comparison data. Using these local distance functions, we address applications in image browsing, retrieval and classification. In order to perform retrieval and classification, we use an additional learning step that allows us to compare focal images to one another, and an inference procedure based on error-correcting output codes to make a class choice. We show classification results on the Caltech 101 object recognition benchmark, which for some time has been a de facto standard for multi-category classification. Our mean recognition rate on this benchmark is 60.3% using only fifteen exemplar images per category, which is an improvement over the best previously published recognition rate in [11].

2 Distance Functions and Learning Procedure

In this section we will describe the distance functions and the learning procedure in terms of abstract patch-based image features. Any patch-based features could be used with the framework we present, and we will wait to address our choice of features in Section 3. If we have N training images, we will be solving N separate learning problems. The training image for which a given learning problem is being solved will be referred to as its focal image. Each problem is trained with a subset of the remaining training images, which we will refer to as the learning set for that problem. In the rest of this section we will discuss one such learning problem and focal image, but keep in mind that in the full framework there are N of these.
We define the distance function we are learning to be a combination of elementary patch-based distances, each of which is computed between a single patch-based feature in the focal image F and a set of features in a candidate image I, essentially giving us a patch-to-image distance. Any function between a patch feature and a set of features could be used to compute these elementary distances; we will discuss our choice in Section 3. If there are M patches in the focal image, we have M patch-to-image distances to compute between F and I; we denote each distance in that set as $d^F_j(I)$, where $j \in [1, M]$, and refer to the vector of these as $d^F(I)$. The image-to-image distance function D that we learn is a linear combination of these elementary distances, where $w^F$ is a vector of weights with one weight per patch feature:

$$D(F, I) = \sum_{j=1}^{M} w^F_j d^F_j(I) = w^F \cdot d^F(I) \qquad (1)$$

Our goal is to learn this weighting over the features in the focal image. We set up our algorithm to learn from "triplets" of images, each composed of (1) the focal image F, (2) an image labeled "less similar" to F, and (3) an image labeled "more similar" to F. This formulation has been used in other work for its flexibility [7]; it makes it possible to use a relative ranking over images as training input, but also works naturally with multi-class labels by considering exemplars of the same class as F to be "more similar" than those of another class. To set up the learning algorithm, we consider one such triplet: $(F, I_d, I_s)$, where $I_d$ and $I_s$ refer to the dissimilar and similar images, respectively. If we could use our learned distance function for F to rank these two images relative to one another, we ideally would want $I_d$ to have a larger value than $I_s$, i.e. $D(F, I_d) > D(F, I_s)$. Using the formula from the last section, this is equivalent to $w^F \cdot d^F(I_d) > w^F \cdot d^F(I_s)$. Let $x_i = d^F(I_d) - d^F(I_s)$, the difference of the two elementary distance vectors for this triplet, now indexed by i.
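As a concrete sketch of Equation (1), the elementary distances and their weighted combination can be computed as follows. The nearest-feature L2 distance used here anticipates the choice described in Section 3; the array shapes and function names are ours, not the paper's:

```python
import numpy as np

def patch_to_image_distances(focal_patches, candidate_patches):
    """For each of the M patch features in the focal image, the elementary
    distance d^F_j(I) is the L2 distance to the closest feature in the
    candidate image (a patch-to-image distance).
    focal_patches: (M, D); candidate_patches: (K, D)."""
    diffs = focal_patches[:, None, :] - candidate_patches[None, :, :]
    pairwise = np.sqrt((diffs ** 2).sum(axis=2))   # (M, K) pairwise L2
    return pairwise.min(axis=1)                    # d^F(I), shape (M,)

def image_to_image_distance(w_F, d_F_I):
    """D(F, I) = w^F . d^F(I), Equation (1)."""
    return float(np.dot(w_F, d_F_I))
```
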
Now we can write the condition as $w^F \cdot x_i > 0$. For a given focal image, we will construct T of these triplets from our training data (we will discuss how we choose triplets in Section 5.1). Since we will not be able to find one set of weights that meets this condition for all triplets, we use a maximal-margin formulation where we allow slack for triplets that do not meet the condition and try to minimize the total amount of slack allowed. We also increase the desired margin from zero to one, and constrain $w^F$ to have non-negative elements, which we denote using $\succeq$.¹

$$\arg\min_{w^F, \xi} \; \frac{1}{2} \|w^F\|^2 + C \sum_{i=1}^{T} \xi_i \quad \text{s.t.} \quad \forall i \in [1, T]: \; w^F \cdot x_i \geq 1 - \xi_i, \; \xi_i \geq 0, \quad w^F \succeq 0 \qquad (2)$$

We chose the L2 regularization in order to be more robust to outliers and noise. Sparsity is also desirable, and an L1 norm could give more sparse solutions; we do not yet have a direct comparison between the two within this framework. This optimization is a generalization of that proposed by Schultz and Joachims in [7] for distance metric learning. However, our setting is different from theirs in two ways. First, their triplets do not share the same focal image, as they apply their method to learning one metric for all classes and instances. Second, they arrive at their formulation by assuming that (1) each exemplar is represented by a single fixed-length vector, and (2) a squared L2 distance between these vectors is used. This would appear to preclude our use of patch features and more interesting distance measures, but as we show, this is an unnecessary restriction for the optimization. Thus, a contribution of this paper is to show that the algorithm in [7] is more widely applicable than originally presented. We used a custom solver to find $w^F$, which runs on the order of one to two seconds for about 2,000 triplets. While the optimization closely resembles the form for support vector machines, it differs in two important ways: (1) we have a primal positivity constraint on $w^F$, and (2) we do not have a bias term, because we are using the relative relationship between our data vectors. The missing bias term means that, in the dual optimization problem, we do not have a constraint that ties together the dual variables for the margin constraints. Instead, they can be updated separately using an approach similar to the row action method described in [12], followed by a projection of the new $w^F$ to make it positive.

¹This is based on the intuition that negative weights would mean that larger differences between features could make two images more similar, which is arguably an undesirable effect.
Denoting the dual variables for the margin constraints by $\alpha_i$, we first initialize all $\alpha_i$ to zero, then cycle through the triplets, performing these two steps for the $i$th triplet:

$$w^F \leftarrow \max\left(\sum_{i=1}^{T} \alpha_i x_i, \, 0\right), \qquad \alpha_i \leftarrow \min\left(\max\left(\frac{1 - w^F \cdot x_i}{\|x_i\|^2} + \alpha_i, \, 0\right), \, C\right)$$

where the first max is element-wise, and the min and max in the second expression force $0 \leq \alpha_i \leq C$. We stop iterating when all KKT conditions are met, within some precision. 3 Visual Features and Elementary Distances The framework described above allows us to naturally combine different kinds of patch-based features, and we will make use of shape features at two different scales and a rudimentary color feature. Many papers have shown the benefits of using filter-based patch features such as SIFT [3] and geometric blur [13] for shape- or texture-based object matching and recognition [14][15][13]. We chose to use geometric blur descriptors, which were used by Zhang et al. in [11] in combination with their KNN-SVM method to give the best previously published results on the Caltech 101 image recognition benchmark. Like SIFT, geometric blur features summarize oriented edges within a patch of the image, but are designed to be more robust to affine transformation and differences in the periphery of the patch. In previous work using geometric blur descriptors on the Caltech 101 dataset [13][11], the patches used are centered at 400 or fewer edge points sampled from the image, and features are computed on patches of a fixed scale and orientation. We follow this methodology as well, though one could use an interest point operator to determine location, scale, and orientation from low-level information, as is typically done with SIFT features. We use two different scales of geometric blur features, the same used in separate experiments in [11]. The larger has a patch radius of 70 pixels, and the smaller a patch radius of 42 pixels. Both use four oriented channels and 51 sample points, for a total of 204 dimensions.
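A minimal sketch of this row-action dual solver under the update rules above; the epoch cap and tolerance-based stopping test are our simplifications of the paper's KKT check:

```python
import numpy as np

def learn_weights(X, C=0.1, n_epochs=100, tol=1e-5):
    """Row-action dual solver sketch for the optimization in Equation (2).
    X: (T, M) matrix whose rows are x_i = d^F(I_d) - d^F(I_s)."""
    T, M = X.shape
    alpha = np.zeros(T)
    for _ in range(n_epochs):
        max_change = 0.0
        for i in range(T):
            # Recover w from the duals, projected onto the positive orthant
            # (the element-wise max with zero).
            w = np.maximum(X.T @ alpha, 0.0)
            # Clipped coordinate step on the i-th margin constraint.
            step = (1.0 - w @ X[i]) / (X[i] @ X[i])
            new_alpha = np.clip(alpha[i] + step, 0.0, C)
            max_change = max(max_change, abs(new_alpha - alpha[i]))
            alpha[i] = new_alpha
        if max_change < tol:   # crude stand-in for the KKT test
            break
    return np.maximum(X.T @ alpha, 0.0)
```
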
As is done in [13], we default to normalizing the feature vector so that the L2 norm is equal to one. Our color features are histograms of eight-pixel radius patches also centered at edge pixels in the image. Any "pixels" in a patch off the edge of the image are counted in an "undefined" bin, and we convert the HSV coordinates of the remaining points to a Cartesian space where the z direction is value and (x, y) is the Cartesian projection of the hue/saturation dimensions. We divide the (x, y) space into an 11 × 11 grid, and make three divisions in the z direction. These were the only parameters that we tested with the color features, choosing not to tune the features to the Caltech 101 dataset. We normalize the bins by the total number of pixels in the patch. Using these features, we can compute elementary patch-to-image distances. If we are computing the distance from the jth patch in the focal image to a candidate image I, we find the closest feature of the same type in I using the L2 distance, and use that L2 distance as the jth elementary patch-to-image distance. We only compare features of the same type, so large geometric blur features are not compared to small geometric blur features. In our experiments we have not made use of geometric relationships between features, but this could be incorporated in a manner similar to that in [11] or [16]. 4 Image Browsing, Retrieval, and Classification The learned distance functions induce rankings that could naturally be the basis for a browsing application over a closed set of images. Consider a ranking of images with respect to one focal image, as in Figure 2. The user may see this and decide they want more sunflower images. Clicking on the sixth image shown would then take them to the ranking with that sunflower image as the focal image, which contains more sunflower results.
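A rough sketch of the color histogram feature as described; the exact hue/saturation projection and bin boundaries are our assumptions (the paper does not specify them), and off-image pixels are marked here with NaN rows:

```python
import numpy as np

def color_histogram(patch_hsv, n_xy=11, n_z=3):
    """Sketch: map HSV pixels to a Cartesian space where z = value and
    (x, y) is the Cartesian projection of hue/saturation, bin into an
    11x11x3 grid plus one 'undefined' bin for off-image pixels, and
    normalize by the total pixel count.
    patch_hsv: (N, 3) array with h in [0, 1), s and v in [0, 1]."""
    undefined = np.isnan(patch_hsv).any(axis=1)
    h, s, v = patch_hsv[~undefined].T
    x = s * np.cos(2 * np.pi * h)          # hue/saturation -> Cartesian
    y = s * np.sin(2 * np.pi * h)
    ix = np.clip(((x + 1) / 2 * n_xy).astype(int), 0, n_xy - 1)
    iy = np.clip(((y + 1) / 2 * n_xy).astype(int), 0, n_xy - 1)
    iz = np.clip((v * n_z).astype(int), 0, n_z - 1)
    hist = np.zeros(n_xy * n_xy * n_z + 1)
    np.add.at(hist, (ix * n_xy + iy) * n_z + iz, 1)  # scatter-add counts
    hist[-1] = undefined.sum()                       # 'undefined' bin
    return hist / len(patch_hsv)
```
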
In essence, we can allow a user to navigate "image space" by visual similarity.² We also can make use of these distance functions to perform image retrieval: given a new image Q, return a listing of the N training images (or the top K) in order of similarity to Q. If given class labels, we would want images ranked high to be in the same class as Q. While we can use the N distance functions to compute the distance from each of the focal images $F_i$ to Q, these distances are not directly comparable. This is because (1) the weight vectors for each of the focal images are not constrained to share any properties other than non-negativity, (2) the number of elementary distance measures and their potential ranges are different for each focal image, and (3) some learned distance functions are simply better than others at characterizing similarity within their class. To address this in cases where we have multi-class labels, we do a second round of training for each focal image where we fit a logistic classifier to the binary (in-class versus out-of-class) training labels and learned distances. Now, given a query image Q, we can compute a probability that the query is in the same class as each of the focal (training) images, and we can use these probabilities to rank the training images relative to one another. The probabilities are on the same scale, and the logistic also helps to penalize poor focal rankings.³⁴ To classify a query image, we first run the retrieval method above to get the probabilities for each training image. For each class, we sum the probabilities for all training images from that class, and the query is assigned to the class with the largest total. Formally, if $p_j$ is the probability for the jth training image $I_j$, and $\mathcal{C}$ is the set of classes, the chosen class is $\arg\max_{C \in \mathcal{C}} \sum_{j: I_j \in C} p_j$.

²To see a simple demo based on the functions learned for this paper, go to http://www.cs.berkeley.edu/∼afrome/caltech101/nips2006.
This can be shown to be a relaxation of the Hamming decoding scheme for the error-correcting output codes in [17], in which the number of focal images is the same for each class. 5 Caltech 101 Experiments We test our approach on the Caltech 101 dataset [18].⁵ This dataset has artifacts that make a few classes easy, but many are quite difficult, and due to the important challenges it poses for scalable object recognition, it has up to this point been one of the de facto standard benchmarks for multi-class image categorization/object recognition. The dataset contains images from 101 different categories, with the number of images per category ranging from 31 to 800, with a median of about 50 images. We ignore the background class and work in a forced-choice scenario with the 101 object categories, where a query image must be assigned to one of the 101 categories. We use the same testing methodology and mean recognition reporting described in Grauman et al. [15]: we use varying training set sizes (given in number of examples per class), and in each training scenario, test with all other images in the Caltech 101 dataset, except the BACKGROUND Google class. Recognition rate per class is computed, then averaged across classes. This normalizes the overall recognition rate so that the performance for categories with a larger number of test images does not skew the mean recognition rate. 5.1 Training data The images are first resized to speed feature computation. The aspect ratio is maintained, but all images are scaled down to be around 200 × 300. We computed features for each of these images as described in Section 3. We used up to 400 of each type of feature (two sizes of geometric blur and one color), for a maximum total of 1,200 features per image. For images with few edge points, we computed fewer features so that the features were not overly redundant.
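The retrieval-and-vote step can be sketched as follows, assuming the per-focal logistic parameters (slopes a and intercepts b) come from the second training round described above; names are ours:

```python
import numpy as np

def classify(query_dists, logistic_params, focal_labels, classes):
    """Convert each focal image's learned distance to the query into an
    in-class probability via its fitted logistic, then choose
    arg max_C sum_{j: I_j in C} p_j.
    query_dists: (N,) distances D(F_j, Q); logistic_params: (a, b) each (N,);
    focal_labels: (N,) class label of each focal image."""
    a, b = logistic_params
    p = 1.0 / (1.0 + np.exp(-(a * query_dists + b)))   # per-focal P(in-class)
    scores = {c: p[focal_labels == c].sum() for c in classes}
    return max(scores, key=scores.get)                 # class with largest total
```
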
After computing elementary distances, we rescale the distances for each focal image and feature to have a standard deviation of 0.1. For each focal image we choose a set of triplets for training, and since we are learning similarity for the purposes of image classification, we use the category labels on the images in the training set: images that have the same label as the focal image are considered "more similar" than all images that are out of class. Note that the training algorithm allows for a more nuanced training set, where an image could be more similar with respect to one image and less similar with respect to another, but we are not fully exploiting that in these experiments. Instead of using the full pairwise combination of all in- and out-of-class images, we select triplets using elementary feature distances.

³You can also see retrieval rankings with probabilities at the web page.
⁴We experimented with abandoning the max-margin optimization and just training a logistic for each focal image; the results were far worse, perhaps because the logistic was fitting noise in the tails.
⁵Information about the data set, images, and published results can be found at http://www.vision.caltech.edu/Image Datasets/Caltech101/Caltech101.html

[Figure: a water lilly focal image followed by its 15 nearest ranked images (water lillies, lotuses, sunflowers, a stegosaurus) with raw distances from 12.37 to 13.28.]
Figure 2: The first 15 images from a ranking induced for the focal image in the upper-left corner, trained with 15 images/category. Each image is shown with its raw distance, and only those marked with (pos) or (neg) were in the learning set for this focal image. Full rankings for all experimental runs can be browsed at http://www.cs.berkeley.edu/∼afrome/caltech101/nips2006.
Thus, we refer to all the images available for training as the training set and the set of images used to train with respect to a given focal image as its learning set. We want in our learning set those images that are similar to the focal image according to at least one elementary distance measure. For each of the M elementary patch distance measures, we find the top K closest images. If that group contains both in- and out-of-class images, then we make triplets out of the full bipartite match. If all K images are in-class, then we find the closest out-of-class image according to that distance measure and make K triplets with one out-of-class image and the K similar images. We do the converse if all K images are out of class. In our experiments, we used K = 5, and we have not yet performed experiments to determine the effect of the choice of K. The final set of triplets for F is the union of the triplets chosen by the M measures. On average, we used 2,210 triplets per focal image, and mean training time was 1-2 seconds (not including the time to compute the features, elementary distances, or choose the triplets). While we have to solve N of these learning problems, each can be run completely independently, so that for a training set of 1,515 images, we can complete this optimization on a cluster of 50 1GHz computers in about one minute. 5.2 Results We ran a series of experiments using all features, each with a different number of training images per category (either 5, 15, or 30), where we generated 10 independent random splits of the 8,677 images from the 101 categories into training and test sets. We report the average of the mean recognition rates across these splits as well as the standard deviations. We determined the C parameter of the training algorithm using leave-one-out cross-validation on a small random subset of 15 images per category, and our final results are reported using the best value of C found (0.1). 
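The triplet-selection heuristic described above can be sketched as follows, returning (similar, dissimilar) index pairs that together with the focal image form the triplets; function and variable names are ours:

```python
import numpy as np

def choose_triplets(elem_dists, in_class, K=5):
    """For each of the M elementary distance measures, take the K closest
    learning-set images. If the group mixes in- and out-of-class images,
    form the full bipartite set of (similar, dissimilar) pairs; otherwise
    pair the K images with the closest image of the opposite label under
    that measure.
    elem_dists: (M, N) distances from the focal image's M patches to the
    N learning-set images; in_class: boolean (N,)."""
    triplets = set()
    for d in elem_dists:
        order = np.argsort(d)
        top = order[:K]
        pos = [i for i in top if in_class[i]]
        neg = [i for i in top if not in_class[i]]
        if pos and neg:
            triplets.update((s, t) for s in pos for t in neg)
        elif pos:   # all K in-class: pull in the closest out-of-class image
            closest_neg = next(i for i in order if not in_class[i])
            triplets.update((s, closest_neg) for s in pos)
        else:       # all K out-of-class: pull in the closest in-class image
            closest_pos = next(i for i in order if in_class[i])
            triplets.update((closest_pos, t) for t in neg)
    return sorted(triplets)   # union over the M measures
```
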
In general, however, the method was robust to the choice of C, with changes of only about 1% in recognition for an order-of-magnitude change in C near the maximum. Figure 3 graphs these results with most of the published results for the Caltech 101 dataset. In the 15 training images per category setting, we also performed recognition experiments on each of our features separately, the combination of the two shape features, and the combination of the two shape features with the color features, for a total of five different feature combinations. We performed another round of cross-validation to determine the C value for each feature combination.⁶ Recognition in the color-only experiment was the poorest at 6% (0.8% standard deviation).⁷ The next best performance was from the bigger geometric blur features with 49.6% (±1.9%), followed by the smaller geometric blur features with 52.1% (±0.8%). Combining the two shape features together, we achieved 58.8% (±0.8%), and with color and shape, reached 60.3% (±0.7%), which is better than the best previously published performance for 15 training images on the Caltech 101 dataset [11].

⁶For big geometric blur, small geometric blur, both together, and color alone, the values were C=5, 1, 0.5, and 50, respectively.
⁷Only seven categories did better than 33% recognition using only color: Faces easy, Leopards, car side, garfield, pizza, snoopy, and sunflower. Note that all car side exemplars are in black and white.

Figure 3: Number of training exemplars versus average recognition rate across classes (based on the graph in [11]). Also shows results from [11], [14], [16], [15], [13], [19], [20], [21], and [18].

Figure 4: Average confusion matrix for 15 training examples per class, across 10 independent runs. Shown in color using Matlab's jet scale, shown on the right side.
Combining shape and color performed better than using the two shape features alone for 52 of the categories, while it degraded performance for 46 of the categories, and did not change performance in the remaining 3. In Figure 4 we show the confusion matrix for combined shape and color using 15 training images per category. The ten worst categories, starting with the worst, were cougar body, beaver, crocodile, ibis, bass, cannon, crayfish, sea horse, crab, and crocodile head, nine of which are animal categories. Almost all the processing at test time is the computation of the elementary distances between the focal images and the test image. In practice the weight vectors that we learn for our focal images are fairly sparse, with a median of 69% of the elements set to zero after learning, which greatly reduces the number of feature comparisons performed at test time. We measured that our unoptimized code takes about 300 seconds per test image.⁸ After comparisons are computed, we only need to compute linear combinations and compare scores across focal images, which amounts to negligible processing time. This is a benefit of our method compared to the KNN-SVM method of Zhang et al. [11], which requires the training of a multiclass SVM for every test image, and must perform all feature comparisons.

⁸To further speed up comparisons, in place of an exact nearest neighbor computation, we could use approximate nearest neighbor algorithms such as locality-sensitive hashing or spill trees.

Acknowledgements We would like to thank Hao Zhang and Alex Berg for use of their precomputed geometric blur features, and Hao, Alex, Mike Maire, Adam Kirk, Mark Paskin, and Chuck Rosenberg for many helpful discussions.

References
[1] I. Biederman, "Recognition-by-components: A theory of human image understanding," Psychological Review, vol. 94, no. 2, pp. 115–147, 1987.
[2] C. Schmid and R. Mohr, "Combining greyvalue invariants with local constraints for object recognition," in CVPR, 1996.
[3] D. Lowe, "Object recognition from local scale-invariant features," in ICCV, pp. 1000–1015, Sep 1999.
[4] S. Belongie, J. Malik, and J. Puzicha, "Shape matching and object recognition using shape contexts," PAMI, vol. 24, pp. 509–522, April 2002.
[5] A. Berg and J. Malik, "Geometric blur for template matching," in CVPR, pp. 607–614, 2001.
[6] E. Xing, A. Ng, and M. Jordan, "Distance metric learning with application to clustering with side-information," in NIPS, 2002.
[7] Schultz and Joachims, "Learning a distance metric from relative comparisons," in NIPS, 2003.
[8] S. Shalev-Shwartz, Y. Singer, and A. Ng, "Online and batch learning of pseudo-metrics," in ICML, 2004.
[9] K. Q. Weinberger, J. Blitzer, and L. K. Saul, "Distance metric learning for large margin nearest neighbor classification," in NIPS, 2005.
[10] A. Globerson and S. Roweis, "Metric learning by collapsing classes," in NIPS, 2005.
[11] H. Zhang, A. Berg, M. Maire, and J. Malik, "SVM-KNN: Discriminative nearest neighbor classification for visual category recognition," in CVPR, 2006.
[12] Y. Censor and S. A. Zenios, Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press, 1998.
[13] A. Berg, T. Berg, and J. Malik, "Shape matching and object recognition using low distortion correspondence," in CVPR, 2005.
[14] S. Lazebnik, C. Schmid, and J. Ponce, "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories," in CVPR, 2006.
[15] K. Grauman and T. Darrell, "Pyramid match kernels: Discriminative classification with sets of image features (version 2)," Tech. Rep. MIT CSAIL TR 2006-020, MIT, March 2006.
[16] J. Mutch and D. G. Lowe, "Multiclass object recognition with sparse, localized features," in CVPR, 2006.
[17] E. L. Allwein, R. E. Schapire, and Y. Singer, "Reducing multiclass to binary: A unifying approach for margin classifiers," JMLR, vol. 1, pp. 113–141, 2000.
[18] L. Fei-Fei, R. Fergus, and P. Perona, "Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories," in Workshop on Generative-Model Based Vision, CVPR, 2004.
[19] G. Wang, Y. Zhang, and L. Fei-Fei, "Using dependent regions for object categorization in a generative framework," in CVPR, 2006.
[20] A. D. Holub, M. Welling, and P. Perona, "Combining generative models and Fisher kernels for object recognition," in ICCV, 2005.
[21] T. Serre, L. Wolf, and T. Poggio, "Object recognition with features inspired by visual cortex," in CVPR, 2005.
Chained Boosting Christian R. Shelton University of California Riverside CA 92521 cshelton@cs.ucr.edu Wesley Huie University of California Riverside CA 92521 whuie@cs.ucr.edu Kin Fai Kan University of California Riverside CA 92521 kkan@cs.ucr.edu Abstract We describe a method to learn to make sequential stopping decisions, such as those made along a processing pipeline. We envision a scenario in which a series of decisions must be made as to whether to continue processing. Further processing costs time and resources, but may add value. Our goal is to create, based on historic data, a series of decision rules (one at each stage in the pipeline) that decide, based on information gathered up to that point, whether to continue processing the part. We demonstrate how our framework encompasses problems from manufacturing to vision processing. We derive a quadratic (in the number of decisions) bound on testing performance and provide empirical results on object detection. 1 Pipelined Decisions In many decision problems, all of the data do not arrive at the same time. Often further data collection can be expensive and we would like to make a decision without accruing the added cost. Consider silicon wafer manufacturing. The wafer is processed in a series of stages. After each stage some tests are performed to judge the quality of the wafer. If the wafer fails (due to flaws), then the processing time, energy, and materials are wasted. So, we would like to detect such a failure as early as possible in the production pipeline. A similar problem can occur in vision processing. Consider the case of object detection in images. Often low-level pixel operations (such as downsampling an image) can be performed in parallel by dedicated hardware (on a video capture board, for example). However, searching each subimage patch of the whole image to test whether it is the object in question takes time that is proportional to the number of pixels.
Therefore, we can imagine an image pipeline in which low-resolution versions of the whole image are scanned first. Subimages which are extremely unlikely to contain the desired object are rejected and only those which pass are processed at higher resolution. In this way, we save on many pixel operations and can reduce the time cost of processing an image. Even if downsampling is not possible through dedicated hardware, for most object detection schemes the image must be downsampled to form an image pyramid in order to search for the object at different scales. Therefore, we can run the early stages of such a pipelined detector on the low-resolution versions of the image and throw out large regions of the high-resolution versions. Most of the processing is spent searching for small faces (at the high resolutions), so this method can save a lot of processing. Such chained decisions also occur if there is a human in the decision process (to ask further clarifying questions in database search, for instance). We propose a framework that can model all of these scenarios and allow such decision rules to be learned from historic data. We give a learning algorithm based on the minimization of the exponential loss and conclude with some experimental results. 1.1 Problem Formulation Let there be s stages to the processing pipeline. We assume that there is a static distribution from which the parts, objects, or units to be processed are drawn. Let p(x, c) represent this distribution, in which x is a vector of the features of this unit and c represents the costs associated with this unit. In particular, let $x_i$ ($1 \leq i \leq s$) be the set of measurements (features) available to the decision maker immediately following stage i. Let $c_i$ ($1 \leq i \leq s$) be the cost of rejecting (or stopping the processing of) this unit immediately following stage i. Finally, let $c_{s+1}$ be the cost of allowing the part to pass through all processing stages. Note that $c_i$ need not be monotonic in i.
To take our wafer manufacturing example, for wafers that are good we might let $c_i = i$ for $1 \leq i \leq s$, indicating that if a wafer is rejected at any stage, one unit of work has been invested for each stage of processing. For the same good wafers, we might let $c_{s+1} = s - 1000$, indicating that the value of a completed wafer is 1000 units and therefore the total cost is the processing cost minus the resulting value. For a flawed wafer, the values might be the same, except for $c_{s+1}$, which we would set to s, indicating that there is no value for a bad wafer. Note that the costs may be either positive or negative. However, only their relative values are important. Once a part has been drawn from the distribution, there is no way of affecting the "base level" for the value of the part. Therefore, we assume for the remainder of this paper that $c_i \geq 0$ for $1 \leq i \leq s + 1$ and that $c_i = 0$ for some value of i (between 1 and s + 1). Our goal is to produce a series of decision rules $f_i(x_i)$ for $1 \leq i \leq s$. We let $f_i$ have a range of $\{0, 1\}$ and let 0 indicate that processing should continue and 1 indicate that processing should be halted. We let f denote the collection of these s decision rules and augment the collection with an additional rule $f_{s+1}$ which is identically 1 (for ease of notation). The cost of using these rules to halt processing an example is therefore

$$L(f(x), c) = \sum_{i=1}^{s+1} c_i f_i(x_i) \prod_{j=1}^{i-1} (1 - f_j(x_j)) .$$

We would like to find a set of decision rules that minimize $E_p[L(f(x), c)]$. While p(x, c) is not known, we do have a series of samples (training set) $D = \{(x^1, c^1), (x^2, c^2), \ldots, (x^n, c^n)\}$ of n examples drawn from the distribution p. We use superscripts to denote the example index and subscripts to denote the stage index. 2 Boosting Solution For this paper, we consider constructing the rules $f_i$ from simpler decision rules, much as in the Adaboost algorithm [1, 2].
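The loss above can be computed directly for a single example: it charges the cost of the first stage whose rule says "halt" (with the implicit final rule always halting). A minimal sketch:

```python
def chained_loss(decisions, costs):
    """L(f(x), c) = sum_i c_i f_i(x_i) prod_{j<i} (1 - f_j(x_j)).
    decisions: f_1..f_s in {0, 1}; the implicit f_{s+1} = 1 is appended.
    costs: c_1..c_{s+1}."""
    fs = list(decisions) + [1]       # augment with f_{s+1} identically 1
    passed = 1                       # prod_{j<i} (1 - f_j): still processing?
    total = 0.0
    for f, c in zip(fs, costs):
        total += c * f * passed      # cost incurred if we halt at this stage
        passed *= (1 - f)
    return total
```

Exactly one term is nonzero: the product is 1 only up to the first halting stage, so the loss equals the cost at that stage.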
We assume that each decision $f_i(x_i)$ is computed as the threshold of another function $g_i(x_i)$: $f_i(x_i) = I(g_i(x_i) > 0)$.¹ We bound the empirical risk:

$$\sum_{k=1}^{n} L(f(x^k), c^k) = \sum_{k=1}^{n} \sum_{i=1}^{s+1} c^k_i I(g_i(x^k_i) > 0) \prod_{j=1}^{i-1} I(g_j(x^k_j) \leq 0) \leq \sum_{k=1}^{n} \sum_{i=1}^{s+1} c^k_i e^{g_i(x^k_i)} \prod_{j=1}^{i-1} e^{-g_j(x^k_j)} = \sum_{k=1}^{n} \sum_{i=1}^{s+1} c^k_i e^{g_i(x^k_i) - \sum_{j=1}^{i-1} g_j(x^k_j)} . \qquad (1)$$

Our decision to make all costs positive ensures that the bounds hold. Our decision to make the optimal cost zero helps to ensure that the bound is reasonably tight. As in boosting, we restrict $g_i(x_i)$ to take the form $\sum_{l=1}^{m_i} \alpha_{i,l} h_{i,l}(x_i)$, the weighted sum of $m_i$ subclassifiers, each of which returns either −1 or +1. We will construct these weighted sums incrementally and greedily, adding one additional subclassifier and associated weight at each step. We will pick the stage, weight, and function of the subclassifier in order to make the largest negative change in the exponential bound to the empirical risk. The subclassifiers, $h_{i,l}$, will be drawn from a small class of hypotheses, H.

¹I is the indicator function that equals 1 if the argument is true and 0 otherwise.

1. Initialize $g_i(x) = 0$ for all stages i.
2. Initialize $w^k_i = c^k_i$ for all stages i and examples k.
3. For each stage i:
   (a) Calculate targets for each training example, as shown in Equation 5.
   (b) Let h be the result of running the base learner on this set.
   (c) Calculate the corresponding α as per Equation 3.
   (d) Score this classification as per Equation 4.
4. Select the stage ¯ı with the best (highest) score. Let ¯h and ¯α be the classifier and weight found at that stage.
5. Let $g_{\bar\imath}(x) \leftarrow g_{\bar\imath}(x) + \bar\alpha \bar h(x)$.
6. Update the weights (see Equation 2):
   • for all $1 \leq k \leq n$, multiply $w^k_{\bar\imath}$ by $e^{\bar\alpha \bar h(x^k_{\bar\imath})}$;
   • for all $1 \leq k \leq n$ and $j > \bar\imath$, multiply $w^k_j$ by $e^{-\bar\alpha \bar h(x^k_{\bar\imath})}$.
7. Repeat from step 3.

Figure 1: Chained Boosting Algorithm

2.1 Weight Optimization We first assume that the stage at which to add a new subclassifier and the subclassifier to add have already been chosen: ¯ı and ¯h, respectively.
That is, ¯h will become $h_{\bar\imath, m_{\bar\imath}+1}$, but we simplify the notation for ease of expression. Our goal is to find $\alpha_{\bar\imath, m_{\bar\imath}+1}$, which we similarly abbreviate to $\bar\alpha$. We first define

$$w^k_i = c^k_i e^{g_i(x^k_i) - \sum_{j=1}^{i-1} g_j(x^k_j)} \qquad (2)$$

as the weight of example k at stage i, or its current contribution to our risk bound. If we let $D^+_{\bar h}$ be the set of indexes of the members of D for which ¯h returns +1, and let $D^-_{\bar h}$ be similarly defined for those for which ¯h returns −1, we can further define

$$W^+_{\bar\imath} = \sum_{k \in D^+_{\bar h}} w^k_{\bar\imath} + \sum_{k \in D^-_{\bar h}} \sum_{i=\bar\imath+1}^{s+1} w^k_i , \qquad W^-_{\bar\imath} = \sum_{k \in D^-_{\bar h}} w^k_{\bar\imath} + \sum_{k \in D^+_{\bar h}} \sum_{i=\bar\imath+1}^{s+1} w^k_i .$$

We interpret $W^+_{\bar\imath}$ to be the sum of the weights which ¯h will emphasize. That is, it corresponds to the weights along the path that ¯h selects: for those examples for which ¯h recommends termination, we add the current weight (related to the cost of stopping the processing at this stage); for those examples for which ¯h recommends continued processing, we add in all future weights (related to all future costs associated with this example). $W^-_{\bar\imath}$ can be similarly interpreted as the weights (or costs) that ¯h recommends skipping. If we optimize the loss bound of Equation 1 with respect to $\bar\alpha$, we obtain

$$\bar\alpha = \frac{1}{2} \log \frac{W^-_{\bar\imath}}{W^+_{\bar\imath}} . \qquad (3)$$

The more weight (cost) that the rule recommends to skip, the higher its α coefficient. 2.2 Full Optimization Using Equation 3 it is straightforward to show that the reduction in Equation 1 due to the addition of this new subclassifier will be

$$W^+_{\bar\imath} (1 - e^{\bar\alpha}) + W^-_{\bar\imath} (1 - e^{-\bar\alpha}) . \qquad (4)$$

We know of no efficient method for determining ¯ı, the stage at which to add a subclassifier, except by exhaustive search. However, within a stage, the choice of which subclassifier to use becomes one of maximizing

$$\sum_{k=1}^{n} z^k_{\bar\imath} \bar h(x^k_{\bar\imath}) , \quad \text{where} \quad z^k_{\bar\imath} = \sum_{i=\bar\imath+1}^{s+1} w^k_i - w^k_{\bar\imath} \qquad (5)$$

with respect to ¯h. This is equivalent to a weighted empirical risk minimization where the training set is $\{x^1_{\bar\imath}, x^2_{\bar\imath}, \ldots, x^n_{\bar\imath}\}$.
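A sketch of the per-stage scoring step: computing $W^+$, $W^-$, the optimal weight (Equation 3), and the bound reduction (Equation 4) from the weight matrix of Equation 2. Variable and function names are ours:

```python
import numpy as np

def stage_update(w, h_out, stage):
    """w: (n examples, s+1 stages) weight matrix; h_out: subclassifier
    outputs in {-1, +1} on the given stage's features.
    Returns (alpha, score) for this candidate (stage, subclassifier)."""
    future = w[:, stage + 1:].sum(axis=1)   # all weights after this stage
    halt = h_out == 1                       # h recommends termination
    # W+: current weights where h halts, plus future weights where it continues.
    W_plus = w[halt, stage].sum() + future[~halt].sum()
    # W-: the weights (costs) that h recommends skipping.
    W_minus = w[~halt, stage].sum() + future[halt].sum()
    alpha = 0.5 * np.log(W_minus / W_plus)                 # Equation (3)
    score = (W_plus * (1 - np.exp(alpha))
             + W_minus * (1 - np.exp(-alpha)))             # Equation (4)
    return alpha, score
```

The outer loop of Figure 1 would call this for each stage, pick the stage with the highest score, and then apply the multiplicative weight updates.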
The label of xk ¯ı is the sign of zk ¯ı , and the weight of the same example is the magnitude of zk ¯ı . 2.3 Algorithm The resulting algorithm is only slightly more complex than standard Adaboost. Instead of a weight vector (one weight for each data example), we now have a weight matrix (one weight for each data example for each stage). We initialize each weight to be the cost associated with halting the corresponding example at the corresponding stage. We start with all g i(x) = 0. The complete algorithm is as in Figure 1. Each time through steps 3 through 7, we complete one “round” and add one additional rule to one stage of the processing. We stop executing this loop when ¯α ≤0 or when an iteration counter exceeds a preset threshold. Bottom-Up Variation In situations where information is only gained after each stage (such as in section 4), we can also train the classifiers “bottom-up.” That is, we can start by only adding classifiers to the last stage. Once finished with it, we proceed to the previous stage, and so on. Thus instead of selecting the best stage, i, in each round, we systematically work our way backward through the stages, never revisiting previously set stages. 3 Performance Bounds Using the bounds in [3] we can provide a risk bound for this problem. We let E denote the expectation with respect to the true distribution p(x, c) and ˆEn denote the empirical average with respect to the n training samples. We first bound the indicator function with a piece-wise linear function, b θ, with a maximum slope of 1 θ: I(z > 0) ≤bθ(z) = max min 1, 1 + z θ , 0 . We then bound the loss: L(f(x), c) ≤φθ(f(x), c) where φθ(f(x), c) = s+1 i=1 ci min{bθ(gi(xi)), bθ(−gi−1(xi−1)), bθ(−gi−2(xi−2)), . . . , bθ(−g1(x1))} = s+1 i=1 ciBi θ(gi(xi), gi−1(xi−1), . . . , g1(x1)) We replaced the product of indicator functions with a minimization and then bounded each indicator with bθ. 
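The piecewise-linear bound b_θ introduced above is straightforward to implement; this sketch also makes it easy to check that it upper-bounds the step function I(z > 0):

```python
def b_theta(z, theta):
    # Piecewise-linear upper bound on I(z > 0) with maximum slope 1/theta:
    # 0 for z <= -theta, rising linearly to 1 at z = 0, then constant 1.
    return max(min(1.0, 1.0 + z / theta), 0.0)
```

For z > 0 the clip at 1 makes b_θ(z) = 1 ≥ I(z > 0), and for z ≤ 0 the clip at 0 keeps b_θ(z) ≥ 0, so the bound holds everywhere.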
B_θ^i is just a more compact presentation of the composition of the function b_θ and the minimization. We assume that the weights α at each stage have been scaled to sum to 1. This has no effect on the resulting classifications, but is necessary for the derivation below. Before stating the theorem, for clarity, we state two standard definitions:

Definition 1. Let p(x) be a probability distribution on the set X and let {x¹, x², . . . , xⁿ} be n independent samples from p(x). Let σ₁, σ₂, . . . , σₙ be n independent samples from a Rademacher random variable (a binary variable that takes on either +1 or −1 with equal probability). Let F be a class of functions mapping X to R. Define the Rademacher Complexity of F to be R_n(F) = E[sup_{f∈F} (1/n) Σ_{i=1}^{n} σ_i f(x^i)], where the expectation is over the random draws of x¹ through xⁿ and σ₁ through σₙ.

Definition 2. Let p(x), {x¹, x², . . . , xⁿ}, and F be as above. Let g₁, g₂, . . . , gₙ be n independent samples from a Gaussian distribution with mean 0 and variance 1. Analogous to the above definition, define the Gaussian Complexity of F to be G_n(F) = E[sup_{f∈F} (1/n) Σ_{i=1}^{n} g_i f(x^i)].

We can now state our theorem, bounding the true risk by a function of the empirical risk:

Theorem 3. Let H₁, H₂, . . . , H_s be the sequence of the sets of functions from which the base classifier draws for chain boosting. If H_i is closed under negation for all i, all costs are bounded between 0 and 1, and the weights for the classifiers at each stage sum to 1, then with probability 1 − δ,

E[L(f(x), c)] ≤ Ê_n[φ_θ(f(x), c)] + (k/θ) Σ_{i=1}^{s} (i + 1) G_n(H_i) + √(8 ln(2/δ) / n)

for some constant k.

Proof. Theorem 8 of [3] states E[L(x, c)] ≤ Ê_n(φ_θ(f(x), c)) + 2 R_n(φ_θ ∘ F) + √(8 ln(2/δ) / n), and therefore we need only bound the R_n(φ_θ ∘ F) term to demonstrate our theorem. For our case, we have

R_n(φ_θ ∘ F) = E[sup_{f∈F} (1/n) Σ_{i=1}^{n} σ_i φ_θ(f(x^i), c^i)]
= E[sup_{f∈F} (1/n) Σ_{i=1}^{n} σ_i Σ_{j=1}^{s+1} c_j^i B_θ^s(g_j(x_j^i), g_{j−1}(x_{j−1}^i), . . . , g_1(x_1^i))]
≤ Σ_{j=1}^{s+1} E[sup_{f∈F} (1/n) Σ_{i=1}^{n} σ_i B_θ^s(g_j(x_j^i), g_{j−1}(x_{j−1}^i), . . . , g_1(x_1^i))]
= Σ_{j=1}^{s+1} R_n(B_θ^s ∘ G^j),

where G_i is the space of convex combinations of functions from H_i and G^j is the cross product of G_1 through G_j. The inequality comes from switching the expectation and the maximization and then from dropping the c_j^i (see [4], lemma 5). Lemma 4 of [3] states that there exists a k such that R_n(B_θ^s ∘ G^j) ≤ k G_n(B_θ^s ∘ G^j). Theorem 14 of the same paper allows us to conclude that G_n(B_θ^s ∘ G^j) ≤ (2/θ) Σ_{i=1}^{j} G_n(G_i). (Because B_θ^s is the minimum over a set of functions with maximum slope of 1/θ, the maximum slope of B_θ^s is also 1/θ.) Theorem 12, part 2 states G_n(G_i) = G_n(H_i). Taken together, this proves our result.

Note that this bound has only quadratic dependence on s, the length of the chain, and does not explicitly depend on the number of rounds of boosting (the number of rounds affects φ_θ which, in turn, affects the bound).

4 Application

We tested our algorithm on the MIT face database [5]. This database contains 19-by-19 gray-scale images of faces and non-faces. The training set has 2429 face images and 4548 non-face images. The testing set has 472 faces and 23573 non-faces. We weighted the training set images so that the ratio of the weight of face images to non-face images matched the ratio in the testing set.

[Figure 2 appears here: panel (a) plots training/testing cost and error against the number of rounds; panel (b) plots false-positive rate against false-negative rate, and average number of pixels evaluated, for CB Global, CB Bottom-up, SVM, and Boosting.]

Figure 2: (a) Accuracy versus the number of rounds for a typical run, (b) Error rates and average costs for a variety of cost settings.
4.1 Object Detection as Chained Boosting

Our goal is to produce a classifier that can identify non-face images at very low resolutions, thereby allowing for quick processing of large images (as explained later). Most image patches (or subwindows) do not contain faces. We, therefore, built a multi-stage detection system where any early rejection is labeled as a non-face. The first stage looks at image patches of size 3-by-3 (i.e. a lower-resolution version of the 19-by-19 original image). The next stage looks at the same image, but at a resolution of 6-by-6. The third stage considers the image at 12-by-12. We did not present the full 19-by-19 images as the classification did not significantly improve over the 12-by-12 versions. We employ a simple base classifier: the set of all functions that look at a single pixel and predict the class by thresholding the pixel's value. The total classifier at any stage is a linear combination of these simple classifiers. For a given stage, all of the base classifiers that target a particular pixel are added together producing a complex function of the value of the pixel. Yet, this pixel can only take on a finite number of values (256 in this case). Therefore, we can compile this set of base classifiers into a single look-up function that maps the brightness of the pixel into a real number. The total classifier for the whole stage is merely the sum of these look-up functions. Therefore, the total work necessary to compute the classification at a stage is proportional to the number of pixels in the image considered at that stage, regardless of the number of base classifiers used. We therefore assign a cost to each stage of processing proportional to the number of pixels at that stage. If the image is a face, we add a negative cost (i.e. bonus) if the image is allowed to pass through all of the processing stages (and is therefore "accepted" as a face).
If the image is a nonface, we add a bonus if the image is rejected at any stage before completion (i.e. correctly labelled). While this dataset has only segmented image patches, in a real application, the classifier would be run on all sub-windows of an image. More importantly, it would also be run at multiple resolutions in order to detect faces of different sizes (or at different distances from the camera). The classifier chain could be run simultaneously at each of these resolutions. To wit, while running the final 12-by-12 stage at one resolution of the image, the 6-by-6 (previous) stage could be run at the same image resolution. This 6-by-6 processing would be the necessary pre-processing step to running the 12-by-12 stage at a higher resolution. As we run our final scan for big faces (at a low resolution), we can already (at the same image resolution) be performing initial tests to throw out portions of the image as not worthy of testing for smaller faces (at a higher resolution). Most of the work of detecting objects must be done at the high resolutions because there are many more overlapping subwindows. This chained method allows the culling of most of this high-resolution image processing.

4.2 Experiments

For each example, we construct a vector of stage costs as above. We add a constant to this vector to ensure that the minimal element is zero, as per section 1.1. We scale all vectors by the same amount to ensure that the maximal value is 1. This means that the number of misclassifications is an upper bound on the total cost that the learning algorithm is trying to minimize. There are three flexible quantities in this problem formulation: the cost of a pixel evaluation, the bonus for a correct face classification, and the bonus for a correct non-face classification. Changing these quantities will control the trade-off between false positives and true positives, and between classification error and speed.
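The look-up-table compilation described in Sec. 4.1 can be sketched as follows. This is a minimal illustration with hypothetical names, assuming each stump votes +α when the pixel value exceeds its threshold and −α otherwise (one convention consistent with ±1-valued subclassifiers):

```python
def compile_luts(stumps, n_pixels, n_levels=256):
    """Fold per-pixel threshold stumps into one lookup table per pixel.

    stumps: list of (pixel_index, threshold, alpha); each stump votes
    +alpha if the pixel value is > threshold, else -alpha.
    """
    luts = [[0.0] * n_levels for _ in range(n_pixels)]
    for pix, thr, alpha in stumps:
        for v in range(n_levels):
            luts[pix][v] += alpha if v > thr else -alpha
    return luts

def stage_score(luts, pixels):
    # Evaluation cost is one table lookup per pixel, independent of
    # how many stumps were folded into each table
    return sum(lut[v] for lut, v in zip(luts, pixels))
```

This is why a stage's evaluation cost is proportional to its pixel count rather than to the number of base classifiers.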
Figure 2(a) shows the result of a typical run of the algorithm. As a function of the number of rounds, it plots the cost (that which the algorithm is trying to minimize) and the error (number of misclassified image patches), for both the training and testing sets (where the training set has been reweighted to have the same proportion of faces to non-faces as the testing set). We compared our algorithm's performance to the performance of support vector machines (SVM) [6] and Adaboost [1] trained and tested on the highest resolution, 12-by-12, image patches. We employed SVM-light [7] with a linear kernel. Figure 2(b) compares the error rates for the methods (solid lines, read against the left vertical axis). Note that the error rates are almost identical for the methods. The dashed lines (read against the right vertical axis) show the average number of pixels evaluated (or total processing cost) for each of the methods. The SVM and Adaboost algorithms have a constant processing cost. Our method (by either training scheme) produces lower processing cost for most error rates.

5 Related Work

Cascade detectors for vision processing (see [8] or [9] for example) may appear to be similar to the work in this paper. Especially at first glance for the area of object detection, they appear almost the same. However, cascade detection and this work (chained detection) are quite different. Cascade detectors are built one at a time. A coarse detector is first trained. The examples which pass that detector are then passed to a finer detector for training, and so on. A series of targets for false-positive rates define the increasing accuracy of the detector cascade. By contrast, our chain detectors are trained as an ensemble. This is necessary because of two differences in the problem formulation. First, we assume that the information available at each stage changes.
Second, we assume there is an explicit cost model that dictates the cost of proceeding from stage to stage and the cost of rejection (or acceptance) at any particular stage. By contrast, cascade detectors are seeking to minimize computational power necessary for a fixed decision. Therefore, the information available to all of the stages is the same, and there are no fixed costs associated with each stage. The ability to train all of the classifiers at the same time is crucial to good performance in our framework. The first classifier in the chain cannot determine whether it is advantageous to send an example further along unless it knows how the later stages will process the example. Conversely, the later stages cannot construct optimal classifications until they know the distribution of examples that they will see. Section 4.1 may further confuse the matter. We demonstrated how chained boosting can be used to reduce the computational costs of object detection in images. Cascade detectors are often used for the same purpose. However, the reductions in computational time come from two different sources. In cascade detectors, the time taken to evaluate a given image patch is reduced. In our chained detector formulation, image patches are ignored completely based on analysis of lower resolution patches in the image pyramid. To further illustrate the difference, cascade detectors can always be used to speed up asymmetric classification tasks (and are often applied to image detection). By contrast, in Section 4.1 we have exploited the fact that object detection in images is typically performed at multiple scales to turn the problem into a pipeline and apply our framework. Cascade detectors address situations in which prior class probabilities are not equal, while chained detectors address situations in which information is gained at a cost. Both are valid (and separate) ways of tackling image processing (and other tasks as well). 
In many ways, they are complementary approaches. Classic sequence analysis [10, 11] also addresses the problem of optimal stopping. However, it assumes that the samples are drawn i.i.d. from (usually) a known distribution. Our problem is quite different in that each consecutive sample is drawn from a different (and related) distribution and our goal is to find a decision rule without producing a generative model. WaldBoost [12] is a boosting algorithm based on this idea. It builds a series of features and a ratio comparison test in order to decide when to stop. For WaldBoost, the available features (information) do not change between stages. Rather, any feature is available for selection at any point in the chain. Again, this is a different problem than the one considered in this paper.

6 Conclusions

We feel this framework of staged decision making is useful in a wide variety of areas. This paper demonstrated how the framework applies to one vision processing task. Obviously it also applies to manufacturing pipelines where errors can be introduced at different stages. It should also be applicable to scenarios where information gathering is costly. Our current formulation only allows for early negative detection. In the face detection example above, this means that in order to report "face," the classifier must process each stage, even if the result is assured earlier. In Figure 2(b), clearly the upper-left corner (100% false positives and 0% false negatives) is reachable with little effort: classify everything positive without looking at any features. We would like to extend this framework to cover such two-sided early decisions. While perhaps not useful in manufacturing (or even face detection, where the interesting part of the ROC curve is far from the upper-left), it would make the framework more applicable to information-gathering applications.
Acknowledgements

This research was supported through the grant "Adaptive Decision Making for Silicon Wafer Testing" from Intel Research and UC MICRO.

References

[1] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT, pages 23–37, 1995.
[2] Yoav Freund and Robert E. Schapire. Experiments with a new boosting algorithm. In ICML, pages 148–156, 1996.
[3] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. JMLR, 2:463–482, 2002.
[4] Ron Meir and Tong Zhang. Generalization error bounds for Bayesian mixture algorithms. JMLR, 4:839–860, 2003.
[5] MIT. CBCL face database #1, 2000. http://cbcl.mit.edu/cbcl/softwaredatasets/FaceData2.html.
[6] Bernhard E. Boser, Isabelle M. Guyon, and Vladimir N. Vapnik. A training algorithm for optimal margin classifiers. In COLT, pages 144–152, 1992.
[7] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods — Support Vector Learning. MIT Press, 1999.
[8] Paul A. Viola and Michael J. Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, pages 511–518, 2001.
[9] Jianxin Wu, Matthew D. Mullin, and James M. Rehg. Linear asymmetric classifier for cascade detectors. In ICML, pages 988–995, 2005.
[10] Abraham Wald. Sequential Analysis. Chapman & Hall, Ltd., 1947.
[11] K. S. Fu. Sequential Methods in Pattern Recognition and Machine Learning. Academic Press, 1968.
[12] Jan Šochman and Jiří Matas. WaldBoost — learning for time constrained sequential detection. In CVPR, pages 150–156, 2005.
Support Vector Machines on a Budget Ofer Dekel and Yoram Singer School of Computer Science and Engineering The Hebrew University Jerusalem 91904, Israel {oferd,singer}@cs.huji.ac.il Abstract The standard Support Vector Machine formulation does not provide its user with the ability to explicitly control the number of support vectors used to define the generated classifier. We present a modified version of SVM that allows the user to set a budget parameter B and focuses on minimizing the loss attained by the B worst-classified examples while ignoring the remaining examples. This idea can be used to derive sparse versions of both L1-SVM and L2-SVM. Technically, we obtain these new SVM variants by replacing the 1-norm in the standard SVM formulation with various interpolation-norms. We also adapt the SMO optimization algorithm to our setting and report on some preliminary experimental results. 1 Introduction The L1 Support Vector Machine (L1-SVM or SVM for short) [1, 2, 3] is a powerful technique for learning binary classifiers from examples. Given a training set {(xi, yi)}m i=1 and a positive semi-definite kernel K, the SVM solution is a hypothesis of the form h(x) = sign i∈S αiyiK(xi, x) + b , where S is a subset of {1, . . . , m}, {αi}i∈S are real valued weights, and b is a bias term. The set S defines the support of the classifier, namely, the set of examples that actively participate in the classifier’s definition. The examples in this set are called support vectors, and we say that the SVM solution is sparse if the fraction of support vectors (|S|/m) is reasonably small. Our first concern is usually with the accuracy of the classifier. However, in some applications, the size of the support is equally important. Assuming that the kernel operator K can be evaluated in constant time, the time-complexity of evaluating the classifier on a new instance is linear in the size of S. Therefore, a large support defines a slow classifier. 
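The classifier form above, h(x) = sign(Σ_{i∈S} α_i y_i K(x_i, x) + b), is direct to implement; a minimal sketch with hypothetical names, taking the kernel as a callable:

```python
def svm_predict(x, support, alphas, labels, b, kernel):
    """Evaluate sign(sum_i alpha_i * y_i * K(x_i, x) + b).

    Evaluation time is linear in the number of support vectors,
    which is why a large support defines a slow classifier.
    """
    score = b + sum(a * y * kernel(xi, x)
                    for a, y, xi in zip(alphas, labels, support))
    return 1 if score >= 0 else -1
```

Shrinking the support set S is exactly what reduces both this evaluation cost and the memory needed to store the classifier.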
Classification speed is often important and plays an especially critical role in real-time systems. For example, a classifier that drives a phoneme detector in a speech recognition system is evaluated hundreds of times a second. If this classifier does not manage to keep up with the rate at which the speech signal is acquired then its classifications are useless, regardless of their accuracy. The size of the support also naturally determines the amount of memory required to store the classifier. If a classifier is intended to run in a device with a limited memory, such as a mobile telephone, there may be a physical limit on the amount of memory available to store support vectors. The size of S may also affect the time required to train an SVM classifier. Most modern SVM learning algorithms are active set methods, namely, on every step of the training process, only a small set of active training examples are taken into account. Knowing the size of S ahead of time would enable us to optimize the size of the active set and possibly gain a significant speed-up in the training process. The SVM mechanism does not give us explicit control over the size of the support. The user-defined parameters of SVM have some influence on the size of S, but we often require more than this. Specifically, we would like the ability to specify a budget parameter, B, which directly controls the number of support vectors used to define the SVM solution. In this paper, we address this issue and present budget-SVM, a minor modification to the standard L1-SVM formulation that allows the user to set a budget parameter. The budget-SVM optimization problem focuses only on the B worst-classified examples in the training set, ignoring all other examples. The problem of sparsity becomes even more critical when it comes to L2-SVM [3], a variant of the SVM problem that tends to have dense solutions.
L2-SVM is sometimes preferred over L1-SVM because it exhibits good generalization properties, as well as other desirable statistical characteristics [4]. We derive the budget-L2-SVM formulation by following the same technique used to derive budget-L1-SVM. The technique used to derive these SVM variants is as follows. We begin by generalizing the L1-SVM formulation by replacing the 1-norm with an arbitrary norm. We obtain a general framework for SVM-type problems, which we nickname Any-Norm-SVM. Next, we turn to the K-method of norm interpolation to obtain the 1−∞ interpolation-norm and the 2−∞ interpolation-norm, and use these norms in the Any-Norm-SVM framework. These norms have the property that they depend only on the absolutely-largest elements of the vector. We rely on this property and show that our SVM variants construct sparse solutions. For each of these norms, we present a simple modification of the SMO algorithm [5], which efficiently solves the respective optimization problem.

Related Work

The problem of approximating the SVM solution using a reduced set of examples has received much previous attention [6, 7, 8, 9]. This technique takes a two-step approach: begin by training a standard SVM classifier, perhaps obtaining a dense solution. Then, try to find a sparse classifier which minimizes the L2 distance to the SVM solution. A potential drawback of this approach is that once the SVM solution has been found, the distribution from which the training set was sampled no longer plays a role in the learning process. This ignores the fact that shifting the SVM classifier by a fixed amount in different directions may have dramatically different consequences on classification accuracy. We overcome this problem by taking the approach of [10] and reformulating the SVM optimization problem itself in a way that promotes sparsity.
Another technique used to obtain a sparse kernel-machine takes advantage of the inherent sparsity of linear programming solutions, and formalizes the kernel-machine learning problem as a linear program [11]. This approach, often called LP-SVM or Sparse-SVM, has been shown to generally construct sparse solutions, but still lacks the ability to introduce an explicit budget parameter. Yet another approach involves randomly selecting a subset of the training set to serve as support vectors [12]. The problem of learning a kernel-machine on a budget also appears in the online-learning mistake-bound framework, and it is there where the term “learning on a budget” was coined [13]. Two recent papers [14, 15] propose online kernel-methods on a budget with an accompanying theoretical mistake-bound. This paper is organized as follows. We present the generalized Any-Norm-SVM framework in Sec. 2. We discuss the K-method of norm interpolation in Sec. 3 and put various interpolation norms to use within the Any-Norm-SVM framework in Sec. 4. Then, in Sec. 5, we present some preliminary experiments that demonstrate how the theoretical properties of our approach translate into practice. We conclude with a discussion in Sec. 6. Due to the lack of space, some of the proofs are omitted from this paper. 2 Any-Norm SVM Let {(xi, yi)}m i=1 be a training set, where every xi belongs to an instance space X and every yi ∈ {−1, +1}. Let K : X × X →R be a positive semi-definite kernel, and let H be its corresponding Reproducing Kernel Hilbert Space (RKHS) [16], with inner product ⟨·, ·⟩H. The L1 Support Vector Machine is defined as the solution to the following convex optimization problem: min f∈H, b∈R, ξ≥0 1 2⟨f, f⟩H + C∥ξ∥1 s.t. ∀1 ≤i ≤m yi f(xi) + b ≥1 −ξi , (1) where ξ is a vector of m slack variables, and C is a positive constant that controls the tradeoff between the complexity of the learned classifier and how well it fits the training data. 
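In the primal of Eq. (1), the slack ξ_i is the hinge loss max(0, 1 − y_i(f(x_i) + b)), and swapping the norm applied to ξ is all that separates the SVM variants discussed below. A small sketch of the slack-penalty term with a pluggable norm (illustrative names; the regularization term ⟨f, f⟩ is omitted):

```python
def hinge_slacks(margins):
    # margins[i] = y_i * (f(x_i) + b); the slack is the hinge loss
    return [max(0.0, 1.0 - m) for m in margins]

def slack_penalty(margins, C, norm):
    # C * ||xi|| for any norm supplied as a callable
    return C * norm(hinge_slacks(margins))
```

With norm=sum this is the L1-SVM penalty C‖ξ‖₁; with norm=max it is C‖ξ‖∞, and any other norm yields an Any-Norm-SVM instance.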
The value of ξ_i is sometimes referred to as the hinge-loss attained by the SVM classifier on example i. The 1-norm, defined by ∥ξ∥₁ = Σ_{i=1}^{m} |ξ_i|, is used to combine the individual hinge-loss values into a single number. L2-SVM is a variant of the optimization problem defined above, defined as follows:

min_{f∈H, b∈R, ξ≥0} (1/2)⟨f, f⟩_H + C∥ξ∥₂² s.t. ∀ 1 ≤ i ≤ m: y_i(f(x_i) + b) ≥ 1 − ξ_i .

This formulation differs from the L1 formulation in that the 1-norm is replaced by the squared 2-norm, defined by ∥ξ∥₂² = Σ_{i=1}^{m} ξ_i². In this section, we take this idea even further, and allow the 1-norm of L1-SVM to be replaced by any norm. Formally, let ∥·∥ be an arbitrary norm defined on R^m. Recall that a norm is a real valued operator such that for every v ∈ R^m and λ ∈ R it holds that ∥λv∥ = |λ|∥v∥ (positive homogeneity), ∥v∥ ≥ 0 and ∥v∥ = 0 if and only if v = 0 (positive definiteness), and that satisfies the triangle inequality. Now consider the following optimization problem:

min_{f∈H, b∈R, ξ≥0} (1/2)⟨f, f⟩_H + C∥ξ∥ s.t. ∀ 1 ≤ i ≤ m: y_i(f(x_i) + b) ≥ 1 − ξ_i . (2)

L1-SVM is recovered by setting ∥·∥ to be the 1-norm. Setting ∥·∥ to be the 2-norm induces an optimization problem which is close in nature to L2-SVM, but not identical to it since the 2-norm is not squared. Combining the positive homogeneity property of ∥·∥ with the fact that it satisfies the triangle inequality ensures that the objective function of Eq. (2) is convex. An important class of norms used extensively in our derivation is the family of p-norms, defined for every p ≥ 1 by ∥v∥_p = (Σ_{j=1}^{m} |v_j|^p)^{1/p}. A special member of this family is the ∞-norm, which is defined by ∥v∥_∞ = lim_{p→∞} ∥v∥_p and can be shown to be equivalent to max_j |v_j|. We also use the notion of norm duality. Every norm on R^m has a dual norm which is also defined on R^m. The dual norm of ∥·∥ is denoted by ∥·∥⋆ and given by

∥u∥⋆ = max_{v∈R^m} (u · v) / ∥v∥ = max_{v∈R^m : ∥v∥=1} u · v . (3)

As its name implies, ∥·∥⋆ also satisfies the requirements of a norm.
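The p-norm duality facts used below (the dual of ∥·∥_p is ∥·∥_q with q = p/(p−1), and the 1-norm and ∞-norm are dual to each other) are easy to check numerically; a sketch with illustrative helper names:

```python
def p_norm(v, p):
    # p-norm, with the p = inf case handled as max |v_j|
    if p == float("inf"):
        return max(abs(x) for x in v)
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def dual_exponent(p):
    # Hoelder conjugate q satisfying 1/p + 1/q = 1
    if p == 1.0:
        return float("inf")
    if p == float("inf"):
        return 1.0
    return p / (p - 1.0)
```

Hölder's inequality, u · v ≤ ∥u∥_q ∥v∥_p, is what yields the dual-norm characterization of Eq. (3) for this family.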
For example, Hölder's inequality [17] states that the dual of ∥·∥_p is the norm ∥·∥_q, where q = p/(p − 1). The dual of the 1-norm is the ∞-norm and vice versa. Using the definition of the dual norm, we now state the dual optimization problem of Eq. (2):

max_{α≥0} Σ_{i=1}^{m} α_i − (1/2) Σ_{i=1}^{m} Σ_{j=1}^{m} α_i α_j y_i y_j K(x_i, x_j) s.t. Σ_{i=1}^{m} y_i α_i = 0 and ∥α∥⋆ ≤ C . (4)

As a first sanity check, note that if ∥·∥ in Eq. (2) is chosen to be the 1-norm, then ∥·∥⋆ is the ∞-norm, and the constraint ∥α∥⋆ ≤ C reduces to the familiar box-constraint of L1-SVM [3]. The proof that Eq. (2) and Eq. (4) are indeed dual optimization problems relies on basic techniques in convex analysis [18], and is omitted due to the lack of space. Moreover, it can be shown that the solution to Eq. (2) takes the form f(·) = Σ_{i=1}^{m} α_i y_i K(x_i, ·), and that strong duality holds regardless of the norm used. This allows us to forget about the primal problem in Eq. (2) and to focus on solving the dual problem in Eq. (4). As with L1-SVM, the bias term, b, cannot be directly extracted from the solution of the dual. The standard techniques used to find b in L1-SVM apply here as well [3]. We note that the Any-Norm-SVM formulation is not fundamentally different from the original L1-SVM formulation. Both optimization problems have convex objective functions and linear constraints. More importantly, the only difference between their respective duals is in the dual-norm constraint. Specifically, the objective function in Eq. (4) is a concave quadratic function for any choice of ∥·∥. These facts enable us to efficiently solve the problem in Eq. (4) for any kernel K and any norm using techniques similar to those used to solve the standard L1-SVM problem.

3 Interpolation Norms

In the previous section, we acquired the ability to replace the 1-norm in the definition of L1-SVM with an arbitrary norm. We now use Peetre's K-method of norm interpolation [19] to obtain norms that promote the sparsity of the generated classifier.
The K-method is a technique for smoothly interpolating between a pair of norms. Let ∥· ∥p1 : Rm →R+ and ∥· ∥p2 : Rm →R+ be two p-norms, and let ∥· ∥q1 and ∥· ∥q2 be their respective duals. Peetre’s K-functional with respect to p1 and p2, and with respect to the constant t > 0, is defined to be ∥v∥K(p1,p2,t) = min w,z : w+z=v ∥w∥p1 + t∥z∥p2 . (5) Peetre’s J-functional with respect to q1, q2, and with respect to the constant s > 0, is given by ∥u∥J(q1,q2,s) = max ∥u∥q1, s ∥u∥q2 . (6) The J-functional is obviously a norm: the properties of a norm all follow immediately from the fact that ∥· ∥q1 and ∥· ∥q2 posses these properties. ∥· ∥K(p1,p2,t) is also a norm, and moreover, ∥· ∥K(p1,p2,t) and ∥· ∥J(q1,q2,s) are dual to each other when t = 1/s. This fact can be proven using elementary calculus, and this proof is omitted due to the lack of space. We use the K-method to interpolate between the 1-norm and the ∞-norm, and to interpolate between the 2-norm and the ∞-norm. To gain some intuition on the behavior of these interpolationnorms, first note that for any p ≥1 and any v ∈Rm it holds that maxi |vi|p ≤m i=1 |vi|p ≤ m maxi |vi|p, and therefore ∥v∥∞≤∥v∥p ≤m1/p∥v∥∞. An immediate consequence of this is that ∥· ∥K(p,∞,t) ≡∥· ∥∞when 0 < t ≤1 and that ∥· ∥K(p,∞,t) ≡∥· ∥p when m1/p ≤t. In other words, the interesting range of t for the 1 −∞interpolation-norm is [1, m], and for the 2 −∞ interpolation-norm is [1, √m]. Next, we prove a theorem which states that interpolating a p-norm with the ∞-norm is approximately equivalent to restricting that p-norm to the absolutely-largest components of the vector. Specifically, the 1 −∞interpolation norm with parameter t (with t chosen to be an integer in [1, m]) is precisely equivalent to taking the sum of the absolute values of the t absolutely-greatest elements of the vector. Theorem 1. Let v be an arbitrary vector in Rm and let π be a permutation on {1, . . . , m} such that |vπ(1)| ≥. . . ≥|vπ(m)|. Then for any integer B in {1, . . . 
, m} it holds that ∥v∥K(1,∞,B) = B i=1 |vπ(i)|, and for any 1 ≤p < ∞, if t = B1/p then it holds that B i=1 |vπ(i)|p1/p ≤∥v∥K(p,∞,t) ≤ B i=1 |vπ(i)|p1/p + B1/p|vπ(B)| . Proof. Beginning with the lower bound, let w and z be such that w + z = v. Then B i=1 |vπ(i)|p1/p = B i=1 |wπ(i) + zπ(i)|p1/p ≤ B i=1 |wπ(i)|p1/p + B i=1 |zπ(i)|p1/p ≤ B i=1 |wπ(i)|p1/p + (B maxi |zi|p)1/p ≤ m i=1 |wi|p1/p + t∥z∥∞, where the first inequality is the triangle inequality for the p-norm. Since the above holds for any w and z such that w + z = v, it also holds for the pair which minimizes (m i=1 |wi|p)1/p + t∥z∥∞, and which defines ∥v∥K(p,∞,t). Therefore, we have that, B i=1 |vπ(i)|p1/p ≤∥v∥K(p,∞,t) . (7) Turning to the upper bound, let φ = |vπ(B)|, and define for all 1 ≤ i ≤ m, ¯wi = sign(vi) max{0, |vi| −φ} and ¯zi = sign(vi) min{|vi|, φ}. Note that ¯w + ¯z = v, and that B i=1 |vπ(i)| = ∥¯w∥1 + B∥¯z∥∞. This proves that ∥v∥K(1,∞,B) ≤B i=1 |vπ(i)| and together with Eq. (7) we have proven our claim for p = 1. Moving on to the case of an arbitrary p, we have that ∥v∥K(p,∞,t) = min w+z=v(∥w∥p + t∥z∥∞) ≤∥¯w∥p + t∥¯z∥∞. Since the absolute value of each element in ¯w is at most as large as the absolute value of the corresponding element of v, and since ¯wπ(r+1) = . . . = ¯wπ(m) = 0, we have that ∥¯w∥p ≤ (B i=1 |vπ(i)|p)1/p. By definition, ∥¯z∥∞= φ = |vπ(B)|. This proves that ∥v∥K(p,∞,t) ≤ (B i=1 |vπ(i)|p)1/p + t|vπ(B)| and together with Eq. (7) this concludes our proof for arbitrary p. 4 Deriving Concrete Algorithms from the General Framework Our first concrete algorithm is budget-L1-SVM, obtained by plugging the 1−∞interpolation-norm with parameter B into the general Any-Norm-SVM framework. Relying on Thm. 1, we know that this norm takes into account only the B largest values in ξ. Since ξ measures how badly each example is misclassified, the budget-L1-SVM problem essentially optimizes the soft-margin with respect to the B worst-classified examples. 
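Theorem 1's closed form for the 1−∞ interpolation-norm is easy to compute, and it can be cross-checked against the explicit split (w̄, z̄) constructed in the proof; a sketch with illustrative names:

```python
def k_norm_1_inf(v, B):
    # Theorem 1: the 1-infinity interpolation norm with integer
    # parameter B equals the sum of the B absolutely-largest entries.
    return sum(sorted((abs(x) for x in v), reverse=True)[:B])

def k_norm_via_split(v, B):
    # The split from the proof: clip each entry at phi, the B-th
    # largest magnitude; the value is ||w||_1 + B * ||z||_inf.
    phi = sorted((abs(x) for x in v), reverse=True)[B - 1]
    w1 = sum(max(0.0, abs(x) - phi) for x in v)
    zinf = max(min(abs(x), phi) for x in v)
    return w1 + B * zinf
```

The two computations agree, which is exactly the equality B Σ-of-top entries = ∥w̄∥₁ + B∥z̄∥∞ used to prove the p = 1 case.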
We now show that this property promotes the sparsity of the budget-L1-SVM solution. If there are fewer than B examples for which y_i(f(x_i) + b) < 1, then the KKT conditions of optimality immediately imply that the number of support vectors is less than B. This holds for every instance of the Any-Norm-SVM framework, and is proven for L1-SVM in [3]. Therefore, we focus on the more interesting case, where y_i(f(x_i) + b) < 1 for at least B examples.

Theorem 2. Let B be an integer in {1, ..., m}. Let (f, b, ξ, α) be an optimal primal-dual solution of the primal problem in Eq. (2) and the dual problem in Eq. (4), where ∥·∥ is chosen to be the 1-∞ interpolation norm with parameter B. Define μ_i = y_i(f(x_i) + b) and let π be a permutation of {1, ..., m} such that μ_{π(1)} ≤ ... ≤ μ_{π(m)}. Assume that μ_{π(B)} < 1. Then α_k = 0 whenever μ_{π(B)} < μ_k.

Proof. We begin the proof by redefining ξ_i = max{1 − μ_i, 0} for all 1 ≤ i ≤ m and noting that (f, b, ξ, α) remains a primal-dual solution to our problem. The benefit of starting with this specific solution is that ξ_{π(1)} ≥ ... ≥ ξ_{π(m)}. Let k be an index such that μ_{π(B)} < μ_k and define ξ′_k = ½(ξ_k + ξ_{π(B)}). Moreover, let ξ′ be the vector obtained by replacing the k'th coordinate of ξ with ξ′_k, in other words ξ′ = (ξ_1, ..., ξ′_k, ..., ξ_m). Using the assumption that μ_{π(B)} < 1, we know that ξ_{π(B)} > 0, and since μ_k > μ_{π(B)} we get that ξ_k < ξ_{π(B)}. We can now draw two conclusions. First, ξ_{π(1)} ≥ ... ≥ ξ_{π(B)} > ξ′_k and therefore ∥ξ′∥_{K(1,∞,B)} = ∥ξ∥_{K(1,∞,B)}. Second, ξ_k < ξ′_k and therefore ξ′ satisfies the constraints of Eq. (2). Overall, we obtain that (f, b, ξ′, α) is also a primal-dual solution to our problem. Moreover, we know that 1 − μ_k < ξ′_k. Using the KKT complementary slackness condition, it follows that α_k, the Lagrange multiplier corresponding to this constraint, must equal 0.

Defining μ_i and π as above, a simple corollary of Thm. 2 is that the number of support vectors is upper bounded by B in the case that μ_{π(B)} ≠ μ_{π(B+1)}.
From our discussion in Sec. 3, we know that the dual of the 1-∞ interpolation norm is the function max{∥u∥_∞, (1/B)∥u∥_1}. Plugging this definition into Eq. (4) gives us the dual optimization problem of budget-L1-SVM. The constraint ∥α∥_⋆ ≤ C simplifies to α_i ≤ C for all i and Σ_{i=1}^m α_i ≤ BC. To numerically solve this optimization problem, we turn to the Sequential Minimal Optimization (SMO) [5] technique. We briefly describe the SMO technique, and then discuss its adaptation to our setting. SMO is an iterative process which on every iteration selects a pair of dual variables, α_k and α_l, and optimizes the dual problem with respect to them, leaving all other variables fixed. The choice of the two variables is determined by a heuristic [5], and their optimal values are calculated analytically. Assume that we start with a vector α which is a feasible point of the optimization problem in Eq. (4). When restricted to the two active variables α_k and α_l, the constraint Σ_{i=1}^m α_i y_i = 0 simplifies to α_k^{new} y_k + α_l^{new} y_l = α_k^{old} y_k + α_l^{old} y_l. Put another way, we can slightly overload our notation and define the linear functions

α_k(λ) = α_k + λ y_k   and   α_l(λ) = α_l − λ y_l ,   (8)

and find the single variable λ which maximizes our constrained optimization problem. Since the constraints in Eq. (4) define a convex and bounded feasible set, the intersection of the linear equalities in Eq. (8) with this feasible set restricts λ to an interval. The objective function, as a function of the single variable λ, takes the form O(λ) = Pλ² + Qλ + c, where c is a constant,

P = K(x_k, x_l) − ½ K(x_k, x_k) − ½ K(x_l, x_l) ,   Q = (y_k − f(x_k)) − (y_l − f(x_l)) ,

and f is the current function in the RKHS (f ≡ Σ_{i=1}^m α_i y_i K(x_i, ·)). Maximizing the objective function in Eq. (4) with respect to α_k and α_l is equivalent to maximizing O(λ) with respect to λ over an interval. P equals minus one half of the squared distance between the functions K(x_k, ·) and K(x_l, ·) in the RKHS, and is therefore a negative number.
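Since P < 0, the restricted objective O(λ) is a concave parabola, so its constrained maximizer over any feasible interval is simply the vertex clipped to the interval's end-points. A minimal sketch of that maximization step (the function name is ours; the end-points lo, hi are the bounds derived from the constraints in the text):

```python
def constrained_argmax(P, Q, lo, hi):
    """Maximize O(lam) = P*lam**2 + Q*lam + c (with P < 0) over the interval [lo, hi]."""
    assert P < 0 and lo <= hi
    lam = -Q / (2.0 * P)          # unconstrained maximizer (vertex of the parabola)
    return max(lo, min(hi, lam))  # by concavity, otherwise the optimum is an end-point
```

For example, constrained_argmax(-2.0, 1.0, -1.0, 1.0) returns the interior vertex 0.25, while constrained_argmax(-2.0, 8.0, -1.0, 1.0) clips the vertex 2.0 down to the upper end-point 1.0.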
Therefore, O(λ) is a concave function which attains a single (unconstrained) maximum. This maximum can be found analytically:

0 = ∂O(λ)/∂λ = 2Pλ + Q  ⇒  λ = −Q/(2P) .   (9)

If this unconstrained optimum falls inside the feasible interval, then it coincides with the constrained optimum. Otherwise, the constrained optimum falls on one of the two end-points of the interval. Thus, we are left with the task of finding these end-points. To do so, we consider the remaining constraints:

(I) α_k(λ) ≥ 0 and α_l(λ) ≥ 0,
(II) α_k(λ) ≤ C and α_l(λ) ≤ C,
(III) α_k(λ) + α_l(λ) ≤ BC − Σ_{i≠k,l} α_i .

The constraints in (I) translate to

y_k = −1 ⇒ λ ≤ α_k ,  y_k = +1 ⇒ λ ≥ −α_k ,  y_l = −1 ⇒ λ ≥ −α_l ,  y_l = +1 ⇒ λ ≤ α_l .   (10)

The constraints in (II) translate to

y_k = −1 ⇒ λ ≥ α_k − C ,  y_k = +1 ⇒ λ ≤ C − α_k ,  y_l = −1 ⇒ λ ≤ C − α_l ,  y_l = +1 ⇒ λ ≥ α_l − C .   (11)

Constraint (III) translates to

y_k = −1 ∧ y_l = +1 ⇒ λ ≥ ½(Σ_{i=1}^m α_i − BC) ,  y_k = +1 ∧ y_l = −1 ⇒ λ ≤ ½(BC − Σ_{i=1}^m α_i) .   (12)

Finding the end-points of the interval that confines λ amounts to finding the smallest upper bound and the greatest lower bound in Eqs. (10,11,12). This concludes the analytic derivation of the SMO update for budget-L1-SVM.

L2-SVM on a budget. Next, we use the 2-∞ interpolation norm with parameter t = √B in the Any-Norm-SVM framework, and obtain the budget-L2-SVM problem. Thm. 1 hints that setting t = √B makes the 2-∞ interpolation norm almost equivalent to restricting the 2-norm to the top B elements of the vector ξ. The support size of the budget-L2-SVM solution is strongly correlated with the parameter B, although the exact relation between the two is not as clear as before. Again we begin with the dual formulation defined in Eq. (4), where the constraint ∥α∥_⋆ ≤ C becomes max{∥α∥_2, (1/√B)∥α∥_1} ≤ C. The intersection of this constraint with the other constraints defines a convex and bounded feasible set, and its intersection with the linear equalities in Eq. (8) defines an interval. The objective function in Eq. (4) is the same as before, so the unconstrained maximum is once again given by Eq. (9). To obtain the constrained maximum, we must find the end-points of the interval that confines λ. The dual-norm constraint can be written more explicitly as

(I) α_k(λ) + α_l(λ) ≤ √B C − Σ_{i≠k,l} α_i ,
(II) α_k²(λ) + α_l²(λ) ≤ C² − Σ_{i≠k,l} α_i² .

Constraint (I) is similar to the constraint we had in the budget-L1-SVM case, and is given in terms of λ by replacing B with √B in Eq. (12). Constraint (II) is new, and can be written in terms of λ as λ² + λβ + γ ≤ 0, where β = α_k y_k − α_l y_l and γ = ½(Σ_{i=1}^m α_i² − C²). It can be written even more explicitly as

λ ≤ ½(−β + √(β² − 4γ))  and  λ ≥ ½(−β − √(β² − 4γ)) .   (13)

In addition, we still have the constraint α ≥ 0, which is common to every instance of the Any-Norm-SVM framework. This constraint is given in terms of λ in Eq. (10). Overall, the end-points of the interval we are searching for are found by taking the smallest upper bound and the greatest lower bound in Eqs. (10,13) and Eq. (12) with B replaced by √B.

Figure 1: Average test error of budget-L1-SVM (left) and budget-L2-SVM (right) for different values of the budget parameter B and the pruning parameter s (all but s weights in α are set to zero). The test error in the darkest region is roughly 50%, and in the lightest region roughly 5%.

5 Experiments

Many existing solvers for the standard L1-SVM problem define a positive threshold value close to zero and replace every weight that falls below this threshold with zero. This heuristic significantly reduces the time required for the algorithm to converge. In our setting, a more natural way to speed up the learning process is to run the iterative SMO optimization algorithm for a fixed number of iterations and then to keep only the B largest weights, setting the m − B remaining weights to zero.
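A minimal sketch of this keep-the-largest-weights pruning step (the function name is ours):

```python
def prune_weights(alpha, s):
    """Return a copy of alpha with all but the s absolutely-largest entries set to zero."""
    if s >= len(alpha):
        return list(alpha)
    order = sorted(range(len(alpha)), key=lambda i: abs(alpha[i]), reverse=True)
    keep = set(order[:s])                # indices of the s largest weights
    return [a if i in keep else 0.0 for i, a in enumerate(alpha)]

assert prune_weights([0.5, 0.1, 0.9, 0.0, 0.3], 2) == [0.5, 0.0, 0.9, 0.0, 0.0]
```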
This pruning heuristic enforces the budget constraint in a brute-force way, and can equally be applied to any kernel machine. However, the natural question is how much the pruning heuristic will affect the classification accuracy of the kernel machine it is applied to. If our technique indeed lives up to its theoretical promise, we expect the pruning heuristic to have little impact on classification accuracy. On the other hand, if we train an L1-SVM and it so happens that the number of large weights exceeds B, then applying the pruning heuristic should have a dramatic negative effect on classification accuracy. The goal of our experiments is to demonstrate that this behavior indeed occurs in practice. We conducted our experiments using the MNIST dataset, which contains handwritten digits from the 10 digit classes. We randomly generated 50 binary classification problems by first randomly partitioning the 10 classes into two equally sized sets, and then randomly choosing a training set of 1000 examples and a test set of 4000 examples. The results reported below are averaged over these 50 problems. Although MNIST is generally thought to induce easy learning problems, the method described above generates moderately difficult learning tasks. For each binary problem, we trained both the L1 and the L2 budget SVMs with B = 20, 40, ..., 1000. Note that ∥ξ∥_{K(1,∞,B)} grows roughly linearly with B, and that ∥ξ∥_{K(2,∞,√B)} grows roughly like the square root of B. To compensate for this, we set C = 10/B in the L1 case and C = 10/√B in the L2 case. This heuristic choice of C attempts to preserve the relative weight of the regularization term with respect to the norm term in Eq. (2) across the various values of B. In all of our experiments, we used a Gaussian kernel with σ = 1 (after scaling the data to have average unit norm). For each classifier trained, we pruned away all but the s largest weights, with s = 20, 40, ..., 1000, and calculated the test error.
The average test error for every choice of B (the budget parameter in the optimization problem) and s (the number of non-zero weights kept) is summarized in Fig. 1. In practice, s and B should be equal; however, we let s take different values in our experiment to illustrate the characteristics of our approach. Note that the test error attained by L1-SVM (without a budget parameter) and L2-SVM is represented by the top-right corners of the respective plots. As expected, classification accuracy for any value of B deteriorates as s becomes small. However, the accuracy attained by L1-SVM and L2-SVM can be equally attained using significantly fewer support vectors.

6 Discussion

Using the Any-Norm-SVM framework with interesting norms enabled us to introduce a budget parameter to the SVM formulation. However, the Any-Norm framework can be used for other tasks as well. For example, we can interpolate between L1-SVM and L2-SVM by using the 1-2 interpolation norm. This gives the user the explicit ability to balance the trade-off between the pros and cons of these two SVM variants. In [20] it is shown that there exists a constant c such that

c ∥v∥_{K(1,2,√r)} ≤ Σ_{j=1}^r |v_j| + √r (Σ_{j=r+1}^m v_j²)^{1/2} ≤ ∥v∥_{K(1,2,√r)} .

These bounds give some insight into how such an interpolation would behave. Another possible norm that can be used in our framework is the Mahalanobis norm (∥v∥ = (v⊤Mv)^{1/2}, where M is a positive definite matrix), which would define a loss function that takes into account pairwise relationships between examples. Regarding our experiments, the rule of thumb we used to choose the parameter C is not always optimal. It seems preferable to tune C individually for each B using cross-validation. We are currently exploring extensions to our SMO variant that would quickly converge to the sparse solution without the help of the pruning heuristic. We are also considering multiplicative-update optimization algorithms as an alternative to SMO.

References

[1] B. E. Boser, I. M.
Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In Proc. of the Fifth Annual ACM Workshop on Computational Learning Theory, pages 144–152, 1992.
[2] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[3] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[4] P. Bartlett and A. Tewari. Sparseness vs estimating conditional probabilities: Some asymptotic results. In Proc. of the Seventeenth Annual Conference on Computational Learning Theory, pages 564–578, 2004.
[5] J. C. Platt. Fast training of Support Vector Machines using sequential minimal optimization. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods – Support Vector Learning. MIT Press, 1998.
[6] C. J. C. Burges. Simplified support vector decision rules. In Proc. of the Thirteenth International Conference on Machine Learning, pages 71–77, 1996.
[7] E. Osuna and F. Girosi. Reducing the run-time complexity of support vector machines. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods: Support Vector Learning, pages 271–284. MIT Press, 1999.
[8] B. Schölkopf, S. Mika, C. J. C. Burges, P. Knirsch, K.-R. Müller, G. Rätsch, and A. J. Smola. Input space versus feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5):1000–1017, September 1999.
[9] J.-H. Chen and C.-S. Chen. Reducing SVM classification time using multiple mirror classifiers. IEEE Transactions on Systems, Man and Cybernetics – Part B: Cybernetics, 34(2):1173–1183, April 2004.
[10] M. Wu, B. Schölkopf, and G. Bakir. A direct method for building sparse kernel learning algorithms. Journal of Machine Learning Research, 7:603–624, 2006.
[11] K. P. Bennett. Combining support vector and mathematical programming methods for classification. In Advances in Kernel Methods: Support Vector Learning, pages 307–326. MIT Press, 1999.
[12] Y. Lee and O. L. Mangasarian.
RSVM: Reduced support vector machines. In Proc. of the First SIAM International Conference on Data Mining, 2001.
[13] K. Crammer, J. Kandola, and Y. Singer. Online classification on a budget. In Advances in Neural Information Processing Systems 16, 2003.
[14] O. Dekel, S. Shalev-Shwartz, and Y. Singer. The Forgetron: A kernel-based perceptron on a fixed budget. In Advances in Neural Information Processing Systems 18, 2005.
[15] N. Cesa-Bianchi and C. Gentile. Tracking the best hyperplane with a simple budget perceptron. In Proc. of the Nineteenth Annual Conference on Computational Learning Theory, 2006.
[16] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337–404, May 1950.
[17] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[18] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[19] C. Bennett and R. Sharpley. Interpolation of Operators. Academic Press, 1998.
[20] T. Holmstedt. Interpolation of quasi-normed spaces. Mathematica Scandinavica, 26:177–190, 1970.
Manifold Denoising

Matthias Hein, Markus Maier
Max Planck Institute for Biological Cybernetics, Tübingen, Germany
{first.last}@tuebingen.mpg.de

Abstract

We consider the problem of denoising a noisily sampled submanifold M in R^d, where the submanifold M is a priori unknown and we are only given a noisy point sample. The presented denoising algorithm is based on a graph-based diffusion process of the point sample. We analyze this diffusion process using recent results about the convergence of graph Laplacians. In the experiments we show that our method is capable of dealing with non-trivial high-dimensional noise. Moreover, using the denoising algorithm as a pre-processing method, we can improve the results of a semi-supervised learning algorithm.

1 Introduction

In recent years several new methods have been developed in the machine learning community which are based on the assumption that the data lies on a submanifold M in R^d. They have been used in semi-supervised learning [15], dimensionality reduction [14, 1] and clustering. However, there exists a certain gap between theory and practice: in practice the data almost never lies exactly on the submanifold but, due to noise, is scattered around it. Several of the existing algorithms, in particular graph-based methods, are quite sensitive to noise. Often they fail in the presence of high-dimensional noise, since then the distance structure is non-discriminative. In this paper we tackle this problem by proposing a denoising method for manifold data. Given noisily sampled manifold data in R^d, the objective is to 'project' the sample onto the submanifold. There already exist some methods with related objectives, such as principal curves [6] and the generative topographic mapping [2]. For both methods one has to know the intrinsic dimension of the submanifold M as a parameter of the algorithm. However, in the presence of high-dimensional noise it is almost impossible to estimate the intrinsic dimension correctly.
Moreover, problems usually arise if there is more than one connected component. The algorithm we propose addresses these problems. It works well for low-dimensional submanifolds corrupted by high-dimensional noise and can deal with multiple connected components. The basic principle behind our denoising method has been proposed by [13] as a surface processing method in R³. The goal of this paper is twofold. First, we extend this method to general submanifolds in R^d, aimed in particular at dealing with high-dimensional noise. Second, we provide an interpretation of the denoising algorithm which takes into account the probabilistic setting encountered in machine learning and which differs from the one usually given in the computer graphics community.

2 The noise model and problem statement

We assume that the data lies on an abstract m-dimensional manifold M, where the dimension m can be seen as the number of independent parameters in the data. This data is mapped via a smooth, regular embedding i : M → R^d into the feature space R^d. In the following we will not distinguish between M and i(M) ⊂ R^d, since it should be clear from the context which case we are considering. The Euclidean distance in R^d then induces a metric on M. This metric depends on the embedding/representation (e.g. scaling) of the data in R^d but is at least continuous with respect to the intrinsic parameters. Furthermore, we assume that the manifold M is equipped with a probability measure P_M which is absolutely continuous with respect to the natural volume element¹ dV of M. With these definitions the model for the noisy data-generating process in R^d has the following form:

X = i(Θ) + ε,  where Θ ∼ P_M and ε ∼ N(0, σ²𝟙).

Note that the probability measure of the noise ε has full support in R^d. We consider here for convenience a Gaussian noise model, but any other reasonably concentrated isotropic noise should also work.
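This generative model can be sketched concretely for the noisy sinusoid t → [sin(2πt), 2πt] embedded in R^200 with σ = 0.4, which is the toy dataset used later in the experiments (the function and parameter names are ours):

```python
import math
import random

def sample_noisy_sinusoid(n, d=200, sigma=0.4, seed=0):
    """Draw n samples of X = i(Theta) + eps, with i(t) = [sin(2*pi*t), 2*pi*t]
    embedded in R^d and full isotropic Gaussian noise N(0, sigma^2) per coordinate."""
    rng = random.Random(seed)
    sample = []
    for _ in range(n):
        t = rng.random()  # Theta ~ uniform on [0, 1]
        x = [math.sin(2 * math.pi * t), 2 * math.pi * t] + [0.0] * (d - 2)
        sample.append([xi + rng.gauss(0.0, sigma) for xi in x])
    return sample
```

Note that the noise lives in all d dimensions while the signal occupies only a one-dimensional curve; this is what makes the distance structure of the sample non-discriminative.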
The law P_X of the noisy data X can be computed from the true data-generating probability measure P_M:

P_X(x) = (2πσ²)^{−d/2} ∫_M e^{−∥x−i(θ)∥²/(2σ²)} p(θ) dV(θ).   (1)

Now the Gaussian measure is equivalent to the heat kernel p_t(x, y) = (4πt)^{−d/2} exp(−∥x−y∥²/(4t)) of the diffusion process on R^d, see e.g. [5], if we make the identification σ² = 2t. An alternative point of view is therefore to see P_X as the result of a diffusion of the density function² p(θ) of P_M, stopped at time t = σ²/2. The basic principle behind the denoising algorithm in this paper is to reverse this diffusion process.

3 The denoising algorithm

In practice we are only given an i.i.d. sample X_i, i = 1, ..., n, of P_X. The ideal goal would be to find the corresponding set of points i(θ_i), i = 1, ..., n, on the submanifold M which generated the points X_i. However, due to the random nature of the noise this is in principle impossible. Instead, the goal is to find corresponding points Z_i on the submanifold M which are close to the points X_i. However, we face several problems. First, since we are only given a finite sample, we know neither P_X nor P_M. Second, as stated in the last section, we would like to reverse the diffusion process, which amounts to solving a PDE; the usual technique of solving this PDE on a grid is infeasible due to the high dimension of the ambient space R^d. Instead, we solve the diffusion process directly on a graph generated by the sample X_i. This can be motivated by recent results in [7], where it was shown that the generator of the diffusion process, the Laplacian ∆_{R^d}, can be approximated by the graph Laplacian of a random neighborhood graph. A similar setting for the denoising of two-dimensional meshes in R³ was proposed in the seminal work of Taubin [13]. Since then, several modifications of his original idea have been proposed in the computer graphics community, including the recent development in [11] to apply the algorithm directly to point cloud data in R³.
In this paper we propose a modification of this diffusion process which allows us to deal with general noisy samples of arbitrary (low-dimensional) submanifolds in R^d. In particular, the proposed algorithm can cope with high-dimensional noise. Moreover, we give an interpretation of the algorithm which differs from the one usually given in the computer graphics community and takes into account the probabilistic nature of the problem.

3.1 Structure on the sample-based graph

We would like to define a diffusion process directly on the sample X_i. To this end we need the generator of the diffusion process, the graph Laplacian. We construct this operator for a weighted, undirected graph. The graph vertices are the sample points X_i. With {h(X_i)}_{i=1}^n being the k-nearest-neighbor (k-NN) distances, the weights of the k-NN graph are defined as

w(X_i, X_j) = exp(−∥X_i − X_j∥² / max{h(X_i), h(X_j)}²)  if ∥X_i − X_j∥ ≤ max{h(X_i), h(X_j)},

and w(X_i, X_j) = 0 otherwise. Additionally we set w(X_i, X_i) = 0, so that the graph has no loops. Further, we denote by d the degree function d(X_i) = Σ_{j=1}^n w(X_i, X_j) of the graph, and we introduce two Hilbert spaces H_V, H_E of functions on the vertices V and edges E. Their inner products are defined as

⟨f, g⟩_{H_V} = Σ_{i=1}^n f(X_i) g(X_i) d(X_i),   ⟨φ, ψ⟩_{H_E} = Σ_{i,j=1}^n w(X_i, X_j) φ(X_i, X_j) ψ(X_i, X_j).

Introducing the discrete differential ∇ : H_V → H_E, (∇f)(X_i, X_j) = f(X_j) − f(X_i), the graph Laplacian is defined as

∆ : H_V → H_V ,  ∆ = ∇*∇ ,  (∆f)(X_i) = f(X_i) − (1/d(X_i)) Σ_{j=1}^n w(X_i, X_j) f(X_j),

where ∇* is the adjoint of ∇. Defining the matrix D with the degree function on the diagonal, the graph Laplacian in matrix form is given as ∆ = 𝟙 − D^{−1}W; see [7] for more details.

Footnote 1: In local coordinates θ_1, ..., θ_m the natural volume element dV is given as dV = √(det g) dθ_1 ... dθ_m, where det g is the determinant of the metric tensor g.
Footnote 2: Note that P_M is not absolutely continuous with respect to the Lebesgue measure in R^d, and therefore p(θ) is not a density in R^d.
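The graph structure just defined, together with the implicit-Euler update X(t+1) = (𝟙 + δt ∆)^{−1} X(t) it is used for in Sec. 3.2, can be sketched in pure Python. Function names and the toy dense solver are ours, not the paper's, and math.dist assumes Python 3.8+:

```python
import math

def knn_graph(X, k):
    """Weighted k-NN graph of Sec. 3.1: returns (W, d) = weight matrix and degrees."""
    n = len(X)
    dist = [[math.dist(X[i], X[j]) for j in range(n)] for i in range(n)]
    h = [sorted(dist[i])[k] for i in range(n)]  # k-NN distance; slot 0 is the point itself
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            m = max(h[i], h[j])
            if i != j and dist[i][j] <= m:
                W[i][j] = math.exp(-dist[i][j] ** 2 / m ** 2)
    return W, [sum(row) for row in W]

def solve(A, b):
    """Gaussian elimination with partial pivoting (toy solver for small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def denoise_step(X, W, d, dt=0.5):
    """One implicit-Euler step: X(t+1) = (1 + dt*Delta)^{-1} X(t), Delta = 1 - D^{-1}W."""
    n, dim = len(X), len(X[0])
    # (1 + dt*Delta) = (1 + dt) * I - dt * D^{-1} W
    A = [[(1.0 + dt if i == j else 0.0) - dt * W[i][j] / d[i] for j in range(n)]
         for i in range(n)]
    cols = [solve(A, [X[i][c] for i in range(n)]) for c in range(dim)]
    return [[cols[c][i] for c in range(dim)] for i in range(n)]
```

Iterating denoise_step, recomputing (W, d) with knn_graph after every step, gives the dynamic-graph diffusion described in Sec. 3.2; note that a configuration in which every point coincides with its neighbors' weighted average is a fixed point of the update.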
Note that although ∆ is not a symmetric matrix, it is a self-adjoint operator with respect to the inner product in H_V.

3.2 The denoising algorithm

Having defined the necessary structure on the graph, it is straightforward to write down the backward diffusion process. In the next section we analyze the geometric properties of this diffusion process and show why it is directed towards the submanifold M. Since the graph Laplacian is the generator of the diffusion process on the graph, we can formulate the algorithm by the following differential equation on the graph:

∂_t X = −γ ∆X,   (2)

where γ > 0 is the diffusion constant. Since the points change with time, the whole graph is dynamic in our setting. This differs from the diffusion processes on a fixed graph studied in semi-supervised learning. In order to solve the differential equation (2) we choose an implicit Euler scheme, that is,

X(t + 1) − X(t) = −δt γ ∆X(t + 1),   (3)

where δt is the time step. Since the implicit Euler scheme is unconditionally stable, we can choose the factor δt γ arbitrarily. In the following we fix γ = 1, so that the only free parameter remains δt, which is set to δt = 0.5 in the rest of the paper. The solution of the implicit Euler scheme for one time step in Equation (3) can then be computed as X(t+1) = (𝟙 + δt ∆)^{−1} X(t). After each time step the point configuration has changed, so that one has to recompute the weight matrix W of the graph. The procedure is continued until a predefined stopping criterion is satisfied, see Section 3.4. The pseudo-code is given in Algorithm 1.

Algorithm 1 Manifold denoising
1: Choose δt, k
2: while stopping criterion not satisfied do
3:   Compute the k-NN distances h(X_i), i = 1, ..., n
4:   Compute the weights w(X_i, X_j) of the graph with w(X_i, X_i) = 0 and w(X_i, X_j) = exp(−∥X_i − X_j∥² / max{h(X_i), h(X_j)}²) if ∥X_i − X_j∥ ≤ max{h(X_i), h(X_j)}
5:   Compute the graph Laplacian ∆ = 𝟙 − D^{−1}W
6:   Solve X(t + 1) − X(t) = −δt ∆X(t + 1), i.e. X(t + 1) = (𝟙 + δt ∆)^{−1} X(t)
7: end while

In [12] it was pointed out that there exists a connection between diffusion processes and Tikhonov regularization. Namely, the result of one time step of the diffusion process with the implicit Euler scheme is equivalent to the solution of the following regularization problem on the graph:

arg min_{Z ∈ H_V^d} S(Z) := arg min_{Z ∈ H_V^d} Σ_{α=1}^d ∥Z^α − X^α(t)∥²_{H_V} + δt Σ_{α=1}^d ∥∇Z^α∥²_{H_E} ,

where Z^α denotes the α-component of the vector Z ∈ R^d. With ∥∇Z^α∥²_{H_E} = ⟨Z^α, ∆Z^α⟩_{H_V}, the minimizer of the above functional with respect to Z^α can be easily computed from

∂S/∂Z^α = 2(Z^α − X^α(t)) + 2 δt ∆Z^α = 0,  α = 1, ..., d,

so that Z = (𝟙 + δt ∆)^{−1} X(t). Each time step of our diffusion process can therefore be seen as a regression problem, where we trade off between fitting the new points Z to the points X(t) and having a 'smooth' point configuration Z measured with respect to the current graph built from X(t).

3.3 k-nearest-neighbor graph versus h-neighborhood graph

In the denoising algorithm we have chosen to use a weighted k-NN graph. It turns out that a k-NN graph has three advantages over an h-neighborhood graph³. The first advantage is that the graph has better connectivity: points in areas of different density have quite different neighborhood scales, which for a fixed h leads to either disconnected or over-connected graphs. Second, we usually have high-dimensional noise. In this case it is well known that the distance statistics of a sample change drastically, as illustrated by the following simple lemma.

Lemma 1. Let x, y ∈ R^d and ε_1, ε_2 ∼ N(0, σ²𝟙), and define X = x + ε_1 and Y = y + ε_2. Then

E∥X − Y∥² = ∥x − y∥² + 2dσ² ,  and  Var ∥X − Y∥² = 8σ²∥x − y∥² + 8dσ⁴ .
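The mean in Lemma 1 is easy to confirm by simulation; a small Monte-Carlo sketch (the function name, sample sizes and tolerances are our choices):

```python
import random

def mean_sq_dist(x, y, sigma, trials, seed=0):
    """Monte-Carlo estimate of E||X - Y||^2 for X = x + eps1, Y = y + eps2,
    with i.i.d. N(0, sigma^2) noise per coordinate.  Lemma 1 predicts
    ||x - y||^2 + 2 * d * sigma^2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += sum((xi + rng.gauss(0.0, sigma) - yi - rng.gauss(0.0, sigma)) ** 2
                     for xi, yi in zip(x, y))
    return total / trials
```

For d = 100, ∥x − y∥² = 1 and σ = 0.5, the predicted value is 1 + 2·100·0.25 = 51, and the noise term dominates the signal term fifty-fold, which is exactly the concentration effect discussed next.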
One can deduce that the expected squared distance of the noisy submanifold sample is dominated by the noise term if 2dσ² > max_{θ,θ′} ∥i(θ) − i(θ′)∥², which is usually the case for large d. In this case it is quite difficult to adjust the average number of neighbors in a graph by a fixed neighborhood size h, since the distances start to concentrate around their mean value. The third advantage is that by choosing k we can directly control the sparsity of the weight matrix W and the Laplacian ∆ = 𝟙 − D^{−1}W, so that the linear system in each time step can be solved efficiently.

3.4 Stopping criterion

The problem of choosing the correct number of iterations is very difficult if one initially has high-dimensional noise, and requires prior knowledge. We propose two stopping criteria. The first is based on the effect that if the diffusion runs too long, the data becomes disconnected and concentrates in local clusters. One can therefore stop if the number of connected components of the graph⁴ increases. The second is based on prior knowledge about the intrinsic dimension of the data. In this case one can stop the denoising once the estimated dimension of the sample (e.g. via the correlation dimension, see [4]) equals the intrinsic one. Another less principled but very simple alternative is to stop the iterations once the changes in the sample fall below some pre-defined threshold.

4 Large sample limit and theoretical analysis

Our qualitative theoretical analysis of the denoising algorithm is based on recent results on the limit of graph Laplacians [7, 8] as the neighborhood size decreases and the sample size increases. We use this result to study the continuous limit of the diffusion process. The following theorem about the limit of the graph Laplacian applies to h-neighborhood graphs, whereas the denoising algorithm is based on a k-NN graph. Our conjecture⁵ is that the result carries over to k-NN graphs.

Theorem 1 [7, 8]. Let {X_i}_{i=1}^n be an i.i.d.
sample of a probability measure P_M on an m-dimensional compact submanifold⁶ M of R^d, where P_M has a density p_M ∈ C³(M). Let f ∈ C³(M) and x ∈ M\∂M. Then, if h → 0 and n h^{m+2}/log n → ∞,

lim_{n→∞} (1/h²)(∆f)(x) ∼ −(∆_M f)(x) − (2/p) ⟨∇f, ∇p⟩_{T_xM} ,  almost surely,

where ∆_M is the Laplace-Beltrami operator of M and ∼ means up to a constant which depends on the kernel function k(∥x − y∥) used to define the weights W(x, y) = k(∥x − y∥) of the graph.

Footnote 3: In an h-neighborhood graph two sample points X_i, X_j have a common edge if ∥X_i − X_j∥ ≤ h.
Footnote 4: The number of connected components is equal to the multiplicity of the first eigenvalue of the graph Laplacian.
Footnote 5: We partially verified the conjecture; however, the proof would go beyond the scope of this paper.
Footnote 6: Note that the case where P has full support in R^d is a special case of this theorem.

4.1 The noise-free case

We first derive, in a non-rigorous way, the continuum limit of our graph-based diffusion process in the noise-free case. To that end we follow the usual argument made in physics to go from a difference equation on a grid to a differential equation. We rewrite our diffusion equation (2) on the graph as

(i(t + 1) − i(t))/δt = −(h²/δt) (1/h²) ∆i .

Taking the limit h → 0 and δt → 0 such that the diffusion constant D = h²/δt stays finite, and using the limit of (1/h²)∆ given in Theorem 1, we get the following differential equation:

∂_t i = D [∆_M i + (2/p) ⟨∇p, ∇i⟩].   (4)

Note that for the k-NN graph the neighborhood size h is a function of the local density, which implies that the diffusion constant D also becomes a function of the local density, D = D(p(x)).

Lemma 2 ([9], Lemma 2.14). Let i : M → R^d be a regular, smooth embedding of an m-dimensional manifold M. Then ∆_M i = m H, where H is the mean curvature⁷ of M.

Using the equation ∆_M i = m H we can establish equivalence of the continuous diffusion equation (4) to a generalized mean curvature flow.
∂_t i = D [m H + (2/p) ⟨∇p, ∇i⟩].   (5)

The equivalence to the mean curvature flow ∂_t i = m H is usually given in computer graphics as the reason for the denoising effect, see [13, 11]. However, as we have shown, the diffusion already has an additional part if one has a non-uniform probability measure on M.

Footnote 7: The mean curvature H is the trace of the second fundamental form. If M is a hypersurface in R^d, the mean curvature at p is H = (1/(d−1)) Σ_{i=1}^{d−1} κ_i N, where N is the normal vector and κ_i are the principal curvatures at p.

4.2 The noisy case

The analysis of the noisy case is more complicated and we can only provide a rough analysis. The large sample limit n → ∞ of the graph Laplacian ∆ at a sample point X_i is given as

∆X_i = X_i − [∫_{R^d} k_h(∥X_i − y∥) y p_X(y) dy] / [∫_{R^d} k_h(∥X_i − y∥) p_X(y) dy] ,   (6)

where k_h(∥x − y∥) is the weight function used in the construction of the graph, in our case k_h(∥x − y∥) = e^{−∥x−y∥²/(2h²)} 𝟙_{∥x−y∥≤h}. In the following analysis we will assume three things: 1) the noise level σ is small compared to the neighborhood size h, 2) the curvature of M is small compared to h, and 3) the density p_M varies slowly along M. Under these conditions it is easy to see that the main contribution of −∆X_i in Equation (6) will be in the direction of the gradient of p_X at X_i. In the following we try to separate this effect from the mean curvature part derived in the noise-free case. Under the above conditions we can make the following second-order approximation of a convolution with a Gaussian, see [7], using the explicit form of p_X from Equation (1):

∫_{R^d} k_h(∥X − y∥) y p_X(y) dy = ∫_M (2πσ²)^{−d/2} ∫_{R^d} k_h(∥X − y∥) y e^{−∥y−i(θ)∥²/(2σ²)} dy p(θ) dV(θ)
  = ∫_M k_h(∥X − i(θ)∥) i(θ) p(θ) dV(θ) + O(σ²).

Now define the closest point of the submanifold M to X: i(θ_min) = arg min_{i(θ)∈M} ∥X − i(θ)∥. Using the condition on the curvature we can approximate the diffusion step −∆X as follows:

−∆X ≈ [i(θ_min) − X]  (term I)
  − [ i(θ_min) − ∫_M k_h(∥i(θ_min) − i(θ)∥) i(θ) p(θ) dV(θ) / ∫_M k_h(∥i(θ_min) − i(θ)∥) p(θ) dV(θ) ]  (term II),
where we have omitted second-order terms. It follows from the proof of Theorem 1 that term II is an approximation of −∆_M i(θ_min) − (2/p)⟨∇p, ∇i⟩ = −m H − (2/p)⟨∇p, ∇i⟩, whereas term I leads to a movement of X towards M. We conclude from this rough analysis that in the denoising procedure there is always a tradeoff between reducing the noise via term I and smoothing the manifold via the mean curvature term II. Note that term II is the same for all points X which have i(θ_min) as their closest point on M. Therefore this term leads to a global flow which smoothes the submanifold. In the experiments we observe this as the shrinking phenomenon.

5 Experiments

In the experimental section we test the performance of the denoising algorithm on three noisy datasets. Furthermore we explore the possibility of using the denoising method as a preprocessing step for semi-supervised learning. Due to lack of space we cannot cover further applications as a preprocessing method for clustering or dimensionality reduction.

5.1 Denoising

The first experiment is done on a toy dataset. The manifold M is given as t → [sin(2πt), 2πt], where t is sampled uniformly on [0, 1]. We embed M into R²⁰⁰ and add full isotropic Gaussian noise with σ = 0.4 to each datapoint, resulting in the left part of Figure 1. We verify the effect of the denoising algorithm by continuously estimating the dimension over different scales (note that the dimension of a finite sample always depends on the scale at which one examines it). We use for that purpose the correlation dimension estimator of [4]. The result of the denoising algorithm with k = 25 for the k-NN graph and 10 timesteps is given in the right part of Figure 1. One can observe visually, by inspecting the dimension estimate, and by the histogram of distances that the algorithm has reduced the noise. One can also see two undesired effects.
First, as discussed in the last section, the diffusion process has a component which moves the manifold in the direction of the mean curvature, which leads to a smoothing of the sinusoid. Second, at the boundary the sinusoid shrinks due to the missing counterparts in the local averaging done by the graph Laplacian, see (6), which results in an inward tangential component. In the next experiment we apply the denoising to the handwritten digit datasets USPS and MNIST.

Figure 1: Left: 500 samples of the noisy sinusoid in R²⁰⁰ as described in the text (data points, dimension vs. scale, histogram of distances). Right: Result after 10 steps of the denoising method with k = 25; note that the estimated dimension is much smaller and the scale has changed, as can be seen from the histogram of distances shown to the right.

For handwritten digits the underlying manifold corresponds to varying writing styles. In order to check if the denoising method can also handle several manifolds at the same time, which would make the method useful for clustering and dimensionality reduction, we fed all 10 digits simultaneously into the algorithm. For USPS we used the 9298 digits in the training and test set, and from MNIST a subsample of 1000 examples from each digit. We used the two-sided tangent distance of [10], which provides a certain invariance against translation, scaling, rotation and line thickness. In Figures 2 and 3 we show a sample of the result across all digits. In both cases some digits are transformed wrongly. This happens since they are outliers with respect to their digit manifold and lie closer to another digit component. An improved handling of invariances should resolve this problem at least partially.
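The scale-dependent dimension estimate used to verify the denoising above (the correlation dimension estimator of [4]) can be sketched as follows: C(r) is the fraction of point pairs closer than r, and the dimension between two scales r1 < r2 is the slope of log C(r). This is a simplified sketch; the exact fitting procedure of the estimator is not reproduced here.

```python
import numpy as np

def correlation_dimension(X, r1, r2):
    """Grassberger-Procaccia style scale-dependent dimension estimate:
    slope of log C(r) between scales r1 < r2, where C(r) is the
    fraction of point pairs within distance r (simplified sketch)."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)
    pd = d[iu]                          # all pairwise distances
    C = lambda r: (pd < r).mean()       # correlation integral estimate
    return (np.log(C(r2)) - np.log(C(r1))) / (np.log(r2) - np.log(r1))
```

For a noise-free 1-dimensional curve the estimate is close to 1 at small scales; noise in a high-dimensional ambient space inflates it, which is what the "dimension vs. scale" panels of Figure 1 visualize.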
5.2 Denoising as pre-processing for semi-supervised learning

Most semi-supervised learning (SSL) methods are based on the cluster assumption, that is, the decision boundary should lie in a low-density region. The denoising algorithm is consistent with that assumption

Figure 2: Left: Original images from USPS; right: after 15 iterations with k = [9298/50].
since it moves data points towards high-density regions. This is in particular helpful if the original clusters are distorted by high-dimensional noise. In this case the distance structure of the data becomes less discriminative, see Lemma 1, and the identification of the low-density regions is quite difficult. We expect that in such cases manifold denoising as a pre-processing step should improve the discriminative capacity of graph-based methods.

Figure 3: Left: Original images from MNIST; right: after 15 iterations with k = 100.
However, the denoising algorithm does not take label information into account. In the case where the cluster assumption is not fulfilled, the denoising algorithm might therefore decrease performance. We therefore add the number of iterations of the denoising process as an additional parameter of the SSL algorithm. For the evaluation of our denoising algorithm as a preprocessing step for SSL, we used the benchmark data sets from [3]. A description of the data sets and the results of several state-of-the-art SSL algorithms can be found there. As SSL algorithm we use a slight variation of the one by Zhou et al. [15]. It can be formulated as the following regularized least squares problem:

f* = argmin_{f ∈ H_V} ∥f − y∥²_{H_V} + µ ⟨f, ∆f⟩_{H_V},

where y is the given label vector and ⟨f, ∆f⟩_{H_V} is the smoothness functional induced by the graph Laplacian. The solution is given as f* = (1 + µ∆)⁻¹ y. In order to be consistent with our denoising scheme we choose, instead of the normalized graph Laplacian ∆̃ = 1 − D^{−1/2} W D^{−1/2} suggested in [15], the graph Laplacian ∆ = 1 − D⁻¹W and the graph structure as described in Section 3.1. As neighborhood graph for the SSL algorithm we used a symmetric k-NN graph with the following weights: w(Xi, Xj) = exp(−γ ∥Xi − Xj∥²) if ∥Xi − Xj∥ ≤ min{h(Xi), h(Xj)}, and 0 otherwise. As suggested in [3], the distances are rescaled in each iteration such that the 1/c²-quantile of the distances equals 1, where c is the number of classes. The number of nearest neighbors k was chosen for denoising in {5, 10, 15, 25, 50, 100, 150, 200}, and for classification in {5, 10, 20, 50, 100}. The scaling parameter γ and the regularization parameter µ were selected from {1/2, 1, 2} resp. {2, 20, 200}. The maximum number of iterations was set to 20. Parameter values for which not all data points were classified, that is, the graph is disconnected, were excluded. The best parameters were found by ten-fold cross-validation.
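The closed-form solution above, f* = (1 + µ∆)⁻¹ y with the random-walk Laplacian ∆ = 1 − D⁻¹W, can be computed directly from a weight matrix. A minimal sketch, where the weight matrix in the usage example is a toy two-cluster graph rather than the rescaled k-NN construction described above:

```python
import numpy as np

def ssl_predict(W, y, mu):
    """Closed-form SSL solution f* = (I + mu*Delta)^{-1} y with the
    random-walk graph Laplacian Delta = I - D^{-1} W.  Labels y are in
    {-1, 0, +1}, with 0 marking unlabeled points.  Sketch only."""
    n = len(W)
    D_inv = 1.0 / W.sum(axis=1)
    Delta = np.eye(n) - D_inv[:, None] * W   # I - D^{-1} W
    return np.linalg.solve(np.eye(n) + mu * Delta, y)

# Toy example: two clusters {0,1} and {2,3}, strong intra-cluster and
# weak inter-cluster weights; label one node per cluster.
eps = 0.01
W = np.array([[0, 1, eps, eps],
              [1, 0, eps, eps],
              [eps, eps, 0, 1],
              [eps, eps, 1, 0]], dtype=float)
y = np.array([1.0, 0.0, -1.0, 0.0])
f = ssl_predict(W, y, mu=1.0)
```

The unlabeled nodes inherit the sign of the labeled node in their cluster, which is exactly the smoothness behavior the regularizer ⟨f, ∆f⟩ enforces.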
The final classification is done using a majority vote of the classifiers corresponding to the minimal cross-validation test error. In Table 1 the results are shown for the standard case, that is, no manifold denoising (No MD), and with manifold denoising (MD). For the datasets g241c, g241d and Text we get significantly better performance using denoising as a preprocessing step, whereas the results are indifferent for the other datasets. However, compared to the results of the state of the art of SSL on all the datasets reported in [3], the denoising preprocessing has led to a performance of the algorithm which is competitive uniformly over all datasets. This improvement is probably not limited to the employed SSL algorithm but should also apply to other graph-based methods.

Table 1: Manifold Denoising (MD) as preprocessing for SSL. The mean and standard deviation of the test error are shown for the datasets from [3] for 10 (top) and 100 (bottom) labeled points.

          g241c       g241d       Digit1     USPS       COIL       BCI        Text
No MD     47.9±2.67   47.2±4.0    14.1±5.4   19.2±2.1   66.2±7.8   50.0±1.1   41.9±7.0
MD        29.0±14.3   26.6±17.8   13.8±5.5   20.5±5.0   66.4±6.0   49.8±1.5   33.6±7.0
ø Iter.   12.3±3.8    11.7±4.4    9.6±2.4    7.3±2.9    4.9±2.7    8.2±3.5    5.6±4.4

No MD     38.9±6.3    34.2±4.1    3.0±1.6    6.2±1.2    15.5±2.6   46.5±1.9   27.0±1.9
MD        16.1±2.2    7.5±0.9     3.2±1.2    5.3±1.4    16.2±2.5   48.4±2.0   24.1±2.8
ø Iter.   15.0±0.8    14.5±1.5    8.0±3.2    8.3±3.8    1.6±1.8    8.4±4.3    6.0±3.5

References

[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comp., 15(6):1373–1396, 2003.
[2] C. M. Bishop, M. Svensen, and C. K. I. Williams. GTM: The generative topographic mapping. Neural Computation, 10:215–234, 1998.
[3] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, 2006. In press, http://www.kyb.tuebingen.mpg.de/ssl-book.
[4] P. Grassberger and I. Procaccia. Measuring the strangeness of strange attractors. Physica D, 9:189–208, 1983.
[5] A. Grigoryan.
Heat kernels on weighted manifolds and applications. Cont. Math., 398:93–191, 2006.
[6] T. Hastie and W. Stuetzle. Principal curves. J. Amer. Stat. Assoc., 84:502–516, 1989.
[7] M. Hein, J.-Y. Audibert, and U. von Luxburg. From graphs to manifolds - weak and strong pointwise consistency of graph Laplacians. In P. Auer and R. Meir, editors, Proc. of the 18th Conf. on Learning Theory (COLT), pages 486–500, Berlin, 2005. Springer.
[8] M. Hein, J.-Y. Audibert, and U. von Luxburg. Graph Laplacians and their convergence on random neighborhood graphs, 2006. Accepted at JMLR, available at arXiv:math.ST/0608522.
[9] M. Hein. Geometrical aspects of statistical learning theory. PhD thesis, MPI für biologische Kybernetik/Technische Universität Darmstadt, 2005.
[10] D. Keysers, W. Macherey, H. Ney, and J. Dahmen. Adaptation in statistical pattern recognition using tangent vectors. IEEE Trans. on Pattern Anal. and Machine Intel., 26:269–274, 2004.
[11] C. Lange and K. Polthier. Anisotropic smoothing of point sets. Computer Aided Geometric Design, 22:680–692, 2005.
[12] O. Scherzer and J. Weickert. Relations between regularization and diffusion imaging. J. of Mathematical Imaging and Vision, 12:43–63, 2000.
[13] G. Taubin. A signal processing approach to fair surface design. In Proc. of the 22nd annual conf. on Computer graphics and interactive techniques (Siggraph), pages 351–358, 1995.
[14] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[15] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In S. Thrun, L. Saul, and B. Schölkopf, editors, Adv. in Neur. Inf. Proc. Syst. (NIPS), volume 16. MIT Press, 2004.
Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields Chi-Hoon Lee Department of Computing Science University of Alberta chihoon@cs.ualberta.ca Shaojun Wang ∗ Department of Computer Science and Engineering Wright State University shaojun.wang@wright.edu Feng Jiao Department of Computing Science University of Waterloo fjiao@cs.uwaterloo.ca Dale Schuurmans, Russell Greiner Department of Computing Science University of Alberta {dale, greiner}@cs.ualberta.ca Abstract We present a novel, semi-supervised approach to training discriminative random fields (DRFs) that efficiently exploits labeled and unlabeled training data to achieve improved accuracy in a variety of image processing tasks. We formulate DRF training as a form of MAP estimation that combines conditional log-likelihood on labeled data, given a data-dependent prior, with a conditional entropy regularizer defined on unlabeled data. Although the training objective is no longer concave, we develop an efficient local optimization procedure that produces classifiers that are more accurate than ones based on standard supervised DRF training. We then apply our semi-supervised approach to train DRFs to segment both synthetic and real data sets, and demonstrate significant improvements over supervised DRFs in each case. 1 Introduction Random field models are a popular probabilistic framework for representing complex dependencies in natural image data. The two predominant types of random field models correspond to generative versus discriminative graphical models, respectively. Classical Markov random fields (MRFs) [2] follow a traditional generative approach, where one models the joint probability of the observed image along with the hidden label field over the pixels. Discriminative random fields (DRFs) [11, 10], on the other hand, directly model the conditional probability over the pixel label field given an observed image.
In this sense, a DRF is equivalent to a conditional random field [12] defined over a 2-D lattice. Following the basic tenet of Vapnik [18], it is natural to anticipate that learning an accurate joint model should be more challenging than learning an accurate conditional model. Indeed, recent experimental evidence shows that DRFs tend to produce more accurate image labeling models than MRFs in many applications, such as gesture recognition [15] and object detection [11, 10, 19, 17]. Although DRFs tend to produce superior pixel labellings to MRFs, partly by relaxing the assumption of conditional independence of observed images given the labels, the approach relies more heavily on supervised training. DRF training typically uses labeled image data where each pixel label has been assigned. However, it is considerably more difficult to obtain labeled data for image analysis than for other classification tasks, such as document classification, since hand-labeling the individual pixels of each image is much harder than assigning class labels to objects like text documents.

∗Work done while at University of Alberta

Recently, semi-supervised training has taken on an important new role in many application areas due to the abundance of unlabeled data. Consequently, many researchers are now working on developing semi-supervised learning techniques for a variety of approaches, including generative models [14], self-learning [5], co-training [3], information-theoretic regularization [6, 8], and graph-based transduction [22, 23, 24]. However, most of these techniques have been developed for univariate classification problems, or class-label classification with a structured input [22, 23, 24]. Unfortunately, semi-supervised learning for structured classification problems, where the prediction variables are interdependent in complex ways, has not been as widely studied, with few exceptions [1, 9].
Current work on semi-supervised learning for structured predictors [1, 9] has focused primarily on simple sequence prediction tasks where learning and inference can be efficiently performed using standard dynamic programming. Unfortunately, the problem we address is more challenging, since the spatial correlations in a 2-D grid structure create numerous dependency cycles. That is, our graphical model structure prevents exact inference from being feasible. Kumar et al. [10] and Vishwanathan et al. [19] argue that learning a model in the context of approximate inference creates a greater risk of over-fitting and over-estimation. In this paper, we extend the work on semi-supervised learning for sequence predictors [1, 9], particularly the CRF-based approach [9], to semi-supervised learning of DRFs. There are several advantages of our approach to semi-supervised DRFs. (1) We inherit the standard advantage of discriminative conditional versus joint model training, while still being able to exploit unlabeled data. (2) The use of unlabeled data enhances our ability to avoid parameter over-fitting and over-estimation in grid-based random fields, even when using a learner that relies only on approximate inference methods. (3) We are still able to model spatial correlations in a 2-D lattice, despite the fact that this introduces dependency cycles in the model. That is, our semi-supervised training procedure can be interpreted as a MAP estimator, where the parameter prior for the model on labeled data is governed by the conditional entropy of the model on unlabeled data. This allows us to learn local potentials that capture spatial correlations while often avoiding local over-estimation. We demonstrate the robustness of our model by applying it to a pixel denoising problem on synthetic images, and also to a challenging real-world problem of segmenting tumors in magnetic resonance images.
In each case, we have obtained significant improvements over current baselines based on standard DRF training.

2 Semi-Supervised DRFs (SSDRFs)

We formulate a new semi-supervised DRF training principle based on the standard supervised formulation of [11, 10]. Let x be an observed input image, represented by x = {x_i}_{i∈S}, where S is the set of observed image pixels (nodes). Let y = {y_i}_{i∈S} be the joint set of labels over all pixels of an image. For simplicity we assume each component y_i ∈ y ranges over binary classes Y = {−1, 1}. For example, x might be a magnetic resonance image of a brain and y a realization of a joint labeling over all pixels that indicates whether each pixel is normal or a tumor. In this case, Y would be the set of pre-defined pixel categories (e.g. tumor versus non-tumor). A DRF is a conditional random field defined on the pixel labels, conditioned on the observation x. More explicitly, the joint distribution over the labels y given the observations x is written

pθ(y|x) = (1/Zθ(x)) exp( Σ_{i∈S} Φ_w(y_i, x) + Σ_{i∈S} Σ_{j∈N_i} Ψ_ν(y_i, y_j, x) ).   (1)

Here N_i denotes the neighboring pixels of i. Φ_w(y_i, x) = log σ(y_i wᵀ h_i(x)) denotes the node potential at pixel i, which quantifies the belief that the class label is y_i for the pre-defined feature vector h_i(x), where σ(t) = 1/(1 + e^{−t}). Ψ_ν(y_i, y_j, x) = y_i y_j νᵀ µ_ij(x) is an edge potential that captures spatial correlations among neighboring pixels (here, the ones at positions i and j), where µ_ij(x) is the pre-defined feature vector associated with observation x. Zθ(x) is the normalizing factor, also known as the (conditional) partition function:

Zθ(x) = Σ_y exp( Σ_{i∈S} Φ_w(y_i, x) + Σ_{i∈S} Σ_{j∈N_i} Ψ_ν(y_i, y_j, x) ).   (2)

Finally, θ = (w, ν) are the model parameters. When the edge potentials are set to zero, a DRF reduces to a standard logistic regression classifier. The potentials in a DRF can use properties of the observed image, and thereby relax the conditional independence assumption of MRFs.
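The unnormalized log-probability inside the exponential of Eq. (1), i.e. node potentials Φ_w(y_i, x) = log σ(y_i wᵀh_i(x)) plus edge potentials Ψ_ν(y_i, y_j, x) = y_i y_j νᵀµ_ij(x), can be sketched directly. The feature maps h_i and µ_ij are problem-specific, so the dictionaries in this sketch are illustrative stand-ins, not the paper's actual features.

```python
import numpy as np

def sigma(t):
    """Logistic function sigma(t) = 1 / (1 + exp(-t))."""
    return 1.0 / (1.0 + np.exp(-t))

def drf_log_score(y, h, mu, w, v):
    """Unnormalized log-probability of a labeling y (the exponent of
    Eq. 1): node terms log sigma(y_i w^T h_i) plus edge terms
    y_i y_j v^T mu_ij.  h[i] and mu[(i, j)] hold pre-computed feature
    vectors (hypothetical stand-ins for the image features)."""
    node = sum(np.log(sigma(y[i] * (w @ h[i]))) for i in h)
    edge = sum(y[i] * y[j] * (v @ m) for (i, j), m in mu.items())
    return node + edge
```

With positive edge features and positive ν, agreeing neighbor labels score higher than disagreeing ones, which is the smoothing behavior the edge potential contributes.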
Moreover, the edge potentials in a DRF can smooth discontinuities between heterogeneous class pixels, and also correct errors made by the node potentials. Assume we have a set of independent labeled images, D_l = ((x^(1), y^(1)), · · · , (x^(M), y^(M))), and a set of independent unlabeled images, D_u = (x^(M+1), · · · , x^(T)). Our goal is to build a DRF model from the combined set of labeled and unlabeled examples, D_l ∪ D_u. The standard supervised DRF training procedure is based on maximizing the log of the posterior probability of the labeled examples in D_l:

CL(θ) = Σ_{k=1}^M log P(y^(k)|x^(k)) − νᵀν/(2τ²).   (3)

A Gaussian prior is assumed over the edge parameters ν and a uniform prior over the parameters w. Here p(ν) = N(ν; 0, τ²I), where I is the identity matrix; the hyperparameter τ² adds a regularization term. In effect, the Gaussian prior introduces a form of regularization to limit over-fitting on rare features and avoid degeneracy in the case of correlated features. There are a few issues regarding the supervised learning criterion (3). First, the value of τ² is critical to the final result, and unfortunately selecting an appropriate τ² is a non-trivial task, which in turn makes the learning procedure more challenging and costly [13]. Second, the Gaussian prior is data-independent, and is not associated with either the unlabeled or labeled observations a priori. Inspired by the work in [8] and [9], we propose a semi-supervised learning algorithm for DRFs that makes full use of the available data by exploiting a form of entropy regularization as a prior over the parameters on D_u. Specifically, for a semi-supervised DRF, we attempt to find the θ that maximizes the following objective function:

RL(θ) = Σ_{m=1}^M log pθ(y^(m)|x^(m)) + γ Σ_{m=M+1}^T Σ_y pθ(y|x^(m)) log pθ(y|x^(m)).   (4)

The first term of (4) is the conditional likelihood over the labeled data set D_l, and the second term is a conditional entropy prior over the unlabeled data set D_u, weighted by a tradeoff parameter γ.
The resulting estimate is then formulated as a MAP estimate. The goal of the objective (4) is to minimize the uncertainty over possible label configurations. That is, minimizing the conditional entropy over unlabeled instances gives the algorithm more confidence that the hypothetical labellings for the unlabeled data are consistent with the supervised labels, as greater certainty on the estimated labellings coincides with greater conditional likelihood on the supervised labels, and vice versa. This criterion has been shown to be effective for univariate classification [8] and chain-structured CRFs [9]; here we apply it to the 2-D lattice case.

3 Parameter Estimation

Several factors constrain the form of the training algorithm: because of overhead and the risk of divergence, it was not practical to employ a Newton method, and iterative scaling was not possible because the updates no longer have a closed form. Although the criticisms of gradient descent are well taken, it is the most practical approach we adopt to optimize the semi-supervised MAP formulation (4), and it allows us to improve on standard supervised DRF training. To formulate a local optimization procedure, we need to compute the gradient of the objective (4) with respect to the parameters. Unfortunately, because of the nonlinear mapping function σ(·), we are not able to represent the gradient of the objective function as compactly as [9], which expressed the gradient as a product of the covariance matrix of the features and the parameter vector θ. Nevertheless, it is straightforward to show that the derivative of the objective function with respect to the node parameters w is given by¹

¹Note that the derivatives of the objective function with respect to the edge parameters ν are computed analogously.
∂RL(θ)/∂w = Σ_{m=1}^M Σ_{i∈S_m} [ y_i^(m) (1 − σ(y_i^(m) wᵀh_i(x^(m)))) − Σ_y pθ(y|x^(m)) y_i (1 − σ(y_i wᵀh_i(x^(m)))) ] h_i(x^(m))
 + γ Σ_{m=M+1}^T Σ_{i∈S_m} [ Σ_y pθ(y|x^(m)) ( Φ_w(y_i, x) + Σ_{j∈N_i} Ψ_ν(y_i, y_j, x) ) y_i (1 − σ(y_i wᵀh_i(x^(m))))
 − ( Σ_y pθ(y|x^(m)) ( Φ_w(y_i, x) + Σ_{j∈N_i} Ψ_ν(y_i, y_j, x) ) ) ( Σ_y pθ(y|x^(m)) y_i (1 − σ(y_i wᵀh_i(x^(m)))) ) ] h_i(x^(m)),   (5)

where the first term in (5) is the gradient of the supervised component of the DRF over labeled data, and the second term is the gradient of the conditional entropy prior of the DRF over unlabeled data. Given the lattice structure of the joint labels, it is intractable to compute the exact expectation terms in the above derivatives. It is also intractable to compute the conditional partition function Zθ(x). Therefore, as in standard supervised DRFs, we need to incorporate some form of approximation. Following [2, 11, 10], we adopt the pseudo-likelihood approximation, which assumes that the joint conditional distribution can be approximated as a product of the local posterior probabilities given the neighboring nodes and the observation:

pθ(y|x) ≈ Π_{i∈S} pθ(y_i|y_{N_i}, x),   (6)

pθ(y_i|y_{N_i}, x) = (1/z_i(x)) exp( Φ_w(y_i, x) + Σ_{j∈N_i} Ψ_ν(y_i, y_j, x) ).   (7)

Using the factored approximation in (7), we can reformulate the training objective as

RL_PL(θ) = Σ_{m=1}^M Σ_{i=1}^{S_m} log pθ(y_i^(m)|y_{N_i}^(m), x^(m)) + γ Σ_{m=M+1}^T Σ_{i=1}^{S_m} Σ_{y_i} pθ(y_i|y_{N_i}, x^(m)) log pθ(y_i|y_{N_i}, x^(m)).   (8)

Here, the derivative of the second term in (8), with respect to the potential parameters w and ν, can be reformulated as a factored conditional entropy, yielding

∂RL_PL(θ)/∂w = Σ_{m=1}^M Σ_{i∈S_m} [ y_i^(m) (1 − σ(y_i^(m) wᵀh_i(x^(m)))) − Σ_{y_i} pθ(y_i|y_{N_i}, x^(m)) y_i (1 − σ(y_i wᵀh_i(x^(m)))) ] h_i(x^(m))
 + γ Σ_{m=M+1}^T Σ_{i∈S_m} [ Σ_{y_i} pθ(y_i|y_{N_i}, x^(m)) ( Φ_w(y_i, x) + Σ_{j∈N_i} Ψ_ν(y_i, y_j, x) ) y_i (1 − σ(y_i wᵀh_i(x^(m))))
 − ( Σ_{y_i} pθ(y_i|y_{N_i}, x^(m)) ( Φ_w(y_i, x) + Σ_{j∈N_i} Ψ_ν(y_i, y_j, x) ) ) ( Σ_{y_i} pθ(y_i|y_{N_i}, x^(m)) y_i (1 − σ(y_i wᵀh_i(x^(m)))) ) ] h_i(x^(m)).   (9)

Note that ∂RL_PL(θ)/∂ν is computed analogously.
Assuming this factorization, the true conditional entropy and feature expectations can be computed in terms of local conditional distributions. This allows us to efficiently approximate the global conditional entropy over unlabeled data. Note that there may be an over-smoothing issue associated with the pseudo-likelihood approximation, as mentioned in [10, 19]. However, due to the fast and stable performance of this approximation in the supervised case [2, 10] we still employ it, and below we show that the over-smoothing effect is mitigated by our data-dependent prior in the MAP objective (4).

4 Inference

As a result of our formulation, the learning method is tightly coupled with the inference steps. That is, for the unlabeled data X_U, each time we compute the local conditional covariance (9) we perform inference steps for each node i and its neighboring nodes N_i. Our inference is based on iterated conditional modes (ICM) [2], and is given by

y_i* = argmax_{y_i ∈ Y} P(y_i|y_{N_i}, X),   (10)

where, for each position i, we assume that the labels of all of its neighbors y' ∈ N_i are fixed. We could alternatively compute the marginal conditional probability P(y_i|X) = Σ_{y_{S\i}} P(y_i, y_{S\i}|X) for each node using the sum-product algorithm (i.e. loopy belief propagation), which iteratively propagates the belief of each node to its neighbors. Clearly, there is a range of approximation methods available, each entailing different accuracy-complexity tradeoffs. However, we have found that ICM yields good performance on our tasks below, and it is probably one of the simplest possible alternatives.

5 Experiments

Using standard supervised DRF models, Kumar and Hebert [11, 10] reported interesting experimental results for joint classification tasks on a 2-D lattice, which represents an image with a DRF model.
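The local conditional of Eq. (7) and one ICM sweep of Eq. (10) can be sketched for binary labels as follows. The feature dictionaries are illustrative stand-ins for the image-derived features; this is a minimal sketch, not the paper's full training loop.

```python
import numpy as np

def sigma(t):
    return 1.0 / (1.0 + np.exp(-t))

def local_conditional(i, y, h, mu, neighbors, w, v):
    """p(y_i = +1 | y_{N_i}, x) under the pseudo-likelihood factor of
    Eq. (7): exp(Phi + sum Psi) normalized over y_i in {-1, +1}."""
    def score(yi):
        phi = np.log(sigma(yi * (w @ h[i])))
        psi = sum(yi * y[j] * (v @ mu[(i, j)]) for j in neighbors[i])
        return phi + psi
    sp, sm = score(+1), score(-1)
    return np.exp(sp) / (np.exp(sp) + np.exp(sm))

def icm_sweep(y, h, mu, neighbors, w, v):
    """One ICM sweep (Eq. 10): set each y_i to the mode of its local
    conditional, holding the current neighbor labels fixed."""
    for i in sorted(y):
        p = local_conditional(i, y, h, mu, neighbors, w, v)
        y[i] = 1 if p >= 0.5 else -1
    return y
```

In this two-node toy example, a confident node potential at one pixel flips it first, and the edge potential then propagates the label to its neighbor, illustrating how edge terms can correct errors made by weak node potentials.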
Since labeling image data is expensive and tedious, we believe that better results can be obtained by formulating a MAP estimation of DRFs that also uses the abundant unlabeled image data. In this section, we present a series of experiments on synthetic and real data sets using our novel semi-supervised DRFs (SSDRFs). In order to evaluate our model, we compare the results with those using maximum likelihood estimation of supervised DRFs [11]. There is a major reason that we consider the standard MLE DRF from [11] instead of the parameter-regularized DRFs from [10]: we want to show the difference between the ML and MAP principles without using any regularization term that can be problematic [10, 13]. To quantify the performance of each model, we used the Jaccard score J = TP/(TP + FP + FN), where TP denotes true positives, FP false positives, and FN false negatives. Although there are many accuracy measures available, we used this score to penalize false negatives, since many imaging tasks are very imbalanced: only a small percentage of pixels are in the "positive" class. The tradeoff parameter γ was hand-tuned on one held-out data set and then held fixed at 0.2 for all of the experiments.

5.1 Synthetic image sets

Our primary goal in using synthetic data sets was to demonstrate how well different models classified pixels as a binary classification over a 2-D lattice in the presence of noise. We generated 18 synthetic data sets, each with its own shape. The intensities of pixels in each image were independently corrupted by noise generated from a Gaussian N(0, 1). Figure 1 shows the results of using supervised DRFs as well as semi-supervised DRFs. [10, 19] reported over-smoothing effects from the local approximation approach of PL, while our experiments indicate that the over-smoothing is caused not only by the PL approximation, but also by the sensitivity of the regularization to its parameters.
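The evaluation metric defined above, J = TP/(TP + FP + FN), can be computed from binary masks as in the following sketch:

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard score J = TP / (TP + FP + FN) for binary masks;
    unlike plain accuracy, it penalizes both false positives and
    false negatives on imbalanced images."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = (pred & truth).sum()
    fp = (pred & ~truth).sum()
    fn = (~pred & truth).sum()
    return tp / (tp + fp + fn)
```

For example, a prediction matching the ground truth on one of three "positive or predicted" pixels scores 1/3, while a perfect segmentation scores 1.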
However, using our semi-supervised DRF as a MAP formulation, we have dramatically improved the performance over the standard supervised DRF. Note that the first row in Figure 1 shows good results from the standard DRF, while over-smoothed outputs are presented in the last row. Although the ML approach may learn proper parameters from some of the data sets, its performance has unfortunately not been consistent, since the standard DRF tends to over-estimate the edge potential. For instance, the last row shows that the over-estimated parameters cause the DRF to segment almost all pixels into one class, due to the complicated edges and structures containing non-target areas within the target area, while the semi-supervised DRF's performance is not degraded at all. Overall, by learning more statistics from unlabeled data, our model dominates the standard DRF in most cases. This is because our MAP formulation avoids the over-estimation of potentials and uses the edge potential to correct the errors made by the node potential. Figure 2(a) shows the results over the 18 synthetic data sets. Each point above the diagonal line in Figure 2(a) indicates a data set for which SSDRF produced the higher Jaccard score. Note that our model converged stably as we increased the ratio (nU/nL) of unlabeled data in our learning, as in Figure 2(b), where nU denotes the number of unlabeled images and nL the number of labeled images.

Figure 1: Outputs from synthetic data sets (panel Jaccard scores: 0.933890, 0.933377, 0.729527, 0.957983, 0.008178, 0.923836). From left to right: Testing instance, Ground Truth, Logistic Regression (LR), DRF, and SSDRF.

Figure 2: Accuracy and convergence. (a) Accuracy from DRF and SSDRF for all 18 synthetic data sets. (b) Log likelihood values (Y axis) for a testing image with increasing ratio (X axis) of unlabeled instances for SSDRF.
Similar results have also been reported for a simple single-variable classification task [8].

5.2 Brain Tumor Segmentation

We have applied our semi-supervised DRF model to the challenging real-world problem of segmenting tumors in medical images. Our goal here is to classify each pixel of a magnetic resonance (MR) image into a pre-defined category: tumor or non-tumor. This is a very important, yet notoriously difficult, task in surgical planning and radiation therapy, which currently involves a significant amount of manual work by human medical experts. We applied three models to the classification of 9 studies from brain tumor MR images. For each study², i, we divided the MR images into D_i^L, D_i^U, and D_i^S, where an MR image (a.k.a. slice) has three modalities available: T1, T2, and T1 contrast. Note that each modality for each slice has 66,564 pixels. As with much of the related work on automatic brain tumor segmentation (such as [7, 21]), our training is based on patient-specific data, where the training MR images for a classifier are obtained from the patient to be tested. Note that the training sets and testing sets for a classifier are disjoint. Specifically, LR and DRF take D_i^L as the training set and D_i^U and D_i^S as testing sets, while SSDRF takes D_i^L and D_i^U for training and D_i^U and D_i^S for testing. We segmented the "enhancing" tumor area, the region that appears hyper-intense after injecting the contrast agent (we also included non-enhancing areas contained within the enhancing contour). Tables 1 and 2 present Jaccard scores for testing on D_i^U and D_i^S for each study p_i, respectively. While the standard supervised DRF improves over its degenerate model LR by 1%, the semi-supervised DRF significantly improves over the supervised DRF by 11%, which is significant at p < 0.00566 using a paired t-test.
Considering that MR images contain considerable noise and that the three modalities are not consistent among slices of the same patient, our improvement is considerable. Figure 3 shows the segmentation results by overlaying the testing slices with the segmented outputs from the three models. Each row demonstrates the segmentation for one slice, where the white blob areas correspond to the enhancing tumor area. 6 Conclusion We have proposed a new semi-supervised learning algorithm for DRFs, formulated as MAP estimation with the conditional entropy over unlabeled data acting as a data-dependent prior regularizer. Our approach is motivated by the information-theoretic argument [8, 16] that unlabeled examples provide the most benefit when classes have small overlap. We introduced a simple approximation for this new learning procedure that exploits the local conditional probability to efficiently compute the derivative of the objective function.

2Each study involves a number (typically 21) of images of a single patient – here parallel axial slices through the head.

Table 1: Jaccard scores, testing on D^U_i.

  Study    LR     DRF    SSDRF
  p1       53.84  59.81  59.81
  p2       83.24  83.65  84.67
  p3       30.72  30.17  75.76
  p4       72.04  76.16  79.02
  p5       73.26  73.59  75.25
  p6       88.39  89.61  87.01
  p7       69.33  69.91  75.60
  p8       58.49  58.89  73.03
  p9       60.85  56.49  83.91
  Average  65.57  66.48  77.12

Table 2: Jaccard scores, testing on D^S_i.

  Study    LR     DRF    SSDRF
  p1       68.01  68.75  68.75
  p2       69.61  69.73  70.06
  p3       23.11  21.90  71.13
  p4       56.52  63.07  68.40
  p5       51.38  52.36  51.29
  p6       85.65  86.35  85.43
  p7       66.71  68.68  70.27
  p8       44.92  45.36  73.09
  p9       21.11  20.16  38.06
  Average  54.11  55.15  66.27

Figure 3: From left to right: human expert, LR, DRF, and SSDRF.

We have applied this new approach to image pixel classification tasks. By exploiting the availability of auxiliary unlabeled data, we are able to improve the performance of the state-of-the-art supervised DRF approach.
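The conditional-entropy regularizer summarized above can be sketched for the degenerate iid case (per-pixel logistic outputs); the full DRF version replaces the exact per-pixel posteriors with the local conditional probabilities used in our approximation. The function name and toy probabilities below are illustrative:

```python
import math

def conditional_entropy(probs):
    """Sum over unlabeled pixels of H(y|x) = -sum_y p(y|x) log p(y|x),
    for binary labels with p = P(y = 1 | x)."""
    h = 0.0
    for p in probs:
        for q in (p, 1.0 - p):
            if q > 0.0:
                h -= q * math.log(q)
    return h

# Confident predictions contribute little entropy and uncertain ones a lot,
# so penalizing this term in the MAP objective pushes the decision boundary
# away from dense unlabeled regions (the small-class-overlap argument).
print(round(conditional_entropy([0.99, 0.01]), 3))  # 0.112
print(round(conditional_entropy([0.5, 0.5]), 3))    # 1.386
```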
Our semi-supervised DRF approach shares all of the benefits of standard DRF training, including the ability to exploit arbitrary potentials in the presence of dependency cycles, while improving accuracy through the use of unlabeled data. The main drawback is the increased training time involved in computing the derivative of the conditional entropy over the unlabeled data. Nevertheless, the algorithm is efficient enough to be trained on unlabeled data sets, and it obtains a significant improvement in classification accuracy over standard supervised training of DRFs as well as over iid logistic regression classifiers. To further improve accuracy, we may apply loopy belief propagation [20] or graph cuts [4] as the inference tool. Since our model is tightly coupled with inference steps during learning, a proper choice of inference algorithm will most likely improve segmentation. Acknowledgments This research is supported by the Alberta Ingenuity Centre for Machine Learning, the Cross Cancer Institute, and NSERC. We gratefully acknowledge many helpful suggestions from members of the Brain Tumor Analysis Project, including Dr. A. Murtha and Dr. J. Sander. References [1] Y. Altun, D. McAllester, and M. Belkin. Maximum margin semi-supervised learning for structured variables. In NIPS 18, 2006. [2] J. Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, Series B, 48(3):259–302, 1986. [3] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT, 1998. [4] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. In ICCV (1), pages 377–384, 1999. [5] G. Celeux and G. Govaert. A classification EM algorithm for clustering and two stochastic versions. Computational Statistics & Data Analysis, 14(3):315–332, 1992. [6] A. Corduneanu and T. Jaakkola. Data dependent regularization. In O. Chapelle, B. Schölkopf, and A.
Zien, editors, Semi-Supervised Learning. MIT Press, 2006. [7] C. Garcia and J. A. Moreno. Kernel based method for segmentation and modeling of magnetic resonance images. LNCS, 3315:636–645, Oct 2004. [8] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In NIPS 17, 2004. [9] F. Jiao, S. Wang, C. Lee, R. Greiner, and D. Schuurmans. Semi-supervised conditional random fields for improved sequence segmentation and labeling. In COLING/ACL, 2006. [10] S. Kumar and M. Hebert. Discriminative fields for modeling spatial dependencies in natural images. In NIPS 16, 2003. [11] S. Kumar and M. Hebert. Discriminative random fields: A discriminative framework for contextual interaction in classification. In CVPR, 2003. [12] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001. [13] C. Lee, R. Greiner, and O. Zaïane. Efficient spatial classification using decoupled conditional random fields. In 10th European Conference on Principles and Practice of Knowledge Discovery in Databases, pages 272–283, 2006. [14] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103–134, 2000. [15] A. Quattoni, M. Collins, and T. Darrell. Conditional random fields for object recognition. In NIPS 17, 2004. [16] S. Roberts, R. Everson, and I. Rezek. Maximum certainty data partitioning, 2000. [17] A. Torralba, K. Murphy, and W. Freeman. Contextual models for object detection using boosted random fields. In NIPS 17, 2004. [18] V. Vapnik. Statistical Learning Theory. Wiley, 1998. [19] S. V. N. Vishwanathan, N. Schraudolph, M. Schmidt, and K. Murphy. Accelerated training of conditional random fields with stochastic gradient methods. In ICML, 2006. [20] J. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In NIPS 13, pages 689–695, 2000. [21] J. Zhang, K. Ma, M. H. Er, and V.
Chong. Tumor segmentation from magnetic resonance imaging by learning via one-class support vector machine. In Intl. Workshop on Advanced Image Technology, 2004. [22] D. Zhou, O. Bousquet, T. Navin Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS 16, 2004. [23] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled and unlabeled data on a directed graph. In ICML, 2005. [24] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
Shifting, One-Inclusion Mistake Bounds and Tight Multiclass Expected Risk Bounds Benjamin I. P. Rubinstein Computer Science Division University of California, Berkeley Berkeley, CA 94720-1776, U.S.A. benr@cs.berkeley.edu Peter L. Bartlett Computer Science Division and Department of Statistics University of California, Berkeley bartlett@cs.berkeley.edu J. Hyam Rubinstein Department of Mathematics & Statistics The University of Melbourne Parkville, Victoria 3010, Australia rubin@ms.unimelb.edu Abstract Under the prediction model of learning, a prediction strategy is presented with an i.i.d. sample of n − 1 points in X and corresponding labels from a concept f ∈ F, and aims to minimize the worst-case probability of erring on an nth point. By exploiting the structure of F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy, improving on the bounds implied by PAC-type results by an O(log n) factor. The key data structure in their result is the natural subgraph of the hypercube—the one-inclusion graph; the key step is a d = VC(F) bound on one-inclusion graph density. The first main result of this paper is a density bound of n (n−1 choose ≤d−1) / (n choose ≤d) < d, where (n choose ≤d) denotes the partial binomial sum Σ_{i=0}^{d} (n choose i). This positively resolves a conjecture of Kuzmin & Warmuth relating to their unlabeled Peeling compression scheme and also leads to an improved mistake bound for the randomized (deterministic) one-inclusion strategy for all d (for d = Θ(n)). The proof uses a new form of VC-invariant shifting and a group-theoretic symmetrization. Our second main result is a k-class analogue of the d/n mistake bound, replacing the VC-dimension by the Pollard pseudo-dimension and the one-inclusion strategy by its natural hypergraph generalization. This bound on expected risk improves on known PAC-based results by a factor of O(log n) and is shown to be optimal up to an O(log k) factor.
The combinatorial technique of shifting takes a central role in understanding the one-inclusion (hyper)graph and is a running theme throughout. 1 Introduction In [4, 3] Haussler, Littlestone and Warmuth proposed the one-inclusion prediction strategy as a natural approach to the prediction (or mistake-driven) model of learning, in which a prediction strategy maps a training sample and test point to a test prediction with hopefully guaranteed low probability of erring. The significance of their contribution was two-fold. On the one hand, the derived VC(F)/n upper bound on the worst-case expected risk of the one-inclusion strategy learning from F ⊆ {0, 1}^X improved on the PAC-based previous best by a factor of O(log n). This was achieved by taking the structure of the underlying F into account—which had not been done in previous work—in order to break ties between hypotheses consistent with the training set but offering contradictory predictions on a given test point. At the same time, Haussler [3] introduced the idea of shifting subsets of the n-cube down around the origin—an idea previously developed in combinatorics—as a powerful tool for learning-theoretic results. In particular, shifting admitted deeply insightful proofs of Sauer's Lemma and of a VC-dimension bound on the density of the one-inclusion graph—the key result needed for the one-inclusion strategy's expected risk bound. Recently, shifting has influenced work towards the sample compressibility conjecture of [7], e.g. in [5]. Here we continue to study the one-inclusion graph—the natural graph structure induced by a subset of the n-cube—and its related prediction strategy under the lens of shifting. After the necessary background, we develop the technique of shatter-invariant shifting in Section 3.
While a subset's VC-dimension cannot be increased by shifting, shatter-invariant shifting guarantees a finite sequence of shifts to a fixed point under which the shattering of a chosen set remains invariant, thus preserving VC-dimension throughout. In Section 4 we apply a group-theoretic symmetrization to tighten the mistake bound—the worst-case expected risk bound—of the deterministic (randomized) one-inclusion strategy from d/n to ⌈D^d_n⌉/n (D^d_n/n), where D^d_n < d for all n, d. The derived D^d_n density bound positively resolves a conjecture of Kuzmin & Warmuth which was suggested as a step towards a correctness proof of the Peeling compression scheme [5]. Finally we generalize the prediction model, the one-inclusion strategy and its bounds from binary to k-class learning in Section 5. Where ΨG-dim(F) and ΨP-dim(F) denote the Graph and Pollard dimensions of F, the best bound on expected risk for k ∈ N to date is O(α log(1/α)) for α = ΨG-dim(F)/n, for consistent learners [8, 1, 2, 4]. For large n this is O(ΨG-dim(F) log n / n); we derive an improved bound of ΨP-dim(F)/n, which we show is at most an O(log k) factor from optimal. Thus, as in the binary case, exploiting class structure enables significantly better bounds on expected risk for multiclass prediction. As always, some proofs have been omitted in the interest of flow or space; in such cases see [8]. 2 Definitions & background In this paper sets/random variables, scalars and vectors will be written in uppercase, lowercase and bolded typeface, as in C, x, v. We define (n choose ≤r) = Σ_{i=0}^{r} (n choose i), [n] = {1, . . . , n}, and Sn to be the set of permutations on [n]. We write the density of a graph G = (V, E) as dens(G) = |E|/|V|, the indicator of A as 1[A], and ∃!x ∈ X, P(x) to mean “there exists a unique x ∈ X satisfying P.” 2.1 The prediction model of learning We begin with the basic setup of [4]. The set X is the domain and F ⊆ {0, 1}^X is a concept class on X. For notational convenience we write sam(x, f) = ((x1, f(x1)), . . .
, (xn, f(xn))) for x ∈ X^n, f ∈ F. A prediction strategy is a mapping of the form Q : ∪_{n>1} (X × {0, 1})^{n−1} × X → {0, 1}. Definition 2.1 The prediction model of learning concerns the following scenario. Given full knowledge of strategy Q, an adversary picks a distribution P on X and a concept f ∈ F so as to maximize the probability of {Q(sam((X1, . . . , Xn−1), f), Xn) ≠ f(Xn)}, where the Xi are i.i.d. ∼ P. Thus the measure of performance is the worst-case expected risk

M̂_{Q,F}(n) = sup_{f∈F} sup_P E_{X∼P^n} [ 1[Q(sam((X1, . . . , Xn−1), f), Xn) ≠ f(Xn)] ].

A mistake bound for Q with respect to F is an upper bound on M̂_{Q,F}. In contrast to Valiant's PAC model, the prediction learning model is not interested in approximating f given an f-labeled sample, but instead in predicting f(Xn) with small worst-case probability of erring. The following allows us to derive mistake bounds by bounding a worst-case average. Lemma 2.2 (Corollary 2.1 [4]) For any n > 1, concept class F and prediction strategy Q,

M̂_{Q,F}(n) ≤ sup_{f∈F} sup_{x∈X^n} (1/n!) Σ_{g∈Sn} 1[Q(sam((x_{g(1)}, . . . , x_{g(n−1)}), f), x_{g(n)}) ≠ f(x_{g(n)})] = M̂̂_{Q,F}(n).

A permutation mistake bound for Q with respect to F is an upper bound on M̂̂_{Q,F}. 2.2 The capacity of function classes contained in {0, . . . , k}^X We denote by Π_x(F) = {(f(x1), . . . , f(xn)) | f ∈ F} the projection of F ⊆ Y^X on x ∈ X^n. Definition 2.3 The Vapnik-Chervonenkis dimension of a concept class F is defined as VC(F) = sup{n | ∃x ∈ X^n, Π_x(F) = {0, 1}^n}. An x witnessing VC(F) is said to be shattered by F. Lemma 2.4 (Sauer's Lemma [9]) For any n ∈ N and V ⊆ {0, 1}^n, |V| ≤ (n choose ≤VC(V)). A subset V meeting this with equality is called maximum. It is well known that the VC-dimension is an inappropriate measure of capacity when |Y| > 2. The following unifying framework of class capacities for |Y| < ∞ is due to [1]. Definition 2.5 Let k ∈ N, F ⊆ {0, . . . , k}^X and let Ψ be a family of mappings ψ : {0, . . . , k} → {0, 1, ∗} called translations. For x ∈ X^n, v ∈ Π_x(F) ⊆ {0, . . .
, k}^n and ψ ∈ Ψ^n we write ψ(v) = (ψ1(v1), . . . , ψn(vn)) and ψ(Π_x(F)) = {ψ(v) : v ∈ Π_x(F)}. x ∈ X^n is Ψ-shattered by F if there exists a ψ ∈ Ψ^n such that {0, 1}^n ⊆ ψ(Π_x(F)). The Ψ-dimension of F is defined by Ψ-dim(F) = sup{n | ∃x ∈ X^n, ψ ∈ Ψ^n s.t. {0, 1}^n ⊆ ψ(Π_x(F))}. We next describe three important translation families used in this paper. Example 2.6 The families Ψ_P = {ψ_{P,i} : i ∈ [k]}, Ψ_G = {ψ_{G,i} : i ∈ {0, . . . , k}} and Ψ_N = {ψ_{N,i,j} : i, j ∈ {0, . . . , k}, i ≠ j}, where ψ_{P,i}(a) = 1[a < i], ψ_{G,i}(a) = 1[a = i], and ψ_{N,i,j}(a) equals 1, 0, ∗ if a = i, a = j, a ∉ {i, j} respectively, define the Pollard pseudo-dimension ΨP-dim(V), the Graph dimension ΨG-dim(V) and the Natarajan dimension ΨN-dim(V). 2.3 The one-inclusion prediction strategy A subset of the n-cube—the projection of some F—induces the one-inclusion graph, which underlies a natural prediction strategy. The following definition generalizes this to a subset of {0, . . . , k}^n. Definition 2.7 The one-inclusion hypergraph G(V) = (V, E) of V ⊆ {0, . . . , k}^n is the undirected graph with vertex-set V and hyperedge-set E of maximal (with respect to inclusion) sets of pairwise hamming-1 separated vertices.

Algorithm 1 The deterministic multiclass one-inclusion prediction strategy Q_{G,F}
Given: F ⊆ {0, . . . , k}^X, sam((x1, . . . , xn−1), f) ∈ (X × {0, . . . , k})^{n−1}, xn ∈ X
Returns: a prediction of f(xn)
1. V ← Π_x(F); G ← G(V);
2. G⃗ ← orient G to minimize the maximum outdegree;
3. Vspace ← {v ∈ V | v1 = f(x1), . . . , vn−1 = f(xn−1)};
4. if Vspace = {v} then return vn;
5. else return the nth component of the head of hyperedge Vspace in G⃗.

The one-inclusion graph's prediction strategy Q_{G,F} [4] immediately generalizes to the multiclass prediction strategy described by Algorithm 1. For the remainder of this section and of Section 4 we restrict our discussion to the k = 1 case, on which the following main result of [4] focuses. Theorem 2.8 (Theorem 2.3 [4]) M̂_{Q_{G,F},F}(n) ≤ VC(F)/n for every concept class F and n > 1.
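The binary (k = 1) objects defined above are small enough to check by brute force. The sketch below (helper names are ours) builds the one-inclusion graph of the family of all vectors in {0,1}^4 with at most two 1s (the class V^2_4 of Section 4), computes its VC-dimension directly from Definition 2.3, and confirms the density bound dens(G(V)) ≤ VC(V) of Lemma 2.9; the density it finds, 16/11, is exactly the D^2_4 of Definition 4.1:

```python
from itertools import combinations, product

def vc_dim(V, n):
    """Largest r such that some I ⊆ [n], |I| = r, is shattered:
    the projection of V onto I equals {0,1}^r."""
    d = 0
    for r in range(1, n + 1):
        if any(len({tuple(v[i] for i in I) for v in V}) == 2 ** r
               for I in combinations(range(n), r)):
            d = r
    return d

def edges(V):
    """One-inclusion graph edges (k = 1): vertex pairs at Hamming distance 1."""
    E = set()
    for v in V:
        for i in range(len(v)):
            u = v[:i] + (1 - v[i],) + v[i + 1:]
            if u in V:
                E.add(frozenset((u, v)))
    return E

n = 4
V = {v for v in product((0, 1), repeat=n) if sum(v) <= 2}  # all weight-<=2 vectors
d = vc_dim(V, n)
dens = len(edges(V)) / len(V)
print(d, len(V), round(dens, 3))  # 2 11 1.455  (16/11 < VC = 2)
```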
A lower bound in [6] showed that the one-inclusion strategy's performance is optimal within a factor of 1 + o(1). Replacing the orientation with a distribution over each edge induces a randomized strategy Q_{Grand,F}. The key to proving Theorem 2.8 is the following. Lemma 2.9 (Lemma 2.4 [4]) For any n ∈ N and V ⊆ {0, 1}^n, dens(G(V)) ≤ VC(V). An elegant proof of this deep result, due to Haussler [3], uses shifting. Consider any s ∈ [n], v ∈ V and let S_s(v; V) be v shifted along s: if v_s = 0, or if v_s = 1 and there exists some u ∈ V differing from v only in the sth coordinate, then S_s(v; V) = v; otherwise v shifts down—its sth coordinate is decreased from 1 to 0. The entire family V can be shifted to S_s(V) = {S_s(v; V) | v ∈ V}, and this shifted vertex-set induces S_s(E), the edge-set of G(S_s(V)), where (V, E) = G(V). Definition 2.10 Let I ⊆ [n]. We call a subset V ⊆ {0, 1}^n I-closed-below if S_s(V) = V for all s ∈ I. If V is [n]-closed-below then we call it closed-below. A number of properties of shifting follow relatively easily:

(1) |S_s(V)| = |V|, by the injectivity of S_s(· ; V);
(2) VC(S_s(V)) ≤ VC(V), as S_s(V) shatters I ⊆ [n] ⇒ V shatters I;
(3) |E| ≤ |V| · VC(V), as V closed-below ⇒ max_{v∈V} ∥v∥_{ℓ1} ≤ VC(V);
(4) |S_s(E)| ≥ |E|, by cases;
(5) ∃T ∈ N, s ∈ [n]^T s.t. S_{s_T}(. . . S_{s_1}(V)) is closed-below (a fixed point).

Properties (1–2) and the justification of (3) together imply Sauer's Lemma; Properties (1–5) lead to

|E|/|V| ≤ . . . ≤ |S_{s_T}(. . . S_{s_1}(E))| / |S_{s_T}(. . . S_{s_1}(V))| ≤ VC(S_{s_T}(. . . S_{s_1}(V))) ≤ . . . ≤ VC(V).

3 Shatter-invariant shifting While [3] shifts to bound density, the number of edges can increase and the VC-dimension can decrease—both contributing to the observed gap between graph density and capacity. The next result demonstrates that shifting can in fact be controlled so as to preserve VC-dimension. Lemma 3.1 Consider arbitrary n ∈ N, I ⊆ [n] and V ⊆ {0, 1}^n that shatters I. There exists a finite sequence s1, . . . , sT in [n] such that each V_t = S_{s_t}(. . .
S_{s_1}(V)) shatters I and V_T is closed-below. In particular VC(V_T) = VC(V_{T−1}) = . . . = VC(V). Proof: Π_I(·) is invariant to shifting on Ī = [n]\I. So some finite number of shifts on Ī will produce an Ī-closed-below family W that shatters I. Hence W must contain representatives for each element of {0, 1}^{|I|} (embedded at I) with components equal to 0 outside I. Thus the shattering of I is invariant to the shifting of W on I, so that a finite number of shifts on I produces an I-closed-below W′ that shatters I. Repeating the process a finite number of times until no non-trivial shifts are made produces a closed-below family that shatters I. The second claim follows from (2). 4 Tightly bounding graph density by symmetrization Kuzmin and Warmuth [5] introduced D^d_n as a potential bound on the graph density of maximum classes. We begin with properties of D^d_n and a technical lemma, and then proceed to the main result. Definition 4.1 Define D^d_n = n (n−1 choose ≤d−1) / (n choose ≤d) for all n ∈ N and d ∈ [n]. Denote by V^d_n the VC-dimension-d closed-below subset of {0, 1}^n equal to the union of all (n choose d) closed-below embedded d-cubes. Lemma 4.2 D^d_n (i) equals the graph density of V^d_n for each n ∈ N and d ∈ [n]; (ii) is strictly upper-bounded by d, for all n; (iii) equals d/2 for all n = d ∈ N; (iv) is strictly monotonic increasing in d (with n fixed); (v) is strictly monotonic increasing in n (with d fixed); and (vi) limits to d as n → ∞. Proof: By counting, for each d ≤ n < ∞, the density of G(V^d_n) equals D^d_n:

|E(G(V^d_n))| / |V^d_n| = [Σ_{i=1}^{d} i (n choose i)] / [Σ_{i=0}^{d} (n choose i)] = n [Σ_{i=0}^{d−1} ((i+1)/n) (n choose i+1)] / (n choose ≤d) = n [Σ_{i=0}^{d−1} (n−1 choose i)] / (n choose ≤d) = n (n−1 choose ≤d−1) / (n choose ≤d),

proving (i). Since for all A, B, C, D > 0, A/B < (A+C)/(B+D) iff A/B < C/D, it is sufficient for (iv) to prove that D^{d−1}_n < n (n−1 choose d−1) / (n choose d). By (i) and Lemma 2.9, D^d_n ≤ d, and so

D^{d−1}_n ≤ d − 1 < d = n · [(n−1)! / ((n−d)! (d−1)!)] / [n! / ((n−d)! d!)] = n (n−1 choose d−1) / (n choose d).

Monotonicity in d, (i) and Lemma 2.9 together prove (ii).
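Definition 4.1 and the parts of Lemma 4.2 established so far are easy to check numerically. A minimal sketch (helper names are ours):

```python
from math import comb

def binom_leq(n, r):
    """The paper's (n choose <= r) = sum_{i=0}^{r} C(n, i)."""
    return sum(comb(n, i) for i in range(r + 1))

def D(n, d):
    """D^d_n = n * (n-1 choose <= d-1) / (n choose <= d)  (Definition 4.1)."""
    return n * binom_leq(n - 1, d - 1) / binom_leq(n, d)

# Lemma 4.2: (iii) D^d_d = d/2; (iv-v) monotone in d and n; (ii, vi) below d,
# with limit d as n grows.
print(D(3, 3))                                     # 1.5 = d/2
print([round(D(n, 3), 3) for n in (5, 10, 100)])   # [2.115, 2.614, 2.969]
print(all(D(n, 3) < 3 for n in range(3, 200)))     # True
```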
Properties (iii, v–vi) are proven in [8]. Lemma 4.3 For arbitrary U, V ⊆ {0, 1}^n with dens(G(V)) ≥ ρ > 0, |U| ≤ |V| and |E(G(U))| ≥ |E(G(V))|, if dens(G(U ∩ V)) < ρ then dens(G(U ∪ V)) > ρ. Proof: If G(U ∩ V) has density less than ρ then

|E(G(U ∪ V))| / |U ∪ V| ≥ [|E(G(U))| + |E(G(V))| − |E(G(U ∩ V))|] / [|U| + |V| − |U ∩ V|] ≥ [2|E(G(V))| − |E(G(U ∩ V))|] / [2|V| − |U ∩ V|] > [2ρ|V| − ρ|U ∩ V|] / [2|V| − |U ∩ V|] = ρ.

Figure 1: The improved graph density bound of Theorem 4.4. The density bound D^d_n is plotted (dotted/solid) alongside the previous best bound d (dashed), for each d ∈ {1, 2, 10}.

Theorem 4.4 Every family V ⊆ {0, 1}^n with d = VC(V) has (V, E) = G(V) with graph density

|E|/|V| ≤ D^d_n < d.    (6)

For n ∈ N and d ∈ [n], V^d_n is the unique closed-below VC-dimension-d subset of {0, 1}^n meeting (6) with equality. A VC-dimension-d family V ⊆ {0, 1}^n meets (6) with equality only if V is maximum. Proof: Allow a permutation g ∈ Sn to act on a vector v ∈ {0, 1}^n and a family V ⊆ {0, 1}^n by g(v) = (v_{g(1)}, . . . , v_{g(n)}) and g(V) = {g(v) | v ∈ V}; and define Sn(V) = ∪_{g∈Sn} g(V). Note that a closed-below VC-dimension-d family V ⊆ {0, 1}^n satisfies Sn(V) = V iff V = V^d_n, as VC(V) ≥ d implies V contains an embedded d-cube, invariance to Sn implies further that V contains all (n choose d) such cubes, and VC(V) ≤ d implies that V ⊆ V^d_n. Consider now any

V∗_{n,d} ∈ arg min { |U| : U ∈ arg max_{U ⊆ {0,1}^n, VC(U) ≤ d, U closed-below} dens(G(U)) }.

For the purposes of contradiction assume that V∗_{n,d} ≠ g(V∗_{n,d}) for some g ∈ Sn. Then if dens(G(V∗_{n,d} ∩ g(V∗_{n,d}))) ≥ dens(G(V∗_{n,d})) then V∗_{n,d} would not have been selected above (i.e. a closed-below family at least as small and as dense as V∗_{n,d} ∩ g(V∗_{n,d}) would have been chosen). Thus dens(G(V∗_{n,d} ∪ g(V∗_{n,d}))) > dens(G(V∗_{n,d})) by Lemma 4.3. But then again V∗_{n,d} would not have been selected (i.e.
a distinct family at least as dense as V ∗ n,d ∪g(V ∗ n,d) would have been selected instead, since every vector in this union contains no more than d 1’s). Hence V ∗ n,d = Sn(V ∗ n,d) and so V ∗ n,d = V d′ n and by Lemma 4.2.(i) dens G V ∗ n,d = Dd′ n , for d′ = VC(V ∗ n,d) ≤d. But by Lemma 4.2.(iv) this implies that d = d′ and (6) is true for all closed-below families; V d n uniquely maximizes density amongst all closed-below VC-dimension d families in the n-cube. For an arbitrary V ⊆{0, 1}n with d = VC(V ) consider any of its closed-below fixed-point (cf. (5)), W ⊆{0, 1}n. Noting that VC(W) ≤d and dens (G (V )) ≤dens (G (W)) by (2) and (1) & (4) respectively, the bound (6) follows directly for V . Furthermore if we shift to preserve VC-dimension then VC(W) = d while still |V | = |W|. And since dens (G (W)) = Dd n only if W = V d n , it follows that V maximizes density amongst all VC-dimension d families in the n-cube, with dens (G (V )) = Dd n, only if it is maximum. Theorem 4.4 improves on the VC-dimension density bound of Lemma 2.9 for low sample sizes (see Figure 1). This new result immediately implies the following one-inclusion mistake bounds. Theorem 4.5 Consider any n ∈N and F ⊆{0, 1}X with VC(F) = d < ∞. Then ˆ MQG,F,F(n) ≤ Dd n /n and ˆ MQGrand,F,F(n) ≤Dd n/n. For small d, n∗(d) = min n ≥d | d = Dd n —the first n for which the new and old deterministic one-inclusion mistake bounds coincide—appears to remain very close to 2.96d. The randomized strategy’s mistake bound of Theorem 4.5 offers a strict improvement over that of [4]. 5 Bounds for multiclass prediction As in the k = 1 case, the key to developing the multiclass one-inclusion mistake bound is in bounding hypergraph density. We proceed by shifting a graph induced by the one-inclusion hypergraph. Theorem 5.1 For any k, n ∈N and V ⊆{0, . . . , k}n, the one-inclusion hypergraph (V, E) = G (V ) satisfies |E| |V | ≤ΨP-dim (V ). 
Proof: We begin by replacing the hyperedge structure E with a related edge structure E′. Two vertices u, v ∈V are connected in the graph (V, E′) iff there exists an i ∈[n] such that u, v differ only at i and no w ∈V exists such that ui < wi < vi and wj = uj = vj on [n]\{i}. Trivially |E| |V | ≤|E′| |V | ≤k|E| |V | . (7) Consider now shifting vertex v ∈V at shift label t ∈[k] along shift coordinate s ∈[n] by Ss,t(v; V ) = vs(v′ s) where vs(i) = (v1, . . . , vs−1, i, vs+1, . . . , vn) for i ∈{0, . . . , k} v′ s = min x ∈{0, . . . , vs} vs(x) /∈V or x = vs if vs = t vs o.w. We shift V on s at t as usual; we shift V on s alone by bubbling vertices down to fill gaps below: Ss,t(V ) = {Ss,t(v; V ) | v ∈V } Ss(V ) = Ss,k(Ss,k−1(. . . Ss,1(V ))) . Let Ss(E′) denote the edge-set induced by Ss(V ). Ss on a vertex-set is injective implying that |Ss(V )| = |V | . (8) Consider any {u, v} ∈E′ with i ∈[n] denoting the index on which u, v differ. If i = s then no other vertex w ∈V can come between u and v during shifting by construction of E′, so {Ss(u; V ), Ss(v; V )} ∈Ss(E′). Now suppose that i ̸= s. If both vertices shift down by the same number of labels then they remain connected in Ss(E′). Otherwise assume WLOG that Ss(u; V )s < Ss(v; V )s then the shifted vertices will lose their edge, however since vs did not shift down to Ss(u; V )s there must have been some w ∈V different to u on {i, s} such that ws < vs with Ss(w; V )s = Ss(u; V )s. Thus Ss(w; V ), Ss(u; V ) differ only on {i} and a new edge {Ss(w; V ), Ss(u; V )} is in Ss(E′) that was not in E′ (otherwise u would not have shifted). Thus |Ss(E′)| ≥ |E′| . (9) Suppose that I ⊆[n] is ΨP -shattered by Ss(V ). If s /∈I then ΠI (Ss(V )) = ΠI (V ) and I is ΨP -shattered by V . If s ∈I then V ΨP -shatters I. 
Witnesses of Ss(V )’s ΨP -shattering of I equal to 1 at s, taking each value in {0, 1}|I|−1 on I\{s}, were not shifted and so are witnesses for V ; since these vertices were not shifted they were blocked by vertices of V of equal values on I\{s} but equal to 0 at s, these are the remaining half of the witnesses of V ’s ΨP -shattering of I. Thus Ss(V ) ΨP -shatters I ⊆[n] ⇒ V ΨP -shatters I . (10) In a finite number of shifts starting from (V, E′), a closed-below family W with induced edge-set F will be reached. If I ⊆[n] is ΨP -shattered by W and |I| = d = ΨP-dim (W), then since W is closed-below the translation vector (ψP,1, . . . , ψP,1) (·) = (1 [· < 1] , . . . , 1 [· < 1]) must witness this shattering. Hence each w ∈W has at most d non-zero components. Counting edges in F by upper-adjoining vertices we have proved that (V, E′) finitely shifts to closed-below graph (W, F) s.t. |F| ≤|W| · ΨP-dim (W) . (11) Combining properties (7)–(11) we have that |E| |V | ≤|E′| |V | ≤|F | |W | ≤ΨP-dim (W) ≤ΨP-dim (V ). The remaining arguments from the k = 1 case of [4, 3] now imply the multiclass mistake bound. Theorem 5.2 Consider any k, n ∈N and F ⊆{0, . . . , k}X with ΨP-dim (F) < ∞. The multiclass one-inclusion prediction strategy satisfies ˆ MQG,F,F(n) ≤ΨP-dim (F) /n. 5.1 A lower bound We now show that the preceding multiclass mistake bound is optimal to within a O(log k) factor, noting that ΨN is smaller than ΨP by at most such a factor [1, Theorem 10]. Definition 5.3 We call a family F ⊆{0, . . . , k}X trivial if either |F| = 1 or there exist no x1, x2 ∈ X and f1, f2 ∈F such that f1(x1) ̸= f2(x1) and f1(x2) = f2(x2). Theorem 5.4 Consider any deterministic or randomized prediction strategy Q and any F ⊆ {0, . . . , k}X that has 2 ≤ΨN-dim (F) < ∞or is non-trivial with ΨN-dim (F) < 2. Then for all n > ΨN-dim (F), ˆ MQ,F(n) ≥max{1, ΨN-dim (F) −1}/(2en). 
Proof: Following [2], we use the probabilistic method to prove the existence of a target in F for which prediction under a distribution P supported by a ΨN-shattered subset is hard. Consider d = ΨN-dim (F) ≥2 with n > d. Fix a Z = {z1, . . . , zd} ΨN-shattered by F and then a subset FZ ⊆F of 2d functions that ΨN-shatters Z. Define a distribution P on X by P({zi}) = n−1 for each i ∈[d −1], P({zd}) = 1 −(d −1)n−1 and P({x}) = 0 for all x ∈ X\Z. Observe that PrP n (∀i ∈[n −1], Xn ̸= Xi) ≥PrP n (Xn ̸= zd, ∀i ∈[n −1], Xn ̸= Xi) = d−1 n 1 −1 n n−1 ≥ d−1 en . For any f ∈FZ and x ∈Zn with xn ̸= xi for all i ∈ [n −1], exactly half of the functions in FZ consistent with sam ((x1, . . . , xn−1), f) output some i ∈{0, . . . , k} on xn and the remaining half output some j ∈{0, . . . , k}\{i}. Thus EUnif(FZ) [1 [Q(sam ((x1, . . . , xn−1, F) , xn) ̸= F(xn)]] = 0.5 for such an x and so ˆ MQ,F ≥ ˆ MQ,FZ ≥EUnif(FZ)×P n [1 [Q(sam ((X1, . . . , Xn−1, F) , Xn) ̸= F(Xn)]] ≥d −1 2en . The similar case of d < 2 is omitted here and shows that there is a distribution P on X and function f ∈F such that EP n [1 [Q(sam ((X1, . . . , Xn−1), f) , Xn) ̸= f(Xn)]] ≥(2en)−1. 6 Conclusions and open problems In this paper we have developed new shifting machinery and tightened the binary one-inclusion mistake bound from d/n to Dd n/n (⌈Dd n⌉/n for the deterministic strategy) representing a solid improvement for d ≈n. We have described the multiclass generalization of the prediction learning model and derived a mistake bound for the multiclass one-inclusion prediction strategy that improves on previous PAC-based expected risk bounds by O(log n) and that is within O(log k) of optimal. Here shifting with invariance to the shattering of a single set was described, however we are aware of invariance to more complex shatterings. 
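The basic binary shifting operation underlying this machinery can be exercised directly on a toy family: each shift preserves cardinality (property (1)) and finitely many shifts reach a closed-below fixed point (property (5)). Names and the example family are ours; a minimal sketch:

```python
def shift(V, s):
    """Haussler's shift S_s: v with v_s = 1 moves down to v_s = 0 unless that
    slot is already occupied by another member of V (then v stays put)."""
    out = set()
    for v in V:
        if v[s] == 1:
            u = v[:s] + (0,) + v[s + 1:]
            out.add(v if u in V else u)
        else:
            out.add(v)
    return out

def shift_to_fixed_point(V, n):
    """Property (5): finitely many shifts reach a closed-below family."""
    changed = True
    while changed:
        changed = False
        for s in range(n):
            W = shift(V, s)
            if W != V:
                V, changed = W, True
    return V

V = {(1, 1, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)}
W = shift_to_fixed_point(V, 3)
print(sorted(W))          # [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)]
print(len(W) == len(V))   # True: property (1), cardinality is preserved
```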
Another serious application of shatter-invariant shifting, to appear in a sequel to this paper, is to the study of the cubical structure of maximum and maximal classes with connections to the compressibility conjecture of [7]. While Theorem 4.4 resolves one conjecture of Kuzmin & Warmuth [5], the remainder of the conjectured correctness proof for the Peeling compression scheme is known to be false [8]. The symmetrization method of Theorem 4.4 can be extended over subgroups G ⊂Sn to gain tighter density bounds. Just as the Sn-invariant V d n is the maximizer of density among all closed-below V ⊆V d n , there exist G-invariant families that maximize the density over all of their sub-families. In addition to Theorem 5.2 we have also proven the following special case in terms of ΨG; it is open as to whether this generalizes to n ∈N. While a general ΨG-based bound would allow direct comparison with the PAC-based expected risk bound, it should also be noted that ΨP and ΨG are in fact incomparable—neither ΨG ≤ΨP nor ΨP ≤ΨG singly holds for all classes [1, Theorem 1]. Lemma 6.1 ([8]) For any k ∈N and family V ⊆{0, . . . , k}2, dens (G (V )) ≤ΨG-dim (V ). Acknowledgments We gratefully acknowledge the support of the NSF under award DMS-0434383. References [1] Ben-David, S., Cesa-Bianchi, N., Haussler, D., Long, P. M.: Characterizations of learnability for classes of {0, . . . , n}-valued functions. Journal of Computer and System Sciences, 50(1) (1995) 74–86 [2] Ehrenfeucht, A., Haussler, D., Kearns, M., Valiant, L.: A general lower bound on the number of examples needed for learning. Information and Computation, 82(3) (1989) 247–261 [3] Haussler, D.: Sphere packing numbers for subsets of the boolean n-cube with bounded VapnikChervonenkis dimension. Journal of Combinatorial Theory (A) 69(2) (1995) 217–232 [4] Haussler, D., Littlestone, N., Warmuth, M. K.: Predicting {0, 1} functions on randomly drawn points. 
Information and Computation, 115(2) (1994) 284–293 [5] Kuzmin, D., Warmuth, M. K.: Unlabeled compression schemes for maximum classes. Journal of Machine Learning Research (2006), to appear [6] Li, Y., Long, P. M., Srinivasan, A.: The one-inclusion graph algorithm is near optimal for the prediction model of learning. IEEE Transactions on Information Theory, 47(3) (2002) 1257–1261 [7] Littlestone, N., Warmuth, M. K.: Relating data compression and learnability. Unpublished manuscript, http://www.cse.ucsc.edu/~manfred/pubs/lrnk-olivier.pdf (1986) [8] Rubinstein, B. I. P., Bartlett, P. L., Rubinstein, J. H.: Shifting: One-Inclusion Mistake Bounds and Sample Compression. Technical report, EECS Department, UC Berkeley (2007), to appear [9] Sauer, N.: On the density of families of sets. Journal of Combinatorial Theory (A), 13 (1972) 145–147
|
2006
|
125
|
2,950
|
Learning to parse images of articulated bodies Deva Ramanan Toyota Technological Institute at Chicago Chicago, IL 60637 ramanan@tti-c.org Abstract We consider the machine vision task of pose estimation from static images, specifically for the case of articulated objects. This problem is hard because of the large number of degrees of freedom to be estimated. Following an established line of research, pose estimation is framed as inference in a probabilistic model. In our experience, however, the success of many approaches often lies in the power of the features. Our primary contribution is a novel casting of visual inference as an iterative parsing process, where one sequentially learns better and better features tuned to a particular image. We show quantitative results for human pose estimation on a database of over 300 images that suggest our algorithm is competitive with or surpasses the state-of-the-art. Since our procedure is quite general (it does not rely on face or skin detection), we also use it to estimate the poses of horses in the Weizmann database. 1 Introduction We consider the machine vision task of pose estimation from static images, specifically for the case of articulated objects. This problem is hard because of the large number of degrees of freedom to be estimated. Following an established line of research, pose estimation is framed as inference in a probabilistic model. Most approaches tend to focus on algorithms for inference, but in our experience, the low-level image features often dictate success. When reliable features can be extracted (through, say, background subtraction or skin detection), approaches tend to do well. This dependence on features tends to be under-emphasized in the literature – one does not want to appear to suffer from “feature-itis”. In contrast, we embrace it.
Our primary contribution is a novel casting of visual inference as an iterative parsing process, where one sequentially learns better and better features tuned to a particular image. Since our approach is fairly general (we do not use any skin or face detectors), we also apply it to estimate horse poses from the Weizmann dataset [1]. Another practical difficulty, specifically with pose estimation, is that of reporting results. It is common for an algorithm to return a set of poses, from which the correct one is manually selected. This is because the posterior over body poses is often multimodal; a single MAP/mode estimate will not summarize it. Inspired by the language community, we propose a perplexity-based measure for evaluation: we calculate the probability of observing the actual pose under the distribution returned by our algorithm. With such an evaluation procedure, we can quantifiably demonstrate that our approach improves the state-of-the-art. Related Work: Human pose estimation from static images is a very active research area. Most approaches tend to use people-specific features, such as face/skin/hair detection [6, 4, 12]. Our work relies on the conditional random field (CRF) notion of deformable matching in [9]. Our approach is related to those that simultaneously estimate pose and segment an image [7, 10, 2, 5], since we learn low-level segmentation cues to build part-specific region models. However, we compute no explicit segmentation. Figure 1: The curse of edges? Edges are attractive because of their invariance – they fire on dark objects in light backgrounds and vice-versa. But without a region model, it can be hard to separate the figure from the background. We describe an iterative algorithm for pose estimation that learns a region model for each body part and for the background. Our algorithm is initialized by the edge maps shown; we show results for these two images in Fig. 7 and Fig. 8.
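The perplexity-based measure proposed above reduces to a negative mean log-probability over the test set. A minimal sketch (the per-image probabilities below are made-up numbers, used only for illustration):

```python
import math

def perplexity_score(log_probs):
    """Negative mean log-probability of the ground-truth poses under the
    distributions returned by the algorithm: -(1/T) * sum_t log P(L_t | I_t).
    Lower is better."""
    return -sum(log_probs) / len(log_probs)

# Hypothetical log P(ground-truth pose | image) for a 4-image test set.
log_probs = [math.log(p) for p in (1e-25, 1e-28, 1e-22, 1e-30)]
print(perplexity_score(log_probs))
```

An algorithm whose returned distribution concentrates mass on the true pose drives this score down, which is what the quantitative comparisons later in the paper report.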
1.1 Overview Assume we are given an image of a person, who happens to be a soccer player wearing a white shirt on a green playing field (Fig. 2). We want to estimate the figure’s pose. Since we do not know the appearance of the figure or the background, we must use a feature invariant to appearance (Fig. 1). We match an edge-based deformable model to the image to obtain (soft) estimates of body part positions. In general, we expect these estimates to be poor because the model can be distracted by edges in the background (e.g., the hallucinated leg and the missed arm in Fig. 2). The algorithm uses the estimated body part positions to build a rough region model for each body part and the background – it might learn that the torso is white-ish and the background is green-ish. The algorithm then builds a region-based deformable model that looks for white torsos. Soft estimates of body position from the new model are then used to build new region models, and the process is repeated.
It is easy to hallucinate extra arms or legs in the negative spaces between actual body parts (the extra leg). When a body part is surrounded by clutter (the right arm), it is hard to localize. Intuitively, both problems can be solved with low-level segmentation cues. The green region in between the legs is a poor leg candidate because of figure/ground cues – it groups better with the background grass. Also, we can find left/right limb pairs by appealing to symmetry – if one limb is visible, we can build a model of its appearance and use it to find the other one. We operationalize both these notions in our iterative parsing procedure in Fig. 3.
As one might suspect, such an iterative procedure is quite sensitive to its starting point – the edge-based deformable model used for initialization and the region-based deformable model used in the first iteration prove crucial. As the iterative procedure itself is fairly straightforward (Fig. 3), most of this paper deals with smart ways of building the deformable models.
Figure 2: We build a deformable pose model based on edges. Given an image I, we use an edge-based deformable model (middle) to compute body part locations P(L|I). This defines an initial parse of the image into several body part regions (right).
Figure 3: Our iterative parsing procedure. We define a parse to be a soft labeling of pixels into a region type (bg, torso, left lower arm, etc.). We use the initial parse from Fig. 2 to build a region model for each part. We learn foreground/background color histogram models. To exploit symmetry in appearance, we learn a single color model for left/right limb pairs. We then label each pixel using the color model (middle right). We then use these masks as features for a deformable model that re-computes P(L|I). This in turn defines a new parse, and the procedure is repeated.
Figure 4: The result of our procedure. Given P(L|I) from the final iteration, we obtain a clean parse for the image. We can also compute ˆLMAP (the most likely pose), and can sample directly from P(L|I).
2 Edge-based deformable model Our edge-based deformable model is an extension of the one proposed in [9]. The basic probabilistic model is a tree-structured conditional random field (CRF). Let the location of each part li be parameterized by image position and orientation [xi, yi, θi]. We will assume parts are oriented patches of fixed size, where (xi, yi) is the location of the top of the patch. We denote the configuration of a K-part model as L = (l1 . . .
lK). We can write the deformable model as a log-linear model

P(L|I) ∝ exp( Σ_{i,j∈E} ψ(li − lj) + Σ_i φ(li) ) (1)

ψ(li − lj) corresponds to a spatial prior on the relative arrangement of parts i and j. For efficient inference, we assume the edge structure E is a tree; each part is connected to at most one parent. Unlike most approaches that assume Gaussian shape priors [9, 3], we parameterize our shape model with discrete binning (Fig. 5):

ψ(li − lj) = αi^T bin(li − lj) (2)

Doing so allows us to capture more intricate distributions, at the cost of having more parameters to fit. We write bin(·) for the vectorized count of spatial and angular histogram bins (a vector of all zeros with a single one for the occupied bin). Here αi is a model parameter that favors certain (relative) spatial and angular bins for part i with respect to its parent. Figure 5: We record the spatial configuration of an arm given the torso by placing a grid on the torso, and noting which bin the arm falls into. We center the grid at the average location of the arm in the training data. We likewise bin the angular orientations to define a spatial distribution of arms given torsos. φ(li) corresponds to the local image evidence for a part, which we define as

φ(li) = βi^T fi(I(li)) (3)

We write fi(I(li)) for the feature vector extracted from the oriented image patch at location li. In general, fi() might be part-specific; it could return a binary vector of skin pixels for the head. In our case, fi^e returns a binary vector of edges for all parts. We can visualize βi in Fig. 6. Inference: The basic machinery we use for inference is message-passing (the sum-product algorithm). Since E is a tree, we first pass “upstream” messages from part i to its parent j. We compute the message from part i to j as

mi(lj) ∝ Σ_{li} ψ(li − lj) ai(li) (4)
ai(li) ∝ φ(li) Π_{k∈kids_i} mk(li) (5)

Message passing can be performed exhaustively and efficiently with convolutions.
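The convolution implementation of Eqs. (4)-(5) can be sketched as follows. This is a minimal 2D sketch that ignores the orientation dimension; the grid size, random evidence maps, uniform 5×5 spatial prior (symmetric, so convolution and correlation coincide), and two-part chain are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
from scipy.signal import convolve2d

def upstream_message(a_i, spatial_filter):
    """Eq. (4): m_i(l_j) ∝ sum_{l_i} psi(l_i - l_j) a_i(l_i), as a 2D convolution.
    spatial_filter holds the binned spatial-prior coefficients (alpha_i, Eq. 2)."""
    m = convolve2d(a_i, spatial_filter, mode="same")
    return m / m.sum()                    # normalize for numerical stability

# Toy two-part chain (torso -> arm) on a 16x16 grid of candidate locations.
rng = np.random.default_rng(0)
phi_arm = rng.random((16, 16))            # local evidence phi(l_arm), Eq. (3)
phi_torso = rng.random((16, 16))          # local evidence phi(l_torso)
psi = np.ones((5, 5)) / 25.0              # hypothetical uniform spatial prior

a_arm = phi_arm / phi_arm.sum()           # Eq. (5): a leaf part has no children
m_arm_to_torso = upstream_message(a_arm, psi)

root = phi_torso * m_arm_to_torso         # Eq. (5) at the root
root /= root.sum()                        # root belief = P(l_torso | I)
```

Tracking the normalization constants discarded at each step is what recovers the partition function mentioned below.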
If we temporarily ignore orientation and think of li = (xi, yi), we can represent messages as 2D images. The image ai is obtained by multiplying together response images from the children of part i and from the imaging model φ(li). φ(li) can be computed by convolving the edge image with the filter βi. mi(lj) can be computed by convolving ai with a spatial filter extending over the bins from Fig. 5 (with coefficients equal to αi). At the root, the image ai is the true conditional marginal P(li|I). When li is 3D, we perform 3D convolutions. We assume αi is separable so convolutions can be performed separately in each dimension. This means that in practice, computing φ(li) is the computational bottleneck, since that requires convolving the edge image repeatedly with rotated versions of filter βi. Starting from the root, we can pass messages downstream from part j to part i (again with convolutions):

P(li|I) ∝ ai(li) Σ_{lj} ψ(li − lj) P(lj|I) (6)

For numerical stability, we normalize images to 1 as they are computed. By keeping track of the normalization constants, we can also compute the partition function (which is needed for computing the evaluation score in Sec. 5). Learning: We learn the filters αi and βi by CRF parameter estimation, as in [9]. We label training images with body part locations L, and find the filters that maximize P(L|I) for the training set. This objective function is convex, so we tried various optimization packages, but found simple stochastic gradient ascent to work well. We define the model learned from the edge feature map fi^e as Θe = {αi^e, βi^e}. 3 Building a region model One can use the marginals (for, say, the head) to define a soft labeling of the image into head/non-head pixels. One can do this by repeatedly sampling a head location (according to P(li|I)) and then rendering a head at the given location and orientation. Let the rendered appearance for part i be an image patch si; we use a simple rectangular mask.
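The rendering-and-histogram construction described next (the parse image and the foreground/background color models built from it) can be sketched as follows. Orientation is ignored, and the image size, number of color bins, and rectangular mask shape are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def parse_image(marginal, mask):
    """Parse image for one part, ignoring orientation: convolve the part
    marginal P(l_i | I) with the rectangular part mask s_i."""
    return convolve2d(marginal, mask, mode="same")

def color_histograms(parse, im_q, n_bins):
    """Parse-weighted foreground/background color histograms for one part."""
    fg = np.bincount(im_q.ravel(), weights=parse.ravel(), minlength=n_bins)
    bg = np.bincount(im_q.ravel(), weights=(1.0 - parse).ravel(), minlength=n_bins)
    return fg / fg.sum(), bg / bg.sum()

rng = np.random.default_rng(1)
marginal = rng.random((32, 32))
marginal /= marginal.sum()                 # soft part marginal P(l_i | I)
mask = np.ones((7, 3))                     # simple rectangular part mask
im_q = rng.integers(0, 16, size=(32, 32))  # image quantized into 16 color bins

p_i = parse_image(marginal, mask)          # parse image, values in [0, 1]
P_fg, P_bg = color_histograms(p_i, im_q, 16)
labels = P_fg[im_q] > P_bg[im_q]           # per-pixel likelihood-ratio test
```

The final line is the likelihood-ratio labeling used in Fig. 3 to turn the learned color models into binary part masks.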
In the limit of infinite samples, one will obtain an image

pi(x, y) = Σ_{xi,yi,θi} P(xi, yi, θi|I) si^{θi}(x − xi, y − yi) (7)

We call such an image a parse for part i (the images on the right in Fig. 2). It is readily computed by convolving P(li|I) with rotated versions of the patch si. Given the parse image pi, we learn a color histogram model for part i and “its” background:

P(fgi(k)) ∝ Σ_{x,y} pi(x, y) δ(im(x, y) = k) (8)
P(bgi(k)) ∝ Σ_{x,y} (1 − pi(x, y)) δ(im(x, y) = k) (9)

We use the part-specific histogram models to label each pixel as foreground or background with a likelihood ratio test (as shown in Fig. 3). To enforce symmetry in appearance, we learn a single color model for left/right limb pairs. 4 Region-based deformable model After an initial parse, our algorithm has built an initial region model for each part (and its background). We use these models to construct binary label images for part i: P(fgi(im)) > P(bgi(im)). We write the oriented patch features extracted from these label images as fi^r (for “region”-based). We want to use these features to help re-estimate the pose in an image – we use training data to learn how to do so. We learn model parameters for a region-based deformable model Θr by CRF parameter estimation, as in Sec. 2. When learning Θr from training data, defining fi^r is tricky – should we use the ground-truth part locations to learn the color histogram models? Doing so might be unrealistic – it assumes that, at “run-time”, the edge-based deformable model will always correctly estimate part locations. Rather, we run the edge-based model on the training data, and use the resulting parses to learn the color histogram models. This better mimics the situation at run-time, when we are faced with a new image to parse. When applying the region-based deformable model, we have already computed the edge responses φe(li) = (βi^e)^T f^e(I(li)) (to train the region model).
With little additional computational cost, we can add them as an extra feature to the region-based map fi^r. One might think that the region features eliminate the need for edges – once we know that a person is wearing a white shirt against a green background, why bother with edges? If this were the case, one would learn a zero weight for the edge feature when learning βi^r from training data. Instead, we learn roughly equal weights for the edge and region features, indicating both cues are complementary rather than redundant. Given the parse from the region-based model, we can re-learn a color model for each part and the background (and re-parse given the new models, and iterate). In our experience, both the parses and the color models empirically converge after 1-2 iterations (see Fig. 3). 5 Results We have tested our parsing algorithm on two datasets. Most people datasets are quite small, limited to tens of images. We have amassed a dataset of 305 images of people in interesting poses (which will be available on the author’s webpage). It has been collected from previous datasets of sports figures and personal pictures. To our knowledge, it is the largest labeled dataset available for human pose recognition. We have also tested our algorithm on the Weizmann dataset of horses [1]. Evaluation: Given an image, our parsing procedure returns a distribution over poses P(L|I). Ideally, we want the true pose to have a high probability, and all other poses to have a low value. Given a set of T test images, each with a labeled ground-truth pose ˆLt, we score performance by computing −(1/T) Σ_t log P(ˆLt|It). This is equivalent to standard measures of perplexity (up to a log scale) [11]. Figure 6: We visualize the part models for our deformable templates – light areas correspond to positive βi weights, and dark areas correspond to negative weights. It is crucial to initialize our iterative procedure with a good edge-based deformable model.
Given a collection of training images with labeled body parts, one could build an edge template for each part by averaging (left) – this is the standard maximum likelihood (ML) solution. As in [9], we found better results by training βi^e with a conditional random field (CRF) model (middle). The CRF edge templates seem to emphasize different features, such as the contours of the head, lower arms, and lower torso. The first re-parsing from Fig. 3 is also very crucial – we similarly learn region-based part templates βi^r with a CRF (right). These templates focus more on region cues than on edges. These templates appear more sophisticated than rectangle-based limb detectors [8, 9] – for example, to find upper arms and legs, it seems important to emphasize the edge facing away from the body.

Negative log-probability of ground-truth pose given model (lower is better):
            Iter 0   Iter 1   Iter 2
PeopleAll    62.33    55.60    57.39
HorsesAll    51.81    47.76    45.80

Comparison with previous work:
            Previous   Iter 0   Iter 1
USCPeople     55.85     45.77    41.49

Table 1: Quantitative evaluation. For each image, our parsing procedure returns a distribution of poses. We evaluate our algorithm with a perplexity-based score [11] – the negative log-probability of the ground-truth pose given the estimated distribution, averaged over the test set. On the left, we look at the large datasets of people and horses (each with 300 images). Iter 0 corresponds to the distribution computed by the edge-based model, while Iter 1 and Iter 2 show the results after our iterative parsing with a region-based model. For people, we achieve the best performance after one iteration of the region-based model. For horses, we do better after two iterations. To compare with previous approaches, we look at performance on the 20-image dataset from USC [9, 6]. Compared to [9], our model does better at explaining the ground-truth data. People: We learned a model from the first 100 training images (and their mirror-flipped versions).
We learn both Θe and Θr from the same training data. We have evaluated results on the 205 remaining images. We show sample images in Fig. 7. We localize some difficult poses quite well, and furthermore, the estimated posterior P(L|I) oftentimes reflects actual ambiguity in the data (i.e., if multiple people are present). We quantitatively evaluate results in Table 1. We also compare with a state-of-the-art algorithm from [9], and show better performance on the dataset used in that work. Horses: We learn a model from the first 20 training images, and test it on the remaining 280 images. In general, we do quite well. The posterior pose distribution often captures the non-rigid deformations in the body. This suggests we can use the uncertainty in our deformable matching algorithm to recover extra information about the object. Looking at the numbers in Table 1, we see that the parses tend to do significantly better at capturing the ground-truth poses. We also see that this dataset is easier overall than our set of 305 people poses. Discussion: We have described an iterative parsing approach to pose estimation. Starting with an edge-based detector, we obtain an initial parse and iteratively build better features with which to subsequently parse. We hope this approach of learning image-specific features will prove helpful in other vision tasks. References [1] E. Borenstein and S. Ullman. Class-specific, top-down segmentation. In ECCV, 2002. Figure 7: Sample results. We show the original image, the initial edge-based parse, and the final region-based parse. We are able to capture some extreme articulations. In many cases the posterior is ambiguous because the image is (i.e., multiple people are present). In particular, it may be surprising that both people in the bottom-right pair are recognized by the region model – this suggests that the inter-region dissimilarity learned by the color histograms is a much stronger cue than the foreground similarity. We quantify results in Table 1.
[2] M. Bray, P. Kohli, and P. Torr. Posecut: Simultaneous segmentation and 3D pose estimation of humans using dynamic graph-cuts. In ECCV, 2006. [3] P. F. Felzenszwalb and D. P. Huttenlocher. Pictorial structures for object recognition. Int. J. Computer Vision, 61(1), January 2005. [4] G. Hua, M.-H. Yang, and Y. Wu. Learning to estimate human pose with data driven belief propagation. In CVPR, 2005. [5] M. Kumar, P. Torr, and A. Zisserman. Objcut. In CVPR, 2005. Figure 8: Sample results for horses. Our results tend to be quite good across the entire dataset of 300 images. Even though the horse model is fairly simplistic – a collection of rectangles similar to Fig. 6 – the posterior can capture rich non-rigid deformations of body parts. The Weizmann set of horses seems to be easier than our people dataset – we quantify this with a perplexity score in Table 1. [6] M. Lee and I. Cohen. Proposal maps driven MCMC for estimating human body pose in static images. In CVPR, 2004. [7] G. Mori, X. Ren, A. Efros, and J. Malik. Recovering human body configurations: Combining segmentation and recognition. In CVPR, 2004. [8] D. Ramanan, D. Forsyth, and A. Zisserman. Strike a pose: Tracking people by finding stylized poses. In CVPR, June 2005. [9] D. Ramanan and C. Sminchisescu. Training deformable models for localization. In CVPR, 2006. [10] X. Ren, A. C. Berg, and J. Malik. Recovering human body configurations using pairwise constraints between parts. In ICCV, 2005. [11] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach, chapter 23, pages 835–836. Prentice Hall, 2nd edition, 2003. [12] J. Zhang, J. Luo, R. Collins, and Y. Liu. Body localization in still images using hierarchical models and hybrid search. In CVPR, 2006.
|
2006
|
126
|
2,951
|
Correcting Sample Selection Bias by Unlabeled Data Jiayuan Huang School of Computer Science Univ. of Waterloo, Canada j9huang@cs.uwaterloo.ca Alexander J. Smola NICTA, ANU Canberra, Australia Alex.Smola@anu.edu.au Arthur Gretton MPI for Biological Cybernetics Tübingen, Germany arthur@tuebingen.mpg.de Karsten M. Borgwardt Ludwig-Maximilians-University Munich, Germany kb@dbs.ifi.lmu.de Bernhard Schölkopf MPI for Biological Cybernetics Tübingen, Germany bs@tuebingen.mpg.de Abstract We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover the sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice. 1 Introduction The default assumption in many learning scenarios is that training and test data are independently and identically distributed (iid), drawn from the same distribution. When the distributions on the training and test sets do not match, we are facing sample selection bias or covariate shift. Specifically, given a domain of patterns X and labels Y, we obtain training samples Z = {(x1, y1), . . . , (xm, ym)} ⊆ X × Y from a Borel probability distribution Pr(x, y), and test samples Z′ = {(x′1, y′1), . . . , (x′m′, y′m′)} ⊆ X × Y drawn from another such distribution Pr′(x, y). Although there exists previous work addressing this problem [2, 5, 8, 9, 12, 16, 20], sample selection bias is typically ignored in standard estimation algorithms. Nonetheless, in reality the problem occurs rather frequently: while the available data have been collected in a biased manner, the test is usually performed over a more general target population.
Below, we give two examples; but similar situations occur in many other domains. 1. Suppose we wish to generate a model to diagnose breast cancer. Suppose, moreover, that most women who participate in the breast screening test are middle-aged and likely to have attended the screening in the preceding 3 years. Consequently our sample includes mostly older women and those who have low risk of breast cancer because they have been tested before. The examples do not reflect the general population with respect to age (which amounts to a bias in Pr(x)) and they only contain very few diseased cases (i.e. a bias in Pr(y|x)). 2. Gene expression profile studies using DNA microarrays are used in tumor diagnosis. A common problem is that the samples are obtained using certain protocols, microarray platforms and analysis techniques. In addition, they typically have small sample sizes. The test cases are recorded under different conditions, resulting in a different distribution of gene expression values. In this paper, we utilize the availability of unlabeled data to direct a sample selection de-biasing procedure for various learning methods. Unlike previous work we infer the resampling weight directly by distribution matching between training and testing sets in feature space in a non-parametric manner. We do not require the estimation of biased densities or selection probabilities [20, 2, 12], or the assumption that probabilities of the different classes are known [8]. Rather, we account for the difference between Pr(x, y) and Pr′(x, y) by reweighting the training points such that the means of the training and test points in a reproducing kernel Hilbert space (RKHS) are close. We call this reweighting process kernel mean matching (KMM). 
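The feature-space matching idea behind KMM can be illustrated numerically: the empirical mean map is just the average of kernel features, and the squared RKHS distance between two such means expands into three kernel averages. This is a minimal sketch; the Gaussian kernel, its bandwidth, and the toy samples are assumptions made purely for illustration.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) for all pairs of rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mean_map_dist2(X, Xp, sigma=1.0):
    """Squared RKHS distance between empirical mean maps:
    ||(1/m) sum_i Phi(x_i) - (1/m') sum_j Phi(x'_j)||^2."""
    return (gaussian_kernel(X, X, sigma).mean()
            - 2.0 * gaussian_kernel(X, Xp, sigma).mean()
            + gaussian_kernel(Xp, Xp, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))    # "training" sample from Pr
Xp = rng.normal(0.5, 1.0, size=(200, 2))   # shifted "test" sample from Pr'

d2 = mean_map_dist2(X, Xp)                 # strictly positive for shifted data
print(d2)
```

Reweighting the training points so that this distance shrinks is exactly the matching step that KMM formalizes as a minimization over β.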
When the RKHS is universal [14], the population solution to this minimisation is exactly the ratio Pr′(x, y)/Pr(x, y); however, we also derive a cautionary result, which states that even granted this ideal population reweighting, the convergence of the empirical means in the RKHS depends on an upper bound on the ratio of distributions (but not on the dimension of the space), and will be extremely slow if this ratio is large. The required optimisation is a simple QP problem, and the reweighted sample can be incorporated straightforwardly into several different regression and classification algorithms. We apply our method to a variety of regression and classification benchmarks from UCI and elsewhere, as well as to classification of microarrays from prostate and breast cancer patients. These experiments demonstrate that KMM greatly improves learning performance compared with training on unweighted data, and that our reweighting scheme can in some cases outperform reweighting using the true sample bias distribution. Key Assumption 1: In general, the estimation problem with two different distributions Pr(x, y) and Pr′(x, y) is unsolvable, as the two terms could be arbitrarily far apart. In particular, for arbitrary Pr(y|x) and Pr′(y|x), there is no way we could infer a good estimator based on the training sample. Hence we make the simplifying assumption that Pr(x, y) and Pr′(x, y) differ only via Pr(x, y) = Pr(y|x) Pr(x) and Pr′(x, y) = Pr(y|x) Pr′(x). In other words, the conditional probabilities of y|x remain unchanged (this particular case of sample selection bias has been termed covariate shift [12]). However, we will see experimentally that even in situations where our key assumption is not valid, our method can nonetheless perform well (see Section 4). 2 Sample Reweighting We begin by stating the problem of regularized risk minimization.
In general, a learning method minimizes the expected risk

R[Pr, θ, l(x, y, θ)] = E(x,y)∼Pr[l(x, y, θ)] (1)

of a loss function l(x, y, θ) that depends on a parameter θ. For instance, the loss function could be the negative log-likelihood −log Pr(y|x, θ), a misclassification loss, or some form of regression loss. However, since typically we only observe examples (x, y) drawn from Pr(x, y) rather than Pr′(x, y), we resort to computing the empirical average

Remp[Z, θ, l(x, y, θ)] = (1/m) Σ_{i=1}^m l(xi, yi, θ). (2)

To avoid overfitting, instead of minimizing Remp directly we often minimize a regularized variant Rreg[Z, θ, l(x, y, θ)] := Remp[Z, θ, l(x, y, θ)] + λΩ[θ], where Ω[θ] is a regularizer. 2.1 Sample Correction The problem is more involved if Pr(x, y) and Pr′(x, y) are different. The training set is drawn from Pr, yet what we would really like is to minimize R[Pr′, θ, l], as we wish to generalize to test examples drawn from Pr′. An observation from the field of importance sampling is that

R[Pr′, θ, l(x, y, θ)] = E(x,y)∼Pr′[l(x, y, θ)] = E(x,y)∼Pr[β(x, y) l(x, y, θ)], where β(x, y) := Pr′(x, y)/Pr(x, y), (3)
= R[Pr, θ, β(x, y)l(x, y, θ)], (4)

provided that the support of Pr′ is contained in the support of Pr. Given β(x, y), we can thus compute the risk with respect to Pr′ using Pr. Similarly, we can estimate the risk with respect to Pr′ by computing Remp[Z, θ, β(x, y)l(x, y, θ)]. The key problem is that the coefficients β(x, y) are usually unknown, and we need to estimate them from the data. When Pr and Pr′ differ only in Pr(x) and Pr′(x), we have β(x, y) = Pr′(x)/Pr(x), where β is a reweighting factor for the training examples. We thus reweight every observation (x, y) such that observations that are under-represented in Pr obtain a higher weight, whereas over-represented cases are downweighted. Now we could estimate Pr and Pr′ and subsequently compute β based on those estimates.
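The reweighted risk estimate of Eqs. (3)-(4) is easy to compute when the densities are known. The sketch below is illustrative only: it assumes a synthetic 1D covariate-shift problem where both marginals are known Gaussians, whereas in this paper β must be estimated from data.

```python
import numpy as np

def normal_pdf(x, mu, s):
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)

# Covariate shift: train x ~ N(0,1), test x ~ N(1,1); Pr(y|x) is identical.
x = rng.normal(0.0, 1.0, size=5000)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)

beta = normal_pdf(x, 1.0, 1.0) / normal_pdf(x, 0.0, 1.0)  # beta(x) = Pr'(x)/Pr(x)

theta = 0.5                            # some fixed predictor y_hat = theta * x
loss = (y - theta * x) ** 2

R_emp = loss.mean()                    # Eq. (2): empirical risk under Pr
R_prime = (beta * loss).mean()         # Eqs. (3)-(4): estimates the risk under Pr'
```

Training points lying where the test density is higher receive beta > 1 and are upweighted, exactly the correction described in the text.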
This is closely related to the methods in [20, 8], as they have to either estimate the selection probabilities or have prior knowledge of the class distributions. Although intuitive, this approach has two major problems: first, it only works whenever the density estimates for Pr and Pr′ (or potentially, the selection probabilities or class distributions) are good. In particular, small errors in estimating Pr can lead to large coefficients β and consequently to a serious overweighting of the corresponding observations. Second, estimating both densities just for the purpose of computing reweighting coefficients may be overkill: we may be able to directly estimate the coefficients βi := β(xi, yi) without having to estimate the two distributions. Furthermore, we can regularize βi directly with more flexibility, taking prior knowledge into account similar to learning methods for other problems. 2.2 Using the sample reweighting in learning algorithms Before we describe how we will estimate the reweighting coefficients βi, let us briefly discuss how to minimize the reweighted regularized risk

Rreg[Z, β, l(x, y, θ)] := (1/m) Σ_{i=1}^m βi l(xi, yi, θ) + λΩ[θ], (5)

in the classification and regression settings (an additional classification method is discussed in the accompanying technical report [7]). Support Vector Classification: Utilizing the setting of [17], we can pose the following minimization problem (the original SVMs can be formulated in the same way):

minimize_{θ,ξ} (1/2)∥θ∥² + C Σ_{i=1}^m βi ξi (6a)
subject to ⟨φ(xi, yi) − φ(xi, y), θ⟩ ≥ 1 − ξi/∆(yi, y) for all y ∈ Y, and ξi ≥ 0. (6b)

Here, φ(x, y) is a feature map from X × Y into a feature space F, where θ ∈ F and ∆(y, y′) denotes a discrepancy function between y and y′. The dual of (6) is given by

minimize_α (1/2) Σ_{i,j=1..m; y,y′∈Y} αiy αjy′ k(xi, y, xj, y′) − Σ_{i=1..m; y∈Y} αiy (7a)
subject to αiy ≥ 0 for all i, y, and Σ_{y∈Y} αiy/∆(yi, y) ≤ βiC.
(7b) Here k(x, y, x′, y′) := ⟨φ(x, y), φ(x′, y′)⟩ denotes the inner product between the feature maps. This generalizes the observation-dependent binary SV classification described in [10]. Modifications of existing solvers, such as SVMStruct [17], are straightforward. Penalized LMS Regression: Assume l(x, y, θ) = (y − ⟨φ(x), θ⟩)² and Ω[θ] = ∥θ∥². Here we minimize

Σ_{i=1}^m βi (yi − ⟨φ(xi), θ⟩)² + λ∥θ∥². (8)

Denote by β̄ the diagonal matrix with diagonal (β1, . . . , βm) and let K ∈ R^{m×m} be the kernel matrix Kij = k(xi, xj). In this case minimizing (8) is equivalent to minimizing (y − Kα)⊤β̄(y − Kα) + λα⊤Kα with respect to α. Assuming that K and β̄ have full rank, the minimization yields α = (λβ̄⁻¹ + K)⁻¹y. The advantage of this formulation is that it can be solved as easily as solving the standard penalized regression problem. Essentially, we rescale the regularizer depending on the pattern weights: the higher the weight of an observation, the less we regularize. 3 Distribution Matching 3.1 Kernel Mean Matching and its relation to importance sampling Let Φ : X → F be a map into a feature space F and denote by µ : P → F the expectation operator

µ(Pr) := Ex∼Pr(x)[Φ(x)]. (9)

Clearly µ is a linear operator mapping the space of all probability distributions P into feature space. Denote by M(Φ) := {µ(Pr) where Pr ∈ P} the image of P under µ. This set is also often referred to as the marginal polytope. We have the following theorem (proved in [7]): Theorem 1 The operator µ is bijective if F is an RKHS with a universal kernel k(x, x′) = ⟨Φ(x), Φ(x′)⟩ in the sense of Steinwart [15]. The use of feature-space means to compare distributions is further explored in [3]. The practical consequence of this (rather abstract) result is that if we know µ(Pr′), we can infer a suitable β by solving the following minimization problem: minimize β
‖µ(Pr′) − E_{x∼Pr(x)}[β(x)Φ(x)]‖
subject to β(x) ≥ 0 and E_{x∼Pr(x)}[β(x)] = 1.   (10)

This is the kernel mean matching (KMM) procedure. For a proof of the following (and further results in the paper) see [7].

Lemma 2 The problem (10) is convex. Moreover, assume that Pr′ is absolutely continuous with respect to Pr (so Pr(A) = 0 implies Pr′(A) = 0). Finally assume that k is universal. Then the solution β(x) of (10) satisfies Pr′(x) = β(x) Pr(x).

3.2 Convergence of reweighted means in feature space

Lemma 2 shows that in principle, if we knew Pr and µ[Pr′], we could fully recover Pr′ by solving a simple quadratic program. In practice, however, neither µ(Pr′) nor Pr is known. Instead, we only have samples X and X′ of size m and m′, drawn iid from Pr and Pr′ respectively. Naively we could just replace the expectations in (10) by empirical averages and hope that the resulting optimization problem provides us with a good estimate of β. However, it is to be expected that empirical averages will differ from each other due to finite sample size effects. In this section, we explore two such effects. First, we demonstrate that in the finite sample case, for a fixed β, the empirical estimate of the expectation of β is normally distributed: this provides a natural limit on the precision with which we should enforce the constraint ∫ β(x) dPr(x) = 1 when using empirical expectations (we will return to this point in the next section).

Lemma 3 If β(x) ∈ [0, B] is some fixed function of x ∈ X, then given x_i ∼ Pr iid such that β(x_i) has finite mean and non-zero variance, the sample mean (1/m) Σ_i β(x_i) converges in distribution to a Gaussian with mean ∫ β(x) dPr(x) and standard deviation bounded by B/(2√m).

This lemma is a direct consequence of the central limit theorem [1, Theorem 5.5.15]. Alternatively, it is straightforward to get a large deviation bound that likewise converges as 1/√m [6].
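Lemma 3's B/(2√m) bound on the standard deviation of the sample mean is easy to check numerically. The following is a small Monte Carlo sketch (the seed and sample sizes are our own choices); the worst case for a [0, B]-valued β is a two-point distribution on {0, B}, whose variance is B²/4:

```python
import numpy as np

# Empirical check of Lemma 3: for beta(x) in [0, B], the standard
# deviation of the sample mean (1/m) sum_i beta(x_i) is at most B/(2*sqrt(m)).
rng = np.random.default_rng(1)
B, m, trials = 5.0, 400, 2000

# Draw beta values from the extreme two-point distribution on {0, B},
# which attains the maximal variance B^2/4 for a [0, B]-valued variable.
betas = B * rng.integers(0, 2, size=(trials, m))
sample_means = betas.mean(axis=1)

empirical_sd = sample_means.std()
bound = B / (2.0 * np.sqrt(m))
print(empirical_sd <= bound * 1.05)   # small slack for Monte Carlo noise
```

In this worst case the bound is tight, so the empirical standard deviation should land just at (or slightly below) B/(2√m).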
Our second result demonstrates the deviation between the empirical means of Pr′ and β(x) Pr in feature space, given that β(x) is chosen perfectly in the population sense. In particular, this result shows that convergence of these two means will be slow if there is a large difference in the probability mass of Pr′ and Pr (and thus the bound B on the ratio of probability masses is large).

Lemma 4 In addition to the conditions of Lemma 3, assume that we draw X′ := {x′₁, . . . , x′_{m′}} iid from X using Pr′ = β(x) Pr, and that ‖Φ(x)‖ ≤ R for all x ∈ X. Then with probability at least 1 − δ,

‖(1/m) Σ_{i=1}^m β(x_i)Φ(x_i) − (1/m′) Σ_{i=1}^{m′} Φ(x′_i)‖ ≤ (1 + √(−2 log(δ/2))) R √(B²/m + 1/m′).   (11)

Note that this lemma shows that for a given β(x), which is correct in the population sense, we can bound the deviation between the feature space mean of Pr′ and the reweighted feature space mean of Pr. It is not a guarantee that we will find coefficients βi that are close to β(xi), but it gives us a useful upper bound on the outcome of the optimization. Lemma 4 implies that we have O(B √(1/m + 1/(m′B²))) convergence in m, m′ and B. This means that, for very different distributions, we need a large equivalent sample size to get reasonable convergence. Our result also implies that it is unrealistic to assume that the empirical means (reweighted or not) should match exactly.

3.3 Empirical KMM optimization

To find suitable values of β ∈ R^m we want to minimize the discrepancy between means subject to the constraints β_i ∈ [0, B] and |(1/m) Σ_{i=1}^m β_i − 1| ≤ ǫ. The former limits the scope of discrepancy between Pr and Pr′ whereas the latter ensures that the measure β(x) Pr(x) is close to a probability distribution. The objective function is given by the discrepancy term between the two empirical means. Using K_{ij} := k(x_i, x_j) and κ_i := (m/m′) Σ_{j=1}^{m′} k(x_i, x′_j) one may check that
‖(1/m) Σ_{i=1}^m β_iΦ(x_i) − (1/m′) Σ_{i=1}^{m′} Φ(x′_i)‖² = (1/m²) β⊤Kβ − (2/m²) κ⊤β + const.

We now have all necessary ingredients to formulate a quadratic problem to find suitable β via

minimize_β  (1/2) β⊤Kβ − κ⊤β   subject to β_i ∈ [0, B] and |Σ_{i=1}^m β_i − m| ≤ mǫ.   (12)

In accordance with Lemma 3, we conclude that a good choice of ǫ should be O(B/√m). Note that (12) is a quadratic program which can be solved efficiently using interior point methods or any other successive optimization procedure. We also point out that (12) resembles Single Class SVM [11] using the ν-trick. Besides the approximate equality constraint, the main difference is the linear correction term by means of κ. Large values of κ_i correspond to particularly important observations x_i and are likely to lead to large β_i.

4 Experiments

4.1 Toy regression example

Our first experiment is on toy data, and is intended mainly to provide a comparison with the approach of [12]. This method uses an information criterion to optimise the weights, under certain restrictions on Pr and Pr′ (namely, Pr′ must be known, while Pr can be either known exactly, Gaussian with unknown parameters, or approximated via kernel density estimation). Our data is generated according to the polynomial regression example from [12, Section 2], for which Pr ∼ N(0.5, 0.5²) and Pr′ ∼ N(0, 0.3²) are two normal distributions. The observations are generated according to y = −x + x³, and are observed in Gaussian noise with standard deviation 0.3 (see Figure 1(a); the blue curve is the noise-free signal). We sampled 100 training (blue circles) and testing (red circles) points from Pr and Pr′ respectively. We attempted to model the observations with a degree 1 polynomial. The black dashed line is a best-case scenario, which is shown for reference purposes: it represents the model fit using ordinary least squares (OLS) on the labeled test points. The red line is a second reference result, derived only from the training data via OLS, and predicts the test data very poorly.
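As a rough illustration, the toy setup above can be reproduced in a few lines of numpy; the helper names and seed are ours, and the "weighting by the density ratio" variant reweights each training point by the true ratio Pr′(x)/Pr(x):

```python
import numpy as np

# Toy covariate-shift experiment: training inputs from Pr ~ N(0.5, 0.5^2),
# test inputs from Pr' ~ N(0, 0.3^2), labels y = -x + x^3 + N(0, 0.3^2)
# noise, fit with a degree-1 polynomial.
rng = np.random.default_rng(3)

def normal_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

x_tr = rng.normal(0.5, 0.5, 100)
y_tr = -x_tr + x_tr ** 3 + rng.normal(0, 0.3, 100)
x_te = rng.normal(0.0, 0.3, 100)
y_te = -x_te + x_te ** 3 + rng.normal(0, 0.3, 100)

def wols(x, y, w):
    """Weighted OLS via the sqrt-weight trick on the design matrix."""
    A = np.column_stack([np.ones_like(x), x])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef

def mse(coef, x, y):
    return np.mean((coef[0] + coef[1] * x - y) ** 2)

ols = wols(x_tr, y_tr, np.ones_like(x_tr))                       # unweighted fit
ratio = normal_pdf(x_tr, 0.0, 0.3) / normal_pdf(x_tr, 0.5, 0.5)  # Pr'(x)/Pr(x)
weighted = wols(x_tr, y_tr, ratio)
print(mse(weighted, x_te, y_te) < mse(ols, x_te, y_te))
```

Since the cubic target is far from linear over the training region but nearly linear over the test region, the ratio-weighted fit should obtain a markedly lower test error than the unweighted one.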
The other three dashed lines are fit with weighted ordinary least squares (WOLS), using one of three weighting schemes: the ratio of the underlying training and test densities, KMM, and the information criterion of [12]. A summary of the performance over 100 trials is shown in Figure 1(b). Our method outperforms the two other reweighting methods.

Figure 1: (a) Polynomial models of degree 1 fit with OLS and WOLS; (b) average performances of the three WOLS methods and OLS on the test data in (a). Labels are Ratio for the ratio of test to training density; KMM for our approach; min IC for the approach of [12]; and OLS for the model trained on the labeled test points.

4.2 Real world datasets

We next test our approach on real world data sets, from which we select training examples using a deliberately biased procedure (as in [20, 9]). To describe our biased selection scheme, we need to define an additional random variable si for each point in the pool of possible training samples, where si = 1 means the ith sample is included, and si = 0 indicates an excluded sample. Two situations are considered: the selection bias corresponds to our assumption regarding the relation between the training and test distributions, and P(si = 1|xi, yi) = P(si|xi); or si is dependent only on yi, i.e. P(si|xi, yi) = P(si|yi), which potentially creates a greater challenge since it violates our key assumption 1. In the following, we compare our method (labeled KMM) against two others: a baseline unweighted method (unweighted), in which no modification is made, and a weighting by the inverse of the true sampling distribution (importance sampling), as in [20, 9].
We emphasise, however, that our method does not require any prior knowledge of the true sampling probabilities. In our experiments, we used a Gaussian kernel exp(−σ‖x_i − x_j‖²) in our kernel classification and regression algorithms, and parameters ǫ = (√m − 1)/√m and B = 1000 in the optimization (12).

Figure 2: Classification performance analysis on the breast cancer dataset from UCI. (a) Simple bias on features; (b) joint bias on features; (c) bias on labels; (d) β vs. inverse of the true sampling probabilities.

4.2.1 Breast Cancer Dataset

This dataset is from the UCI Archive, and is a binary classification task. It includes 699 examples from 2 classes: benign (positive label) and malignant (negative label). The data are randomly split into training and test sets, where the proportion of examples used for training varies from 10% to 50%. Test results are averaged over 30 trials, and were obtained using a support vector classifier with kernel size σ = 0.1. First, we consider a biased sampling scheme based on the input features, of which there are nine, with integer values from 0 to 9. Since smaller feature values predominate in the unbiased data, we sample according to P(s = 1|x ≤ 5) = 0.2 and P(s = 1|x > 5) = 0.8, repeating the experiment for each of the features in turn. Results are an average over 30 random training/test splits, with 1/4 of the data used for training and 3/4 for testing.
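The KMM weights used throughout these experiments come from solving the quadratic program (12). A self-contained sketch is given below; we use scipy's generic SLSQP routine in place of a dedicated interior point QP solver, together with the paper's choices ǫ = (√m − 1)/√m and the Gaussian kernel exp(−σ‖x_i − x_j‖²). The toy distributions at the end are our own:

```python
import numpy as np
from scipy.optimize import minimize

def kmm_weights(X, Xp, sigma=1.0, B=1000.0, eps=None):
    """Sketch of empirical Kernel Mean Matching, eq. (12):
    minimize 0.5 b'Kb - kappa'b over b_i in [0, B] subject to
    |sum_i b_i - m| <= m*eps."""
    m, mp = len(X), len(Xp)
    if eps is None:
        eps = (np.sqrt(m) - 1) / np.sqrt(m)

    def k(A, C):  # Gaussian kernel exp(-sigma * ||a - c||^2)
        d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return np.exp(-sigma * d2)

    K = k(X, X)
    kappa = (m / mp) * k(X, Xp).sum(axis=1)

    objective = lambda b: 0.5 * b @ K @ b - kappa @ b
    grad = lambda b: K @ b - kappa
    cons = [{'type': 'ineq', 'fun': lambda b: m * eps - (b.sum() - m)},
            {'type': 'ineq', 'fun': lambda b: m * eps + (b.sum() - m)}]
    res = minimize(objective, np.ones(m), jac=grad, method='SLSQP',
                   bounds=[(0.0, B)] * m, constraints=cons)
    return res.x

# Training sample biased toward x < 0; test sample centered at 0: points in
# the test distribution's high-density region should receive larger weights.
rng = np.random.default_rng(2)
X = rng.normal(-1.0, 1.0, size=(60, 1))
Xp = rng.normal(0.0, 1.0, size=(80, 1))
beta = kmm_weights(X, Xp, sigma=0.5)
print(beta[X[:, 0] > 0].mean() > beta[X[:, 0] < -1.5].mean())
```

For these two Gaussians the true ratio β(x) = Pr′(x)/Pr(x) is increasing in x, so the learned weights should upweight training points that lie where the test distribution has mass.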
Performance is shown in Figure 2(a): we consistently outperform the unweighted method, and match or exceed the performance obtained using the known distribution ratio. Next, we consider a sampling bias that operates jointly across multiple features. We select samples less often when they are further from the sample mean x̄ over the training data, i.e. P(si|xi) ∝ exp(−σ‖x_i − x̄‖²) where σ = 1/20. Performance of our method in 2(b) is again better than the unweighted case, and as good as or better than reweighting using the sampling model. Finally, we consider a simple biased sampling scheme which depends only on the label y: P(s = 1|y = 1) = 0.1 and P(s = 1|y = −1) = 0.9 (the data has on average twice as many positive as negative examples when uniformly sampled). Average performance for different training/testing split proportions is in Figure 2(c); remarkably, despite our assumption regarding the difference between the training and test distributions being violated, our method still improves the test performance, and outperforms the reweighting by density ratio for large training set sizes. Figure 2(d) shows that the weights β are proportional to the inverse of the true sampling probabilities: positive examples have higher weights and negative ones have lower weights.

4.2.2 Further Benchmark Datasets

We next compare the performance on further benchmark datasets by selecting training data via various biased sampling schemes. Specifically, for the sampling distribution bias on labels, we use P(s = 1|y) = exp(a + by)/(1 + exp(a + by)) (datasets 1 to 5), or the simple step distribution P(s = 1|y = 1) = a, P(s = 1|y = −1) = b (datasets 6 and 7). For the remaining datasets, we generate biased sampling schemes over their features. We first do PCA, selecting the first principal component of the training data and the corresponding projection values.
Denoting the minimum value of the projection as m and its mean as m̄, we apply a normal distribution with mean m + (m̄ − m)/a and variance (m̄ − m)/b as the biased sampling scheme. Please refer to [7] for detailed parameter settings. We use penalized LMS for regression problems and SVM for classification problems. To evaluate generalization performance, we utilize the normalized mean square error (NMSE), given by (1/n) Σ_{i=1}^n (y_i − µ_i)²/var(y), for regression problems, and the average test error for classification problems. In 13 out of 23 experiments, our reweighting approach is the most accurate (see Table 1), despite having no prior information about the bias of the test sample (and, in some cases, despite the additional fact that the data reweighting does not conform to our key assumption 1). In addition, the KMM always improves test performance compared with the unweighted case. Two additional points should be borne in mind: first, we use the same σ for the kernel mean matching and the SVM, as listed in Table 1. Performance might be improved by decoupling these kernel sizes: indeed, we employ kernels that are somewhat large, suggesting that the KMM procedure is helpful in the case of relatively smooth classification/regression functions. Second, we did not find a performance improvement in the case of data sets with smaller sample sizes. This is not surprising, since a reweighting would further reduce the effective number of points used for training, resulting in insufficient data for learning.

Table 1: Test results for three methods on 18 datasets with different sampling schemes. The results are averages over 10 trials for regression problems (marked *) and 30 trials for classification problems. We used a Gaussian kernel of size σ for both the kernel mean matching and the SVM/LMS regression, and set B = 1000. Columns: dataset, σ, ntr, selected, ntst, then NMSE / test error for unweighted, importance sampling, and KMM.

1. Abalone* | 1e−1 | 2000 | 853 | 2177 | 1.00 ± 0.08 | 1.1 ± 0.2 | 0.6 ± 0.1
2. CA Housing* | 1e−1 | 16512 | 3470 | 4128 | 2.29 ± 0.01 | 1.72 ± 0.04 | 1.24 ± 0.09
3. Delta Ailerons(1)* | 1e3 | 4000 | 1678 | 3129 | 0.51 ± 0.01 | 0.51 ± 0.01 | 0.401 ± 0.007
4. Ailerons* | 1e−5 | 7154 | 925 | 6596 | 1.50 ± 0.06 | 0.7 ± 0.1 | 1.2 ± 0.2
5. haberman(1) | 1e−2 | 150 | 52 | 156 | 0.50 ± 0.09 | 0.37 ± 0.03 | 0.30 ± 0.05
6. USPS(6vs8)(1) | 1/128 | 500 | 260 | 1042 | 0.13 ± 0.18 | 0.1 ± 0.2 | 0.1 ± 0.1
7. USPS(3vs9)(1) | 1/128 | 500 | 252 | 1145 | 0.016 ± 0.006 | 0.012 ± 0.005 | 0.013 ± 0.005
8. Bank8FM* | 1e−1 | 4500 | 654 | 3692 | 0.5 ± 0.1 | 0.45 ± 0.06 | 0.47 ± 0.05
9. Bank32nh* | 1e−2 | 4500 | 740 | 3692 | 23 ± 4.0 | 19 ± 2 | 19 ± 2
10. cpu-act* | 1e−12 | 4000 | 1462 | 4192 | 10 ± 1 | 4.0 ± 0.2 | 1.9 ± 0.2
11. cpu-small* | 1e−12 | 4000 | 1488 | 4192 | 9 ± 2 | 4.0 ± 0.2 | 2.0 ± 0.5
12. Delta Ailerons(2)* | 1e3 | 4000 | 634 | 3129 | 2 ± 2 | 1.5 ± 1.5 | 1.7 ± 0.9
13. Boston house* | 1e−4 | 300 | 108 | 206 | 0.8 ± 0.2 | 0.74 ± 0.09 | 0.76 ± 0.07
14. kin8nm* | 1e−1 | 5000 | 428 | 3192 | 0.85 ± 0.2 | 0.81 ± 0.1 | 0.81 ± 0.2
15. puma8nh* | 1e−1 | 4499 | 823 | 3693 | 1.1 ± 0.1 | 0.77 ± 0.05 | 0.83 ± 0.03
16. haberman(2) | 1e−2 | 150 | 90 | 156 | 0.27 ± 0.01 | 0.39 ± 0.04 | 0.25 ± 0.2
17. USPS(6vs8)(2) | 1/128 | 500 | 156 | 1042 | 0.23 ± 0.2 | 0.23 ± 0.2 | 0.16 ± 0.08
18. USPS(6vs8)(3) | 1/128 | 500 | 104 | 1042 | 0.54 ± 0.0002 | 0.5 ± 0.2 | 0.16 ± 0.04
19. USPS(3vs9)(2) | 1/128 | 500 | 252 | 1145 | 0.46 ± 0.09 | 0.5 ± 0.2 | 0.2 ± 0.1
20. Breast Cancer | 1e−1 | 280 | 96 | 419 | 0.05 ± 0.01 | 0.036 ± 0.005 | 0.033 ± 0.004
21. India diabetes | 1e−4 | 200 | 97 | 568 | 0.32 ± 0.02 | 0.30 ± 0.02 | 0.30 ± 0.02
22. ionosphere | 1e−1 | 150 | 64 | 201 | 0.32 ± 0.06 | 0.31 ± 0.07 | 0.28 ± 0.06
23. German credit | 1e−4 | 400 | 214 | 600 | 0.283 ± 0.004 | 0.282 ± 0.004 | 0.280 ± 0.004

4.2.3 Tumor Diagnosis using Microarrays

Our next benchmark is a dataset of 102 microarrays from prostate cancer patients [13]. Each of these microarrays measures the expression levels of 12,600 genes. The dataset comprises 50 samples from normal tissues (positive label) and 52 from tumor tissues (negative label).
We simulate the realistic scenario that two sets of microarrays A and B are given with dissimilar proportions of tumor samples, and we want to perform cancer diagnosis via classification, training on A and predicting on B. (Footnote: Regression data from http://www.liacc.up.pt/∼ltorgo/Regression/DataSets.html; classification data from UCI. Sets with numbers in brackets are examined by different sampling schemes.) We select training examples via the biased selection scheme P(s = 1|y = 1) = 0.85 and P(s = 1|y = −1) = 0.15. The remaining data points form the test set. We then perform SVM classification for the unweighted, KMM, and importance sampling approaches. The experiment was repeated over 500 independent draws from the dataset according to our biased scheme; the 500 resulting test errors are plotted in [7]. The KMM achieves much higher accuracy levels than the unweighted approach, and is very close to the importance sampling approach. We study a very similar scenario on two breast cancer microarray datasets from [4] and [19], measuring the expression levels of 2,166 common genes for normal and cancer patients [18]. We train an SVM on one of them and test on the other. Our reweighting method achieves significant improvement in classification accuracy over the unweighted SVM (see [7]). Hence our method promises to be a valuable tool for cross-platform microarray classification.

Acknowledgements: The authors thank Patrick Warnat (DKFZ, Heidelberg) for providing the microarray datasets, and Olivier Chapelle and Matthias Hein for helpful discussions. The work is partially supported by the BMBF under grant 031U112F within the BFAM project, which is part of the German Genome Analysis Network. NICTA is funded through the Australian Government’s Backing Australia’s Ability initiative, in part through the ARC. This work was supported in part by the IST Programme of the EC, under the PASCAL Network of Excellence, IST-2002-506778.

References

[1] G. Casella and R. Berger.
Statistical Inference. Duxbury, Pacific Grove, CA, 2nd edition, 2002.
[2] M. Dudik, R. E. Schapire, and S. J. Phillips. Correcting sample selection bias in maximum entropy density estimation. In Advances in Neural Information Processing Systems 17, 2005.
[3] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample problem. In NIPS. MIT Press, 2006.
[4] S. Gruvberger, M. Ringner, Y. Chen, S. Panavally, L. H. Saal, C. Peterson, A. Borg, M. Ferno, and P. S. Meltzer. Estrogen receptor status in breast cancer is associated with remarkably distinct gene expression patterns. Cancer Research, 61, 2001.
[5] J. Heckman. Sample selection bias as a specification error. Econometrica, 47(1):153–161, 1979.
[6] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13–30, 1963.
[7] J. Huang, A. Smola, A. Gretton, K. Borgwardt, and B. Schölkopf. Correcting sample selection bias by unlabeled data. Technical report, CS-2006-44, University of Waterloo, 2006.
[8] Y. Lin, Y. Lee, and G. Wahba. Support vector machines for classification in nonstandard situations. Machine Learning, 46:191–202, 2002.
[9] S. Rosset, J. Zhu, H. Zou, and T. Hastie. A method for inferring label sampling mechanisms in semi-supervised learning. In Advances in Neural Information Processing Systems 17, 2004.
[10] M. Schmidt and H. Gish. Speaker identification via support vector classifiers. In Proc. ICASSP ’96, pages 105–108, Atlanta, GA, May 1996.
[11] B. Schölkopf, J. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
[12] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90, 2000.
[13] D. Singh, P. Febbo, K. Ross, D. Jackson, J. Manola, C. Ladd, P. Tamayo, A. Renshaw, A. D’Amico, and J. Richie. Gene expression correlates of clinical prostate cancer behavior. Cancer Cell, 1(2), 2002.
[14] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67–93, 2002.
[15] I. Steinwart. Support vector machines are universally consistent. J. Compl., 18:768–791, 2002.
[16] M. Sugiyama and K.-R. Müller. Input-dependent estimation of generalization error under covariate shift. Statistics and Decisions, 23:249–279, 2005.
[17] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 2005.
[18] P. Warnat, R. Eils, and B. Brors. Cross-platform analysis of cancer microarray data improves gene expression based classification of phenotypes. BMC Bioinformatics, 6:265, Nov 2005.
[19] M. West, C. Blanchette, H. Dressman, E. Huang, S. Ishida, R. Spang, H. Zuzan, J. A. Olson Jr., J. R. Marks, and J. R. Nevins. Predicting the clinical status of human breast cancer by using gene expression profiles. PNAS, 98(20), 2001.
[20] B. Zadrozny. Learning and evaluating classifiers under sample selection bias. In International Conference on Machine Learning ICML ’04, 2004.
A Nonparametric Approach to Bottom-Up Visual Saliency

Wolf Kienzle, Felix A. Wichmann, Bernhard Schölkopf, and Matthias O. Franz
Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany
{kienzle,felix,bs,mof}@tuebingen.mpg.de

Abstract

This paper addresses the bottom-up influence of local image information on human eye movements. Most existing computational models use a set of biologically plausible linear filters, e.g., Gabor or Difference-of-Gaussians filters as a front-end, the outputs of which are nonlinearly combined into a real number that indicates visual saliency. Unfortunately, this requires many design parameters such as the number, type, and size of the front-end filters, as well as the choice of nonlinearities, weighting and normalization schemes etc., for which biological plausibility cannot always be justified. As a result, these parameters have to be chosen in a more or less ad hoc way. Here, we propose to learn a visual saliency model directly from human eye movement data. The model is rather simplistic and essentially parameter-free, and therefore contrasts recent developments in the field that usually aim at higher prediction rates at the cost of additional parameters and increasing model complexity. Experimental results show that—despite the lack of any biological prior knowledge—our model performs comparably to existing approaches, and in fact learns image features that resemble findings from several previous studies. In particular, its maximally excitatory stimuli have center-surround structure, similar to receptive fields in the early human visual system.

1 Introduction

The human visual system samples images through saccadic eye movements, which rapidly change the point of fixation. It is believed that the underlying mechanism is driven by both top-down strategies, such as the observer’s task, thoughts, or intentions, and by bottom-up effects.
The latter are usually attributed to early vision, i.e., to a system that responds to simple, and often local image features, such as a bright spot in a dark scene. During the past decade, several studies have explored which image features attract eye movements. For example, Reinagel and Zador [18] found that contrast was substantially higher at gaze positions, and Krieger et al. [10] reported differences in the intensity bispectra. Parkhurst, Law, and Niebur [13] showed that a saliency map [9], computed by a model similar to the widely used framework by Itti, Koch and Niebur [3, 4], is significantly correlated with human fixation patterns. Numerous other hypotheses were tested [1, 5, 6, 10, 12, 14, 16, 17, 19, 21], including intensity, edge content, orientation, symmetry, and entropy. Each of the above models is built on a particular choice of image features that are believed to be relevant to visual saliency. A common approach is to compute several feature maps from linear filters that are biologically plausible, e.g., Difference of Gaussians (DoG) or Gabor filters, and nonlinearly combine the feature maps into a single saliency map [1, 3, 4, 13, 16, 21]. This makes it straightforward to construct complex models from simple, biologically plausible components. A downside of this parametric approach, however, is that the feature maps are chosen manually by the designer. As a consequence, any such model is biased to certain image structure, and therefore discriminates features that might not seem plausible at first sight, but may well play a significant role.

Figure 1: Eye movement data. (a) shows 20 (out of 200) of the natural scenes that were presented to the 14 subjects. (b) shows the top right image from (a), together with the recorded fixation locations from all 14 subjects. The average viewing time per subject was approximately 3 seconds.
Another problem comes from the large number of additional design parameters that are necessary in any implementation, such as the precise filter shapes, sizes, weights, nonlinearities, etc. While choices for these parameters are often only vaguely justified in terms of their biological plausibility, they greatly affect the behavior of the system as a whole and thus its predictive power. The latter, however, is often used as a measure of plausibility. This is clearly an undesirable situation, since it makes a fair comparison between models very difficult. In fact, we believe that this may explain the conflicting results in the debate about whether edges or contrast filters are more relevant [1, 6, 13]. In this paper we present a nonparametric approach to bottom-up saliency, which does not (or to a far lesser extent) suffer from the shortcomings described above. Instead of using a predefined set of feature maps, our saliency model is learned directly from human eye movement data. The model consists of a nonlinear mapping from an image patch to a real value, trained to yield positive outputs on fixated, and negative outputs on randomly selected image patches. The main difference to previous models is that our saliency function is essentially determined by the fact that it maximizes the prediction performance on the observed data. Below, we show that the prediction performance of our model is comparable to that of biologically motivated models. Furthermore, we analyze the system in terms of the features it has learned, and compare our findings to previous results.

2 Eye Movement Data

Eye movement data were taken from [8]. They consist of 200 natural images (1024 × 768, 8-bit grayscale) and 18,065 fixation locations recorded from 14 naïve subjects. The subjects freely viewed each image for about three seconds on a 19-inch CRT at full screen size and 60 cm distance, which corresponds to 37° × 27° of visual angle.
For more details about the recording setup, please refer to [8]. Figure 1 illustrates the data set. (Footnote 1: In our initial study [8], these data were preprocessed further. In order to reduce the noise due to varying top-down effects, only those locations that are consistent among subjects were used. Unfortunately, while this leads to higher prediction scores, the resulting model is only valid for the reduced data set, which in that case is less than ten percent of the fixations. To better explain the entire data set, in the present work we instead retain all 18,065 fixations, i.e., we trade performance for generality.)

Below, we are going to formulate saliency learning as a classification problem. This requires negative examples, i.e., a set of non-fixated, or background locations. As pointed out in [18, 21], care must be taken that no spurious differences in the local image statistics are generated by using different spatial distributions for positive and negative examples. As an example, fixation locations are usually biased towards the center of the image, probably due to the reduced physical effort when looking straight. At the same time, it is known that local image statistics can be correlated with image location [18, 21], e.g., due to the photographer’s bias of keeping objects at the center of the image. If we sampled background locations uniformly over the image, our system might learn the difference between pixel statistics at the image center and towards the boundary, instead of the desired difference between fixated and non-fixated locations. Moreover, the learning algorithm might be misled by simple boundary effects. To avoid this effect, we use the 18,065 fixation locations to generate an equal number of background locations by using the same image coordinates, but with the corresponding image numbers shuffled. This ensures that the spatial distributions of both classes are identical. The proposed model computes saliency based on local image structure.
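The background-location construction described above amounts to a permutation of the image indices; a minimal sketch (with invented sample sizes) is:

```python
import numpy as np

# Sketch of the negative-example construction: reuse the fixation
# coordinates but permute the image indices, so fixated and background
# locations share an identical spatial distribution.
rng = np.random.default_rng(5)
n_fix = 1000
image_idx = rng.integers(0, 200, size=n_fix)          # which image was viewed
coords = np.column_stack([rng.integers(0, 1024, n_fix),
                          rng.integers(0, 768, n_fix)])  # fixation (x, y)

bg_image_idx = rng.permutation(image_idx)             # same images, shuffled
bg_coords = coords.copy()                             # identical spatial layout

# The multiset of image indices is unchanged; only the pairing with
# coordinates differs.
print(np.array_equal(np.sort(bg_image_idx), np.sort(image_idx)))
```

Since only the (image, location) pairing changes, any classifier trained on these labels cannot exploit spatial biases such as the central fixation tendency.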
To represent fixations and background locations accordingly, we cut out a square image patch at each location and stored the pixel values in a feature vector xi together with a label yi ∈ {1, −1}, indicating fixation or background. Unfortunately, choosing an appropriate patch size and resolution is not straightforward, as there might be a wide range of reasonable values. To remedy this, we follow the approach proposed in [8], which is a simple compromise between computational tractability and generality: we fix the resolution to 13 × 13 pixels, but leave the patch size d unspecified, i.e., we construct a separate data set for various values of d. Later, we determine the size d which leads to the best generalization performance estimate. For each image location, 11 patches were extracted, with sizes ranging between d = 0.47° and d = 27° of visual angle, equally spaced on a logarithmic scale. Each patch was subsampled to 13 × 13 pixels, after low-pass filtering to reduce aliasing effects. The range of sizes was chosen such that pixels in the smallest patch correspond to image pixels at full screen resolution, and that the largest patch has full screen height. Finally, for each patch we subtracted the mean intensity, and stored the normalized pixel values in a 169-dimensional feature vector xi. The data were divided into a training (two thirds) and a test set (one third). This was done such that both sets contained data from all 200 images, but never from the same subject on the same image. For model selection (Section 4.1) and assessment (Section 4.2), which rely on cross-validation estimates of the generalization error, further splits were required. These splits were done image-wise, i.e., such that no validation or test fold contained any data from images in the corresponding training fold. This is necessary, since image patches from different locations can overlap, leading to a severe over-estimation of the generalization performance.
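The patch pipeline — crop a d × d square, low-pass, subsample to 13 × 13, and subtract the mean intensity — can be sketched as follows; the block-averaging used here is a crude stand-in for the low-pass filter, and the crop size is assumed to be a multiple of 13:

```python
import numpy as np

def extract_patch(image, cx, cy, d_pix, out=13):
    """Crop a d_pix x d_pix square around (cx, cy), block-average down to
    out x out (a crude low-pass + subsample), then subtract the mean
    intensity, returning a flat feature vector."""
    half = d_pix // 2
    patch = image[cy - half: cy - half + d_pix,
                  cx - half: cx - half + d_pix].astype(float)
    s = d_pix // out
    patch = patch[:s * out, :s * out].reshape(out, s, out, s).mean(axis=(1, 3))
    patch = patch - patch.mean()          # remove mean intensity
    return patch.ravel()                  # 169-dimensional for out = 13

# Toy image: a smooth intensity ramp.
img = np.arange(100 * 100, dtype=float).reshape(100, 100)
x = extract_patch(img, cx=50, cy=50, d_pix=26)
print(x.shape)
```

After mean subtraction each feature vector is zero-mean, so the classifier cannot use absolute brightness to separate the two classes.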
3 Model and Learning Method

From the eye movement data described in Section 2, we learn a bottom-up saliency map f(x) : R¹⁶⁹ → R using a support vector machine (SVM) [2]. We model saliency as a linear combination of Gaussian radial basis functions (RBFs), centered at the training points x_i,

f(x) = Σ_{i=1}^m α_i y_i exp(−‖x − x_i‖²/(2σ²)).   (1)

The SVM algorithm determines non-negative coefficients α_i such that the regularized risk R(f) = D(f) + λS(f) is minimized. Here, D(f) denotes the data fit Σ_{i=1}^m max(0, 1 − y_i f(x_i)), and S(f) is the standard SVM regularizer (1/2)‖f‖² [2]. The tradeoff between data fit and smoothness is controlled by the parameter λ. As described in Section 4.1, this design parameter, as well as the RBF bandwidth σ and the patch size d, is determined by maximizing the model’s estimated prediction performance. It is insightful to compare our model (1) to existing models. Similar to most existing approaches, our model is based on linear filters whose outputs are nonlinearly combined into a real-valued saliency measure. This is a common model for the early visual system, and receptive-field estimation techniques such as reverse correlation usually make the same assumptions. It differs from existing approaches in terms of its nonparametric nature, i.e., the basic linear filters are the training samples themselves. That way, the system is not restricted to the designer’s choice of feature maps, but learns relevant structure from the data. For the nonlinear component, we found the Gaussian RBF appropriate for two reasons: first, it is a universal SVM kernel [20], allowing the model to approximate any smooth function on the data points; second, it carries no information about the spatial ordering of the pixels within an image patch x: if we consistently permuted the pixels of the training and test patches, the model output would be identical. This implies that the system has no a priori preference for particular image structures.
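Evaluating the saliency map of eq. (1) at a query patch is a direct sum over training patches; a toy sketch (with random stand-in patches and coefficients) is:

```python
import numpy as np

def saliency(x, X_train, y_train, alpha, sigma):
    """Evaluate the saliency map of eq. (1):
    f(x) = sum_i alpha_i * y_i * exp(-||x - x_i||^2 / (2 sigma^2))."""
    d2 = ((X_train - x) ** 2).sum(axis=1)
    return float(np.sum(alpha * y_train * np.exp(-d2 / (2.0 * sigma ** 2))))

# Toy illustration with two 169-dimensional training patches, one fixated
# (label +1) and one background (label -1), and equal coefficients.
rng = np.random.default_rng(6)
X_train = rng.normal(size=(2, 169))
y_train = np.array([1.0, -1.0])
alpha = np.array([1.0, 1.0])

# A query close to the fixated patch should receive a positive score.
x = X_train[0] + 0.01 * rng.normal(size=169)
print(saliency(x, X_train, y_train, alpha, sigma=1.0) > 0)
```

Because the kernel depends only on the Euclidean distance between pixel vectors, permuting pixel positions consistently across all patches leaves every f(x) unchanged, as noted in the text.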
The SVM algorithm was chosen primarily since it is a powerful standard method for binary classification. In light of its resemblance to regularized logistic regression, our method is therefore related to the one proposed in [1]. Their model is parametric, however.

4 Experiments

4.1 Selection of d, σ, and λ

For fixing d, σ, and λ, we conducted an exhaustive search on an 11 × 9 × 13 grid with the grid points equally spaced on a log scale such that d = 0.47°, . . . , 27°, σ = 0.01, . . . , 100, and λ = 0.001, . . . , 10,000. In order to make the search computationally tractable, we divided the training set (Section 2) into eight parts. Within each part, and for each point on the parameter grid, we computed a cross-validation estimate of the classification accuracy (i.e., the relative frequency of sign f(xi) = yi). The eight estimates were then averaged to yield one performance estimate for each grid point. Figure 2 illustrates the results. Each panel shows the model performance for one (σ, λ)-slice of the parameter space.

[Figure 2: Selection of the parameters d, σ and λ. Each panel shows the estimated model performance for a fixed d (11 values from 0.47° to 27°), and all σ (vertical axes, label values denote log10 σ) and λ (horizontal axes, label values denote log10 λ). Darker shades of gray denote higher accuracy; a legend is shown on the lower right. Based on these results, we fixed d = 5.4°, log10 σ = 0, and log10 λ = 0.]
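The exhaustive search described above is simply a loop over a log-spaced parameter grid that keeps the setting with the best cross-validated accuracy; a generic sketch with a pluggable scoring callback (the callback and all names are hypothetical, not the authors' code):

```python
import numpy as np
from itertools import product

def grid_search(param_grid, cv_score):
    """Exhaustive search: evaluate cv_score(params) on every grid point
    and return the best setting with its score. cv_score is assumed to
    return an estimated classification accuracy."""
    best_params, best_acc = None, -np.inf
    for values in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        acc = cv_score(params)
        if acc > best_acc:
            best_params, best_acc = params, acc
    return best_params, best_acc

# log-spaced grids as in the paper: 11 x 9 x 13 points
grid = {
    "d":     np.geomspace(0.47, 27.0, 11),
    "sigma": np.geomspace(0.01, 100.0, 9),
    "lam":   np.geomspace(1e-3, 1e4, 13),
}
```

In the paper, cv_score would be the averaged eight-fold cross-validation accuracy of the SVM trained with the given (d, σ, λ).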
The performance peaks at 0.55 (0.013 standard error of the mean, SEM) at d = 5.4°, σ = 1, λ = 1, which is in agreement with [8], up to their slightly different d = 6.2°.² Note that while 0.55 is not much, it is four standard errors above chance level. Furthermore, all (σ, λ) plots show the same, smooth pattern which is known to be characteristic for RBF-SVM model selection [7]. This further suggests that, despite the low absolute performance, our choice of parameters is well justified. Model performance (Section 4.2) and interpretation (Section 4.3) were qualitatively stable within at least one step in any direction of the parameter grid.

²Due to the subsampling (Section 2), the optimal patch size of d = 5.4° leads to an effective saliency map resolution of 89 × 66 (the original image is 1024 × 768), which corresponds to 2.4 pixels per visual degree. While this might seem low, note that similar resolutions have been suggested for bottom-up saliency: using Itti's model with default parameters leads to a resolution of 64 × 48.

[Figure 3: Saliency maps. (a) shows a natural scene from our database, together with the recorded eye movements from all 14 subjects. Itti's saliency map, using "standard" normalization, is shown in (b). Brighter regions denote more salient areas. The picture in (c) shows our learned saliency map, which was re-built for this example, with the image in (a) excluded from the training data. Note that the differing boundary effects are of no concern for our performance measurements, since hardly any fixations are that close to the boundary.]

4.2 Model Performance

To test the model's performance with the optimal parameters (d = 5.4°, σ = 1, λ = 1) and more training examples, we divided the test set into eight folds. Again, this was done image-wise, i.e., such that each fold comprised the data from 25 images (cf. Section 2). For each fold we trained our model on all training data not coming from the respective 25 images.
As expected, the use of more training data significantly improved the accuracy, to 0.60 (0.011 SEM). For a comparison with other work, we also computed the mean ROC score of our system, 0.64 (0.010 SEM). This performance is lower than the 0.67 reported in [8]. However, their model explains only about 10% of the "simplest" fixations in the data. Another recent study yielded 0.63 [21], although on a different data set. Itti's model [4] was tested in [15], who report ROC scores around 0.65 (taken from a graph; no actual numbers are given). Scores of up to 0.70 were achieved with an extended version that uses more elaborate long-range interactions and eccentricity-dependent processing. We also ran Itti's model on our test set, using the code from [22]. We tried both the "standard" [3] and the "iterative" [4] normalization scheme. The better performing setting was the earlier "standard" method, which yielded 0.62 (0.022 SEM). The more recent iterative scheme did not improve on this result, not even when only the first, or the first few, fixations were considered. For a qualitative comparison, Figure 3 shows our learned saliency map and Itti's model evaluated on a sample image.

It is important to mention that the purpose of the above comparison is not to show that our model makes better predictions than existing models, which would be a weak statement anyway since the data sets are different. The main insight here is that our nonparametric model performs at the same level as existing, biologically motivated models, which implement plausible multi-scale front-end filters, carefully designed nonlinearities, and even global effects.

4.3 Feature Analysis

In the previous section we have shown that our model generalizes to unseen data, i.e., that it has learned regularities in the data that are relevant to the human fixation selection mechanism. This section addresses the question of what the learned regularities are, and how they relate to existing models.
As mentioned in Section 1, characterizing a nonlinear model solely by the feature maps at its basis is insufficient. In fact, our SVM-based model is an example where this would be particularly wrong. An SVM assigns smaller weights αi (down to zero) the more easily the respective training samples xi can be classified. Describing f by its support vectors {xi | αi > 0} is therefore misleading, since they represent unusual examples rather than prototypes. To avoid this, we instead characterize the learned function by means of inputs x that are particularly excitatory or inhibitory to the entire system.

As a first test, we collected 20,000 image patches from random locations in natural scenes (not in the training set) and presented them to our system. The top and bottom 100 patches sorted by model output, and a histogram over all 20,000 saliency values, are shown in Figure 4.

[Figure 4: Natural image patches ranked by saliency according to our model. Panels (a) and (b) show the bottom and top 100 of 20,000 patches, respectively (the dots in between denote the 18,800 patches which are not shown). A histogram of all 20,000 saliency values (frequency ×1000) is given on the lower right. The outputs in (a) range from −2.0 to −1.7, the ones in (b) from 0.99 to 1.8.]

Note that since our model is unbiased towards any particular image structure, the different patterns observed in high and low output patches are solely due to differences between pixel statistics at fixated and background regions. The high output patches seem to have higher contrast, which is in agreement with previous results, e.g., [8, 10, 14, 18]. In fact, the correlation coefficient of the model output (all 20,000 values) with r.m.s. contrast is 0.69. Another result from [14, 18] is that in natural images the correlation between pixel values decays faster at fixated locations than at randomly chosen locations.
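The reported 0.69 correlation between model output and r.m.s. contrast is a plain Pearson correlation against the per-patch standard deviation; a minimal sketch of how such a number could be computed (our names, not the authors' code):

```python
import numpy as np

def rms_contrast(patch):
    """Root-mean-square contrast of a patch: the standard deviation
    of its pixel values."""
    return np.asarray(patch, dtype=float).std()

def contrast_correlation(saliencies, patches):
    """Pearson correlation between model outputs and r.m.s. contrast,
    the statistic quoted in the text (0.69 over 20,000 random patches)."""
    c = np.array([rms_contrast(p) for p in patches])
    return np.corrcoef(saliencies, c)[0, 1]
```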
Figure 4 shows this trend as well: as we move away from the patch center, the pixels' correlation with the center intensity decays faster for patches with high predicted salience. Moreover, a study on bispectra at fixated image locations [10] suggested that "the saccadic selection system avoids image regions, which are dominated by a single oriented structure. Instead, it selects regions containing different orientations, like occlusions, corners, etc". A closer look at Figure 4 reveals that our model tends to attribute saliency not to contrast alone, but also to non-trivial image structure. Extremely prominent examples of this effect are the high-contrast edges appearing among the bottom 100 patches, e.g., in the patches at position (7,2) or (10,10).

To further characterize the system, we explicitly computed the maximally excitatory and inhibitory stimuli. This amounts to solving the unconstrained optimization problems arg max_x f(x) and arg min_x f(x), respectively. Since f is differentiable, we can use a simple gradient method. The only problem is that f(x) can have multiple extrema in x. A common way to deal with local optima is to run the search several times with different initial values for x. Here, we repeated the search 1,000 times each for minima and maxima. The initial x were constructed by drawing 169 pixel values from a normal distribution with zero mean and then normalizing the patch standard deviation to 0.11 (the average value over the training patches). The 1,000 optimal values were then clustered using k-means. The number of clusters k was found by increasing k until the clusters were stable. Interestingly, the clusters for both minima and maxima were already highly concentrated for k = 2, i.e., within each cluster, the average variance of a pixel was less than 0.03% of the pixel variance of its center patch.
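Because f in (1) is a finite RBF expansion, its gradient is available in closed form and the gradient search described above is easy to sketch; each step direction is a linear combination of training patches, which is why the optima inherit the zero mean of the data. Names are hypothetical and the step size and iteration count are illustrative, not the paper's settings:

```python
import numpy as np

def grad_f(x, X, y, alpha, sigma):
    """Gradient of f(x) = sum_i alpha_i y_i exp(-||x - x_i||^2 / (2 sigma^2)):
    a weighted sum of the directions (x_i - x)."""
    d = X - x
    k = np.exp(-np.sum(d ** 2, axis=1) / (2.0 * sigma ** 2))
    return (alpha * y * k) @ d / sigma ** 2

def optimal_stimulus(X, y, alpha, sigma, x0, lr=0.1, steps=500, sign=1):
    """Gradient ascent (sign=+1) toward arg max f, or descent (sign=-1)
    toward arg min f, starting from a random patch x0."""
    x = x0.copy()
    for _ in range(steps):
        x += sign * lr * grad_f(x, X, y, alpha, sigma)
    return x
```

Repeating this from many random starting patches and clustering the results with k-means would mirror the procedure in the text.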
This result could also be confirmed visually, i.e., despite the randomized initial values, both optimization problems had only two visually distinct outcomes. We also re-ran this experiment with natural image patches as starting values, with identical results. This indicates that our saliency function has essentially two minima and two maxima in x. The four optimal stimuli are shown in Figure 5. The first two images, (a) and (b), show the maximally inhibitory stimuli.

[Figure 5: Maximally inhibitory and excitatory stimuli of the learned model, with saliency values −4.9, −4.5, 5.0, and 5.5. Note the large magnitude of these values compared to the typical model output (cf. the histogram in Figure 4). (a) and (b): the two maximally inhibitory stimuli (lowest possible saliency). (c) and (d): the two maximally excitatory stimuli (highest possible saliency). (e) and (f): the radial averages of (c) and (d), respectively.]

These are rather difficult to interpret, other than that no particular structure is visible. On the other hand, the maximally excitatory stimuli, denoted by (c) and (d), have center-surround structure. All four stimuli have zero mean, which is not surprising since during gradient search, both the initial value and the step directions, which are linear combinations of the training data, have zero mean. As a consequence, the surrounds of (c) and (d) are inhibitory w.r.t. their centers, which can also be seen from the different signs in their radial averages (e) and (f).³ The optimal stimuli thus bear a close resemblance to receptive fields in the early visual system [11]. To see that the optimal stimuli have in fact prototype character, note how the histogram in Figure 4 reflects the typical distribution of natural image patches along the learned saliency function.
It illustrates that the saliency values of unseen natural image patches usually lie between −2.0 and 1.8 (for the training data, they are between −1.8 and 2.2). In contrast, our optimal stimuli have saliency values of 5.0 and 5.5, indicating that they represent the difference between fixated and background locations in a much more articulated way than any of the noisy measurements in our data set.

5 Discussion

We have presented a nonparametric model for bottom-up visual saliency, trained on human eye movement data. A major goal of this work was to complement existing approaches in that we keep the number of assumptions low, and instead learn as much as possible from the data. In order to make this tractable, the model is rather simplistic, e.g., it implements no long-range interactions within feature maps. Nevertheless, we found that the prediction performance of our system is comparable to that of parametric, biologically motivated models. Although no such information was used in the design of our model, we found that the learned features are consistent with earlier results on bottom-up saliency. For example, the outputs of our model are strongly correlated with local r.m.s. contrast [18]. Also, we found that the maximally excitatory stimuli of our system have center-surround structure, similar to the DoG filters commonly used in early vision models [3, 13, 21]. This is a nontrivial result, since our model has no preference for any particular image features, i.e., a priori, any 13 × 13 image patch is equally likely to be an optimal stimulus. Recently, several authors have explored whether oriented (Gabor) or center-surround (DoG) features are more relevant to human eye movements. As outlined in Section 1, this is a difficult question: while some results indicate that both feature types perform equally well [21], others suggest that one [1] or the other [6, 13] is more relevant. Our results shed additional light on this discussion, in favor of center-surround features.
³Please note that the radial average curves in Figure 5 (e) and (f) do not necessarily sum to zero, since the patch area in (c) and (d) grows quadratically with its corresponding radius.

References

[1] R. J. Baddeley and B. W. Tatler. High frequency edges (but not contrast) predict where we fixate: A Bayesian system identification analysis. Vision Research, 46(18):2824–2833, 2006.
[2] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121–167, 1998.
[3] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254–1259, 1998.
[4] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10-12):1489–1506, 2000.
[5] L. Itti. Quantifying the contribution of low-level saliency to human eye movements in dynamic scenes. Visual Cognition, 12(6):1093–1123, 2005.
[6] L. Itti. Quantitative modeling of perceptual salience at human eye position. Visual Cognition (in press), 2006.
[7] S. S. Keerthi and C. J. Lin. Asymptotic behaviors of support vector machines with Gaussian kernel. Neural Computation, 15:1667–1689, 2003.
[8] W. Kienzle, F. A. Wichmann, B. Schölkopf, and M. O. Franz. Learning an interest operator from human eye movements. In Beyond Patches Workshop, International Conference on Computer Vision and Pattern Recognition, 2006.
[9] C. Koch and S. Ullman. Shifts in selective visual attention: towards the underlying neural circuitry. Human Neurobiology, 4(4):219–227, 1985.
[10] G. Krieger, I. Rentschler, G. Hauske, K. Schill, and C. Zetzsche. Object and scene analysis by saccadic eye-movements: an investigation with higher-order statistics. Spatial Vision, 3(2,3):201–214, 2000.
[11] S. W. Kuffler. Discharge patterns and functional organization of mammalian retina. Journal of Neurophysiology, 16(1):37–68, 1953.
[12] S. K. Mannan, K. H. Ruddock, and D. S. Wooding. The relationship between the locations of spatial features and those of fixations made during visual examination of briefly presented images. Spatial Vision, 10(3):165–188, 1996.
[13] D. J. Parkhurst, K. Law, and E. Niebur. Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42(1):107–123, 2002.
[14] D. J. Parkhurst and E. Niebur. Scene content selected by active vision. Spatial Vision, 16(2):125–154, 2003.
[15] R. J. Peters, A. Iyer, C. Koch, and L. Itti. Components of bottom-up gaze allocation in natural scenes (poster). In Vision Sciences Society (VSS) Annual Meeting, 2005.
[16] C. M. Privitera and L. W. Stark. Algorithms for defining visual regions-of-interest: Comparison with eye fixations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(9):970–982, 2000.
[17] R. Raj, W. S. Geisler, R. A. Frazor, and A. C. Bovik. Contrast statistics for foveated visual systems: Fixation selection by minimizing contrast entropy. Journal of the Optical Society of America A, 22(10):2039–2049, 2005.
[18] P. Reinagel and A. M. Zador. Natural scene statistics at the center of gaze. Network: Computation in Neural Systems, 10(4):341–350, 1999.
[19] L. W. Renninger, J. Coughlan, P. Verghese, and J. Malik. An information maximization model of eye movements. In Advances in Neural Information Processing Systems, volume 17, pages 1121–1128, 2005.
[20] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67–93, 2001.
[22] D. Walther. Interactions of visual attention and object recognition: computational modeling, algorithms, and psychophysics. PhD thesis, California Institute of Technology, 2006.
Computation of Similarity Measures for Sequential Data using Generalized Suffix Trees

Konrad Rieck, Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany, rieck@first.fhg.de
Pavel Laskov, Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany, laskov@first.fhg.de
Sören Sonnenburg, Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany, sonne@first.fhg.de

Abstract

We propose a generic algorithm for computation of similarity measures for sequential data. The algorithm uses generalized suffix trees for efficient calculation of various kernel, distance and non-metric similarity functions. Its worst-case run-time is linear in the length of the sequences and independent of the underlying embedding language, which can cover words, k-grams or all contained subsequences. Experiments with network intrusion detection, DNA analysis and text processing applications demonstrate the utility of distances and similarity coefficients for sequences as alternatives to classical kernel functions.

1 Introduction

The ability to operate on sequential data is a vital prerequisite for the application of machine learning techniques in many challenging domains. Examples of such applications are natural language processing (text documents), bioinformatics (DNA and protein sequences) and computer security (byte streams or system call traces). A key instrument for handling such data is the efficient computation of pairwise similarity between sequences. Similarity measures can be seen as an abstraction between the particular structure of the data and learning theory. One of the most successful similarity measures thoroughly studied in recent years is the kernel function [e.g. 1–3]. Various kernels have been developed for sequential data, starting from the original ideas of Watkins [4] and Haussler [5] and extending to application-specific kernels such as the ones for text and natural language processing [e.g. 6–8], bioinformatics [e.g. 9–14], spam filtering [15] and computer security [e.g. 16; 17].
Although kernel-based learning has gained a major focus in machine learning research, a kernel function is obviously only one of various possibilities for measuring similarity between objects. The choice of a similarity measure is essentially determined by (a) understanding of a problem and (b) properties of the learning algorithm to be applied. Some algorithms operate in vector spaces, others in inner product, metric or even non-metric feature spaces. Investigation of techniques for learning in spaces other than RKHS is currently one of the active research fields in machine learning [e.g. 18–21]. The focus of this contribution lies on general similarity measures for sequential data, especially on efficient algorithms for their computation. A large number of such similarity measures can be expressed in a generic form so that a simple linear-time algorithm can be applied for computation of a wide class of similarity measures. This algorithm enables the investigation of alternative representations of problem domain knowledge other than kernel functions. As an example, two applications are presented for which replacement of a kernel – or equivalently, the Euclidean distance – with a different similarity measure yields a significant improvement of accuracy in an unsupervised learning scenario. The rest of the paper is organized as follows. Section 2 provides a brief review of common similarity measures for sequential data and introduces a generic form in which a large variety of them can be cast. The generalized suffix tree and a corresponding algorithm for linear-time computation of similarity measures are presented in Section 3. Finally, the experiments in Section 4 demonstrate efficiency and utility of the proposed algorithm on real-world applications: network intrusion detection, DNA sequence analysis and text processing. 
2 Similarity measures for sequences

2.1 Embedding of sequences

A common way to define similarity measures for sequential data is via explicit embedding into a high-dimensional feature space. A sequence x is defined as a concatenation of symbols from a finite alphabet Σ. To model the content of a sequence, we consider a language L ⊆ Σ* comprising subsequences w ∈ L. We refer to these subsequences as words, even though they may not correspond to a natural language. Typical examples for L are a "bag of words" [e.g. 22], the set of all sequences of fixed length (k-grams or k-mers) [e.g. 10; 23] or the set of all contained subsequences [e.g. 8; 24]. Given a language L, a sequence x can be mapped into an |L|-dimensional feature space by calculating an embedding function φw(x) for every w ∈ L appearing in x. The function φw is defined as follows:

φw : Σ* → R⁺ ∪ {0},  φw(x) := ψ(occ(w, x)) · Ww  (1)

where occ(w, x) is the number of occurrences of w in x, ψ is a numerical transformation, e.g. a conversion to frequencies, and Ww is a weighting assigned to individual words, e.g. length-dependent or position-dependent weights [cf. 3; 24]. By employing the feature space induced through L and φ, one can adapt many vectorial similarity measures to operate on sequences. The feature space defined via explicit embedding is sparse, since the number of non-zero dimensions for each feature vector is bounded by the sequence length. Thus the essential parameter for measuring the complexity of computation is the sequence length, denoted hereinafter as n. Furthermore, the length of a word |w|, or in the case of a set of words the maximum length, is denoted by k.

2.2 Vectorial similarity measures

Several vectorial kernel and distance functions can be applied to the proposed embedding of sequential data. A list of common functions in terms of L and φ is given in Table 1.
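For the k-gram language, the embedding (1) reduces to counting substrings of length k and applying ψ and the weighting Ww; a minimal sparse-dictionary sketch (with identity ψ and unit weights as hedged defaults, names ours):

```python
from collections import Counter

def kgram_embedding(x, k, psi=lambda c: c, weight=lambda w: 1.0):
    """Sparse embedding phi_w(x) = psi(occ(w, x)) * W_w over the language
    of all k-grams, cf. Eq. (1); psi and the weighting W are pluggable."""
    counts = Counter(x[i:i + k] for i in range(len(x) - k + 1))
    return {w: psi(c) * weight(w) for w, c in counts.items()}
```

Only words that actually occur in x get a dictionary entry, which mirrors the sparsity argument in the text: the number of non-zero dimensions is bounded by the sequence length.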
Table 1: Kernels and distances for sequential data

  Kernel function      k(x, y)
  Linear               Σ_{w∈L} φw(x) φw(y)
  Polynomial           (Σ_{w∈L} φw(x) φw(y) + θ)^d
  RBF                  exp(−d(x, y)² / σ)

  Distance function    d(x, y)
  Manhattan            Σ_{w∈L} |φw(x) − φw(y)|
  Canberra             Σ_{w∈L} |φw(x) − φw(y)| / (φw(x) + φw(y))
  Minkowski            (Σ_{w∈L} |φw(x) − φw(y)|^k)^{1/k}
  Hamming              Σ_{w∈L} sgn |φw(x) − φw(y)|
  Chebyshev            max_{w∈L} |φw(x) − φw(y)|

Table 2: Similarity coefficients for sequential data

  Simpson                      a / min(a + b, a + c)
  Jaccard                      a / (a + b + c)
  Braun-Blanquet               a / max(a + b, a + c)
  Czekanowski, Sorensen-Dice   2a / (2a + b + c)
  Sokal-Sneath, Anderberg      a / (a + 2(b + c))
  Kulczynski (1st)             a / (b + c)
  Kulczynski (2nd)             (1/2)(a/(a + b) + a/(a + c))
  Otsuka, Ochiai               a / sqrt((a + b)(a + c))

Beside kernel and distance functions, a set of rather exotic similarity coefficients is also suitable for application to sequential data [25]. The coefficients are constructed using three summation variables a, b and c, which in the case of binary vectors correspond to the number of matching component pairs (1-1), left mismatching pairs (0-1) and right mismatching pairs (1-0) [cf. 26; 27]. Common similarity coefficients are given in Table 2. For application to non-binary data these summation variables can be extended as proposed in [25]:

  a = Σ_{w∈L} min(φw(x), φw(y))
  b = Σ_{w∈L} [φw(x) − min(φw(x), φw(y))]
  c = Σ_{w∈L} [φw(y) − min(φw(x), φw(y))]

2.3 A generic representation

One can easily see that the presented similarity measures can be cast in a generic form that consists of an outer function ⊕ and an inner function m:

  s(x, y) = ⊕_{w∈L} m(φw(x), φw(y))  (2)

Given this definition, the kernel and distance functions presented in Table 1 can be re-formulated in terms of ⊕ and m. Adaptation of similarity coefficients to the generic form (2) involves a reformulation of the summation variables a, b and c. The particular definitions of outer and inner functions for the presented similarity measures are given in Table 3.
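The generic form (2) can be exercised directly on sparse dictionary embeddings (word → φw value) by aggregating an inner function m over the union of words, with + or max as the outer function; a small sketch (the helper names are ours, not the paper's):

```python
def generic_similarity(phi_x, phi_y, inner, outer_sum=True):
    """Generic form (2): aggregate m(phi_w(x), phi_w(y)) over the union
    of words with a non-zero embedding; the outer function is + (sum)
    or max (as for the Chebyshev distance)."""
    words = set(phi_x) | set(phi_y)
    vals = [inner(phi_x.get(w, 0.0), phi_y.get(w, 0.0)) for w in words]
    return sum(vals) if outer_sum else max(vals)

# inner functions for the linear kernel and the Manhattan distance
linear    = lambda a, b: a * b
manhattan = lambda a, b: abs(a - b)
```

Iterating over the union of words is exactly what distances and similarity coefficients require; for kernels, the intersection would suffice since m(a, 0) = 0 for the linear inner function.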
The polynomial and RBF kernels are not shown, since they can be expressed in terms of a linear kernel or a distance, respectively.

Table 3: Generalized formulation of similarity measures

  Kernel function     ⊕     m(x, y)
  Linear              +     x · y

  Similarity coef.    ⊕     m(x, y)
  Variable a          +     min(x, y)
  Variable b          +     x − min(x, y)
  Variable c          +     y − min(x, y)

  Distance function   ⊕     m(x, y)
  Manhattan           +     |x − y|
  Canberra            +     |x − y| / (x + y)
  Minkowski^k         +     |x − y|^k
  Hamming             +     sgn |x − y|
  Chebyshev           max   |x − y|

3 Generalized suffix trees for comparison of sequences

The key to efficient comparison of two sequences lies in considering only the minimum set of words necessary for computation of the generic form (2) of similarity measures. In the case of kernels only the intersection of the words in both sequences needs to be considered, while the union of words is needed for calculating distances and non-metric similarity coefficients. A simple and well-known approach for such comparison is representing the words of each sequence in a sorted list. For words of maximum length k such a list can be constructed in O(kn log n) using general sorting or O(kn) using radix-sort. If the length of words k is unbounded, sorted lists are no longer an option as the sorting time becomes quadratic. Thus, special data structures are needed for efficient comparison of sequences. Two data structures previously used for computation of kernels are tries [28; 29] and suffix trees [30]. Both have been applied for computation of a variety of kernel functions in O(kn) [3; 10] and also in O(n) run-time using matching statistics [24]. In this contribution we will argue that a generalized suffix tree is suitable for computation of all similarity measures of the form (2) in O(n) run-time. A generalized suffix tree (GST) is a tree containing all suffixes of a set of strings x1, . . . , xl [31]. The simplest way to construct a generalized suffix tree is to extend each string xi with a delimiter $i and to apply a suffix tree construction algorithm [e.g.
32] to the concatenation of strings x1$1 . . . xl$l. In the remaining part we will restrict ourselves to the case of two strings x and y delimited by # and $; computation of an entire similarity matrix using a single GST for a set of strings is a straightforward extension. An example of a generalized suffix tree for the strings "aab#" and "babab$" is shown in Fig. 1(a).

[Figure 1: Generalized suffix tree for "aab#" and "babab$" (a), and a snapshot of its traversal (b).]

Once a generalized suffix tree is constructed, it remains to determine the number of occurrences occ(w, x) and occ(w, y) of each word w present in the sequences x and y. Unlike the case for kernels, for which only nodes corresponding to both sequences need to be considered [24], the contributions must be correctly computed for all nodes in the generalized suffix tree. The following simple recursive algorithm computes a generic similarity measure between the sequences x and y in one depth-first traversal of the generalized suffix tree (cf. Algorithm 1). The algorithm exploits the fact that a leaf in a GST representing a suffix of x contributes exactly 1 to occ(w, x) if w is a prefix of this suffix, and similarly for y and occ(w, y). As the GST contains all suffixes of x and y, every word w in x and y is represented by at least one leaf. Whether a leaf contributes to x or y can be determined by considering the edge at the leaf. Due to the uniqueness of the delimiter #, no branching nodes can occur below an edge containing #; thus a leaf node at an edge starting before the index of # must contain a suffix of x; otherwise it contains a suffix of y. The contributions of all leaves are aggregated in two variables x and y during a post-order traversal.
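The quantity this traversal accumulates, an aggregate of m over the occurrence counts of every word appearing in either sequence, can be checked against a brute-force reference that enumerates all substrings explicitly. This quadratic-time sketch (ours, not the paper's implementation) computes the same sums that the linear-time GST approach produces for the unrestricted language, assuming identity ψ and unit weights:

```python
from collections import Counter

def all_substrings(s):
    """occ(w, s) for every non-empty substring w (the unrestricted
    embedding language). Quadratic in len(s); the GST avoids this."""
    return Counter(s[i:j] for i in range(len(s))
                          for j in range(i + 1, len(s) + 1))

def compare(x, y, inner, outer=sum):
    """Brute-force reference for the result of the GST traversal:
    aggregate inner(occ(w, x), occ(w, y)) over the union of all
    substrings of x and y."""
    cx, cy = all_substrings(x), all_substrings(y)
    words = set(cx) | set(cy)
    return outer(inner(cx.get(w, 0), cy.get(w, 0)) for w in words)
```

Such a reference is mainly useful for testing: the GST algorithm must return the same value on small inputs.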
At each node the inner function m of (2) is calculated using ψ(x) and ψ(y) according to the embedding φ in (1). A snapshot of the traversal procedure is illustrated in Fig. 1(b). To account for implicit nodes along the edges of the GST and to support weighted embeddings φ, the weighting function WEIGHT introduced in [24] is employed. At a node v the function takes the beginning (begin[v]) and the end (end[v]) of the incoming edge and the depth of the node (depth[v]) as arguments to determine how much the node and edge contribute to the similarity measure; e.g. for k-gram models only nodes up to a path depth of k need to be considered.

Algorithm 1 Suffix tree comparison
 1: function COMPARE(x, y)
 2:   S ← SUFFIXTREE(x # y $)
 3:   (x, y, s) ← MATCH(root[S])
 4:   return s
 5:
 6: function MATCH(v)
 7:   if v is leaf then
 8:     s ← 0
 9:     if begin[v] ≤ index# then
10:       (x, y) ← (1, 0)           ⊲ Leaf of a suffix of x
11:       j ← index# − 1
12:     else
13:       (x, y) ← (0, 1)           ⊲ Leaf of a suffix of y
14:       j ← index$ − 1
15:   else
16:     (x, y, s) ← (0, 0, 0)
17:     for all c in children[v] do
18:       (x̂, ŷ, ŝ) ← MATCH(c)     ⊲ Traverse GST
19:       (x, y, s) ← (x + x̂, y + ŷ, s ⊕ ŝ)
20:     j ← end[v]
21:   W ← WEIGHT(begin[v], j, depth[v])
22:   s ← s ⊕ m(ψ(x)W, ψ(y)W)      ⊲ Cf. definitions in (1) and (2)
23:   return (x, y, s)

Similarly to the extension of string kernels proposed in [33], the GST traversal can be performed on an enhanced suffix array [34] for further run-time and space reduction. To prove correctness of our algorithm, a different approach must be taken than the one in [24]. We cannot claim that the computed similarity value is equivalent to the one returned by the matching statistics algorithm, since the latter is restricted to kernel functions. Instead we show that at each recursive call to the MATCH function the correct numbers of occurrences are maintained.

Theorem 1.
A word w occurs occ(w, x) and occ(w, y) times in x and y, respectively, if and only if MATCH(w̄) returns x = occ(w, x) and y = occ(w, y), where w̄ is the node at the end of the path from the root labeled with w in the generalized suffix tree of x and y.

Proof. If w occurs m times in x, there exist exactly m suffixes of x with w as a prefix. Since w corresponds to a path from the root of the GST to the node w̄, all m suffixes must pass through w̄. Due to the unique delimiter #, each suffix of x corresponds to one leaf node in the GST whose incoming edge contains #. Hence m equals occ(w, x) and is exactly the aggregated quantity x returned by MATCH(w̄). Likewise, occ(w, y) is the number of suffixes beginning after # and having the prefix w, which is computed by y.

4 Experimental Results

4.1 Run-time experiments

In order to illustrate the efficiency of the proposed algorithm, we conducted run-time experiments on three benchmark data sets for sequential data: network connection payloads from the DARPA 1999 IDS evaluation [35], news articles from the Reuters-21578 data set [36] and DNA sequences from the human genome [14]. Table 4 gives an overview of the data sets and their specific properties.

  Name   Type                        Alphabet   Min. length   Max. length
  DNA    Human genome sequences      4          2400          2400
  NIDS   TCP connection payloads     108        53            132753
  TEXT   Reuters newswire articles   93         43            10002

  Table 4: Sequential data sets

We compared the run-time of the generalized suffix tree algorithm with a recent trie-based method supporting computation of distances. Tries yield better or equal run-time complexity for computation of similarity measures over k-grams than algorithms using indexed arrays and hash tables. A detailed description of the trie-based approach is given in [25]. Note that in all of the following experiments tries were generated in a pre-processing step and the reported run-time corresponds to the comparison procedure only. For each of the three data sets, we implemented the following experimental protocol: the Manhattan distances were calculated for 1000 pairs of randomly selected sequences using k-grams as an embedding language. The procedure was repeated 10 times for various values of k, and the run-time was averaged over all runs. Fig. 2 compares the run-time of sequence comparison algorithms using generalized suffix trees and tries. On all three data sets the trie-based comparison has a low run-time for small values of k but grows linearly with k. The algorithm using a generalized suffix tree is independent of the complexity of the embedding language, although this comes at the price of higher constants due to a more complex data structure. It is obvious that a generalized suffix tree is the algorithm of choice for higher values of k.

[Figure 2: Run-time performance for varying k-gram lengths. Panels (a) NIDS, (b) TEXT and (c) DNA plot the mean run-time per 1000 comparisons (s) for the Manhattan distance against the k-gram length, comparing trie-based and GST-based computation.]

4.2 Applications

As a second part of our evaluation, we show that the ability of our approach to compute diverse similarity measures pays off when it comes to real applications, especially in an unsupervised learning scenario. The experiments were performed for (a) intrusion detection in real network traffic and (b) transcription start site (TSS) recognition in DNA sequences. For the first application, network data was generated by members of our laboratory using virtual network servers. Recent attacks were injected by a penetration-testing expert.
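All three similarity measures used in these applications are instances of the outer/inner decomposition of (1) and (2) over k-gram occurrence histograms. The sketch below is ours, not from the paper: it uses naive O(k·n) counting rather than the GST traversal, and since the paper does not spell out the exact definition of the Kulczynski coefficient, the binary second-coefficient variant used here is an assumption.

```python
from collections import Counter

def kgrams(s, k):
    """occ(w, s) for every k-gram w of s, by naive counting."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def linear_kernel(cx, cy):
    # outer = sum, inner m(a, b) = a * b
    return sum(cx[w] * cy[w] for w in cx if w in cy)

def manhattan(cx, cy):
    # outer = sum, inner m(a, b) = |a - b|
    return sum(abs(cx[w] - cy[w]) for w in set(cx) | set(cy))

def kulczynski(cx, cy):
    # Kulczynski's second coefficient on binary k-gram presence
    # (assumed variant; several definitions exist in numerical taxonomy [26])
    a = len(set(cx) & set(cy))      # k-grams occurring in both sequences
    b = len(set(cx) - set(cy))      # k-grams occurring only in x
    c = len(set(cy) - set(cx))      # k-grams occurring only in y
    return 0.5 * (a / (a + b) + a / (a + c)) if a else 0.0

cx, cy = kgrams("abab", 2), kgrams("abba", 2)
```

The GST algorithm of Section 3 computes the same aggregates in time linear in the sequence lengths, independent of k.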
The distance-based anomaly detection method Zeta [17] was applied to 5-grams extracted from the byte sequences of TCP connections, using different similarity measures: the linear kernel, the Manhattan distance and the Kulczynski coefficient. The results on network data from the HTTP protocol are shown in Fig. 3(a). Application of the Kulczynski coefficient yields the highest detection accuracy: over 78% of all attacks are identified with no false positives in an unsupervised setup. In comparison, the linear kernel yields roughly 30% lower detection rates.

The second application focused on TSS recognition in DNA sequences. The data set comprises fixed-length DNA sequences that either cover the TSS of protein-coding genes or have been extracted randomly from the interior of genes [14]. We evaluated three methods on this data: an unsupervised k-nearest neighbor (kNN) classifier, a supervised and bagged kNN classifier, and a Support Vector Machine (SVM). Each method was trained and tested using the linear kernel and the Manhattan distance as a similarity measure over 4-grams. Fig. 3(b) shows the performance achieved by the unsupervised and supervised versions of the kNN classifier; results for the SVM are similar to the supervised kNN and have been omitted. Even though the linear kernel and the Manhattan distance yield similar accuracy in a supervised setup, their performance differs significantly in unsupervised application. In the absence of prior knowledge of labels, the Manhattan distance exhibits better discriminative properties for TSS recognition than the linear kernel. For the supervised application the classification performance is bounded for both similarity measures, since only some discriminative features for TSS recognition are encapsulated in n-gram models [14].

[Figure 3: Comparison of similarity measures on the network and DNA data. Panel (a) shows ROC curves (true positive rate vs. false positive rate) for intrusion detection in HTTP using the Kulczynski coefficient, the linear kernel and the Manhattan distance with unsupervised kNN; panel (b) shows ROC curves for transcription start site recognition using the linear kernel and the Manhattan distance with unsupervised and supervised kNN.]

5 Conclusions

Kernel functions for sequences have recently gained strong attention in many applications of machine learning, especially in bioinformatics and natural language processing. In this contribution we have shown that other similarity measures, such as metric distances or non-metric similarity coefficients, can be computed with the same run-time complexity as kernel functions. The proposed algorithm is based on a post-order traversal of a generalized suffix tree of two or more sequences. During the traversal, the counts of matching and mismatching words from an embedding language are computed in time linear in the sequence length, regardless of the particular kind of chosen language: words, k-grams or even all consecutive subsequences. By using a generic representation of the considered similarity measures based on an outer and an inner function, the same algorithm can be applied to various kernel, distance and similarity functions on sequential data. Our experiments demonstrate that the use of general similarity measures can bring significant improvement to learning accuracy, in our case observed for unsupervised learning, and emphasize the importance of further investigation of distance- and similarity-based learning algorithms.

Acknowledgments

The authors gratefully acknowledge funding from Bundesministerium für Bildung und Forschung under the project MIND (FKZ 01-SC40A) and would like to thank Klaus-Robert Müller and Mikio Braun for fruitful discussions and support.

References

[1] V.N. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[2] B. Schölkopf and A.J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[3] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[4] C. Watkins. Dynamic alignment kernels. In A.J. Smola, P.L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 39–50, Cambridge, MA, 2000. MIT Press.
[5] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, UC Santa Cruz, July 1999.
[6] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. Technical Report 23, LS VIII, University of Dortmund, 1997.
[7] E. Leopold and J. Kindermann. Text categorization with support vector machines. How to represent texts in input space? Machine Learning, 46:423–444, 2002.
[8] H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text classification using string kernels. Journal of Machine Learning Research, 2:419–444, 2002.
[9] A. Zien, G. Rätsch, S. Mika, B. Schölkopf, T. Lengauer, and K.-R. Müller. Engineering support vector machine kernels that recognize translation initiation sites. Bioinformatics, 16(9):799–807, September 2000.
[10] C. Leslie, E. Eskin, and W.S. Noble. The spectrum kernel: A string kernel for SVM protein classification. In Proc. Pacific Symp. Biocomputing, pages 564–575, 2002.
[11] C. Leslie, E. Eskin, A. Cohen, J. Weston, and W.S. Noble. Mismatch string kernels for discriminative protein classification. Bioinformatics, 1(1):1–10, 2003.
[12] J. Rousu and J. Shawe-Taylor. Efficient computation of gapped substring kernels for large alphabets. Journal of Machine Learning Research, 6:1323–1344, 2005.
[13] G. Rätsch, S. Sonnenburg, and B. Schölkopf. RASE: recognition of alternatively spliced exons in C. elegans. Bioinformatics, 21:i369–i377, June 2005.
[14] S. Sonnenburg, A. Zien, and G. Rätsch. ARTS: Accurate recognition of transcription starts in human. Bioinformatics, 22(14):e472–e480, 2006.
[15] H. Drucker, D. Wu, and V.N. Vapnik. Support vector machines for spam categorization. IEEE Transactions on Neural Networks, 10(5):1048–1054, 1999.
[16] E. Eskin, A. Arnold, M. Prerau, L. Portnoy, and S. Stolfo. A geometric framework for unsupervised anomaly detection: detecting intrusions in unlabeled data. In Applications of Data Mining in Computer Security. Kluwer, 2002.
[17] K. Rieck and P. Laskov. Detecting unknown network attacks using language models. In Proc. DIMVA, pages 74–90, July 2006.
[18] T. Graepel, R. Herbrich, P. Bollmann-Sdorra, and K. Obermayer. Classification on pairwise proximity data. In M.S. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing Systems, volume 11, pages 438–444. MIT Press, 1999.
[19] V. Roth, J. Laub, M. Kawanabe, and J.M. Buhmann. Optimal cluster preserving embedding of non-metric proximity data. IEEE Trans. PAMI, 25:1540–1551, December 2003.
[20] J. Laub and K.-R. Müller. Feature discovery in non-metric pairwise data. Journal of Machine Learning Research, 5(Jul):801–818, July 2004.
[21] C. Ong, X. Mary, S. Canu, and A.J. Smola. Learning with non-positive kernels. In Proc. ICML, pages 639–646, 2004.
[22] G. Salton. Mathematics and information retrieval. Journal of Documentation, 35(1):1–29, 1979.
[23] M. Damashek. Gauging similarity with n-grams: Language-independent categorization of text. Science, 267(5199):843–848, 1995.
[24] S.V.N. Vishwanathan and A.J. Smola. Fast kernels for string and tree matching. In Kernels and Bioinformatics, pages 113–130. MIT Press, 2004.
[25] K. Rieck, P. Laskov, and K.-R. Müller. Efficient algorithms for similarity measures over sequential data: A look beyond kernels. In Proc. DAGM, pages 374–383, September 2006.
[26] R.R. Sokal and P.H. Sneath. Principles of Numerical Taxonomy. Freeman, San Francisco, CA, USA, 1963.
[27] M.R. Anderberg. Cluster Analysis for Applications. Academic Press, New York, NY, USA, 1973.
[28] E. Fredkin. Trie memory. Communications of the ACM, 3(9):490–499, 1960.
[29] D. Knuth. The Art of Computer Programming, volume 3. Addison-Wesley, 1973.
[30] P. Weiner. Linear pattern matching algorithms. In Proc. 14th Annual Symposium on Switching and Automata Theory, pages 1–11, 1973.
[31] D. Gusfield. Algorithms on Strings, Trees, and Sequences. Cambridge University Press, 1997.
[32] E. Ukkonen. On-line construction of suffix trees. Algorithmica, 14(3):249–260, 1995.
[33] C.H. Teo and S.V.N. Vishwanathan. Fast and space efficient string kernels using suffix arrays. In Proc. 23rd ICML, pages 929–936. ACM Press, 2006.
[34] M.I. Abouelhoda, S. Kurtz, and E. Ohlebusch. Replacing suffix trees with enhanced suffix arrays. Journal of Discrete Algorithms, 2(1):53–86, 2002.
[35] R. Lippmann, J.W. Haines, D.J. Fried, J. Korba, and K. Das. The 1999 DARPA off-line intrusion detection evaluation. Computer Networks, 34(4):579–595, 2000.
[36] D.D. Lewis. Reuters-21578 text categorization test collection. AT&T Labs Research, 1997.
Bayesian Image Super-resolution, Continued

Lyndsey C. Pickup, David P. Capel†, Stephen J. Roberts, Andrew Zisserman
Information Engineering Building, Dept. of Eng. Science, Parks Road, Oxford, OX1 3PJ, UK
{elle,sjrob,az}@robots.ox.ac.uk
† 2D3, d.capel@2d3.com

Abstract

This paper develops a multi-frame image super-resolution approach from a Bayesian viewpoint by marginalizing over the unknown registration parameters relating the set of input low-resolution views. In Tipping and Bishop's Bayesian image super-resolution approach [16], the marginalization was over the super-resolution image, necessitating the use of an unfavorable image prior. By integrating over the registration parameters rather than the high-resolution image, our method allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. In addition to the motion model used by Tipping and Bishop, illumination components are introduced into the generative model, allowing us to handle changes in lighting as well as motion. We show results on real and synthetic datasets to illustrate the efficacy of this approach.

1 Introduction

Multi-frame image super-resolution refers to the process by which a group of images of the same scene are fused to produce an image or images with a higher spatial resolution, or with more visible detail in the high spatial frequency features [7]. Such problems are common, with everything from holiday snaps and DVD frames to satellite terrain imagery providing collections of low-resolution images to be enhanced, for instance to produce a more aesthetic image for media publication [15], or for higher-level vision tasks such as object recognition or localization [5]. Limits on the resolution of the original imaging device can be improved by exploiting the relative sub-pixel motion between the scene and the imaging plane.
No matter how accurate the registration estimate, there will be some residual uncertainty associated with the parameters [13]. We propose a scheme to deal with this uncertainty by integrating over the registration parameters, and demonstrate improved results on synthetic and real digital image data. Image registration and super-resolution are often treated as distinct processes, to be considered sequentially [1, 3, 7]. Hardie et al. demonstrated that the low-resolution image registration can be updated using the super-resolution image estimate, and that this improves a Maximum a Posteriori (MAP) super-resolution image estimate [5]. More recently, Pickup et al. used a similar joint MAP approach to learn more general geometric and photometric registrations, the super-resolution image, and values for the prior’s parameters simultaneously [12]. Tipping and Bishop’s Bayesian image super-resolution work [16] uses a Maximum Likelihood (ML) point estimate of the registration parameters and the camera imaging blur, found by integrating the high-resolution image out of the registration problem and optimizing the marginal probability of the observed low-resolution images directly. This gives an improvement in the accuracy of the recovered registration (measured against known truth on synthetic data) compared to the MAP approach. The image-integrating Bayesian super-resolution method [16] is extremely costly in terms of computation time, requiring operations that scale with the cube of the total number of high-resolution pixels, severely limiting the size of the image patches over which they perform the registration (they use 9 × 9 pixel patches). The marginalization also requires a form of prior on the super-resolution image that renders the integral tractable, though priors such as Tipping and Bishop’s chosen Gaussian form are known to be poor for tasks such as edge preservation, and much super-resolution work has employed other more favorable priors [2, 3, 4, 11, 14]. 
It is generally more desirable to integrate over the registration parameters rather than the super-resolution image, because it is the registration that constitutes the "nuisance parameters", and the super-resolution image that we wish to estimate. We derive a new view of Bayesian image super-resolution in which a MAP high-resolution image estimate is found by marginalizing over the uncertain registration parameters. Memory requirements are considerably lower than in the image-integrating case; while the algorithm is more costly than a simple MAP super-resolution estimate, it is not infeasible to run on images of several hundred pixels in size. Sections 2 and 3 develop the model and the proposed objective function. Section 4 evaluates results on synthetically-generated sequences (with ground truth for comparison), and on a real data example. A discussion of this approach and concluding remarks can be found in Section 5.

2 Generative model

The generative model for multi-frame super-resolution assumes a known scene x (vectorized, size N × 1), and a given registration vector θ^{(k)}. These are used to generate a vectorized low-resolution image y^{(k)} with M pixels through a system matrix W^{(k)}. Gaussian i.i.d. noise with precision β is then added to y^{(k)},

    y^{(k)} = \lambda_\alpha^{(k)} W(\theta^{(k)}) x + \lambda_\beta^{(k)} + \epsilon^{(k)}    (1)
    \epsilon^{(k)} \sim \mathcal{N}(0, \beta^{-1} I).    (2)

Photometric parameters λ_α and λ_β provide a global affine correction for the scene illumination, and λ_β^{(k)} is simply an M × 1 vector filled out with the value of λ_β. Each row of W^{(k)} constructs a single pixel in y^{(k)}, and the row's entries are the vectorized point-spread function (PSF) response for that low-resolution pixel, expressed in the frame of the super-resolution image [2, 3, 16]. The PSF is usually assumed to be an isotropic Gaussian on the imaging plane, though for some motion models (e.g. planar projective) this does not necessarily lead to a Gaussian distribution in the frame of x.
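As a concrete illustration of (1), the following 1-D toy sketch (ours, not from the paper; all parameter values are illustrative) builds a small system matrix W(θ) whose rows are normalized Gaussian PSFs centred at sub-pixel-shifted low-resolution sample sites, applies the affine photometric correction, and adds noise of precision β:

```python
import numpy as np

def lowres_observation(x, shift, zoom, psf_sigma, lam_a, lam_b, beta, rng):
    """Toy 1-D analogue of (1): y = lam_a * W(theta) x + lam_b + noise,
    where W(theta) blurs x with a Gaussian PSF at shifted sample sites."""
    N, M = len(x), len(x) // zoom
    centres = (np.arange(M) + 0.5) * zoom - 0.5 + shift   # sub-pixel shift theta
    grid = np.arange(N)
    W = np.exp(-0.5 * ((grid[None, :] - centres[:, None]) / psf_sigma) ** 2)
    W /= W.sum(axis=1, keepdims=True)                     # each row sums to one
    noise = rng.normal(0.0, beta ** -0.5, M)              # i.i.d., precision beta
    return lam_a * (W @ x) + lam_b + noise

rng = np.random.default_rng(0)
y = lowres_observation(np.ones(16), 0.3, 4, 2.0, 2.0, 0.5, 1e6, rng)
```

Since the rows of W sum to one, a constant scene x maps to λ_α·x + λ_β plus noise, which makes the affine photometric part of the model easy to verify.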
For an individual low-resolution image, given the registrations and x, the data likelihood is

    p(y^{(k)} | x, \theta^{(k)}, \lambda^{(k)}) = \left(\frac{\beta}{2\pi}\right)^{M/2} \exp\left( -\frac{\beta}{2} \left\| y^{(k)} - \lambda_\alpha^{(k)} W(\theta^{(k)}) x - \lambda_\beta^{(k)} \right\|_2^2 \right).    (3)

When the registration is known approximately, for instance by pre-registering the inputs, the uncertainty can be modeled as a Gaussian perturbation about the mean estimate \bar\theta^{(k)} for each image's parameter set, with covariance C, which we restrict to be a diagonal matrix,

    [\theta^{(k)}; \lambda_\alpha^{(k)}; \lambda_\beta^{(k)}] = [\bar\theta^{(k)}; \bar\lambda_\alpha^{(k)}; \bar\lambda_\beta^{(k)}] + \delta^{(k)}    (4)
    \delta^{(k)} \sim \mathcal{N}(0, C)    (5)
    p(\theta^{(k)}, \lambda^{(k)}) = \left( \frac{|C^{-1}|}{(2\pi)^n} \right)^{1/2} \exp\left( -\frac{1}{2} \delta^{(k)T} C^{-1} \delta^{(k)} \right).    (6)

A Huber prior is assumed for the directional image gradients Dx of the super-resolution image x (in the horizontal, vertical, and two diagonal directions),

    p(x) = \frac{1}{Z_x} \exp\left\{ -\frac{\nu}{2} \rho(Dx, \alpha) \right\}    (7)
    \rho(z, \alpha) = \begin{cases} z^2 & \text{if } |z| < \alpha \\ 2\alpha|z| - \alpha^2 & \text{otherwise} \end{cases}    (8)

where α is a parameter of the Huber potential function, and ν is the prior strength parameter. This belongs to a family of functions often favored over Gaussians for super-resolution image priors [2, 3, 14], because the Huber distribution's heavy tails mean image edges are penalized less severely. The difficulty of computing the partition function Z_x is a consideration when marginalizing over x as in [16], though for the MAP image estimate a value for this scale factor is not required.

Regardless of the exact forms of these probability distributions, probabilistic super-resolution algorithms can usually be interpreted in one of the following ways. The most popular approach to super-resolution is to obtain a MAP estimate, typically using an iterative scheme to maximize p(x | {y^{(k)}}, {\theta^{(k)}}, {\lambda^{(k)}}) with respect to x, where

    p(x | \{y^{(k)}, \theta^{(k)}, \lambda^{(k)}\}) = \frac{ p(x) \prod_{k=1}^K p(y^{(k)} | x, \theta^{(k)}, \lambda^{(k)}) }{ p(\{y^{(k)}\} | \{\theta^{(k)}, \lambda^{(k)}\}) },    (9)

and the denominator is an unknown scaling factor. Tipping and Bishop's approach takes an ML estimate of the registration by marginalizing over x, then calculates the super-resolution estimate as in (9).
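The Huber potential of (8), which underlies the image prior in (7), can be written directly; a minimal sketch (ours):

```python
import numpy as np

def huber(z, alpha):
    """Huber potential rho(z, alpha) of (8): quadratic for |z| < alpha,
    linear beyond, so large image gradients (edges) are penalized less."""
    z = np.asarray(z, dtype=float)
    return np.where(np.abs(z) < alpha,
                    z ** 2,
                    2.0 * alpha * np.abs(z) - alpha ** 2)
```

At |z| = α the two branches agree (both equal α²), so the potential is continuous, and its gradient is continuous as well, which matters for the conjugate-gradient optimization used later.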
While Tipping and Bishop did not include a photometric model, the equivalent expression to be maximized with respect to θ and λ is

    p(\{y^{(k)}\} | \{\theta^{(k)}, \lambda^{(k)}\}) = \int p(x) \prod_{k=1}^K p(y^{(k)} | x, \theta^{(k)}, \lambda^{(k)}) \, dx.    (10)

Note that Tipping and Bishop's work does employ the same data likelihood expression as in (3), which forced them to select a Gaussian form for p(x), rather than a more suitable image prior, in order to keep the integral tractable. Finally, in this paper we find x by marginalizing over θ and λ, so that a MAP estimate of x can be obtained by maximizing p(x | {y^{(k)}}) directly with respect to x. This is achieved by finding

    p(x | \{y^{(k)}\}) = \frac{p(x)}{p(\{y^{(k)}\})} \int \prod_{k=1}^K p(\theta^{(k)}, \lambda^{(k)}) \, p(y^{(k)} | x, \theta^{(k)}, \lambda^{(k)}) \, d\{\theta, \lambda\},    (11)

which is developed further in the next section. Note that the integral does not involve the prior p(x).

3 Marginalizing over registration parameters

In order to obtain an expression for p(x | {y^{(k)}}) from expressions (3), (6) and (7) above, the parameter variations δ^{(k)} must be integrated out of the problem. Registration estimates \bar\theta^{(k)}, \bar\lambda_\alpha and \bar\lambda_\beta can be obtained using classical registration methods, either intensity-based [8] or estimation from image points [6], and the diagonal matrix C is constructed to reflect the confidence in each parameter estimate. This might mean a standard deviation of a tenth of a low-resolution pixel on image translation parameters, or a few gray levels' shift on the illumination model, for instance. The integral performed is

    p(x | \{y^{(k)}\}) = \frac{1}{p(\{y^{(k)}\})} \left( \frac{\beta}{2\pi} \right)^{KM/2} \left( \frac{|C^{-1}|}{(2\pi)^n} \right)^{K/2} \frac{1}{Z_x} \exp\left\{ -\frac{\nu}{2} \rho(Dx, \alpha) \right\}
        \times \int \exp\left\{ -\sum_{k=1}^K \left[ \frac{\beta}{2} \left\| y^{(k)} - \lambda_\alpha^{(k)} W(\theta^{(k)}) x - \lambda_\beta^{(k)} \right\|_2^2 + \frac{1}{2} \delta^{(k)T} C^{-1} \delta^{(k)} \right] \right\} d\delta,    (12)

where \delta^T = (\delta^{(1)T}, \delta^{(2)T}, \ldots, \delta^{(K)T}) and all the λ and θ parameters are functions of δ as in (4).
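The registration integral in (12) becomes tractable because, once the exponent has been made quadratic in δ, it is a standard multivariate Gaussian integral with a closed-form value. A quick 1-D numerical check of that identity (our sketch; the constants β, g, s are arbitrary):

```python
import numpy as np

# 1-D instance of the Gaussian integral used to evaluate (12):
#   integral of exp(-(beta/2) g d - (1/2) s d^2) dd
#     = sqrt(2 pi / s) * exp(beta^2 g^2 / (8 s))
beta, g, s = 1.5, 0.7, 2.0
d = np.linspace(-20.0, 20.0, 200001)
vals = np.exp(-0.5 * beta * g * d - 0.5 * s * d ** 2)
lhs = np.sum(vals) * (d[1] - d[0])          # simple quadrature
rhs = np.sqrt(2.0 * np.pi / s) * np.exp(beta ** 2 * g ** 2 / (8.0 * s))
```

Completing the square in the exponent gives the exp(β²g²/(8s)) factor, which is exactly the form that reappears in the multivariate case with g → G and s → S.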
Expanding the data error term in the exponent for each low-resolution image as a second-order Taylor series about the estimated registration parameters yields

    e^{(k)}(\delta) = \left\| y^{(k)} - \lambda_\alpha^{(k)}(\delta) W(\theta^{(k)}(\delta)) x - \lambda_\beta^{(k)}(\delta) \right\|_2^2    (13)
                    = F^{(k)} + G^{(k)T} \delta^{(k)} + \frac{1}{2} \delta^{(k)T} H^{(k)} \delta^{(k)}.    (14)

Values for F, G and H can be found numerically (for the geometric registrations) or analytically (for the photometric parameters) from x and \{y^{(k)}, \theta^{(k)}, \lambda_\alpha^{(k)}, \lambda_\beta^{(k)}\}. Thus the whole exponent of (12), f, becomes

    f = \sum_{k=1}^K \left[ -\frac{\beta}{2} F^{(k)} - \frac{\beta}{2} G^{(k)T} \delta^{(k)} - \frac{1}{2} \delta^{(k)T} \left( \frac{\beta}{2} H^{(k)} + C^{-1} \right) \delta^{(k)} \right]    (15)
      = -\frac{\beta}{2} F - \frac{\beta}{2} G^T \delta - \frac{1}{2} \delta^T \left( \frac{\beta}{2} H + V^{-1} \right) \delta,    (16)

where the omission of image superscripts indicates stacked matrices; H is therefore a block-diagonal nK × nK sparse matrix, and V is comprised of the repeated diagonal of C. Finally, letting S = \frac{\beta}{2} H + V^{-1},

    \int \exp\{f\} \, d\delta = \exp\left( -\frac{\beta}{2} F \right) \int \exp\left( -\frac{\beta}{2} G^T \delta - \frac{1}{2} \delta^T S \delta \right) d\delta    (17)
                              = \exp\left( -\frac{\beta}{2} F \right) (2\pi)^{nK/2} |S|^{-1/2} \exp\left( \frac{\beta^2}{8} G^T S^{-1} G \right).    (18)

The objective function L to be minimized with respect to x is obtained by taking the negative log of (12), using the result from (18), and neglecting the constant terms:

    L = \frac{\nu}{2} \rho(Dx, \alpha) + \frac{\beta}{2} F + \frac{1}{2} \log |S| - \frac{\beta^2}{8} G^T S^{-1} G.    (19)

This can be optimized using Scaled Conjugate Gradients (SCG) [9], noting that the gradient can be expressed as

    \frac{dL}{dx} = \frac{\nu}{2} D^T \frac{d}{dx}\rho(Dx) + \frac{\beta}{2} \frac{dF}{dx} - \frac{\beta^2}{4} G^T S^{-1} \frac{dG}{dx}
                  + \left[ \frac{\beta}{4} \mathrm{vec}(S^{-1})^T - \frac{\beta^3}{16} \left( G^T S^{-1} \otimes G^T S^{-1} \right) \right] \frac{d\,\mathrm{vec}(H)}{dx},    (20)

where derivatives of F, G and H with respect to x can be found analytically for the photometric parameters, and numerically (using the analytic gradient of e^{(k)}(\delta^{(k)}) with respect to x) for the geometric parameters.

3.1 Implementation notes

Notice that the value F from (16) is simply the reprojection error of the current estimate of x at the mean registration parameter values, and that gradients of this expression with respect to the λ parameters and with respect to x can both be found analytically.
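Given precomputed Taylor coefficients F, G, H and the registration prior covariance, the objective (19) is straightforward to evaluate; a minimal sketch (ours, with the Huber penalty ρ(Dx, α) assumed precomputed and additive constants dropped):

```python
import numpy as np

def neg_log_posterior(rho_Dx, F, G, H, V_inv, beta, nu):
    """Objective L of (19), up to additive constants.
    rho_Dx : scalar Huber penalty rho(Dx, alpha)
    F, G, H: Taylor coefficients of the stacked reprojection error (14)
    V_inv  : inverse covariance of the stacked registration perturbations"""
    S = 0.5 * beta * H + V_inv                 # S = (beta/2) H + V^{-1}
    _, logdet = np.linalg.slogdet(S)           # stable log |S|
    quad = G @ np.linalg.solve(S, G)           # G^T S^{-1} G without inverting S
    return (0.5 * nu * rho_Dx + 0.5 * beta * F
            + 0.5 * logdet - beta ** 2 / 8.0 * quad)
```

Using `slogdet` and `solve` avoids forming S^{-1} explicitly, which matters because S is a large, sparse block-diagonal matrix in the full problem.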
To find the gradient with respect to a geometric registration parameter θ_i^{(k)}, and the elements of the Hessian involving it, a central difference scheme involving only the kth image is used. Mean values for the registration are computed by standard registration techniques, and x is initialized using around 10 iterations of SCG to find the maximum likelihood solution evaluated at these mean parameters. Additionally, pixel values are scaled to lie between -1/2 and 1/2, and the ML solution is bounded to lie within these values in order to curb the severe overfitting usually observed in ML super-resolution results. In our implementation, the parameters representing the λ values are scaled so that they share the same standard deviations as the θ parameters, which represent the sub-pixel geometric registration shifts; this makes the matrix V a multiple of the identity. The scale factors are chosen so that one standard deviation in λ_β gives a 10-gray-level shift, and one standard deviation in λ_α varies pixel values by around 10 gray levels at mean image intensity.

4 Results

The first experiment takes a sixteen-image synthetic dataset created from an eyechart image. Data is generated at a zoom factor of 4, using a 2D translation-only motion model and the two-parameter global affine illumination model described above, giving a total of four registration parameters per low-resolution image. Gaussian noise with standard deviation equivalent to 5 gray levels is added to each low-resolution pixel independently. The sub-pixel perturbations are evenly spaced over a grid up to plus or minus one half of a low-resolution pixel, giving a similar setup to that described in [10], but with additional lighting variation. The ground truth image and two of the low-resolution images appear in the first row of Figure 1. Geometric and photometric registration parameters were initialized to the identity, and the images were registered using an iterative intensity-based scheme.
The resulting parameter values were used to recover two sets of super-resolution images: one using the standard Huber MAP algorithm, and the second using our extension integrating over the registration uncertainty. The Huber parameter α was fixed at 0.01 for all runs, and ν was varied over a range of possible values representing ratios between ν and the image noise precision β. The images giving the lowest RMS error from each set are displayed in the second row of Figure 1. Visually, the differences between the images are subtle, though the bottom row of letters is better defined in the output from the new algorithm. Plotting the RMSE as a function of ν in Figure 2, we see that the proposed registration-integrating approach achieves a lower error, compared to the ground truth high-resolution image, than the standard Huber MAP algorithm for any choice of the prior strength ν in the optimal region.

[Figure 1: (a) Ground truth image; only the central recoverable part is shown. (b,c) Low-resolution inputs 1/16 and 16/16; the variation in intensity is clearly visible, and the sub-pixel displacements necessary for multi-frame image super-resolution are most apparent on the "D" characters to the right of each image. (d) The best (i.e. minimum-MSE, see Figure 2) image from the regular Huber MAP algorithm (err = 15.6), having super-resolved the dataset multiple times with different prior strength settings. (e) The best result using our approach of integrating over θ and λ (err = 14.8). As well as having a lower RMSE, note the improvement in black-white edge detail on some of the letters on the bottom line.]

The second experiment uses real data with a 2D translation motion model and an affine lighting model exactly as above. The first and last images appear on the top row of Figure 3.
Image registration was carried out in the same manner as before, and the geometric parameters agree with the provided homographies to within a few hundredths of a pixel. Super-resolution images were created for a number of ν values; the values equivalent to those quoted in [3] were found subjectively to be the most suitable. The covariance of the registration values was chosen to be similar to that used in the synthetic experiments. Finally, Tipping and Bishop's method was extended to cover the illumination model and used to register and super-resolve the dataset, using the same PSF standard deviation (0.4 low-resolution pixels) as the other methods.

[Figure 2: Plot of RMSE (in gray levels) against the ratio of the prior strength parameter ν to the noise precision β, for the standard Huber-prior MAP super-resolution method and our approach integrating over θ and λ. The images corresponding to the minima of the two curves are shown in Figure 1.]

The three sets of results on the real data sequence are shown in the middle and bottom rows of Figure 3. To facilitate a better comparison, a sub-region of each is expanded to make the letter details clearer. The Huber prior tends to make the edges unnaturally sharp, though it is very successful at regularizing the solution elsewhere. Between the Tipping and Bishop image and the registration-integrating approach, the text appears clearer in our method, and the regularization in the constant background regions is slightly more successful.

5 Discussion

It is possible to interpret the extra terms introduced into the objective function in the derivation of this method as an extra regularizer term or image prior. Considering (19), the first two terms are identical to the standard MAP super-resolution problem using a Huber image prior.
The two additional terms constitute an additional distribution over x in the cases where S is not dominated by V; as the distribution over θ and λ tightens to a single point, the terms tend to constant values. The intuition behind the method's success is that this extra prior resulting from the final two terms of (19) will favor image solutions which are not acutely sensitive to minor adjustments in the image registration. The images of Figure 4 illustrate the type of solution which would score poorly. To create the figure, one dataset was used to produce two super-resolved images, using two independent sets of registration parameters which were randomly perturbed by an i.i.d. Gaussian vector with a standard deviation of only 0.04 low-resolution pixels. The checker-board pattern typical of ML super-resolution images can be observed, and the difference image on the right shows the drastic contrast between the two image estimates.

[Figure 3: (a,b) First and last images from a real data sequence containing 10 images acquired on a rig which constrained the motion to be pure translation in 2D. (c) The full super-resolution output from our algorithm. (d) Detailed region of the central letters, again with our algorithm. (e) Detailed region of the regular Huber MAP super-resolution image, using parameter values suggested in [3], which are also found to be subjectively good choices; the edges are slightly artificially crisp, but the large smooth regions are well regularized. (f) Close-up of letter detail for comparison with Tipping and Bishop's method of marginalization; the Gaussian form of their prior leads to a more blurred output, or one that over-fits to the image noise on the input data if the prior's influence is decreased.]
5.1 Conclusion

This work has developed an alternative approach to Bayesian image super-resolution with several advantages over Tipping and Bishop's original algorithm: namely, a formal treatment of registration uncertainty, the use of a much more realistic image prior, and the computational speed and memory efficiency resulting from the smaller dimension of the space over which we integrate. The results on real and synthetic images with this method show an advantage over the popular MAP approach, and over the result from Tipping and Bishop's method, largely owing to our more favorable prior over the super-resolution image. It will be a straightforward extension of the current approach to incorporate learning of the point-spread function covariance, though it will result in a less sparse Hessian matrix H, because each row and column associated with the PSF parameter(s) has the potential to be full-rank, assuming a common camera configuration is shared across all the frames. Finally, the best way of learning the appropriate covariance values for the distribution over θ given the observed data, and how to assess the trade-off between its "prior-like" effects and the need for a standard Huber-style image prior, are still open questions.

Acknowledgements

The real dataset used in the results section is due to Tomas Pajdla and Daniel Martinec, CMP, Prague, and is available at http://www.robots.ox.ac.uk/∼vgg/data4.html.

[Figure 4: An example of the effect of tiny changes in the registration parameters. (a) Ground truth image from which a 16-image low-resolution dataset was generated. (b,c) Two ML super-resolution estimates; in both cases, the same dataset was used, but the registration parameters were perturbed by an i.i.d. vector with standard deviation of just 0.04 low-resolution pixels. (d) The difference between the two solutions.]
In all these images, values outside the valid image intensity range have been rounded to white or black values. This work was funded in part by EC Network of Excellence PASCAL.

References
[1] S. Baker and T. Kanade. Limits on super-resolution and how to break them. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1167–1183, 2002.
[2] S. Borman. Topics in Multiframe Superresolution Restoration. PhD thesis, University of Notre Dame, Notre Dame, Indiana, May 2004.
[3] D. Capel. Image Mosaicing and Super-resolution (Distinguished Dissertations). Springer, ISBN: 1852337710, 2004.
[4] S. Farsiu, M. Elad, and P. Milanfar. A practical approach to super-resolution. In Proc. of the SPIE: Visual Communications and Image Processing, San Jose, 2006.
[5] R. C. Hardie, K. J. Barnard, and E. A. Armstrong. Joint MAP registration and high-resolution image estimation using a sequence of undersampled images. IEEE Transactions on Image Processing, 6(12):1621–1633, 1997.
[6] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edition, 2004.
[7] M. Irani and S. Peleg. Super resolution from image sequences. ICPR, 2:115–120, June 1990.
[8] M. Irani and S. Peleg. Improving resolution by image registration. Graphical Models and Image Processing, 53:231–239, 1991.
[9] I. Nabney. Netlab: Algorithms for Pattern Recognition. Springer, 2002.
[10] N. Nguyen, P. Milanfar, and G. Golub. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement. IEEE Transactions on Image Processing, 10(9):1299–1308, September 2001.
[11] L. C. Pickup, S. J. Roberts, and A. Zisserman. A sampled texture prior for image super-resolution. In Advances in Neural Information Processing Systems, pages 1587–1594, 2003.
[12] L. C. Pickup, S. J. Roberts, and A. Zisserman. Optimizing and learning for super-resolution. In Proceedings of the British Machine Vision Conference, 2006. To appear.
[13] D. Robinson and P. Milanfar. Fundamental performance limits in image registration. IEEE Transactions on Image Processing, 13(9):1185–1199, September 2004.
[14] R. R. Schultz and R. L. Stevenson. A Bayesian approach to image expansion for improved definition. IEEE Transactions on Image Processing, 3(3):233–242, 1994.
[15] Salient Stills. http://www.salientstills.com/.
[16] M. E. Tipping and C. M. Bishop. Bayesian image super-resolution. In S. Thrun, S. Becker, and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15, pages 1279–1286, Cambridge, MA, 2003. MIT Press.
Randomized PCA Algorithms with Regret Bounds that are Logarithmic in the Dimension Manfred K. Warmuth Computer Science Department University of California - Santa Cruz manfred@cse.ucsc.edu Dima Kuzmin Computer Science Department University of California - Santa Cruz dima@cse.ucsc.edu Abstract We design an on-line algorithm for Principal Component Analysis. In each trial the current instance is projected onto a probabilistically chosen low dimensional subspace. The total expected quadratic approximation error equals the total quadratic approximation error of the best subspace chosen in hindsight plus an additional term that grows linearly in the dimension of the subspace but logarithmically in the dimension of the instances. 1 Introduction In Principal Component Analysis the n-dimensional data instances are projected into a k-dimensional subspace (k < n) so that the total quadratic approximation error is minimized. After centering the data, the problem is equivalent to finding the eigenvectors of the k largest eigenvalues of the data covariance matrix. We develop a probabilistic on-line version of PCA: in each trial the algorithm chooses a k-dimensional projection matrix $P^t$ based on some internal parameter; then an instance $x^t$ is received and the algorithm incurs loss $\|x^t - P^t x^t\|_2^2$; finally the internal parameter is updated. The goal is to obtain algorithms whose total loss over all trials is close to the smallest total loss of any k-dimensional subspace P chosen in hindsight. We first develop our algorithms in the expert setting of on-line learning. The algorithm maintains a mixture vector over the n experts. At the beginning of trial t the algorithm chooses a subset $P^t$ of k experts based on the current mixture vector $w^t$. It then receives a loss vector $\lambda^t \in [0,1]^n$ and incurs loss equal to the remaining $n-k$ components of the loss vector, i.e. $\sum_{i \in \{1,\ldots,n\} \setminus P^t} \lambda^t_i$. Finally it updates its mixture vector to $w^{t+1}$.
Note that now the subset $P^t$ corresponds to the subspace onto which we “project”, i.e. we incur no loss on the k components of $P^t$ and are charged only for the remaining $n-k$ components. The trick is to maintain a mixture vector $w^t$ as a parameter with the additional constraint that $w^t_i \le \frac{1}{n-k}$. We will show that these constrained mixture vectors represent an implicit mixture over subsets of experts of size $n-k$, and given $w^t$ we can efficiently sample from the implicit mixture and use it to predict. This gives an on-line algorithm whose total loss is close to the smallest $n-k$ components of $\sum_t \lambda^t$, and this algorithm generalizes to an on-line PCA algorithm when the mixture vectors are replaced by density matrices whose eigenvalues are bounded by $\frac{1}{n-k}$. Now the constrained density matrices represent implicit mixtures of the $(n-k)$-dimensional subspaces. The complementary k-dimensional space is used to project the current instance.

2 Standard PCA and On-line PCA

Given a sequence of data vectors $x^1, \ldots, x^T$, the goal is to find a low-dimensional approximation of this data that minimizes the 2-norm approximation error. Specifically, we want to find a rank k projection matrix P and a bias vector $b \in \mathbb{R}^n$ such that the following cost function is minimized:
$$\mathrm{loss}(P, b) = \sum_{t=1}^T \|x^t - (P x^t + b)\|_2^2.$$
Differentiating and solving for b gives $b = (I - P)\bar{x}$, where $\bar{x}$ is the data mean. Substituting this bias b into the loss we obtain
$$\mathrm{loss}(P) = \sum_{t=1}^T \|(I-P)(x^t - \bar{x})\|_2^2 = \sum_{t=1}^T (x^t - \bar{x})^\top (I-P)^2 (x^t - \bar{x}).$$
Since $I - P$ is a projection matrix, $(I-P)^2 = I - P$, and we get
$$\mathrm{loss}(P) = \mathrm{tr}\Big((I-P)\sum_{t=1}^T (x^t-\bar{x})(x^t-\bar{x})^\top\Big) = \mathrm{tr}\big(\underbrace{(I-P)}_{\text{rank } n-k}\, C\big) = \mathrm{tr}(C) - \mathrm{tr}(\underbrace{P}_{\text{rank } k}\, C),$$
where C is the data covariance matrix. Therefore minimizing the loss over $(n-k)$-dimensional subspaces is equivalent to maximizing $\mathrm{tr}(PC)$ over k-dimensional subspaces. In the on-line setting, learning proceeds in trials. (For the sake of simplicity we are not using a bias term at this point.)
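The offline derivation above (the optimal bias $b = (I-P)\bar{x}$ and the trace form of the loss) can be checked numerically; the following numpy sketch is illustrative, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, k = 5, 200, 2
X = rng.normal(size=(T, n)) @ rng.normal(size=(n, n))  # T correlated instances

xbar = X.mean(axis=0)
C = (X - xbar).T @ (X - xbar)          # (unnormalized) covariance matrix, as in the text

# Best rank-k subspace: top-k eigenvectors of C
evals, evecs = np.linalg.eigh(C)       # eigenvalues in ascending order
P = evecs[:, -k:] @ evecs[:, -k:].T    # rank-k projection matrix
b = (np.eye(n) - P) @ xbar             # optimal bias from the derivation

direct = sum(np.linalg.norm(x - (P @ x + b)) ** 2 for x in X)
trace_form = np.trace(C) - np.trace(P @ C)
print(np.isclose(direct, trace_form))  # the two loss expressions agree
```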
At trial t, the algorithm chooses a rank k projection matrix $P^t$. It then receives an instance $x^t$ and incurs loss $\|x^t - P^t x^t\|_2^2 = \mathrm{tr}((I - P^t)\, x^t (x^t)^\top)$. Our goal is to obtain an algorithm whose total loss over a sequence of trials, $\sum_{t=1}^T \mathrm{tr}((I-P^t)\, x^t(x^t)^\top)$, is close to the total loss of the best rank k projection matrix P, i.e. $\inf_P \mathrm{tr}\big((I-P)\sum_{t=1}^T x^t(x^t)^\top\big)$. Note that the latter loss is equal to the loss of standard PCA on the data sequence $x^1, \ldots, x^T$ (assuming the data is centered).

3 Choosing a Subset of Experts

Recall that projection matrices are symmetric positive semi-definite matrices with eigenvalues in {0, 1}. Thus a rank k projection matrix can be written as $P = \sum_{i=1}^k p_i p_i^\top$, where the $p_i$ are k orthonormal vectors forming a basis of the subspace. Assume for the moment that the eigenvectors are restricted to be standard basis vectors. Now projection matrices become diagonal matrices with entries in {0, 1}, where the number of ones is the rank. Also, the trace of the product of such a diagonal projection matrix and any symmetric matrix becomes a dot product between the diagonals of both matrices, and the whole problem reduces to working with vectors: the rank k projection matrices reduce to vectors with k ones and n−k zeros, and the diagonal of the symmetric matrix may be seen as a loss vector $\lambda^t$. Our goal now is to develop on-line algorithms for finding the lowest n−k components of the loss vectors $\lambda^t$ so that the total loss is close to the lowest n−k components of $\sum_{t=1}^T \lambda^t$. Equivalently, we want to find the highest k components of $\lambda^t$. We begin by developing some methods for dealing with subsets of components. For convenience we encode such subsets as probability vectors: we call $r \in [0,1]^n$ an m-corner if it has m components set to $\frac{1}{m}$ and the remaining n−m components set to zero. At trial t the algorithm chooses an (n−k)-corner $r^t$. It then receives a loss vector $\lambda^t$ and incurs loss $(n-k)\, r^t \cdot \lambda^t$.
Let $A^n_m$ consist of all convex combinations of m-corners. In other words, $A^n_m$ is the convex hull of the $\binom{n}{m}$ m-corners. Clearly any component $w_i$ of a vector w in $A^n_m$ is at most $\frac{1}{m}$, because it is a convex combination of numbers in $[0, \frac{1}{m}]$. Therefore $A^n_m \subseteq B^n_m$, where $B^n_m$ is the set of n-dimensional vectors w for which $|w| = \sum_i w_i = 1$ and $0 \le w_i \le \frac{1}{m}$ for all i. The following theorem implies that $A^n_m = B^n_m$:

Theorem 1. Algorithm 1 produces a convex combination¹ of at most n m-corners for any vector in $B^n_m$.

Algorithm 1 Mixture Construction
input: $1 \le m < n$ and $w \in B^n_m$
repeat
  Let r be a corner whose m components correspond to nonzero components of w and contain all the components of w that are equal to $\frac{|w|}{m}$
  Let s be the smallest of the m chosen components of w, and l the largest value of the remaining n−m components
  $w := w - p\, r$, where $p = \min(m s,\ |w| - m l)$, and output the pair $(p, r)$
until w = 0

Proof. Let b(w) be the number of boundary components in w, i.e. $b(w) := |\{i : w_i \text{ is } 0 \text{ or } \tfrac{|w|}{m}\}|$. Let $\tilde{B}^n_m$ be the set of all vectors w such that $0 \le w_i \le \frac{|w|}{m}$ for all i. If b(w) = n, then w is either a corner or 0. The loop stops when w = 0. If w is a corner then it takes one iteration to arrive at 0. We show that if $w \in \tilde{B}^n_m$ and w is neither a corner nor 0, then the successor $\hat{w} \in \tilde{B}^n_m$ and $b(\hat{w}) > b(w)$. Clearly $\hat{w} \ge 0$, because the amount subtracted in the m components of the corner is at most as large as the corresponding components of w. We next show that $\hat{w}_i \le \frac{|\hat{w}|}{m}$. If i belongs to the corner then $\hat{w}_i = w_i - \frac{p}{m} \le \frac{|w| - p}{m} = \frac{|\hat{w}|}{m}$. Otherwise $\hat{w}_i = w_i \le l$, and $l \le \frac{|\hat{w}|}{m}$ follows from the fact that $p \le |w| - m l$. This proves that $\hat{w} \in \tilde{B}^n_m$. To show that $b(\hat{w}) > b(w)$, first observe that all boundary components in w remain boundary components in $\hat{w}$: zeros stay zeros, and if $w_i = \frac{|w|}{m}$ then i is included in the corner and $\hat{w}_i = \frac{|w|-p}{m} = \frac{|\hat{w}|}{m}$.
However, the number of boundary components increases by at least one, because the components corresponding to s and l are both non-boundary components in w and at least one of them becomes a boundary component in $\hat{w}$: if $p = m s$ then the component corresponding to s becomes $s - \frac{p}{m} = 0$ in $\hat{w}$, and if $p = |w| - m l$ then the component corresponding to l satisfies $l = \frac{|w|-p}{m} = \frac{|\hat{w}|}{m}$. It follows that it may take up to n iterations to arrive at a corner, which has n boundary components, and one more iteration to arrive at 0. Finally, note that there is no weight vector $w \in \tilde{B}^n_m$ with $b(w) = n-1$, and therefore the size of the produced linear combination is at most n. More precisely, the size is at most $n - b(w)$ if $n - b(w) \le n-2$, and one if w is a corner. The algorithm produces a linear combination of corners, i.e. $w = \sum_j p_j r^j$. Since $p_j \ge 0$ and all $|r^j| = 1$, we have $\sum_j p_j = 1$ and the combination is in fact convex.

Fact 1. For any loss vector λ, the following corner has the smallest loss of any convex combination of corners in $A^n_m = B^n_m$: greedily pick the component of minimum loss (m times).

How can we use the above construction and fact? It seems too hard to maintain information about all $\binom{n}{n-k}$ corners of size n−k. However, the best corner is also the best convex combination of corners, i.e. the best member of the set $A^n_{n-k}$, where each member of this set is given by $\binom{n}{n-k}$ coefficients. Luckily, this set of convex combinations equals $B^n_{n-k}$, and it takes only n coefficients to specify a member of that set. Therefore we can search for the best hypothesis in the set $B^n_{n-k}$, and for any such hypothesis we can always construct a convex combination (of size ≤ n) of (n−k)-corners which has the same expected loss for each loss vector. This means that any algorithm predicting with a hypothesis vector in $B^n_{n-k}$ can be converted to an algorithm that probabilistically chooses an (n−k)-corner.
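Algorithm 1 admits a short numpy sketch; sorting the weights in descending order places every component equal to |w|/m into the corner, as the algorithm requires. This is an illustrative implementation, not the authors' code:

```python
import numpy as np

def mixture_construction(w, m, tol=1e-12):
    """Decompose w in B^n_m into a convex combination of m-corners.

    Returns a list of (p_j, indices_j) pairs: corner j has value 1/m on
    indices_j and 0 elsewhere, and sum_j p_j = 1 for w in B^n_m.
    """
    w = np.array(w, dtype=float)
    combination = []
    while w.sum() > tol:
        total = w.sum()
        order = np.argsort(-w)          # descending: components equal to |w|/m come first
        chosen, rest = order[:m], order[m:]
        s = w[chosen].min()             # smallest of the m chosen components
        l = w[rest].max() if rest.size else 0.0  # largest remaining component
        p = min(m * s, total - m * l)
        w[chosen] -= p / m              # w := w - p * r  (corner entries are 1/m)
        w[np.abs(w) < tol] = 0.0
        combination.append((p, chosen.copy()))
    return combination

corners = mixture_construction([0.3, 0.3, 0.2, 0.2], m=2)
recon = np.zeros(4)
for p, idx in corners:
    recon[idx] += p / 2
print(len(corners), recon)   # at most n corners; recon reproduces the input vector
```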
Finally, the set $P^t$ of the k components missed by the chosen (n−k)-corner corresponds to the subspace we project onto. Algorithm 2 spells out the details of this approach. The algorithm chooses a corner probabilistically, and $(n-k)\, w^t \cdot \lambda^t$ is the expected loss in one trial. The projection $\hat{w}^t$ onto $B^n_{n-k}$ can be achieved as follows: find the smallest l such that capping the largest l components to $\frac{1}{n-k}$ and rescaling the remaining n−l weights to total weight $1 - \frac{l}{n-k}$ makes none of the rescaled weights exceed $\frac{1}{n-k}$. The simplest algorithm starts by sorting the weights and then searches for l with a binary search. However, a linear-time algorithm that recursively uses the median is given in [HW01].

¹ The existence of a convex combination of at most n corners is implied by Carathéodory’s theorem [Roc70], but the algorithm gives an effective construction.

Algorithm 2 Capped Weighted Majority Algorithm
input: $1 \le k < n$ and an initial probability vector $w^1 \in B^n_{n-k}$
for t = 1 to T do
  Decompose $w^t$ as $\sum_j p_j r^j$ with Algorithm 1, where m = n−k
  Draw a corner $r = r^j$ with probability $p_j$
  Let $P^t$ be the k components outside the drawn corner
  Receive loss vector $\lambda^t$
  Incur loss $(n-k)\, r \cdot \lambda^t = \sum_{i \in \{1,\ldots,n\} \setminus P^t} \lambda^t_i$
  $\hat{w}^t_i := w^t_i \exp(-\eta \lambda^t_i)/Z$, where Z normalizes the weights to one
  $w^{t+1} := \operatorname{argmin}_{w \in B^n_{n-k}} d(w, \hat{w}^t)$
end for

When k = n−1 we have n−k = 1 and $B^n_1$ is the entire probability simplex. In this case the call to Algorithm 1 and the projection onto $B^n_1$ are vacuous, and we get the standard Randomized Weighted Majority algorithm [LW94]² with loss vector $\lambda^t$. Let d(u, w) denote the relative entropy between two probability vectors: $d(u, w) = \sum_i u_i \log \frac{u_i}{w_i}$.

Theorem 2. On an arbitrary sequence of loss vectors $\lambda^1, \ldots, \lambda^T \in [0,1]^n$, the total expected loss of Algorithm 2 is bounded as follows:
$$(n-k)\sum_{t=1}^T w^t \cdot \lambda^t \le \frac{(n-k)\,\eta \sum_{t=1}^T u \cdot \lambda^t + d(u, w^1) - d(u, w^{T+1})}{1 - \exp(-\eta)},$$
for any learning rate η > 0 and comparison vector $u \in B^n_{n-k}$.

Proof.
The update for $\hat{w}^t$ in Algorithm 2 is the update of the Continuous Weighted Majority, for which the following basic inequality is known (essentially [LW94], Lemma 5.3):
$$d(u, w^t) - d(u, \hat{w}^t) \ge -\eta\, u \cdot \lambda^t + w^t \cdot \lambda^t (1 - \exp(-\eta)). \quad (1)$$
The weight vector $w^{t+1}$ is a Bregman projection of the vector $\hat{w}^t$ onto the convex set $B^n_{n-k}$. For such projections the Generalized Pythagorean Theorem holds (see e.g. [HW01] for details):
$$d(u, \hat{w}^t) \ge d(u, w^{t+1}) + d(w^{t+1}, \hat{w}^t).$$
Since Bregman divergences are non-negative, we can drop the $d(w^{t+1}, \hat{w}^t)$ term and get the inequality $d(u, \hat{w}^t) - d(u, w^{t+1}) \ge 0$ for $u \in B^n_{n-k}$. Adding this to the previous inequality we get:
$$d(u, w^t) - d(u, w^{t+1}) \ge -\eta\, u \cdot \lambda^t + w^t \cdot \lambda^t (1 - \exp(-\eta)).$$
Summing over t, multiplying by n−k, and dividing by $1 - \exp(-\eta)$ yields the bound.

4 On-line PCA

In this context (matrix) corners are density matrices with m eigenvalues equal to $\frac{1}{m}$ and the rest equal to 0. Also, the set $A^n_m$ consists of all convex combinations of such corners. The maximum eigenvalue of a convex combination of symmetric matrices is at most as large as the maximum eigenvalue of any of the matrices ([Bha97], Corollary III.2.2). Therefore each convex combination of corners is a density matrix whose eigenvalues are bounded by $\frac{1}{m}$, and $A^n_m \subseteq B^n_m$, where $B^n_m$ now consists of all density matrices whose maximum eigenvalue is at most $\frac{1}{m}$. Assume we have some density matrix $W \in B^n_m$ with eigendecomposition $\mathcal{W}\,\mathrm{diag}(\omega)\,\mathcal{W}^\top$. Algorithm 1 can be applied to the vector of eigenvalues ω of this density matrix. The output convex combination of up to n diagonal corners $\omega = \sum_j p_j r^j$ can be turned into a convex combination of matrix corners that expresses the density matrix: $W = \sum_j p_j\, \mathcal{W}\,\mathrm{diag}(r^j)\,\mathcal{W}^\top$. It follows that $A^n_m = B^n_m$ as in the diagonal case.

² The original Weighted Majority algorithms were described for the absolute loss. The idea of using loss vectors instead was introduced in [FS97].

Theorem 3.
For any symmetric matrix S, $\min_{W \in B^n_m} \mathrm{tr}(W S)$ is attained at the following matrix corner: greedily choose orthonormal eigenvectors of S of minimum eigenvalue (m times).

Proof. Let $\lambda^{\downarrow}(W)$ denote the vector of eigenvalues of W in descending order and let $\lambda^{\uparrow}(S)$ be the corresponding vector for S in ascending order. Since both matrices are symmetric, $\mathrm{tr}(W S) \ge \lambda^{\downarrow}(W) \cdot \lambda^{\uparrow}(S)$ ([MO79], Fact H.1.h of Chapter 9). Since $\lambda^{\downarrow}(W) \in B^n_m$, the dot product is minimized and the inequality is tight when W is an m-corner corresponding to the m smallest eigenvalues of S; the greedy algorithm finds this solution (see Fact 1 of this paper).

Algorithm 2 generalizes to the matrix setting. The Weighted Majority update is replaced by the corresponding matrix version, which employs the matrix exponential and matrix logarithm [WK06] (the update can be seen as a special case of the Matrix Exponentiated Gradient update [TRW05]). The following theorem shows that for the projection we can keep the eigensystem fixed. Here $\Delta(U, W)$ denotes the quantum relative entropy $\mathrm{tr}(U(\log U - \log W))$.

Theorem 4. Projecting a density matrix onto $B^n_m$ w.r.t. the quantum relative entropy is equivalent to projecting the vector of eigenvalues w.r.t. the “normal” relative entropy: if W has the eigendecomposition $\mathcal{W}\,\mathrm{diag}(\omega)\,\mathcal{W}^\top$, then
$$\operatorname{argmin}_{U \in B^n_m} \Delta(U, W) = \mathcal{W}\,\mathrm{diag}(u^*)\,\mathcal{W}^\top, \quad \text{where } u^* = \operatorname{argmin}_{u \in B^n_m} d(u, \omega).$$

Proof. If $\lambda^{\downarrow}(S)$ denotes the vector of eigenvalues of a symmetric matrix S arranged in descending order, then $\mathrm{tr}(S T) \le \lambda^{\downarrow}(S) \cdot \lambda^{\downarrow}(T)$ ([MO79], Fact H.1.g of Chapter 9). This implies that $\mathrm{tr}(U \log W) \le \lambda^{\downarrow}(U) \cdot \log \lambda^{\downarrow}(W)$ and $\Delta(U, W) \ge d(\lambda^{\downarrow}(U), \lambda^{\downarrow}(W))$. Therefore
$$\min_{U \in B^n_m} \Delta(U, W) \ge \min_{u \in B^n_m} d(u, \omega),$$
and if $u^*$ minimizes the right-hand side, then $\mathcal{W}\,\mathrm{diag}(u^*)\,\mathcal{W}^\top$ minimizes the left-hand side, because $\Delta(\mathcal{W}\,\mathrm{diag}(u^*)\,\mathcal{W}^\top, W) = d(u^*, \omega)$.
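Theorem 4 reduces the density-matrix projection to a capped projection of the eigenvalue vector, and the matrix Weighted Majority update described above can be computed through a single eigendecomposition. A numpy sketch of both pieces follows (illustrative, not the authors' code; the capping uses the simple sort-and-rescale search described for Algorithm 2):

```python
import numpy as np

def cap_simplex(omega, cap, tol=1e-12):
    """Relative-entropy projection of a probability vector onto
    {u : sum(u) = 1, 0 <= u_i <= cap}: cap the l largest components and
    rescale the rest, for the smallest feasible l (assumes cap * n >= 1)."""
    omega = np.asarray(omega, dtype=float)
    idx = np.argsort(-omega)
    for l in range(len(omega)):
        u = omega.copy()
        u[idx[:l]] = cap
        free = idx[l:]
        u[free] = omega[free] * (1.0 - l * cap) / omega[free].sum()
        if u[free].max() <= cap + tol:
            return u
    return np.full(len(omega), cap)     # only reached when cap == 1/n

def matrix_update(W, x, eta, cap):
    """One multiplicative update, W_hat proportional to exp(log W - eta x x^T),
    followed by the eigenvalue projection (eigensystem kept fixed, Theorem 4)."""
    vals, vecs = np.linalg.eigh(W)
    log_W = vecs @ np.diag(np.log(np.maximum(vals, 1e-300))) @ vecs.T
    new_vals, new_vecs = np.linalg.eigh(log_W - eta * np.outer(x, x))
    omega = np.exp(new_vals - new_vals.max())   # stabilized matrix exponential
    omega /= omega.sum()                        # normalize the trace to 1
    return new_vecs @ np.diag(cap_simplex(omega, cap)) @ new_vecs.T

n, k = 4, 1
W = np.eye(n) / n                               # uniform initial density matrix
x = np.array([1.0, 0.0, 0.0, 0.0])
W = matrix_update(W, x, eta=1.0, cap=1.0 / (n - k))
print(np.trace(W), np.linalg.eigvalsh(W).max())  # trace 1, max eigenvalue <= 1/(n-k)
```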
Algorithm 3 On-line PCA algorithm
input: $1 \le k < n$ and an initial density matrix $W^1 \in B^n_{n-k}$
for t = 1 to T do
  Perform the eigendecomposition $W^t = \mathcal{W}\,\mathrm{diag}(\omega)\,\mathcal{W}^\top$
  Decompose ω as $\sum_j p_j r^j$ with Algorithm 1, where m = n−k
  Draw a corner $r = r^j$ with probability $p_j$
  Form the matrix corner $R = \mathcal{W}\,\mathrm{diag}(r)\,\mathcal{W}^\top$
  Form the rank k projection matrix $P^t = I - (n-k)R$
  Receive the data instance vector $x^t$
  Incur loss $\|x^t - P^t x^t\|_2^2 = \mathrm{tr}((I - P^t)\, x^t(x^t)^\top)$
  $\widehat{W}^t = \exp(\log W^t - \eta\, x^t(x^t)^\top)/Z$, where Z normalizes the trace to 1
  $W^{t+1} := \operatorname{argmin}_{W \in B^n_{n-k}} \Delta(W, \widehat{W}^t)$
end for

The expected loss of this algorithm in trial t is $(n-k)\,\mathrm{tr}(W^t x^t(x^t)^\top)$.

Theorem 5. For an arbitrary sequence of data instances $x^1, \ldots, x^T$ of 2-norm at most one, the total expected loss of the algorithm is bounded as follows:
$$\sum_{t=1}^T (n-k)\,\mathrm{tr}(W^t x^t(x^t)^\top) \le \frac{(n-k)\,\eta \sum_{t=1}^T \mathrm{tr}(U\, x^t(x^t)^\top) + \Delta(U, W^1) - \Delta(U, W^{T+1})}{1 - \exp(-\eta)},$$
for any learning rate η > 0 and comparator density matrix $U \in B^n_{n-k}$.³

Proof. The update for $\widehat{W}^t$ is a density matrix version of the standard Weighted Majority update, which was used for variance minimization along a single direction (i.e. k = n−1) in [WK06]. The basic inequality (1) for that update becomes:
$$\Delta(U, W^t) - \Delta(U, \widehat{W}^t) \ge -\eta\, \mathrm{tr}(U\, x^t(x^t)^\top) + \mathrm{tr}(W^t x^t(x^t)^\top)(1 - \exp(-\eta)).$$
As in the proof of Theorem 2 of this paper, the Generalized Pythagorean Theorem applies, and dropping one term we get the inequality $\Delta(U, \widehat{W}^t) - \Delta(U, W^{t+1}) \ge 0$ for $U \in B^n_{n-k}$. Adding this to the previous inequality we get:
$$\Delta(U, W^t) - \Delta(U, W^{t+1}) \ge -\eta\, \mathrm{tr}(U\, x^t(x^t)^\top) + \mathrm{tr}(W^t x^t(x^t)^\top)(1 - \exp(-\eta)).$$
Summing over t, multiplying by n−k, and dividing by $1 - \exp(-\eta)$ yields the bound.

It is easy to see that $\Delta(U, W^1) \le (n-k)\log\frac{n}{n-k}$. If $k \le n/2$, this is further bounded by $k \log\frac{n}{k}$. Thus the right-hand side is essentially linear in k but logarithmic in the dimension n. By tuning η [CBFH+97, FS97], we can get regret bounds of the form: (expected total loss of alg.)
- (total loss of best k-space) = $O\Big(\sqrt{(\text{total loss of best } k\text{-subspace})\; k \log\tfrac{n}{k}} \;+\; k \log\tfrac{n}{k}\Big)$. (2)

Using standard but significantly simplified conversion techniques from [CBFH+97] based on the leave-one-out loss, we also obtain algorithms with good regret bounds in the following model: the algorithm is given T−1 instances drawn from a fixed but unknown distribution and produces a k-space based on those instances; it then receives a new instance from the same distribution. We can bound the expected loss on the last instance:

(expected loss of alg.) − (expected loss of best k-space) = $O\Big(\sqrt{\tfrac{(\text{expected loss of best } k\text{-subspace})\; k \log\frac{n}{k}}{T}} \;+\; \tfrac{k \log\frac{n}{k}}{T}\Big)$. (3)

5 Lower Bound

The simplest competitor to our on-line PCA algorithm is the algorithm that performs standard (uncentered) PCA on all the data points seen so far. In the expert setting this algorithm corresponds to “projecting” onto the n−k experts that have minimum loss so far (with ties broken arbitrarily). When k = n−1, this becomes the follow-the-leader algorithm. It is easy to construct an adversary strategy for this type of deterministic algorithm (for any k) that forces the on-line algorithm to incur n times as much loss as the off-line algorithm. In contrast, our algorithm is guaranteed to have expected additional loss (regret) of the order of the square root of k ln n times the total loss of the best off-line algorithm. When the instances are diagonal matrices, our algorithm specializes to the standard expert setting, and in that setting there are probabilistic lower bounds showing that our tuned bounds (2,3) are tight [CBFH+97].

6 Simple Experiments

The above lower bounds do not by themselves justify our complicated algorithms for on-line PCA, because natural data might be more benign. However, natural data often shifts, and we constructed a simple dataset of this type in Figure 1. The first 333 20-dimensional points were drawn from a Gaussian distribution with a rank 2 covariance matrix.
This is repeated twice for different covariance matrices of rank 2. We compare the total loss of our on-line algorithm with the total loss of the best subspace for the first t data points. During the first 333 data points the latter loss is zero, since the first dataset is 2-dimensional, but after the third dataset is completed, the loss of any fixed off-line comparator is large. Figure 3 depicts how our algorithm transitions between datasets and exploits the on-lineness of the data. Randomly permuting the dataset removes the on-lineness and results in a plot where the total loss of the algorithm is somewhat above that of the off-line comparator (not shown). Any simple “windowing algorithm” would also be able to detect the switches. Such algorithms are often unwieldy, however, and we do not know of any strong regret bounds for them. In the expert setting there is a long line of research on shifting (see e.g. [BW02, HW98]).

³ The $x^t(x^t)^\top$ can be replaced by symmetric matrices $S^t$ whose eigenvalues have range at most one.

Figure 1: The data set used for the experiments. Different colors/symbols denote the data points that came from three different Gaussians with rank 2 covariance matrices. The data vectors are 20-dimensional but we plot only the first 3 dimensions.

Figure 2: The blue curve plots the total loss of the on-line algorithm up to trial t for 50 different runs (with k = 2 and η fixed to one). Note that the variance of the losses is small. The single red curve plots the total loss of the best subspace of dimension 2 for the first t points.

Figure 3: Behavior of the algorithm around a transition point between two distributions. Each ellipse depicts the projection matrix with the largest coefficient in the decomposition of $W^t$. The transition sequence starts with the algorithm focused on the projection matrix for the first subset of data and ends with essentially the optimal matrix for the second subset. The depicted transition takes about 60 trials.
An algorithm that mixes a little bit of the uniform distribution into the current mixture vector is able to restart when the data switches. More importantly, an algorithm that mixes in a little bit of the past average density matrix is able to switch quickly to previously seen subspaces; to our knowledge, windowing techniques cannot exploit this type of switching. Preliminary experiments on face image data indicate that the algorithms that accommodate switching work as expected, but more comprehensive experiments still need to be done.

7 Conclusions

We developed a new set of techniques for low-dimensional approximation with provable bounds. Following [TRW05, WK06], we essentially lifted the algorithms and bounds developed for the diagonal case to the matrix case. Are there general reductions? The on-line PCA problem was also addressed in [Cra06]. However, that paper does not fully capture the PCA problem, because its algorithm predicts with a full-rank matrix in each trial, whereas we predict with a probabilistically chosen projection matrix of the desired rank k. Furthermore, that paper proves bounds on the filtering loss, which are typically easier to prove, and it is not clear how this loss relates to the more standard regret bounds proven in this paper. For the expert setting there are alternate techniques for designing on-line algorithms that do as well as the best subset of n−k experts: the set $\{i_1, \ldots, i_{n-k}\}$ receives weight proportional to $\exp(-\sum_j \ell^{<t}_{i_j}) = \prod_j \exp(-\ell^{<t}_{i_j})$. In this case we can get away with keeping only one weight per expert (the i-th expert gets weight $\exp(-\ell^{<t}_i)$) and then use dynamic programming to sum over sets (see e.g. [TW03] for this type of method). With some more work, dynamic programming can also be applied to PCA. However, our new trick of using additional constraints on the eigenvalues is an alternative that avoids dynamic programming. Many technical problems remain.
For example, we would like to enhance our algorithms to learn a bias as well, and to apply our low-dimensional approximation techniques to regression problems.

Acknowledgment: Thanks to Allen Van Gelder for valuable discussions regarding Algorithm 1.

References
[Bha97] R. Bhatia. Matrix Analysis. Springer, Berlin, 1997.
[BW02] Olivier Bousquet and Manfred K. Warmuth. Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research, 3:363–396, 2002.
[CBFH+97] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997.
[Cra06] Koby Crammer. Online tracking of linear subspaces. In Proceedings of the 19th Annual Conference on Learning Theory (COLT 06), Pittsburgh, June 2006. Springer.
[FS97] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, August 1997.
[HW98] Mark Herbster and Manfred Warmuth. Tracking the best expert. Machine Learning, 32(2):151–178, 1998. Earlier version in 12th ICML, 1995.
[HW01] Mark Herbster and Manfred K. Warmuth. Tracking the best linear predictor. Journal of Machine Learning Research, 1:281–309, 2001.
[LW94] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[MO79] A. W. Marshall and I. Olkin. Inequalities: Theory of Majorization and its Applications. Academic Press, 1979.
[Roc70] R. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[TRW05] K. Tsuda, G. Rätsch, and M. K. Warmuth. Matrix exponentiated gradient updates for on-line learning and Bregman projections. Journal of Machine Learning Research, 6:995–1018, June 2005.
[TW03] Eiji Takimoto and Manfred K. Warmuth. Path kernels and multiplicative updates. Journal of Machine Learning Research, 4:773–818, 2003.
[WK06] Manfred K. Warmuth and Dima Kuzmin. Online variance minimization. In Proceedings of the 19th Annual Conference on Learning Theory (COLT 06), Pittsburgh, June 2006. Springer.
Efficient Structure Learning of Markov Networks using L1-Regularization Su-In Lee Varun Ganapathi Daphne Koller Department of Computer Science Stanford University Stanford, CA 94305-9010 {silee,varung,koller}@cs.stanford.edu Abstract Markov networks are commonly used in a wide variety of applications, ranging from computer vision, to natural language, to computational biology. In most current applications, even those that rely heavily on learned models, the structure of the Markov network is constructed by hand, due to the lack of effective algorithms for learning Markov network structure from data. In this paper, we provide a computationally efficient method for learning Markov network structure from data. Our method is based on the use of L1 regularization on the weights of the log-linear model, which has the effect of biasing the model towards solutions where many of the parameters are zero. This formulation converts the Markov network learning problem into a convex optimization problem in a continuous space, which can be solved using efficient gradient methods. A key issue in this setting is the (unavoidable) use of approximate inference, which can lead to errors in the gradient computation when the network structure is dense. Thus, we explore the use of different feature introduction schemes and compare their performance. We provide results for our method on synthetic data, and on two real world data sets: pixel values in the MNIST data, and genetic sequence variations in the human HapMap data. We show that our L1-based method achieves considerably higher generalization performance than the more standard L2-based method (a Gaussian parameter prior) or pure maximum-likelihood learning. We also show that we can learn MRF network structure at a computational cost that is not much greater than learning parameters alone, demonstrating the existence of a feasible method for this important problem. 
1 Introduction Undirected graphical models, such as Markov networks or log-linear models, have been used in an ever-growing variety of applications, including computer vision, natural language, computational biology, and more. However, as this modeling framework is used in increasingly more complex and less well-understood domains, the problem of selecting from the exponentially large space of possible network structures becomes of great importance. Including all of the possibly relevant interactions in the model generally leads to overfitting, and can also lead to difficulties in running inference over the network. Moreover, learning a “good” structure can be an important task in its own right, as it can provide insight about the underlying structure in the domain. Unfortunately, the problem of learning Markov networks remains a challenge. The key difficulty is that the maximum likelihood (ML) parameters of these networks have no analytic closed form; finding these parameters requires an iterative procedure (such as conjugate gradient [15] or BFGS [5]), where each iteration runs inference over the current model. This type of procedure is computationally expensive even for models where inference is tractable. The problem of structure learning is considerably harder. The dominant type of solution to this problem uses greedy local heuristic search, which incrementally modifies the model by adding and possibly deleting features. One approach [6, 14] adds features so as to greedily improve the model likelihood; once a feature is added, it is never removed. As the feature addition step is heuristic and greedy, this can lead to the inclusion of unnecessary features, and thereby to overly complex structures and overfitting. An alternative approach [1, 7] explicitly searches over the space of low-treewidth models, but the utility of such models in practice is unclear; indeed, hand-designed models for real-world problems generally do not have low tree-width. 
Moreover, in all of the greedy heuristic search methods, the learned network is (at best) a local optimum of a penalized likelihood score. In this paper, we propose a different approach for learning the structure of a log-linear graphical model (or a Markov network). Rather than viewing it as a combinatorial search problem, we embed the structure selection step within the problem of parameter estimation: features that have weight zero (in log-space) are degenerate, and do not induce dependencies between the variables they involve. To appropriately bias the model towards sparsity, we use a technique that has become increasingly popular in the context of supervised learning, in problems that involve a large number of features, many of which may be irrelevant. It has been long known [21] that using L1-regularization over the model parameters — optimizing a joint objective that trades off fit to data with the sum of the absolute values of the parameters — tends to lead to sparse models, where many weights have value 0. More recently, Dudik et al. (2004) showed that density estimation in log-linear models using L1-regularized likelihood has sample complexity that grows only logarithmically in the number of features of the log-linear model; Ng (2004) shows a similar result for L1-regularized logistic regression. These results show that this approach is useful for selecting the relevant features from a large number of irrelevant ones. Other recent work proposes effective algorithms for L1-regularized generalized linear models (e.g., [18, 10, 9]), support vector machines (e.g., [25]), and feature selection in log-linear models encoding natural language grammars [19]. Surprisingly, the use of L1-regularization has not been proposed for the purpose of structure learning in general Markov networks. In this paper, we explore this approach, and discuss issues that are important to its effective application to large problems. 
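The tendency of L1 regularization to drive weights exactly to zero can be seen from its proximal operator, which is soft thresholding. The following toy sketch (plain L1-regularized least squares solved by proximal gradient descent, not the paper's log-linear objective; all names and values here are illustrative) recovers a sparse weight vector:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrinks toward zero and exactly
    zeroes any coordinate whose magnitude is below t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy problem: only the first 3 of 20 features are relevant.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.normal(size=100)

# ISTA: minimize 0.5*||Xw - y||^2 + lam*||w||_1
w = np.zeros(20)
step = 1.0 / np.linalg.norm(X, 2) ** 2
lam = 5.0
for _ in range(2000):
    w = soft_threshold(w - step * X.T @ (X @ w - y), step * lam)

print(np.count_nonzero(w))   # far fewer than 20 nonzero weights
```

An L2 penalty, by contrast, shrinks all weights toward zero but almost never makes any of them exactly zero, which is why it does not perform structure selection.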
A key point is that, for a given L1-based model score and a given candidate feature set F, we have a fixed convex optimization problem that admits a unique optimal solution. Due to the properties of the L1 score, many features in this solution will have weight 0, generally leading to a sparse network structure. However, it is generally impractical to simply initialize the model to include all possible features: exact inference in such a model is almost invariably intractable, and approximate inference methods such as loopy belief propagation [17] are likely to give highly inaccurate estimates of the gradient, leading to poorly learned models. Thus, we propose an algorithm schema that gradually introduces features into the model, and lets the L1-regularization scheme eliminate them via the optimization process. We explore the use of different approaches for feature introduction, one based on the gain-based method of Della Pietra, Della Pietra and Lafferty [6] and one on the grafting method of Perkins, Lacker and Theiler [18]. We provide a sound termination condition for the algorithm based on the criterion proposed by Perkins et al. [18]; given correct estimates of the gradient, this algorithm is guaranteed to terminate only at the unique global optimum, for any reasonable feature introduction method. We test our method on synthetic data generated from known MRFs and on two real-world tasks: modeling the joint distribution of pixel values in the MNIST data [12], and modeling the joint distribution of genetic sequence variations — single-nucleotide polymorphisms (SNPs) — in the human HapMap data [3]. Our results show that L1-regularization outperforms other approaches, and provides an effective method for learning MRF structure even in large, complex domains. 2 Preliminaries We focus our presentation on the framework of log-linear models, which forms a convenient basis for a discussion of learning. Let X = {X1, . . . , Xn} be a set of discrete-valued random variables.
A log-linear model is a compact representation of a probability distribution over assignments to X. The log-linear model is defined in terms of a set of feature functions fk(Xk), each of which is a function that defines a numerical value for each assignment xk to some subset Xk ⊂ X. Given a set of feature functions F = {fk}, the parameters of the log-linear model are weights θ = {θk : fk ∈ F}. The overall distribution is then defined as: Pθ(x) = (1/Z(θ)) exp(Σ_{fk∈F} θk fk(xk)), where xk is the assignment to Xk within x, and Z(θ) is the partition function that ensures that the distribution is normalized (so that all entries sum to 1). Note that this definition of features encompasses both “standard” features that relate unobserved network variables (e.g., the part of speech of a word in a sentence) to observed elements in the data (e.g., the word itself), and structural features that encode the interaction between hidden variables in the model. A log-linear model induces a Markov network over X, where there is an edge between every pair of variables Xi, Xj that appear together in some feature fk(Xk) (i.e., Xi, Xj ∈ Xk). The clique potentials are constructed from the log-linear features in the obvious way. Conversely, every Markov network can be encoded as a log-linear model by defining a feature which is an indicator function for every assignment of variables xc to a clique Xc. The mapping to Markov networks is useful, as most inference algorithms, such as belief propagation [17, 16], operate on the graph structure of the Markov network. The standard learning problem for MRFs is formulated as follows. We are given a set of IID training instances D = {x[1], . . . , x[M]}, each consisting of a full assignment to the variables in X. Our goal is to output a log-linear model M over X, which consists of a set of features F and a parameter vector θ that specifies a weight for each fk ∈ F.
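To make the definition concrete, here is a minimal illustrative sketch (function and variable names are our own, not from the paper) that computes Pθ(x) by brute-force enumeration of the partition function Z(θ); this is only feasible for tiny n, which is exactly why real models require inference:

```python
# Sketch: a log-linear model over n binary variables, with probabilities
# computed by exact enumeration. Names and the toy feature are illustrative.
import itertools
import math

def loglinear_prob(n, features, thetas):
    """Return a dict mapping each assignment x in {0,1}^n to P_theta(x).

    features: list of functions f_k(x) -> float over full assignments.
    thetas:   matching list of weights theta_k.
    """
    scores = {}
    for x in itertools.product([0, 1], repeat=n):
        scores[x] = math.exp(sum(t * f(x) for f, t in zip(features, thetas)))
    Z = sum(scores.values())            # partition function Z(theta)
    return {x: s / Z for x, s in scores.items()}

# Example: one pairwise indicator feature encouraging X0 == X1.
feats = [lambda x: 1.0 if x[0] == x[1] else 0.0]
P = loglinear_prob(2, feats, [1.0])
```

With a positive weight on the agreement feature, the agreeing assignments (0,0) and (1,1) receive more mass than the disagreeing ones, and the distribution still normalizes to 1.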
The log-likelihood function log P(D | M) has the following form: ℓ(M : D) = Σ_{fk∈F} θk fk(D) − M log Z(θ) = θ⊤f(D) − M log Z(θ), (1) where fk(D) = Σ_{m=1..M} fk(xk[m]) is the sum of the feature values over the entire data set, f(D) is the vector where all of these aggregate features have been arranged in the same order as the parameter vector, and θ⊤f(D) is a vector dot-product operation. There is no closed-form solution for the parameters that maximize Eq. (1), but the objective is concave, and can therefore be optimized using numerical optimization procedures such as conjugate gradient [15] or BFGS [5]. The gradient of the log-likelihood is: ∂ℓ(M : D)/∂θk = fk(D) − M · E_{x∼Pθ}[fk(x)]. (2) This expression has a particularly intuitive form: the gradient attempts to make the feature counts in the empirical data equal to their expected counts relative to the learned model. Note that, to compute the expected feature counts, we must perform inference relative to the current model. This inference step must be performed at every iteration of the gradient process. 3 L1-Regularized Structure Learning We formulate our structure learning problem as follows. We assume that there is a (possibly very large) set of features F, from which we wish to select a subset F ⊆ F for inclusion in the model. This problem is generally solved using a heuristic search over the combinatorial space of possible feature subsets. Our approach addresses it as a search over the possible parameter vectors θ ∈ IR^|F|. Specifically, rather than optimizing the log-likelihood itself, we introduce a Laplacian parameter prior for each feature fk, which takes the form P(θk) = (βk/2) · exp(−βk|θk|). We define P(θ) = Π_k P(θk). We now optimize the posterior probability P(D, θ) = P(D | θ)P(θ) rather than the likelihood. Taking the logarithm and eliminating constant terms, we obtain the following objective: max_θ [θ⊤f(D) − M log Z(θ) − Σ_k βk|θk|]. (3) In most cases, the same prior is used for all features, so we have βk = β for all k.
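The gradient of Eq. (2), empirical feature counts minus expected counts under the model, can be sketched by exact enumeration on a toy model (names are ours; a real implementation would estimate the expectation with approximate inference):

```python
# Sketch of Eq. (2): gradient_k = f_k(D) - M * E_{x ~ P_theta}[f_k(x)].
# Exact enumeration over {0,1}^n, so only viable for tiny models.
import itertools
import math

def gradient(n, features, thetas, data):
    states = list(itertools.product([0, 1], repeat=n))
    # Unnormalized model weights and partition function.
    w = [math.exp(sum(t * f(x) for f, t in zip(features, thetas))) for x in states]
    Z = sum(w)
    M = len(data)
    grad = []
    for f in features:
        empirical = sum(f(x) for x in data)                      # f_k(D)
        expected = sum(wi * f(x) for wi, x in zip(w, states)) / Z
        grad.append(empirical - M * expected)
    return grad

data = [(0, 0), (1, 1), (1, 1), (0, 0)]
feats = [lambda x: 1.0 if x[0] == x[1] else 0.0]
g = gradient(2, feats, [0.0], data)
```

At θ = 0 the model is uniform, so the expected count of the agreement feature is M/2 = 2, while the empirical count is 4, giving a positive gradient of 2 that pushes the weight up, exactly the "match the empirical counts" behavior described in the text.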
This objective is convex, and can be optimized efficiently using methods such as conjugate gradient or BFGS, although care needs to be taken with the discontinuity of the derivative at 0. Thus, in principle, we can simply optimize this objective to obtain its globally optimal parameter assignment. The objective of Eq. (3) should be contrasted with the one obtained for the more standard parameter prior used for log-linear models: the mean-zero Gaussian prior P(θk) ∝ exp(−θk²/2σ²). The Gaussian prior induces a regularization term that is quadratic in θk, which penalizes large features much more than smaller ones. Conversely, L1-regularization still penalizes small terms strongly, thereby forcing parameters to 0. Overall, it is known that, empirically, optimizing an L1-regularized objective leads to a sparse representation with a relatively small number of non-zero parameters. Aside from this intuitive argument, recent theoretical results also provide a formal justification for the use of L1-regularization over other approaches: The analysis of Dudik et al. (2004) and Ng (2004) suggests that this form of regularization is effective at identifying relevant features even with a relatively small number of samples. Building directly on the results of Dudik et al. (2004), we can show the following result: Corollary 3.1: Let X = {X1, . . . , Xn} be a set of variables each of domain size d, and P∗(X) be a distribution. Let F be the set of indicator features over all subsets of variables X ⊂ X of cardinality at most c, and δ, ϵ, B > 0. Let θ∗_B be the parameterization over F that optimizes θ∗_B = argmax_{θ : ∥θ∥1 ≤ B} E_{ξ∼P∗}[ℓ(ξ : θ)]. For a data set D, let θ̂_D be the assignment that optimizes Eq. (3), for regularization parameter βk = β = sqrt(c ln(2nd/δ)/(2m)) for all k. Then with probability at least 1 − δ, for a data set D of IID instances from P∗ of size m ≥ (2cB²/ϵ²) ln(2nd/δ), we have that: E_{ξ∼P∗}[ℓ(ξ : θ̂_D)] ≥ E_{ξ∼P∗}[ℓ(ξ : θ∗_B)] − ϵ.
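A hedged sketch of optimizing Eq. (3): gradient ascent with a soft-thresholding (proximal) step handles the non-differentiable β|θk| term at 0, and drives irrelevant weights to exactly zero. The toy model, step size, and iteration count below are our own illustrative choices, not the paper's; the partition function is computed by exact enumeration:

```python
# Sketch: maximize theta^T f(D) - M log Z(theta) - sum_k beta*|theta_k|
# by gradient ascent plus soft-thresholding (a proximal step for the
# |theta_k| term). Toy model only; all names and constants are ours.
import itertools
import math

def fit_l1(n, features, data, beta, lr=0.05, steps=2000):
    states = list(itertools.product([0, 1], repeat=n))
    M = len(data)
    thetas = [0.0] * len(features)
    emp = [sum(f(x) for x in data) for f in features]        # f_k(D)
    for _ in range(steps):
        w = [math.exp(sum(t * f(x) for f, t in zip(features, thetas)))
             for x in states]
        Z = sum(w)
        for k, f in enumerate(features):
            exp_k = sum(wi * f(x) for wi, x in zip(w, states)) / Z
            t = thetas[k] + lr * (emp[k] - M * exp_k)        # gradient step
            shrink = lr * beta                               # soft-threshold
            thetas[k] = math.copysign(max(abs(t) - shrink, 0.0), t)
    return thetas

# 12 of 14 instances agree on the two bits; the second feature is irrelevant.
data = [(0, 0)] * 6 + [(1, 1)] * 6 + [(0, 1)] * 2
feats = [lambda x: 1.0 if x[0] == x[1] else 0.0,
         lambda x: float(x[0])]
th = fit_l1(2, feats, data, beta=2.0)
```

Here the agreement feature keeps a substantial positive weight, while the irrelevant feature's gradient never exceeds β and its weight stays at exactly 0, the sparsifying effect the text describes.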
In words, using the L1-regularized log-likelihood objective, we can learn a Markov network with a maximal clique size c, whose expected log-likelihood relative to the true underlying distribution is at most ϵ worse than the log-likelihood of the optimal Markov network in this class whose L1-norm is at most B. The number of samples required grows logarithmically in the number of nodes in the network, and polynomially in B. The dependence on B is quite natural, indicating that more samples are required to learn networks containing more “strong” interactions. Note, however, that if we bound the magnitude of each potential in the Markov network, then B = O((nd)^c), so that a polynomial number of data instances suffices. 4 Incremental Feature Introduction The above discussion implicitly assumed that we can find the global optimum of Eq. (3) by simple convex optimization. However, we cannot simply include all of the features in the model in advance, and use only parameter optimization to prune away the irrelevant ones. Recall that computing the gradient requires performing inference in the resulting model. If we have too many features, the model may be too densely connected to allow effective inference. Even approximate inference algorithms, such as belief propagation, tend to degrade as the density of the network increases; for example, BP algorithms are less likely to converge, and the answers they return are typically much less accurate. Thus, our approach also contains a feature introduction component, which gradually selects features to add into the model, allowing the optimization process to search for the optimal values for their parameters. More precisely, our algorithm maintains a set of active features F ⊆ F. An inactive feature fk has its parameter θk set to 0; the parameters of active features are free to be changed when optimizing the objective Eq. (3).
In addition to various simple baseline methods, we explore two feature introduction methods, both of which are greedy and myopic, in that they compute some heuristic estimate of the likely benefit to be gained from introducing a single feature into the active set. The grafting procedure of Perkins et al. [18], which was developed for feature selection in standard classification tasks, selects features based on the gradient of these parameters: We first optimize the objective relative to the current active features F and their weights, so that, at convergence, the gradient relative to these features is zero. Then, for each inactive feature f, we compute the partial derivative of the objective Eq. (3) relative to θf, and select the one whose gradient is largest. A more conservative estimate is obtained from the gain-based method of Della Pietra et al. [6]. This method was designed for the log-likelihood objective. It begins by optimizing the parameters relative to the current active set F. Then, for each inactive feature f, it computes the log-likelihood gain of adding that feature, assuming that we could optimize its feature weight arbitrarily, but that the weights of all other features are held constant. It then introduces the feature with the greatest gain. Della Pietra et al. show that the gain is a concave objective that can be computed efficiently using a one-dimensional line search. For the restricted case of binary-valued features, they provide a closed-form solution for the gain. Our task is to compute not the optimal gain in log-likelihood, but rather the optimal gain of Eq. (3). It is not difficult to see that the gain in this objective, which differs from the log-likelihood in only a linear term, is also a concave function that can be optimized using line search. Moreover, for the case of binary-valued features, we can also provide a closed-form solution for the gain. 
The change in the objective function for introducing a feature fk is: ∆L1 = θk fk(D) − β|θk| − M log[exp(θk)Pθ(fk) + Pθ(¬fk)], where M is the number of training instances. If we take the derivative of the above function and set it to zero, we also get a closed-form solution: θk = log[(fk(D) − β sign(θk)) Pθ(¬fk) / ((M − fk(D) + β sign(θk)) Pθ(fk))]. Both methods are heuristic, in that they consider only the potential gain of adding a single feature in isolation, assuming all other weights are held constant. However, the grafting method is more optimistic, in that it estimates the value of adding a single feature via the slope of adding it, whereas the gain-based approach also considers, intuitively, how far one can go in that direction before the gain “peaks out”. The gain-based heuristic is, in fact, a lower bound on the actual gain obtained from adding this feature, while allowing the other features to also adapt. Overall, the gain-based heuristic provides a better estimate of the value of adding the feature, albeit at slightly greater computational cost (except in the limited cases where a closed-form solution can be found). As observed by Perkins et al. [18], the use of the L1-regularized objective also provides us with a sound stopping criterion for any incremental feature-induction algorithm. If we have that, for every inactive fk ∉ F, the gradient of the log-likelihood satisfies |∂ℓ(M : D)/∂θk| ≤ β, then the gradient of the objective in any direction is non-positive, and the objective is at a stationary point. Importantly, as the overall objective is a concave function, it has a unique global maximum. Hence, once the termination condition is achieved, we are guaranteed that we are at the global maximum, regardless of the feature introduction method used. Thus, assuming the algorithm is run until the convergence criterion is satisfied, the feature introduction heuristic has no impact on the final outcome, but only on the computational complexity.
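The grafting selection rule and the stopping criterion just described can be sketched in a few lines. The gradient values are assumed to be given (in practice they come from inference in the current model), and all names are ours:

```python
# Sketch: grafting-style feature introduction plus the L1 stopping rule.
# Among inactive features, activate the one whose log-likelihood gradient
# is largest in magnitude; stop once every inactive |grad_k| <= beta.
def graft_step(inactive, grads, beta):
    """inactive: feature ids; grads: dict id -> d(log-likelihood)/d(theta_k).

    Returns the id to activate, or None if the termination condition holds.
    """
    best = max(inactive, key=lambda k: abs(grads[k]))
    if abs(grads[best]) <= beta:
        return None      # stationary point of the L1 objective: terminate
    return best

grads = {"f1": 0.3, "f2": -1.7, "f3": 0.9}
picked = graft_step(["f1", "f2", "f3"], grads, beta=1.0)     # activates "f2"
done = graft_step(["f1", "f3"], grads, beta=1.0)             # both below beta
```

As the text notes, the choice of introduction heuristic only affects how quickly this loop reaches the termination condition, not (given correct gradients) the final optimum.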
Finally, constantly evaluating all of the non-active candidate features can be computationally prohibitive when many features are possible. Even in pairwise Markov networks, when the number of nodes is large, a quadratic number of candidate edges can become unmanageable. In this case, we must generally pre-select a smaller set of candidate features, and ignore the others entirely. One very natural method for pre-selecting edges is to train an L1-regularized logistic regression classifier for each variable given all of the others, and then use only the edges that are used in these individual classifiers. This approach is similar to the work of Wainwright et al. [22] (done in parallel with our work), who proposed the use of L1-regularized pseudo-likelihood for asymptotically learning a Markov network structure. 5 The Use of Approximate Inference All of the steps in the above algorithm rely on the use of inference for computing key quantities: The gradient is needed for the parameter optimization, for the grafting method, and for the termination condition, and the expression for the gradient requires the computation of marginal probabilities relative to our current model. Similarly, the computation of the gain also requires inference. As we discussed above, in most of the networks that are useful models for real applications, exact inference is intractable. Therefore, we must resort to approximate inference, which results in errors in the gradient. While many approximate inference methods have been proposed, one of the most commonly used is the general class of loopy belief propagation (BP) algorithms [17, 16, 24]. The use of an approximate inference algorithm such as BP raises several important points. One important issue relates to the computation of the gradient or the gain for features that are currently inactive.
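The edge pre-selection heuristic above (one L1-regularized logistic regression per variable, keeping only edges with non-zero weights) might look as follows. This is an assumed implementation with a simple proximal-gradient solver, not the authors' code; the data, penalty, and thresholds are illustrative choices:

```python
# Sketch: neighborhood-based edge pre-selection. Regress each X_i on all
# other variables with an L1 penalty; keep edge (i, j) if either regression
# assigns X_j a non-zero weight. All names and constants are ours.
import numpy as np

def l1_logreg(X, y, beta, lr=0.1, steps=3000):
    """Proximal-gradient L1 logistic regression with an unpenalized intercept."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])      # last column is the intercept
    w = np.zeros(d + 1)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
        g = Xb.T @ (y - p) / n                # mean log-likelihood gradient
        w = w + lr * g
        # soft-threshold all weights except the intercept
        w[:d] = np.sign(w[:d]) * np.maximum(np.abs(w[:d]) - lr * beta, 0.0)
    return w[:d]

def candidate_edges(X, beta):
    n, d = X.shape
    edges = set()
    for i in range(d):
        others = [j for j in range(d) if j != i]
        w = l1_logreg(X[:, others].astype(float), X[:, i].astype(float), beta)
        for j, wj in zip(others, w):
            if abs(wj) > 1e-6:
                edges.add(tuple(sorted((i, j))))
    return edges

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 400)
x1 = np.bitwise_xor(x0, (rng.random(400) < 0.1).astype(np.int64))  # tied to x0
x2 = rng.integers(0, 2, 400)                                       # independent
X = np.column_stack([x0, x1, x2])
edges = candidate_edges(X, beta=0.05)
```

On this toy data the strongly coupled pair (X0, X1) survives pre-selection, while edges to the independent X2 are shrunk to zero and dropped before any joint MRF learning takes place.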
The belief propagation algorithm, when executed on a particular network with a set of active features F, creates a cluster for every subset of variables Xk that appears as the scope of a feature fk(Xk). The output of the BP inference process is a set of marginal probabilities over all of the clusters; thus, it returns the necessary information for computing the expected sufficient statistics in the gradient of the objective (see Eq. (2)). However, for features fk(Xk) that are currently inactive, there is no corresponding cluster in the induced Markov network, and hence, in most cases, the necessary marginal probabilities over Xk will not be computed by BP. We can approximate this marginal probability by extracting a subtree of the calibrated loopy graph that contains all of the variables in Xk. At convergence of the BP algorithm, every subtree of the loopy graph is calibrated, in that all of the belief potentials must agree [23]. Thus, we can view the subtree as a calibrated clique tree, and use standard dynamic programming methods over the tree (see, e.g., [4]) to extract an approximate joint distribution over Xk. We note that this computation is exact for tree-structured cluster graphs, but approximate otherwise, and that the choice of tree is not obvious, and affects the accuracy of the answers. A second key issue is that the performance of BP algorithms generally degrades significantly as the density of the network increases: they are less likely to converge, and the answers they return are typically much less accurate. Moreover, non-convergence of the inference is more common when the network parameters are allowed to take larger, more extreme values; see, for example, [20, 11, 13] for some theoretical results supporting this empirical phenomenon. Thus, it is important to keep the model amenable to approximate inference for as long as possible, so that learning can continue to make progress. This observation has two important consequences.
First, while different feature introduction schemes achieve the same results when using exact inference, their outcomes can vary greatly when using approximate inference, due to differences in the structure of the networks arising during the learning process. Thus, as we shall see, better feature introduction methods, which introduce the more relevant features first, work much better in practice. Second, in order to keep the inference feasible for as long as possible, we utilize an annealing schedule for the regularization parameter β, beginning with large values of β, leading to greater sparsification of the structure, and then gradually reducing β, allowing additional (weaker) features to be introduced. This method allows a greater part of the learning to be executed with a more robust model. 6 Results In our experiments, we focus on binary pairwise Markov networks, where each feature function is an indicator function for a certain assignment to a pair of nodes. As computing the exact log-likelihood is intractable in most networks, we use the conditional marginal log-likelihood (CMLL) as our evaluation metric on the learned network. To calculate CMLL, we first divide the variables into two groups: Xhidden and Xobserved. Then, for any test instance X[m], we compute CMLL(X[m]) = Σ_{Xh∈Xhidden} log P(xh[m] | xobserved[m]). In practice, we divide the variables into four groups and calculate the average CMLL when observing only one group and hiding the rest. Note that the CMLL is defined only with respect to the marginals (but not the global partition function Z(θ)), which are empirically thought to be more accurate. We considered three feature induction schemes: (a) Gain: based on the estimated gain, (b) Grad: using grafting, and (c) Simple: based on pairwise similarity. Under the Simple scheme, the score of a pairwise feature between Xi and Xj is the mutual information between Xi and Xj.
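The CMLL metric defined above can be computed exactly on a toy joint distribution; the sketch below (our own names) conditions on the observed variables and sums the log conditional marginals of each hidden variable, marginalizing over the remaining hidden ones:

```python
# Sketch of CMLL(x[m]) = sum over hidden h of log P(X_h = x_h | x_observed),
# computed exactly from a tabulated joint over a tiny set of variables.
import math

def cmll(joint, instance, hidden, observed):
    """joint: dict assignment-tuple -> probability over all variables."""
    total = 0.0
    for h in hidden:
        num = den = 0.0
        for x, p in joint.items():
            if all(x[o] == instance[o] for o in observed):
                den += p                       # P(x_observed), marginalized
                if x[h] == instance[h]:
                    num += p                   # P(x_h, x_observed)
        total += math.log(num / den)
    return total

# Toy joint over 2 correlated binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
score = cmll(joint, (0, 0), hidden=[0], observed=[1])
```

Here P(X0 = 0 | X1 = 0) = 0.4 / 0.5 = 0.8, so the score is log 0.8; note that, as the text emphasizes, only conditional marginals are needed, never the global partition function.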
For each scheme, we varied the regularization method: (a) None: no regularization, (b) L1: L1 regularization, and (c) L2: L2 regularization. We note that Gain and Grad performed similarly for L1 and None. Moreover, we used only Grad for L2, because L2 regularization does not admit a closed-form solution for the approximate gain. Experiments on Synthetically Generated Data. We generated synthetic data through Gibbs sampling on a synthetic network. A network structure with N nodes was generated by treating each possible edge as a Bernoulli random variable and sampling the edges. We chose the parameter of the Bernoulli distribution so that each node had K neighbors on average. In order to analyze the dependence of the performance on the size and connectivity of a network, we varied N and K. Figure 1: Results from the experiments on the synthetic data (see text for details). We compare our algorithm using L1 regularization against no regularization and L2 regularization in three different ways. Figure 1 summarizes our results on these data sets, and includes information about the synthetic networks used for each experiment. The method labeled 'True' simply learns the parameters given the true model. In Figure 1(a), we measure performance using CMLL and reconstruction error as the number of training examples increases. As expected, L1 produces the biggest improvement when the number of training instances is small, whereas L2 and None are more prone to overfitting. This effect is much more pronounced when measuring the Hamming distance, the number of disagreeing edges between the learned structure and the true structure. The figure shows that L2 and None learn many spurious edges. Not surprisingly, L1 yields a sparser distribution over the weights, and thereby has a smaller number of edges with non-negligible weights; the structures from None and L2 tend to have many edges with small values. In Figure 1(b), we plot performance as a function of the density of the synthetic network.
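Our reading of the synthetic structure generation is sketched below: each candidate edge is kept with an independent Bernoulli draw of probability p = K/(N − 1), which gives each node K neighbors in expectation. The exact parameterization is an assumption on our part, since the paper does not spell out p:

```python
# Sketch: random pairwise Markov network structure with average degree K.
# Each of the C(N, 2) candidate edges is kept with probability K / (N - 1),
# since every node has N - 1 candidate neighbors. Names are ours.
import itertools
import random

def random_structure(N, K, seed=0):
    rng = random.Random(seed)
    p = K / (N - 1)
    return [e for e in itertools.combinations(range(N), 2) if rng.random() < p]

edges = random_structure(N=40, K=4)
avg_degree = 2 * len(edges) / 40       # should be near K in expectation
```

Varying N and K as in the experiments then amounts to re-running this sampler with different arguments before generating data by Gibbs sampling on the resulting network.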
As the synthetic network gets denser, L1 increasingly outperforms the other algorithms. This may be because, as the graph gets denser, each node is indirectly correlated with more other nodes. Therefore, the feature induction algorithm is more likely to introduce a spurious edge, which L1 may later remove, whereas None and L2 do not. In Figure 1(c), we measure the wall-clock time as a function of the size of the synthetic network. Figure 1(c) shows that the computational cost of learning the structure of the network using Gain-L1 is not much more than that of learning the parameters alone. Moreover, L1 increasingly outperforms the other regularization methods as the number of nodes increases. Experiments on the MNIST Digit Dataset. Moving to real data, we applied our algorithm to handwritten digits. The MNIST training set consists of 32 × 32 binary images of handwritten digits. In order to speed up inference and learning, we resized the images to 16 × 16. We trained a separate model for each digit, where each pixel is a variable, using a training set consisting of 189–195 images per digit. For each digit, we used 50 images as training instances and the remainder as test instances. Figure 2(a) compares the CMLL of the different methods. To save space, we show the digits on which the relative difference in performance of L1 compared to the next best competitor is the lowest (digit 5) and highest (digit 0), as well as the average performance. Figure 2: Results from the experiments on the MNIST dataset. As mentioned earlier, the performance of the regularized algorithm should be insensitive to the feature induction method, assuming inference is exact. However, in practice, because inference is approximate, an induction algorithm that introduces spurious features will affect the quality of inference, and therefore the performance of the algorithm.
This effect is substantiated by the poor performance of the Simple-L1 and Simple-L2 methods, which introduce features based on mutual information rather than the gradient (Grad-) or approximate gain (Gain-). Nevertheless, L1 still outperforms None and L2, regardless of the feature induction algorithm with which it is paired. Figure 2(b) shows a visualization of the MRF learned when modeling digits 4 and 7. Of course, one would expect many short-range interactions, such as the associativity between neighboring pixels, and the algorithm does indeed capture these relationships. (They are not shown in the graph to simplify the analysis of the relationships.) Interestingly, the algorithm picks up long-range interactions, which presumably allow the algorithm to model the variations in the size and shape of hand-written digits. Experiments on Human Genetic Variation Data. The Human HapMap data set 1 represents the genetic variation over human individuals. Six data sets contain the genotype values over 614–1,052 genetic markers (SNPs) from 120 individuals. For each data set, we learned the structure of the Markov network whose nodes are binary-valued SNPs, such that it captures the structure of the human genetic variation. Figure 2(c) compares CMLLs among the three methods for these data sets. For all data sets, L1 shows better performance than L2 and None. 7 Discussion and Future Work We have presented a simple and effective method for learning the structure of Markov networks. We view the structure learning problem as an L1-regularized parameter estimation task, allowing it to be solved using convex optimization techniques. We show that the computational cost of our method is not considerably greater than pure parameter estimation for a fixed structure, suggesting that MRF structure learning is a feasible option for many applications. 1The Human HapMap data are available at: http://www.hapmap.org. There are some important directions in which our work can be extended.
Currently, our method handles each feature in the log-linear model independently, with no attempt to bias the learning towards sparsity in the structure of the induced Markov network. We can extend our approach to introduce such a bias by using a variant of L1 regularization that penalizes blocks of parameters together, such as the block-L1-norm of [2]. From a theoretical perspective, it would be interesting to show that, in the large sample limit, redundant features are eventually eliminated, so that the learning eventually converges to a minimal structure consistent with the underlying distribution. Similar results were shown by Donoho [8], and can perhaps be adapted to this case. A key limiting factor in MRF learning, and in our approach, is the fact that it requires inference over the model. While our experiments suggest that approximate inference is a viable solution, as the network structure becomes dense, its performance does degrade, especially as the approximate gradient does not always move the parameters to 0, diminishing the sparsifying effect of the L1 regularization, and rendering the inference even less precise. It would be interesting to explore inference methods whose goal is correctly estimating the direction (even if not the magnitude) of the gradient. Finally, it would be interesting to explore the viability of the learned network structures in real-world applications, both for density estimation and for knowledge discovery, for example, in the context of the HapMap data. References [1] F. Bach and M. Jordan. Thin junction trees. In NIPS 14, 2002. [2] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm, 2004. [3] The International HapMap Consortium. The International HapMap Project. Nature, 426:789–796, 2003. [4] Robert G. Cowell and David J. Spiegelhalter. Probabilistic Networks and Expert Systems. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1999. [5] H. Daumé III.
Notes on CG and LM-BFGS optimization of logistic regression. August 2004. [6] S. Della Pietra, V. J. Della Pietra, and J. D. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380–393, 1997. [7] A. Deshpande, M. N. Garofalakis, and M. I. Jordan. Efficient stepwise selection in decomposable models. In Proc. UAI, pages 128–135, 2001. [8] D. Donoho and X. Huo. Uncertainty principles and ideal atomic decomposition, 1999. [9] A. Genkin, D. D. Lewis, and D. Madigan. Large-scale Bayesian logistic regression for text categorization. 2004. [10] J. Goodman. Exponential priors for maximum entropy models. In North American ACL, 2005. [11] Alexander T. Ihler, John W. Fisher III, and Alan S. Willsky. Loopy belief propagation: Convergence and effects of message errors. J. Mach. Learn. Res., 6:905–936, 2005. [12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998. [13] Martijn A. R. Leisink and Hilbert J. Kappen. General lower bounds based on computer generated higher order expansions. In UAI, pages 293–300, 2002. [14] A. McCallum. Efficiently inducing features of conditional random fields. In Proc. UAI, 2003. [15] Thomas P. Minka. Algorithms for maximum-likelihood logistic regression. 2001. [16] K. P. Murphy, Y. Weiss, and M. I. Jordan. Loopy belief propagation for approximate inference: an empirical study. In Proc. UAI, pages 467–475, 1999. [17] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988. [18] S. Perkins, K. Lacker, and J. Theiler. Grafting: Fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 3:1333–1356, 2003. [19] Stefan Riezler and Alexander Vasserman. Incremental feature selection and L1 regularization for relaxed maximum-entropy modeling. In Proceedings of EMNLP 2004. [20] Sekhar Tatikonda and Michael I. Jordan.
Loopy belief propagation and Gibbs measures. In UAI, pages 493–500, 2002. [21] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Statist. Soc. B, 1996. [22] Martin J. Wainwright, Pradeep Ravikumar, and John Lafferty. Inferring graphical model structure using ℓ1-regularized pseudo-likelihood. In Advances in Neural Information Processing Systems 19, 2007. [23] Martin J. Wainwright, Erik B. Sudderth, and Alan S. Willsky. Tree-based modeling and estimation of Gaussian processes on graphs with cycles. In Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13, pages 661–667. MIT Press, 2001. [24] Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems 13. MIT Press, 2001. [25] J. Zhu, S. Rosset, T. Hastie, and R. Tibshirani. 1-norm support vector machines. In Proc. NIPS, 2003.
Attribute-efficient learning of decision lists and linear threshold functions under unconcentrated distributions Philip M. Long Google Mountain View, CA plong@google.com Rocco A. Servedio Department of Computer Science Columbia University New York, NY rocco@cs.columbia.edu Abstract We consider the well-studied problem of learning decision lists using few examples when many irrelevant features are present. We show that smooth boosting algorithms such as MadaBoost can efficiently learn decision lists of length k over n Boolean variables using poly(k, log n) many examples provided that the marginal distribution over the relevant variables is “not too concentrated” in an L2-norm sense. Using a recent result of Håstad, we extend the analysis to obtain a similar (though quantitatively weaker) result for learning arbitrary linear threshold functions with k nonzero coefficients. Experimental results indicate that the use of a smooth boosting algorithm, which plays a crucial role in our analysis, has an impact on the actual performance of the algorithm. 1 Introduction A decision list is a Boolean function defined over n Boolean inputs of the following form: if ℓ1 then b1 else if ℓ2 then b2 ... else if ℓk then bk else bk+1. Here ℓ1, ..., ℓk are literals defined over the n Boolean variables and b1, . . . , bk+1 are Boolean values. Since the work of Rivest [24], decision lists have been widely studied in learning theory and machine learning. A question that has received much attention is whether it is possible to attribute-efficiently learn decision lists, i.e. to learn decision lists of length k over n variables using only poly(k, log n) many examples. This question was first asked by Blum in 1990 [3] and has since been re-posed numerous times [4, 5, 6, 29]; as we now briefly describe, a range of partial results have been obtained along different lines.
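A decision list as defined above can be evaluated in a few lines; the representation below (an ordered sequence of (literal, output) pairs with a default bit) is our own encoding for illustration, not anything from the paper:

```python
# Sketch: evaluate a decision list "if l1 then b1 else if l2 then b2 ...
# else b_{k+1}". A literal is (variable index, negated?); the first literal
# that fires determines the output. Encoding and names are ours.
def eval_decision_list(rules, default, x):
    """rules: list of ((index, negated), output_bit); x: tuple of bools."""
    for (i, neg), b in rules:
        lit = (not x[i]) if neg else x[i]
        if lit:
            return b
    return default

# "if x0 then 1 else if not x2 then 0 else 1"
rules = [((0, False), 1), ((2, True), 0)]
out1 = eval_decision_list(rules, 1, (True, False, False))   # x0 fires -> 1
out2 = eval_decision_list(rules, 1, (False, False, False))  # not-x2 fires -> 0
out3 = eval_decision_list(rules, 1, (False, False, True))   # default -> 1
```

Note that only k of the n variables appear in the rules; attribute-efficient learning asks for sample complexity scaling with k and log n, not n.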
Several authors [4, 29] have noted that Littlestone’s Winnow algorithm [17] can learn decision lists of length k using 2^{O(k)} log n examples in time 2^{O(k)} n log n. Valiant [29] and Nevo and El-Yaniv [21] sharpened the analysis of Winnow in the special case where the decision list has only a bounded number of alternations in the sequence of output bits b1, . . . , bk+1. It is well known that the “halving algorithm” (see [1, 2, 19]) can learn length-k decision lists using only O(k log n) examples, but the running time of the algorithm is n^k. Klivans and Servedio [16] used polynomial threshold functions together with Winnow to obtain a tradeoff between running time and the number of examples required, by giving an algorithm that runs in time n^{Õ(k^{1/3})} and uses 2^{Õ(k^{1/3})} log n examples. In this work we take a different approach by relaxing the requirement that the algorithm work under any distribution on examples or in the mistake-bound model. This relaxation in fact allows us to handle not just decision lists, but arbitrary linear threshold functions with k nonzero coefficients. (Recall that a linear threshold function f : {−1, 1}^n → {−1, 1} is a function f(x) = sgn(Σ_{i=1}^n w_i x_i − θ) where the w_i and θ are real numbers and the sgn function outputs the ±1 numerical sign of its argument.) The approach and results. We will analyze a smooth boosting algorithm (see Section 2) together with a weak learner that exhaustively considers all 2n possible literals x_i, ¬x_i as weak hypotheses. The algorithm, which we call Algorithm A, is described in more detail in Section 6. The algorithm’s performance can be bounded in terms of the L2-norm of the distribution over examples. Recall that the L2-norm of a distribution D over a finite set X is ∥D∥₂ := (Σ_{x∈X} D(x)²)^{1/2}.
The L2-norm can be used to evaluate the “spread” of a probability distribution: if the probability is concentrated on a constant number of elements of the domain then the L2-norm is constant, whereas if the probability mass is spread uniformly over a domain of size N then the L2-norm is 1/√N. Our main results are as follows. Let D be a distribution over {−1, 1}^n. Suppose the target function f has k relevant variables. Let Drel denote the marginal distribution over {−1, 1}^k induced by the relevant variables of f (i.e. if the relevant variables are x_{i1}, . . . , x_{ik}, then the value that Drel puts on an input (z1, . . . , zk) is Pr_{x∈D}[x_{i1} . . . x_{ik} = z1 . . . zk]). Let Uk be the uniform distribution over {−1, 1}^k and suppose that ∥Drel∥₂/∥Uk∥₂ = τ. (Note that for any D we have τ ≥ 1, since Uk has minimal L2-norm among all distributions over {−1, 1}^k.) Then we have: Theorem 1 Suppose the target function is an arbitrary decision list in the setting described above. Then given poly(log n, 1/ϵ, τ, log(1/δ)) examples, Algorithm A runs in poly(n, τ, 1/ϵ, log(1/δ)) time and with probability 1 − δ constructs a hypothesis h that is ϵ-accurate with respect to D. Theorem 2 Suppose the target function is an arbitrary linear threshold function in the setting described above. Then given poly(k, log n, 2^{Õ((τ/ϵ)²)}, log(1/δ)) examples, Algorithm A runs in poly(n, 2^{Õ((τ/ϵ)²)}, log(1/δ)) time and with probability 1 − δ constructs a hypothesis h that is ϵ-accurate with respect to D. Relation to Previous Work. Jackson and Craven [14] considered a similar approach of using Boolean literals as weak hypotheses for a boosting algorithm (in their case, AdaBoost). Jackson and Craven proved that for any distribution over examples, the resulting algorithm requires poly(K, log n) examples to learn any weight-K linear threshold function, i.e.
any function of the form sgn(Pn i=1 wixi −θ) over Boolean variables where all weights wi are integers and Pn i=1 |wi| ≤K (this clearly implies that there are at most K relevant variables). It is well known [12, 18] that general decision lists of length k can only be expressed by linear threshold functions of weight 2Ω(k), and thus the result of [14] does not give an attribute efficient learning algorithm for decision lists. More recently Servedio [27] considered essentially the same algorithm we analyze in this work by specifically studying smooth boosting algorithms with the “best-single-variable” weak learner. He considered a general linear threshold learning problem (with no assumption that there are few relevant variables) and showed that if the distribution satisfies a margin condition then the algorithm has some level of resilience to malicious noise. The analysis of this paper is different from that of [27]; to the best of our knowledge ours is the first analysis in which the smoothness property of boosting is exploited for attribute efficient learning. 2 Boosting and Smooth Boosting Fix a target function f : {−1, 1}n →{−1, 1} and a distribution D over {−1, 1}n. A hypothesis function h : {−1, 1}n →{−1, 1} is a γ-weak hypothesis for f with respect to D if ED[fh] ≥γ. We sometimes refer to ED[fh] as the advantage of h with respect to f. We remind the reader that a boosting algorithm is an algorithm which operates in a sequence of stages and at each stage t maintains a distribution Dt over {−1, 1}n. At stage t the boosting algorithm is given a weak hypothesis ht for f with respect to D; the boosting algorithm then uses this to construct the next distribution Dt+1 over {−1, 1}n. After T such stages the boosting algorithm constructs a final hypothesis h based on the weak hypotheses h1, . . . , hT that is guaranteed to have high accuracy with respect to the initial distribution D. See [25] for more details. Let D1, D2 be two distributions. 
For κ ≥ 1 we say that D1 is κ-smooth with respect to D2 if for all x ∈ {−1, 1}^n, D1(x)/D2(x) ≤ κ. Following [15], we say that a boosting algorithm B is κ(ϵ, γ)-smooth if for any initial distribution D and any distribution Dt that is generated starting from D when B is used to boost to ϵ-accuracy with γ-weak hypotheses at each stage, Dt is κ(ϵ, γ)-smooth w.r.t. D. It is known that there are algorithms that are κ-smooth for κ = Θ(1/ϵ) with no dependence on γ, see e.g. [8]. For the rest of the paper B will denote such a smooth boosting algorithm. It is easy to see that every distribution D which is 1/ϵ-smooth w.r.t. the uniform distribution U satisfies ∥D∥₂/∥U∥₂ ≤ √(1/ϵ). On the other hand, there are distributions D that are highly non-smooth relative to U but which still have ∥D∥₂/∥U∥₂ small. For instance, the distribution D over {−1, 1}^k which puts weight 1/2^{k/2} on a single point and distributes the remaining weight uniformly on the other 2^k − 1 points is only 2^{k/2}-smooth (i.e. very non-smooth) but satisfies ∥D∥₂/∥Uk∥₂ = Θ(1). Thus the L2-norm condition we consider in this paper is a weaker condition than smoothness with respect to the uniform distribution. 3 Total variation distance and L2-norm of distributions The total variation distance between two probability distributions D1, D2 over a finite set X is d_TV(D1, D2) := max_{S⊆X} (D1(S) − D2(S)) = (1/2) Σ_{x∈X} |D1(x) − D2(x)|. It is easy to see that the total variation distance between any two distributions is at most 1, and equals 1 if and only if the supports of the distributions are disjoint. The following is immediate: Lemma 1 For any two distributions D1 and D2 over a finite domain X, we have d_TV(D1, D2) = 1 − Σ_{x∈X} min{D1(x), D2(x)}. We can bound the total variation distance between a distribution D and the uniform distribution in terms of the ratio ∥D∥₂/∥U∥₂ of the L2-norms as follows: Lemma 2 For any distribution D over a finite domain X, if U is the uniform distribution over X, we have d_TV(D, U) ≤ 1 − ∥U∥₂²/(4∥D∥₂²).
Proof: Let M = ∥D∥₂/∥U∥₂. Since ∥D∥₂² = E_{x∼D}[D(x)], we have E_{x∼D}[D(x)] = M²∥U∥₂² = M²/|X|. By Markov’s inequality, Pr_{x∼D}[D(x) ≥ 2M²U(x)] = Pr_{x∼D}[D(x) ≥ 2M²/|X|] ≤ 1/2. (1) By Lemma 1, we have 1 − d_TV(D, U) = Σ_x min{D(x), U(x)} ≥ Σ_{x: D(x)≤2M²U(x)} min{D(x), U(x)} ≥ Σ_{x: D(x)≤2M²U(x)} D(x)/(2M²) ≥ 1/(4M²), where the second inequality uses the fact that M ≥ 1 (so D(x)/(2M²) < D(x)) and the third inequality uses (1). Using the definition of M and solving for d_TV(D, U) completes the proof. 4 Weak hypotheses for decision lists Let f be any decision list that depends on k variables: if ℓ1 then output b1 else · · · else if ℓk then output bk else output bk+1 (2) where each ℓi is either “(xi = 1)” or “(xi = −1).” The following folklore lemma can be proved by an easy induction (see e.g. [12, 26] for proofs of essentially equivalent claims): Lemma 3 The decision list f can be represented by a linear threshold function of the form f(x) = sgn(c1x1 + · · · + ckxk − θ) where each ci = ±2^{k−i} and θ is an even integer in the range [−2^k, 2^k]. It is easy to see that for any fixed c1, . . . , ck as in the lemma, as x = (x1, . . . , xk) varies over {−1, 1}^k the linear form c1x1 + · · · + ckxk will assume each odd integer value in the range [−2^k, 2^k] exactly once. Now we can prove: Lemma 4 Let f be any decision list of length k over the n Boolean variables x1, . . . , xn. Let D be any distribution over {−1, 1}^n, and let Drel denote the marginal distribution over {−1, 1}^k induced by the k relevant variables of f. Suppose that d_TV(Drel, Uk) ≤ 1 − η. Then there is some weak hypothesis h ∈ {x1, −x1, . . . , xn, −xn, 1, −1} which satisfies E_{Drel}[fh] ≥ η²/16. Proof: We first observe that by Lemma 3 and the well-known “discriminator lemma” of [23, 11], under any distribution D some weak hypothesis h from {x1, −x1, . . . , xn, −xn, 1, −1} must have E_D[fh] ≥ 1/2^k. This immediately establishes the lemma for all η ≤ 4/2^{k/2}, and thus we may suppose w.l.o.g. that η > 4/2^{k/2}.
We may assume w.l.o.g. that f is the decision list (2), that is, that the first literal concerns x1, the second concerns x2, and so on. Let L(x) denote the linear form c1x1 + · · · + ckxk − θ from Lemma 3, so f(x) = sgn(L(x)). If x is drawn uniformly from {−1, 1}^k, then L(x) is distributed uniformly over the 2^k odd integers in the interval [−2^k − θ, 2^k − θ], as c1x1 is uniform over ±2^{k−1}, c2x2 over ±2^{k−2}, and so on. Let S denote the set of those x ∈ {−1, 1}^k that satisfy |L(x)| ≤ (η/4)2^k. Note that there are at most (η/4)2^k + 1 elements in S, corresponding to L(x) = ±1, ±3, . . . , ±(2j − 1), where j is the greatest integer such that 2j − 1 ≤ (η/4)2^k. Since η > 4/2^{k/2}, certainly |S| ≤ 1 + (η/4)2^k ≤ (η/2)2^k. We thus have Pr_{Uk}[|L(x)| > (η/4)2^k] ≥ 1 − η/2. It follows that Pr_{Drel}[|L(x)| > (η/4)2^k] ≥ η/2 (for otherwise we would have d_TV(Drel, Uk) > 1 − η), and consequently we have E_{Drel}[|L(x)|] ≥ (η²/8)2^k. Now we follow the simple argument used to prove the “discriminator lemma” [23, 11]. We have E_{Drel}[|L(x)|] = E_{Drel}[f(x)L(x)] = c1E[f(x)x1] + · · · + ckE[f(x)xk] − θE[f(x)] ≥ (η²/8)2^k. (3) Recalling that each |ci| = 2^{k−i}, it follows that some h ∈ {x1, −x1, . . . , xn, −xn, 1, −1} must satisfy E_{Drel}[fh] ≥ ((η²/8)2^k)/(2^{k−1} + · · · + 2^0 + |θ|). Since |θ| ≤ 2^k this is at least η²/16, and the proof is complete. 5 Weak hypotheses for linear threshold functions Now we consider the more general setting of arbitrary linear threshold functions. Though there are additional technical complications the basic idea is as in the previous section. We will use the following fact due to Håstad: Fact 3 (Håstad) (see [28], Theorem 9) Let f : {−1, 1}^k → {−1, 1} be any linear threshold function that depends on all k variables x1, . . . , xk. There is a representation sgn(Σ_{i=1}^k w_i x_i − θ) for f which is such that (assuming the weights w1, . . . , wk are ordered by decreasing magnitude 1 = |w1| ≥ |w2| ≥ · · · ≥ |wk| > 0) we have |wi| ≥ 1/(i!(k+1)) for all i = 2, . . . , k. The main result of this section is the following lemma.
The proof uses ideas from the proof of Theorem 2 in [28]. Lemma 5 Let f : {−1, 1}^n → {−1, 1} be any linear threshold function that depends on k variables. Let D be any distribution over {−1, 1}^n, and let Drel denote the marginal distribution over {−1, 1}^k induced by the k relevant variables of f. Suppose that d_TV(Drel, Uk) ≤ 1 − η. Then there is some weak hypothesis h ∈ {x1, −x1, . . . , xn, −xn, 1, −1} which satisfies E_{Drel}[fh] ≥ 1/(k² · 2^{Õ(1/η²)}). Proof sketch: We may assume that f(x) = sgn(L(x)) where L(x) = w1x1 + · · · + wkxk − θ with w1, . . . , wk as described in Fact 3. Let ℓ := Õ(1/η²) = O((1/η²)poly(log(1/η))). (We will specify ℓ in more detail later.) Suppose first that ℓ ≥ k. By a well-known result of Muroga et al. [20], every linear threshold function f that depends on k variables can be represented using integer weights each of magnitude 2^{O(k log k)}. Now the discriminator lemma [11] implies that for any distribution P, for some h ∈ {x1, −x1, . . . , xn, −xn, 1, −1} we have E_P[fh] ≥ 1/2^{O(k log k)}. If ℓ ≥ k and ℓ = O((1/η²)poly(log(1/η))), we have k log k = Õ(1/η²). Thus, in this case, E_P[fh] ≥ 1/2^{Õ(1/η²)}, so the lemma holds if ℓ ≥ k. Thus we henceforth assume that ℓ < k. It remains only to show that E_{Drel}[|L(x)|] ≥ 1/(k · 2^{Õ(1/η²)}); (4) once we have this, following (3) we get E_{Drel}[|L(x)|] = E_{Drel}[fL] = w1E[f(x)x1] + · · · + wkE[f(x)xk] − θE[f(x)] ≥ 1/(k · 2^{Õ(1/η²)}), and now since each |wi| ≤ 1 (and w.l.o.g. |θ| ≤ k) this implies that some h satisfies E_{Drel}[fh] ≥ 1/(k² · 2^{Õ(1/η²)}) as desired. Similar to [28] we consider two cases (which are slightly different from the cases in [28]). Case I: For all 1 ≤ i ≤ ℓ we have w_i²/(Σ_{j=i}^k w_j²) > η²/576. Let α := √(2 ln(8/η) · Σ_{j=ℓ+1}^k w_j²). Recall the following version of Hoeffding’s bound: for any 0 ≠ w ∈ R^k and any γ > 0, we have Pr_{x∈{−1,1}^k}[|w · x| ≥ γ∥w∥] ≤ 2e^{−γ²/2} (where we write ∥w∥ to denote √(Σ_{i=1}^k w_i²)). This bound directly gives us that Pr_{x∈Uk}[|w_{ℓ+1}x_{ℓ+1} + · · · + w_k x_k| ≥ α] ≤ 2e^{−2 ln(8/η)/2} = η/4.
(5) Moreover, the argument in [28] that establishes equation (4) of [28] also yields Pr_{x∈Uk}[|w1x1 + · · · + wℓxℓ − θ| ≤ 2α] ≤ η/4 (6) in our current setting. (The only change that needs to be made to the argument of [28] is adjusting various constant factors in the definition of ℓ.) Equations (5) and (6) together yield Pr_{x∈Uk}[|w1x1 + · · · + wkxk − θ| ≥ α] ≥ 1 − η/2. Now as before, taken together with the d_TV bound this yields Pr_{Drel}[|L(x)| ≥ α] ≥ η/2 and hence we have E_{Drel}[|L(x)|] ≥ ηα/2. Since α > w_{ℓ+1} and w_{ℓ+1} ≥ 1/((k + 1)(ℓ + 1)!) by Fact 3, we have established (4) in Case I. Case II: For some value J ≤ ℓ we have w_J²/(Σ_{i=J}^k w_i²) ≤ η²/576. Let us fix any setting z ∈ {−1, 1}^{J−1} of the variables x1, . . . , x_{J−1}. By an inequality due to Petrov [22] (see [28], Theorem 4) we have Pr_{x_J,...,x_k ∈ U_{k−J+1}}[|w1z1 + · · · + w_{J−1}z_{J−1} + w_J x_J + · · · + w_k x_k − θ| ≤ w_J] ≤ 6w_J/√(Σ_{i=J}^k w_i²) ≤ 6η/24 = η/4. Thus for each z ∈ {−1, 1}^{J−1} we have Pr_{x∈Uk}[|L(x)| ≤ w_J | x1 . . . x_{J−1} = z1 . . . z_{J−1}] ≤ η/4. This immediately yields Pr_{x∈Uk}[|L(x)| > w_J] ≥ 1 − η/4, which in turn gives Pr_{x∈Drel}[|L(x)| > w_J] ≥ 3η/4 and hence E_{Drel}[|L(x)|] ≥ 3ηw_J/4 by our usual arguments. Now (4) follows using Fact 3 and J ≤ ℓ. 6 Putting it all together Algorithm A works by running a Θ(1/ϵ)-smooth boosting-by-filtering algorithm; for concreteness we use the MadaBoost algorithm of Domingo and Watanabe [8]. At the t-th stage of boosting, when MadaBoost simulates the distribution Dt, the weak learning algorithm works as follows: O((log n + log(1/δ′))/γ²) many examples are drawn from the simulated distribution Dt, and these examples are used to obtain an empirical estimate of E_{Dt}[fh] for each h ∈ {x1, −x1, . . . , xn, −xn, −1, 1}. (Here γ is an upper bound on the advantage E_{Dt}[fh] of the weak hypotheses used at each stage; we discuss this more below.) The weak hypothesis used at this stage is the one with the highest observed empirical estimate. The algorithm is run for T = O(1/(ϵγ²)) stages of boosting.
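The per-stage search over weak hypotheses can be sketched as follows. This is a simplified sketch: it scores the candidates on an explicit sample rather than on draws simulated by MadaBoost, and all function names are our own.

```python
import random

def best_literal(sample, n):
    """Pick the weak hypothesis h in {x_i, -x_i, +1, -1} maximizing the
    empirical advantage E[f(x) h(x)] over a sample of (x, label) pairs.
    sample: list of (x, y) with x a tuple in {-1,+1}^n and y in {-1,+1}."""
    m = len(sample)
    best, best_adv = None, -2.0
    # Candidate hypotheses: every coordinate literal, its negation, and
    # the two constant hypotheses.
    candidates = [('lit', i, s) for i in range(n) for s in (+1, -1)]
    candidates += [('const', None, s) for s in (+1, -1)]
    for kind, i, s in candidates:
        if kind == 'lit':
            adv = sum(y * s * x[i] for x, y in sample) / m
        else:
            adv = sum(y * s for x, y in sample) / m
        if adv > best_adv:
            best_adv, best = adv, (kind, i, s)
    return best, best_adv

# Target f(x) = x_0 on uniform examples: the literal x_0 has advantage 1.
random.seed(0)
sample = []
for _ in range(200):
    x = tuple(random.choice((-1, +1)) for _ in range(5))
    sample.append((x, x[0]))
h, adv = best_literal(sample, 5)
print(h, adv)  # ('lit', 0, 1) 1.0
```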
Consider any fixed stage t of the algorithm’s execution. As shown in [8], at most O(1/ϵ) draws from the original distribution D are required for MadaBoost to simulate a draw from the distribution Dt. (This is a direct consequence of the fact that MadaBoost is O(1/ϵ)-smooth; the distribution Dt is simulated using rejection sampling from D.) Standard tail bounds show that if the best hypothesis h has E[fh] ≥ γ then with probability 1 − δ′ the hypothesis selected will have E[fh] ≥ γ/2. In [8] it is shown that if MadaBoost always has an Ω(γ)-accurate weak hypothesis at each stage, then after at most T = O(1/(ϵγ²)) stages the algorithm will construct a hypothesis which has error at most ϵ. Thus it suffices to take δ′ = O(δϵ²γ). The overall number of examples used by Algorithm A is O((log n + log(1/δ′))/(ϵ²γ⁴)). Thus to establish Theorems 1 and 2, it remains only to show that for any initial distribution D with ∥Drel∥₂/∥Uk∥₂ = τ, the distributions Dt that arise in the course of boosting are always such that the best weak hypothesis h ∈ {x1, −x1, . . . , xn, −xn, −1, 1} has sufficiently large advantage. Suppose f is a target function that depends on some set of k (out of n) variables. Consider what happens if we run a 1/ϵ-smooth boosting algorithm, where the initial distribution D satisfies ∥Drel∥₂/∥Uk∥₂ = τ. At each stage we will have Drel_t(x) ≤ (1/ϵ) · Drel(x) for all x ∈ {−1, 1}^k, and consequently we will have ∥Drel_t∥₂² = Σ_{x∈{−1,1}^k} Drel_t(x)² ≤ (1/ϵ²) Σ_{x∈{−1,1}^k} Drel(x)² ≤ (τ²/ϵ²) Σ_{x∈{−1,1}^k} Uk(x)². Thus, by Lemma 2 each distribution Dt will satisfy d_TV(Drel_t, Uk) ≤ 1 − ϵ²/(4τ²). Now Lemmas 4 and 5 imply that in both cases (decision lists and LTFs) the best weak hypothesis h does indeed have the required advantage. 7 Experiments The smoothness property enabled the analysis of this paper. Is smoothness really helpful for learning decision lists with respect to diffuse distributions? Is it critical? This section is aimed at addressing these questions experimentally.
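The randomly generated decision-list targets used in the comparison below can be produced as follows. This is a sketch of the stated setup, with literals and output bits drawn independently and uniformly; helper names are ours, and for simplicity we do not enforce that the chosen variables are distinct.

```python
import random

def random_decision_list(k, n, rng):
    """Generate a random length-k decision list over n Boolean variables:
    each literal picks a variable index and a sign, and each output bit
    (including the default b_{k+1}) is a uniform random sign."""
    rules = [(rng.randrange(n), rng.choice((-1, +1)), rng.choice((-1, +1)))
             for _ in range(k)]
    default = rng.choice((-1, +1))
    return rules, default

def evaluate(rules, default, x):
    for i, s, b in rules:
        if x[i] == s:
            return b
    return default

rng = random.Random(1)
rules, default = random_decision_list(10, 100, rng)
# Label a uniform example with the random target, as in the experiments.
x = tuple(rng.choice((-1, +1)) for _ in range(100))
y = evaluate(rules, default, x)
print(y in (-1, +1))  # True
```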
We compared the accuracy of the classifiers output by a number of smooth boosters from the literature with AdaBoost (which is known not to be a smooth booster in general, see e.g. Section 4.2 of [7]) on synthetic data in which the examples were distributed uniformly, and the class designations were determined by applying a randomly generated decision list. The number of relevant variables was fixed at 10. The decision list was determined by picking ℓ1, ..., ℓ10 and b1, ..., b11 from (2) independently and uniformly at random from among the possibilities. We evaluated the following algorithms: (a) AdaBoost [9], (b) MadaBoost [8], (c) SmoothBoost [27], and (d) a smooth booster proposed by Gavinsky [10]. Due to space constraints, we cannot describe each of these in detail.1 Each booster was used to reweight the training data, and in each round, the literal which minimized the weighted training error was chosen. Some of the algorithms choose the number of rounds of boosting as a function of the desired accuracy; instead, we ran all algorithms for 100 rounds.

1 Very roughly speaking, AdaBoost reweights the data to assign more weight to examples that previously chosen base classifiers have often classified incorrectly; it then outputs a weighted vote over the outputs of the base classifiers, where each voting weight is determined as a function of how well its base classifier performed. MadaBoost modifies AdaBoost to place a cap on the weight, prior to normalization. SmoothBoost [27] caps the weight more aggressively as learning progresses, but also reweights the data and weighs the base classifiers in a manner that does not depend on how well they performed. The manner in which Gavinsky's booster updates weights is significantly different from AdaBoost, and reminiscent of [13, 15].

m     n     Ada    Mada   Gavinsky  SB(0.05)  SB(0.1)  SB(0.2)  SB(0.4)
100   100   0.086  0.077  0.088     0.071     0.067    0.077    0.089
200   100   0.052  0.045  0.050     0.067     0.047    0.047    0.051
500   100   0.022  0.018  0.024     0.056     0.031    0.025    0.031
1000  100   0.016  0.014  0.024     0.063     0.036    0.028    0.033
100   1000  0.123  0.119  0.116     0.093     0.101    0.117    0.128
200   1000  0.079  0.072  0.083     0.071     0.064    0.072    0.081
500   1000  0.045  0.039  0.045     0.050     0.040    0.040    0.044
1000  1000  0.033  0.026  0.035     0.048     0.038    0.032    0.036
Table 1: Average test set error rate

m     n     Ada   Mada  Gavinsky  SB(0.05)  SB(0.1)  SB(0.2)  SB(0.4)
100   100   13.6  8.8   11.7      3.9       6.0      7.5      9.1
200   100   19.8  13.1  12.5      4.1       6.9      9.4      9.9
500   100   32.2  20.7  15.2      5.0       9.1      11.5     12.2
1000  100   37.2  19.2  15.3      7.1       10.7     12.1     13.0
100   1000  13.3  7.7   26.8      3.7       5.3      6.1      7.4
200   1000  19.8  11.5  19.4      4.4       7.4      9.5      11.7
500   1000  28.1  16.7  16.2      4.9       8.6      10.9     11.5
1000  1000  36.7  20.1  14.7      7.2       11.0     12.1     13.3
Table 2: Average smoothness

All boosters reweighted the data by normalizing some function that assigns weight to examples based on how well previously chosen base classifiers are doing at classifying them correctly. The booster proposed by Gavinsky might set all of these weights to zero: in such cases, it was terminated. For each choice of the number of examples m and the number of features n, we repeated the following steps: (a) generate a random target, (b) generate m random examples, (c) split them into a training set with 2/3 of the examples and a test set with the remaining 1/3, (d) apply all the algorithms to the training set, and (e) apply all the resulting classifiers to the test set. We repeated the steps enough times so that the total size of the test sets was at least 10000; that is, we repeated them ⌈30000/m⌉ times. The average test-set error is reported. SmoothBoost [27] has two parameters, γ and θ. In his analysis, θ = γ/(2+γ), so we used the same setting.
We tried his algorithm with γ set to each of 0.05, 0.1, 0.2 and 0.4. The test set error rates are tabulated in Table 1. MadaBoost always improved on the accuracy of AdaBoost. The results are consistent with the possibility that AdaBoost learns decision lists attribute-efficiently with respect to the uniform distribution; this motivates theoretical study of whether this is true. One possible route is to prove that, for sources like this, AdaBoost is, with high probability, a smooth boosting algorithm. The average smoothnesses are given in Table 2. SmoothBoost [27] was seen to be fairly robust to the choice of γ; with a good choice it sometimes performed the best. This motivates research into adaptive boosters along the lines of SmoothBoost. References [1] D. Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1988. [2] J. Barzdin and R. Freivald. On the prediction of general recursive functions. Soviet Mathematics Doklady, 13:1224–1228, 1972. [3] A. Blum. Learning Boolean functions in an infinite attribute space. In Proceedings of the Twenty-Second Annual Symposium on Theory of Computing, pages 64–72, 1990. [4] A. Blum. On-line algorithms in machine learning. Available at http://www.cs.cmu.edu/~avrim/Papers/pubs.html, 1996. [5] A. Blum, L. Hellerstein, and N. Littlestone. Learning in the presence of finitely or infinitely many irrelevant attributes. Journal of Computer and System Sciences, 50:32–40, 1995. [6] A. Blum and P. Langley. Selection of relevant features and examples in machine learning. Artificial Intelligence, 97(1-2):245–271, 1997. [7] N. Bshouty and D. Gavinsky. On boosting with optimal poly-bounded distributions. Journal of Machine Learning Research, 3:483–506, 2002. [8] C. Domingo and O. Watanabe. MadaBoost: a modified version of AdaBoost. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, pages 180–189, 2000. [9] Y. Freund and R. Schapire.
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997. [10] Dmitry Gavinsky. Optimally-smooth adaptive boosting and application to agnostic learning. Journal of Machine Learning Research, 4:101–117, 2003. [11] A. Hajnal, W. Maass, P. Pudlak, M. Szegedy, and G. Turan. Threshold circuits of bounded depth. Journal of Computer and System Sciences, 46:129–154, 1993. [12] S. Hampson and D. Volper. Linear function neurons: structure and training. Biological Cybernetics, 53:203–217, 1986. [13] R. Impagliazzo. Hard-core distributions for somewhat hard problems. In Proceedings of the Thirty-Sixth Annual Symposium on Foundations of Computer Science, pages 538–545, 1995. [14] J. Jackson and M. Craven. Learning sparse perceptrons. In NIPS 8, pages 654–660, 1996. [15] A. Klivans and R. Servedio. Boosting and hard-core sets. Machine Learning, 53(3):217–238, 2003. Preliminary version in Proc. FOCS’99. [16] A. Klivans and R. Servedio. Toward attribute efficient learning of decision lists and parities. In Proceedings of the 17th Annual Conference on Learning Theory,, pages 224–238, 2004. [17] N. Littlestone. Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning, 2:285–318, 1988. [18] M. Minsky and S. Papert. Perceptrons: an introduction to computational geometry. MIT Press, Cambridge, MA, 1968. [19] T. Mitchell. Generalization as search. Artificial Intelligence, 18:203–226, 1982. [20] S. Muroga, I. Toda, and S. Takasu. Theory of majority switching elements. J. Franklin Institute, 271:376– 418, 1961. [21] Z. Nevo and R. El-Yaniv. On online learning of decision lists. Journal of Machine Learning Research, 3:271–301, 2002. [22] V. V. Petrov. Limit theorems of probability theory. Oxford Science Publications, Oxford, England, 1995. [23] G. Pisier. Remarques sur un resultat non publi’e de B. Maurey. Sem. 
d’Analyse Fonctionelle, 1(12):1980– 81, 1981. [24] R. Rivest. Learning decision lists. Machine Learning, 2(3):229–246, 1987. [25] R. Schapire. Theoretical views of boosting. In Proc. 10th ALT, pages 12–24, 1999. [26] R. Servedio. On PAC learning using Winnow, Perceptron, and a Perceptron-like algorithm. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, pages 296–307, 1999. [27] R. Servedio. Smooth boosting and learning with malicious noise. Journal of Machine Learning Research, 4:633–648, 2003. Preliminary version in Proc. COLT’01. [28] R. Servedio. Every linear threshold function has a low-weight approximator. In Proceedings of the 21st Conference on Computational Complexity (CCC), pages 18–30, 2006. [29] L. Valiant. Projection learning. Machine Learning, 37(2):115–130, 1999.
|
2006
|
132
|
2,958
|
Mixture Regression for Covariate Shift Amos J Storkey Institute of Adaptive and Neural Computation School of Informatics, University of Edinburgh a.storkey@ed.ac.uk Masashi Sugiyama Department of Computer Science Tokyo Institute of Technology sugi@cs.titech.ac.jp Abstract In supervised learning there is a typical presumption that the training and test points are taken from the same distribution. In practice this assumption is commonly violated. The situation where the training and test data are from different distributions is called covariate shift. Recent work has examined techniques for dealing with covariate shift in terms of minimisation of generalisation error. As yet the literature lacks a Bayesian generative perspective on this problem. This paper tackles this issue for regression models. Recent work on covariate shift can be understood in terms of mixture regression. Using this view, we obtain a general approach to regression under covariate shift, which reproduces previous work as a special case. The main advantages of this new formulation over previous models for covariate shift are that we no longer need to presume the test and training densities are known, the regression and density estimation are combined into a single procedure, and previous methods are reproduced as special cases of this procedure, shedding light on the implicit assumptions the methods are making. 1 Introduction There is a common presumption in developing supervised methods that the distribution of training points used for learning supervised models will match the distribution of points seen in a new test scenario. The expectation that the training and test points follow the same distribution is explicitly stated in [2, p. 10], is an assumption of empirical risk minimisation [see e.g. 9, p. 25], and is implicit in the common practice of randomly splitting given data into a “training set” and a “test set”, where the latter is used in assessing performance [5, p. 482-495].
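A toy illustration of why the matched-distribution assumption matters (our own example, not from the paper): when the model class is misspecified, an ordinary least-squares fit that is accurate on the training covariates can degrade badly on shifted test covariates, even though P(y|x) itself is unchanged.

```python
import random

random.seed(0)
# True relation: y = x^2 (noise-free for clarity); the model class is
# linear, y ~ a*x + b, so the model is misspecified.
train_x = [random.uniform(0.0, 1.0) for _ in range(500)]   # training covariates
test_x  = [random.uniform(1.0, 2.0) for _ in range(500)]   # shifted test covariates

def fit_least_squares(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

a, b = fit_least_squares(train_x, [x * x for x in train_x])

def mse(xs):
    return sum((a * x + b - x * x) ** 2 for x in xs) / len(xs)

# The fit is good where the training data lives, poor under the shifted
# test distribution, even though P(y|x) is identical in both environments.
print(mse(train_x) < 0.02 < mse(test_x))  # True
```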
This paper, then, is concerned with the following issue. A set of real valued training data pairs of the form (x, y) is provided to train a model for a supervised learning problem. In addition, data of the form x is provided from one (or more) test environments where the model will be used. The question to be addressed is “How should we predict a value of y given a value x from within that particular test environment?” Cases where test scenarios truly match the training data are probably rare. The problem of mismatch has been grappled with in the literature of a number of fields, and has become known as covariate shift [14]. Specific examples of covariate shift include situations in reinforcement learning [c.f. 13] and bio-informatics [c.f. 1]. The common issue of sample selection bias [7] is a particular case of covariate shift. Much of the recent analysis of covariate shift has been made in the context of assessing the asymptotic bias of various estimators [15]. In general it has been noted that in the case of mismatched models (i.e. where the model from which the training data is generated is not included in the training model class), some typical estimators, such as least squares approaches, produce asymptotically biased estimators [14]. It might appear that the presumption of matched models in Bayesian analysis means covariate shift is not an issue: failure or otherwise under situations of covariate shift is solved by a valid choice of the prior distribution over conditional models. The difficulty with this dismissal of the subject is that modelling conditional distributions alone is not always valid. In fact we can categorise at least three different types of covariate shift: 1. Independent covariate shift: Ptrain(y|x) = Ptest(y|x), but Ptrain(x) ≠ Ptest(x). 2. Dependent prior probability change: Ptrain(x|y) = Ptest(x|y), but Ptrain(y) ≠ Ptest(y). 3.
Latent prior probability change: Ptrain(x, y|r) = Ptest(x, y|r) for all values of some latent variable r, but Ptrain(r) ̸= Ptest(r). Let us presume that we are only interested in the quality of the conditional model Ptest(y|x). Then Case 1 is the only one of the above where covariate shift will have no effect on modelling. Case 2 is the well known situation of class prior probability change and, for example, is considered in comparing the benefits of a naive Bayes model, which allows for class prior probability change, and discriminant models, which typically do not. Case 3 involves a more general assumption, and arguably can be used to cover most situations of covariate shift, by incorporating any known structural characteristics of the problem into some latent variable r. Change in the distribution of x points implicitly informs us about variation in the targets y via the shift in the latent variable r, which is the causal factor for the change. The purpose of this paper is to provide a generative framework for analysis of covariate shift. The main advantages of this new formulation over previous approaches are • It provides an explicit model for the changes occurring in situations of covariate shift, and hence the predictions that result from it. • There is no need to presume the training and test distributions are known. Furthermore the test covariates are also used as part of the model estimation procedure, resulting in better predictions. • Previous results, such as Importance Weighted Least Squares, are special cases of this method with explicit presumptions that can be relaxed to gain more general models. Hence this paper is a natural extension to the existing work. • Utilising the test covariate distribution gives performance benefits over using the same model for training data alone. • All the usual machinery for mixture of experts are available, and so this approach allows model selection and many natural extensions. Outline. 
In Section 2, related work is discussed, before the problem is formally specified and a general model is derived in Section 3. A specific form of mixture regression model is formulated and an Expectation Maximisation solution is given in Section 3.1. The specific relationship to Importance Weighted Least Squares is discussed in Section 3.1.2. Test examples are given in Section 4. The results and methods are discussed in Section 5. 2 Prior work Covariate shift will be interpreted, in the context of this work, using mixture of regressor models, where the regression model is dependent on a latent class variable. Clustered regression models have been discussed widely [4, 18, 8, 16]. The benefits of the mixture of regressor approach for heterogeneous data were discussed in [17], but not formulated specifically for the problem of covariate shift. This paper establishes for the first time the relationship between the mixture of regressor model and the typical statistical results in the literature on covariate shift. The main differences of our approach from a standard mixture of regressor formalism are that we utilise the training and test distributions as part of the model and do not use only a conditional model, and we allow coupling of regressors across different mixture components. The main significance with regard to the literature on covariate shift is that we establish covariate shift within a general probabilistic modelling paradigm and hence extend the standard techniques to establish more general methods, which are also applicable when the training and test distributions are not explicitly given. The mixture of regressors form for (x, y) used in this paper is a specific form of mixture of experts [10]. Hence hierarchical extensions are also possible in the form of [11]. The problem of sample selection bias is related to covariate shift.
Sample selection bias has been discussed in [19], where the distribution determining the bias is estimated for a classification problem. The problem of sample selection bias differs from the case in this paper in two ways. First, here there is no fundamental requirement of distribution overlap between the training and test sets: each can have zero density in regions where the other is non-zero. Second, the presumption is different: rather than a sample rejection process characterising the difference between training and test sets, there is a sample production process that differs.

3 Framework for Covariate Shift

This paper follows most others in considering the restricted case of a single training set and a single test set. Each datum x is assumed to have been generated from one of a number of data sources using a mixture distribution corresponding to the source. The proportions of each of the sources vary across the training and test datasets. Hence, in the context of this paper, we understand covariate shift to be effected by a change in the contribution of different sources to the data. The motivation for the framework in this paper is that there is a latent feature set upon which each dataset is dependent, and the variations between the two datasets are due to variation in the proportions, but not the form, of those latent features. This is characterised by presuming each data source is a member of one of two different sets. Each of the two sets of sources is also associated with a regression model. The two sets of sources have the following characteristics:
• Source set 1 corresponds to sources that may occur in the test data, and potentially also in the training data, and is associated with regression model P1(y|x).
• Source set 2 corresponds to sources that occur only in the training data, and is associated with regression model P2(y|x).
By taking this approach we note that we will be able to separate out effects that we expect to be characteristic only of the training data from effects that are common across the training and test sets. The full generative model for the observed data consists of a model for the training data D and a model for the test data T. The test data is used only to determine the nature of the covariate shift, and consists only of the covariates x, not any targets y. We emphasise that we do not presume to have seen the test data we wish to predict. Rather, a prior model is built for the training and test data, and this is then conditioned on the information from the training data and the known covariates for the test data, but not the unknown targets.

3.1 Mixture Regression for Covariate Shift

In this section the full model is introduced. This significantly extends the previous work on covariate shift, in that the model allows for unknown training and test distributions, and utilises a mixture model approach for representing the relationship between the two. In Section 3.1.2, we will show how previous results on covariate shift are special cases of the general model. We will develop this formalism for any parametric form of the regressors P(y|x). In fact this restriction is mainly for ease of explanation; the method can be used with non-parametric models too, and will be tested in the case of Gaussian process models¹. The model takes the following form:
• The distributions of the training data and test data are denoted PD and PT respectively, and are unknown in general.
• Source set 1 consists of M1 mixture distributions, where mixture t is denoted P1t(x). Each of these components is associated² with regression model P1(y|x).
• Source set 2 consists of M2 mixture distributions, where mixture t is denoted P2t(x). Each of these components is associated with regression model P2(y|x).
¹The primary restriction is that we need to be able to compute standard EM responsibilities for a given regressor; hence for Gaussian processes a variational approximation is needed to do this.
²If a component i is associated with a regression model j, this means that any datum x generated from mixture component i will also have a corresponding y generated from the associated regression model Pj(y|x).
• The training and test data distributions take the following form:

$$P_D(x) = \sum_t \left[\beta_1 \gamma^D_{1t} P_{1t}(x) + \beta_2 \gamma^D_{2t} P_{2t}(x)\right] \quad\text{and}\quad P_T(x) = \sum_t \gamma^T_{1t} P_{1t}(x). \tag{1}$$

Hence β1 and β2 are parameters for the proportions of the two source sets in the training data, the γ^D_{1t} are the relative proportions of each mixture from source set 1 in the training data, and the γ^D_{2t} are the relative proportions of each mixture from source set 2 in the training data. Finally, the γ^T_{1t} are the proportions of each mixture from source set 1 in the test data. All these parameters are presumed unknown. At some points in the paper the mixtures will be presumed Gaussian, in which case the form N(x; m, K) will be used to denote the Gaussian distribution function of x, with mean m and covariance K. For a parametric model, with the collection of mixture parameters denoted by Ω, the collection of regression parameters denoted by Θ, and the mixing proportions γ and β, we have the full probabilistic model

$$P(\{s_\mu, t_\mu, y_\mu, x_\mu \mid \mu \in D\}, \{t_\nu, x_\nu \mid \nu \in T\} \mid \beta, \gamma, \Theta, \Omega) = \prod_{\mu\in D} P(s_\mu \mid \beta)\, P(t_\mu \mid \gamma, s_\mu)\, P_{s_\mu t_\mu}(x_\mu \mid \Omega)\, P_{s_\mu}(y_\mu \mid x_\mu, \Theta) \prod_{\nu\in T} P(t_\nu \mid \gamma)\, P_{1 t_\nu}(x_\nu \mid \Omega), \tag{2}$$

where s_µ denotes the source set used to generate data point µ, and t_µ denotes the particular mixture from that source set used to generate data point µ. In words, this says that the model for the training dataset involves sampling the particular source set s_µ, then the mixture component t_µ from that source set. Given these, we then sample an x_µ from the relevant mixture and a y_µ conditionally on x_µ from the relevant regressor.
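As a concrete illustration, the generative story of Eqs. (1) and (2) can be sampled directly. The sketch below uses made-up parameter values (two source sets, 1-D Gaussian components, linear regressors with Gaussian noise); all names and values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: source set 1 has two components, set 2 has one.
beta = np.array([0.7, 0.3])                        # P(s) in the training data
gamma_D = [np.array([0.5, 0.5]), np.array([1.0])]  # P(t | s) in the training data
means = [np.array([-1.0, 1.0]), np.array([3.0])]   # component means per source set
stds = [np.array([0.5, 0.5]), np.array([0.7])]
slopes = np.array([1.0, -2.0])                     # linear regressor P_s(y | x) per set
intercepts = np.array([0.0, 5.0])

def sample_training(n):
    """Sample (s, t, x, y) tuples following Eq. (2): pick a source set,
    then a mixture component, then x from the component, then y from the
    regressor associated with the source set."""
    s = rng.choice(2, size=n, p=beta)
    t = np.array([rng.choice(len(gamma_D[si]), p=gamma_D[si]) for si in s])
    x = np.array([rng.normal(means[si][ti], stds[si][ti]) for si, ti in zip(s, t)])
    y = slopes[s] * x + intercepts[s] + rng.normal(0.0, 0.1, size=n)
    return s, t, x, y
```

Sampling the test set would follow the same recipe restricted to source set 1, with its own proportions γ^T.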
The same procedure is followed for the test set, except that now there is only one source set to consider.

3.1.1 EM algorithm

A maximum likelihood solution for the parameters (β, γ, Θ, Ω) can be obtained for this model (given the training data and test covariates) using Expectation Maximisation (EM) [3]. The derivations are standard EM calculations (see e.g. [2]), and hence are not reiterated here. Denote the responsibility of mixture i for data point µ by α^µ_i. Then the application of EM involves maximisation of

$$\log P(\{y_\mu, x_\mu \mid \mu \in D\}, \{x_\nu \mid \nu \in T\} \mid \beta, \gamma, \Theta, \Omega) \tag{3}$$

with respect to the parameters through iteration of E and M steps. The E-step update uses current parameter values to compute the responsibility (denoted by α) of each mixture 1t and 2t for each data point µ in the training set and each data point ν in the test set using

$$\alpha^\mu_{st} = \frac{\beta_s \gamma^D_{st}\, P_{st}(x_\mu \mid \Omega)\, P_s(y_\mu \mid x_\mu, \Theta)}{\sum_{s,t} \beta_s \gamma^D_{st}\, P_{st}(x_\mu \mid \Omega)\, P_s(y_\mu \mid x_\mu, \Theta)} \quad\text{and}\quad \alpha^\nu_{1t} = \frac{\gamma^T_{1t}\, P_{1t}(x_\nu \mid \Omega)}{\sum_t \gamma^T_{1t}\, P_{1t}(x_\nu \mid \Omega)}. \tag{4}$$

We set α^ν_{2t} = 0 for ν ∈ T, as none of these mixtures are represented in the test set. The parameters of the mixture model distributions are then updated with the usual M steps for the relevant mixture component, and the regression parameters are updated using maximum responsibility-weighted likelihood. When each mixture component is a Gaussian of the form N(x; m_{st}, K_{st}), and we have a Gaussian regression error term, then, denoting the (vector of) regression functions by f_s for each source set s, these update rules are:

$$m_{st} = \frac{\sum_{\mu\in(D,T)} \alpha^\mu_{st}\, x_\mu}{\sum_{\mu\in(D,T)} \alpha^\mu_{st}}, \qquad K_{st} = \frac{\sum_{\mu\in(D,T)} \alpha^\mu_{st}\, (x_\mu - m_{st})(x_\mu - m_{st})^T}{\sum_{\mu\in(D,T)} \alpha^\mu_{st}} \tag{5}$$

$$\beta_s = \frac{1}{|D|}\sum_{\mu\in D,\, t} \alpha^\mu_{st}, \qquad \gamma^D_{st} = \frac{1}{|D|\,\beta_s}\sum_{\mu\in D} \alpha^\mu_{st}, \qquad \gamma^T_{1t} = \frac{1}{|T|}\sum_{\nu\in T} \alpha^\nu_{1t} \tag{6}$$

$$f_s = \operatorname{argmin}_{f_s} \left[\sum_{\mu, t} \alpha^\mu_{st}\, \left(f_s(x_\mu) - y_\mu\right)^T \left(f_s(x_\mu) - y_\mu\right)\right] \tag{7}$$

Given the learnt model, inference is straightforward. The test data is associated with a single regression model P1(y|x), and so the predictive distribution for the test set is the learnt predictor P1(y|xi) for each point xi in the test set.
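The E and M steps above can be sketched in code. The version below is deliberately simplified, assuming one Gaussian component per source set, 1-D covariates, linear regressors with a fixed noise variance, and test covariates contributing with responsibility one to the component shared with the test set; all function and variable names are our own, not the paper's:

```python
import numpy as np

def em_mrcs(x_tr, y_tr, x_te, n_iter=50, seed=0):
    """Simplified EM for the mixture-of-regressors covariate shift model
    (Eqs. 4-7): source set 0 is shared with the test data, set 1 is
    training-only. A sketch, not a faithful reimplementation."""
    rng = np.random.default_rng(seed)
    m = rng.normal(0.0, 1.0, 2)            # component means
    v = np.ones(2)                          # component variances
    beta = np.array([0.5, 0.5])             # source-set proportions
    w = np.zeros((2, 2))                    # per-set regressor [slope, intercept]
    sig2 = 1.0                              # fixed regression noise variance
    X = np.stack([x_tr, np.ones_like(x_tr)], 1)
    for _ in range(n_iter):
        # E-step on the training data (Eq. 4)
        like = np.empty((len(x_tr), 2))
        for s in range(2):
            px = np.exp(-(x_tr - m[s]) ** 2 / (2 * v[s])) / np.sqrt(2 * np.pi * v[s])
            py = np.exp(-(y_tr - X @ w[s]) ** 2 / (2 * sig2)) / np.sqrt(2 * np.pi * sig2)
            like[:, s] = beta[s] * px * py
        r = (like + 1e-300) / (like + 1e-300).sum(1, keepdims=True)
        # M-step (Eqs. 5-7); test covariates also update shared component 0
        for s in range(2):
            xs = np.concatenate([x_tr, x_te]) if s == 0 else x_tr
            rs = np.concatenate([r[:, 0], np.ones_like(x_te)]) if s == 0 else r[:, 1]
            m[s] = (rs * xs).sum() / rs.sum()
            v[s] = (rs * (xs - m[s]) ** 2).sum() / rs.sum() + 1e-6
            # responsibility-weighted least squares for the regressor (Eq. 7)
            A = X * r[:, s][:, None]
            w[s] = np.linalg.solve(X.T @ A + 1e-8 * np.eye(2), A.T @ y_tr)
        beta = r.mean(0)
    return m, v, beta, w
```

Prediction on the test set then uses only the source-set-0 regressor `w[0]`, as described in the text.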
3.1.2 Importance Weighted Least Squares

Previous results in modelling covariate shift can be obtained as special cases of the general approach taken in this paper. Suppose we make the assumptions that PD and PT are known, and that source set 1 contains just one component, which must be PT by definition. Suppose also that the two regressors have a large and identical variance Σ. In this simple case, we do not need to know the actual test points (in this framework these are only used to infer the test distribution, which is assumed given here). The M-step update only involves an update to the regressor. For the E-step we use the approximation P(y_µ|x_µ, Θ1) ≈ P(y_µ|x_µ, Θ2), which becomes asymptotically true in the case of infinite variance Σ. The resulting E and M steps are

$$\alpha_\mu \approx \frac{P_T(x_\mu)\,\beta_1}{P_D(x_\mu)} \quad\text{and}\quad f_1 = \operatorname{argmin}_{f_1}\left[\sum_\mu \alpha_\mu\, \left(f_1(x_\mu) - y_\mu\right)^T \left(f_1(x_\mu) - y_\mu\right)\right], \tag{8}$$

where we note that β1 is a common constant and can be dropped from the calculations. Hence we never need to learn β1 or the parameters associated with mixture 2 in this procedure. Also, no iterative EM procedure is needed, as the E-step is independent of the M-step results; this is a one-shot process. This is the Importance Weighted Least Squares estimator for covariate shift [14]. A simple extension of this model allows the large-variance assumption to be relaxed, so that the model can use the regressor information for computing responsibilities.

4 Examples

4.1 Generated Test Data

We demonstrate the mixture of regressors approach to covariate shift (MRCS) on generated test data: a one-dimensional regression problem with two sources, each corresponding to a different linear regressor. Regression performance for MRCS with Gaussian mixtures and linear regressors is compared with three other cases. The first is an importance weighted least squares estimator (IWLS) given the best mixture model fit for the data, corresponding to the current standard for modelling covariate shift.
The second uses a mixture of regressors model that ignores the form of the test data, but chooses the regressor corresponding to the mixtures which best match the test data distribution using a KL divergence measure (MRKL). This corresponds to recognising that covariate shift can happen, but ignoring the nature of the test distribution in the modelling process, and trying to choose the best of the two regressors. The third case is where the mixture of regressors is used simply as a standard regression model, ignoring the possibility of covariate shift (MRREG). The generative procedure for each of the 100 test datasets involves generating random parameter values for between 1 and 3 mixtures for each of two linear regressors. Test and training datasets of 200 data points each are generated from these mixtures and regression models, using different mixing proportions in each case. The various approaches were run 8 times with different random starting parameters for all methods. 80 iterations of EM were used. A fixed number of iterations was chosen to allow reasonable comparison. Analysis was done for fixed model sizes and for model choice using a Bayesian Information Criterion (BIC). Even though the regularity conditions for BIC do not hold for mixture models, it has been shown that BIC is consistent in this case [12]. It has also been shown to be a good choice on practical grounds [6]. The results of these tests show the significant benefits of explicit recognition of covariate shift over straight regression even compared with the use of the same mixture of regressors model, but without reference to the test distribution. It also shows benefits of the approach of this paper over the current state of the art for modelling covariate shift. Table 1 gives the result of these approaches for various fixed choices of numbers of mixtures associated with each regressor. 
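The IWLS baseline of Eq. (8) used in these comparisons is easy to write down. Below is a minimal sketch for a 1-D linear model, assuming, as the special case does, that the training and test densities are available as known callables (an assumption of IWLS, not of the full MRCS model):

```python
import numpy as np

def iwls_linear(x_tr, y_tr, p_train, p_test):
    """One-shot Importance Weighted Least Squares (Eq. 8): weight each
    training point by p_test(x)/p_train(x) and solve the weighted normal
    equations for a 1-D linear model [slope, intercept]."""
    w = p_test(x_tr) / p_train(x_tr)            # importance weights alpha_mu
    X = np.stack([x_tr, np.ones_like(x_tr)], 1)
    A = X * w[:, None]
    return np.linalg.solve(X.T @ A, A.T @ y_tr)
```

Note that, unlike the EM procedure, no iteration is needed: the weights do not depend on the fitted regressor.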
Independent of the use of any model order choice, the Mixture of Regressors for Covariate Shift (MRCS) performs better than the other approaches. Table 1 also gives the results when the Bayesian Information Criterion is used for selecting the number of mixtures. Again MRCS performs best, and gives better performance on the test data in more than 70 percent of the test cases. To illustrate the difference between the methods, Figure 1 plots the results of training an MRCS model on some one-dimensional data using a regularised cubic regressor. The fit to the test data is also shown. Once again this is compared with IWLS and MRKL. It can be seen that both IWLS and MRKL fail to untangle the regressors associated with the overlapping central clusters in the training data and hence perform badly in that region of the test data.

[Figure 1 appears here as six panels, (a)-(f).]

Figure 1: Nonlinear regression using covariate shift. (a),(c),(e) Training set fit and (b),(d),(f) test data with predictions for MRCS (top), IWLS (middle) and MRKL (bottom) respectively. In (a),(c),(e), the '.' data labels mark points for which the test regressor has greater responsibility, and the '+' data labels mark points for which the training-only regressor has greater responsibility.

Table 1: Average mean square error over all 100 datasets for each choice of fixed model mixture size. The actual number of mixtures in the data varies. MRCS: mixture of regressors for covariate shift. IWLS: importance weighted least squares. MRKL: mixture of regressors, evaluated on the regressor with best fit to the test distribution. MRREG: mixture of regressors as a standard regression model, ignoring covariate shift.
The sixth row gives the average mean square error over all 100 datasets, with the number of mixtures chosen using a Bayesian information criterion in each case, and the last row gives the proportion of times MRCS performs better than the other cases for a BIC choice of model. P-values: if two of the approaches were equivalent performers, empirically better performance in 70/100 or more cases would occur in less than 1 × 10⁻⁴ of such trials.

              MRCS     IWLS     MRKL     MRREG
  1 Mixture   0.588    0.797    3.274    0.890
  2 Mixtures  0.536    0.804    2.673    0.881
  3 Mixtures  0.601    0.831    3.390    0.887
  4 Mixtures  0.623    0.817    2.823    0.894
  5 Mixtures  0.612    0.837    2.817    0.898
  BIC Choice  0.6100   0.7990   2.8638   0.8813
  MRCS better          77/100   72/100   84/100

4.2 Auto-Mpg Test

It is useful to see that the approach does indeed make a noticeable difference on data that takes the appropriate prior form, but that says nothing about how appropriate that prior is for real problems. Here we demonstrate the method on the auto-mpg problem from the UCI repository, which provides a natural scenario for demonstrating covariate shift. The auto-mpg data can be found at http://www.ics.uci.edu/~mlearn/MLSummary.html and involves the problem of predicting the city-cycle fuel consumption of cars. One of the attributes is a class label giving the origin of a particular car. To demonstrate covariate shift we can consider the prediction task trained on cars from one place of origin and tested on cars from another place of origin. Here we consider predicting the fuel consumption (attribute 1) using the four continuous attributes. We train the model using data on cars from origin 1, and test on cars from origin 2 and origin 3. We use the same test algorithms as in the previous section, but now using a Gaussian process regressor for each regression function. The results of running this are given in Table 2. The Gaussian process hyper-parameters were optimised separately for each case.
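The covariate-shift split used here (train on one origin, test on the others) can be sketched generically. Data loading is not shown, and `origin` is assumed to be the per-row origin attribute of the auto-mpg data:

```python
import numpy as np

def origin_split(X, y, origin, train_origin=1):
    """Build a covariate-shift benchmark as in Section 4.2: train on rows
    from one origin, test on all remaining rows."""
    tr = origin == train_origin
    return X[tr], y[tr], X[~tr], y[~tr]
```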
These are results obtained using a Bayesian Information Criterion for selecting the number of mixtures, between 1 and 14, for each of the cases. We obtain similar results if we compare methods with various fixed numbers of mixtures. Critically, we note that all covariate shift methods performed better than a straight Gaussian process predictor in this situation. The mixture of Gaussian processes did not perform as well as the methods which explicitly recognised the covariate shift, although interestingly it did perform better than a straight Gaussian process predictor. Again MRCS performed best overall.

Table 2: Tests of methods on the auto-mpg dataset. These are the (standardised) mean squared errors for each approach. GP denotes the use of Gaussian process regression for prediction. Origin 2 and Origin 3 denote the two different car origins used to test the model.

             GP      MRCS    IWLS    MRKL     MRREG
  Origin 2   1.192   0.600   0.700   1.2243   0.7397
  Origin 3   0.898   0.568   0.691   1.3862   0.706

5 Discussion

This paper establishes that explicit generative modelling of covariate shift can bring improvements over conditional regression models, or over standard covariate shift methods that ignore the dependent data in the modelling process. The method is also better than using an identical mixture of regressors model for the training data alone, as it utilises the positions of the independent test points to help refine the mixture locations and the separation of regressors. We expect significant improvements can be made with a fully Bayesian treatment of the parameters. This framework is currently being extended to the case of multiple training and test datasets using a fully Bayesian scheme, and will be the subject of future work. In this setting we have a topic model, similar to Latent Dirichlet Allocation, where each dataset is built from a number of contributing regression components, and each component is expressed in different proportions in each dataset.
The model and tests of this paper show that this multiple dataset extension could well be fruitful.

6 Conclusion

In this paper a novel approach to the problem of covariate shift has been developed that is demonstrably better than state-of-the-art regression approaches, and better than the current standard for covariate shift. These have been tested both on generated data and on a real covariate-shift problem derived from a standard UCI dataset. Importance Weighted Least Squares is shown to be a special case. Specifically, we provide explicit modelling of the covariate shift process by assuming a shift in the proportions of a number of latent components. A mixture of regressors model is used for this purpose, but it differs from a standard mixture of regressors by allowing sharing of the regression functions between mixture components and by explicitly including a model for the test set as part of the process.

References

[1] P. Baldi, S. Brunak, and G. A. Stolovitzky. Bioinformatics: The Machine Learning Approach. MIT Press, Cambridge, 1998.
[2] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[3] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39:1–38, 1977.
[4] W.S. DeSarbo and W.L. Cron. A maximum likelihood methodology for clusterwise linear regression. Journal of Classification, 5:249–282, 1988.
[5] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley Interscience, 2001.
[6] C. Fraley and A.E. Raftery. How many clusters? Which clustering method? Answers via model-based cluster analysis. Computer Journal, 41:578–588, 1998.
[7] J. J. Heckman. Sample selection bias as a specification error. Econometrica, 47:153–162, 1979.
[8] C. Hennig. Identifiability of models for clusterwise linear regressions. Journal of Classification, 17:273–296, 2000.
[9] R. Herbrich. Learning Kernel Classifiers. MIT Press, 2002.
[10] R.A.
Jacobs, M.I. Jordan, S.J. Nowlan, and G.E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3:79–87, 1991.
[11] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6:181–214, 1994.
[12] C. Keribin. Consistent estimation of the order of mixture models. Technical report, Université d'Evry-Val d'Essonne, Laboratoire Analyse et Probabilité, 1997.
[13] C.R. Shelton. Importance Sampling for Reinforcement Learning with Multiple Objectives. PhD thesis, Massachusetts Institute of Technology, 2001.
[14] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90:227–244, 2000.
[15] M. Sugiyama and K.-R. Müller. Input-dependent estimation of generalisation error under covariate shift. Statistics and Decisions, 23:249–279, 2005.
[16] H.G. Sung. Gaussian Mixture Regression and Classification. PhD thesis, Rice University, 2004.
[17] J.K. Vermunt. A general non-parametric approach to unobserved heterogeneity in the analysis of event history data. In J. Hagenaars and A. McCutcheon, editors, Applied Latent Class Models. Cambridge University Press, 2002.
[18] M. Wedel and W.S. DeSarbo. A mixture likelihood approach for generalised linear models. Journal of Classification, 12:21–55, 1995.
[19] B. Zadrozny. Learning and evaluating classifiers under sample selection bias. In Proceedings of ICML, 2004.
Implicit Online Learning with Kernels Li Cheng S.V. N. Vishwanathan National ICT Australia li.cheng@nicta.com.au SVN.Vishwanathan@nicta.com.au Dale Schuurmans Department of Computing Science University of Alberta, Canada dale@cs.ualberta.ca Shaojun Wang Department of Computer Science and Engineering Wright State University shaojun.wang@wright.edu Terry Caelli National ICT Australia terry.caelli@nicta.com.au Abstract We present two new algorithms for online learning in reproducing kernel Hilbert spaces. Our first algorithm, ILK (implicit online learning with kernels), employs a new, implicit update technique that can be applied to a wide variety of convex loss functions. We then introduce a bounded memory version, SILK (sparse ILK), that maintains a compact representation of the predictor without compromising solution quality, even in non-stationary environments. We prove loss bounds and analyze the convergence rate of both. Experimental evidence shows that our proposed algorithms outperform current methods on synthetic and real data. 1 Introduction Online learning refers to a paradigm where, at each time t, an instance xt ∈X is presented to a learner, which uses its parameter vector ft to predict a label. This predicted label is then compared to the true label yt, via a non-negative, piecewise differentiable, convex loss function L(xt, yt, ft). The learner then updates its parameter vector to minimize a risk functional, and the process repeats. Kivinen and Warmuth [1] proposed a generic framework for online learning where the risk functional, Jt(f), to be minimized consists of two terms: a Bregman divergence between parameters ∆G(f, ft) := G(f) −G(ft) −⟨f −ft, ∂fG(ft)⟩, defined via a convex function G, and the instantaneous risk R(xt, yt, f), which is usually given by a function of the instantaneous loss L(xt, yt, f). 
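Before the updates are derived, it may help to see the flavor of such an online kernel learner in code. The sketch below uses the common special case G(f) = ½‖f‖² with a binary hinge loss in an RBF RKHS; the kernel choice, 1-D inputs, and class names are our own illustrative simplifications, and the derivation of these update rules follows in the text:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel for scalar inputs (an illustrative choice)."""
    return np.exp(-gamma * (a - b) ** 2)

class ExplicitKernelSGD:
    """Explicit (gradient-descent) online update with hinge loss
    L = max(0, 1 - y f(x)); the predictor is a kernel expansion whose
    coefficients decay by (1 - eta*lam) each round."""
    def __init__(self, lam=0.1, C=1.0, eta=0.5):
        self.lam, self.C, self.eta = lam, C, eta
        self.sv, self.alpha = [], []          # expansion points and coefficients

    def predict(self, x):
        return sum(a * rbf(xi, x) for xi, a in zip(self.sv, self.alpha))

    def step(self, x, y):
        violated = y * self.predict(x) < 1.0  # hinge subgradient evaluated at f_t
        decay = 1.0 - self.eta * self.lam     # from the regularizer term
        self.alpha = [decay * a for a in self.alpha]
        if violated:                           # subgradient of the loss is -y k(x, .)
            self.sv.append(x)
            self.alpha.append(self.eta * self.C * y)
```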
The parameter updates are then derived via the principle

$$f_{t+1} = \operatorname{argmin}_f J_t(f) := \operatorname{argmin}_f \left\{ \Delta_G(f, f_t) + \eta_t R(x_t, y_t, f) \right\}, \tag{1}$$

where η_t is the learning rate. Since J_t(f) is convex, (1) is solved by setting the gradient (or, if necessary, a subgradient) to 0. Using the fact that ∂_f Δ_G(f, f_t) = ∂_f G(f) − ∂_f G(f_t), one obtains

$$\partial_f G(f_{t+1}) = \partial_f G(f_t) - \eta_t\, \partial_f R(x_t, y_t, f_{t+1}). \tag{2}$$

Since it is difficult to determine ∂_f R(x_t, y_t, f_{t+1}) in closed form, an explicit update, as opposed to the above implicit update, uses the approximation ∂_f R(x_t, y_t, f_{t+1}) ≈ ∂_f R(x_t, y_t, f_t) to arrive at the more easily computable expression [1]

$$\partial_f G(f_{t+1}) = \partial_f G(f_t) - \eta_t\, \partial_f R(x_t, y_t, f_t). \tag{3}$$

In particular, if we set G(f) = ½‖f‖², then Δ_G(f, f_t) = ½‖f − f_t‖² and ∂_f G(f) = f, and we obtain the familiar stochastic gradient descent update

$$f_{t+1} = f_t - \eta_t\, \partial_f R(x_t, y_t, f_t). \tag{4}$$

We are interested in applying online learning updates in a reproducing kernel Hilbert space (RKHS). To lift the above update into an RKHS, H, one typically restricts attention to f ∈ H and defines [2]

$$R(x_t, y_t, f) := \frac{\lambda}{2}\|f\|_H^2 + C \cdot L(x_t, y_t, f), \tag{5}$$

where ‖·‖_H denotes the RKHS norm, λ > 0 is a regularization constant, and C > 0 determines the penalty imposed on point prediction violations. Recall that if H is an RKHS of functions on X × Y, then its defining kernel k : (X × Y)² → R satisfies the reproducing property, namely that ⟨f, k((x, y), ·)⟩_H = f(x, y) for all f ∈ H. Therefore, by making the standard assumption that L depends on f only via its evaluations f(x, y), one reaches the conclusion that ∂_f L(x, y, f) ∈ H, and in particular

$$\partial_f L(x, y, f) = \sum_{\tilde y \in Y} \beta_{\tilde y}\, k((x, \tilde y), \cdot), \tag{6}$$

for some β_{˜y} ∈ R. Since ∂_f R(x_t, y_t, f_t) = λ f_t + C · ∂_f L(x_t, y_t, f_t), one can use (4) to obtain an explicit update f_{t+1} = (1 − η_t λ) f_t − η_t C · ∂_f L(x_t, y_t, f_t), which combined with (6) shows that there must exist coefficients α_{i,˜y} fully specifying f_{t+1} via

$$f_{t+1} = \sum_{i=1}^{t} \sum_{\tilde y \in Y} \alpha_{i,\tilde y}\, k((x_i, \tilde y), \cdot).$$
(7)

In this paper we propose an algorithm, ILK (implicit online learning with kernels), that solves (2) directly, while still expressing updates in the form (7). That is, we derive a technique for computing the implicit update, in an RKHS, that can be applied to many popular loss functions, including quadratic, hinge, and logistic losses, as well as their extensions to structured domains (see e.g. [3]). We also provide a general recipe to check whether a new convex loss function is amenable to these implicit updates. Furthermore, to reduce the memory requirement of ILK, which grows linearly with the number of observations (instance-label pairs), we propose a sparse variant, SILK (sparse ILK), that approximates the decision function f by truncating past observations with insignificant weights.

2 Implicit Updates in an RKHS

As shown in (1), to perform an implicit update one needs to minimize Δ_G(f, f_t) + η_t R(x_t, y_t, f). Replacing R(x_t, y_t, f) with (5), and setting G(f) = ½‖f‖²_H, one obtains

$$f_{t+1} = \operatorname{argmin}_f J(f) = \operatorname{argmin}_f \left\{ \frac{1}{2}\|f - f_t\|_H^2 + \eta_t \left[ \frac{\lambda}{2}\|f\|_H^2 + C \cdot L(x_t, y_t, f) \right] \right\}. \tag{8}$$

Since L is assumed convex with respect to f, setting ∂_f J = 0 and using an auxiliary variable τ_t = η_t λ / (1 + η_t λ) yields

$$f_{t+1} = (1 - \tau_t)\, f_t - (1 - \tau_t)\, \eta_t C\, \partial_f L(x_t, y_t, f_{t+1}). \tag{9}$$

On the other hand, from the form (7) it follows that f_{t+1} can also be written as

$$f_{t+1} = \sum_{i=1}^{t-1} \sum_{\tilde y \in Y} \alpha_{i,\tilde y}\, k((x_i, \tilde y), \cdot) + \sum_{\tilde y \in Y} \alpha_{t,\tilde y}\, k((x_t, \tilde y), \cdot), \tag{10}$$

for some α_{j,˜y} ∈ R, j = 1, . . . , t. Since ∂_f L(x_t, y_t, f_{t+1}) = Σ_{˜y∈Y} β_{t,˜y} k((x_t, ˜y), ·), and, for ease of exposition, assuming a fixed step size (learning rate) η_t = 1 and consequently τ_t = τ, it follows from (9) and (10) that

$$\alpha_{i,\tilde y} \leftarrow (1-\tau)\, \alpha_{i,\tilde y} \quad \text{for } i = 1, \ldots, t-1 \text{ and } \tilde y \in Y, \tag{11}$$

$$\alpha_{t,\tilde y} = -(1-\tau)\, C\, \beta_{t,\tilde y} \quad \text{for all } \tilde y \in Y. \tag{12}$$

Note that sophisticated step size adaptation algorithms (e.g. [3]) can be modified in a straightforward manner to work in our setting. The main difficulty in performing the above update arises from the fact that β_{t,˜y} depends on f_{t+1} (see e.g.
(13)) which in turn depends on β_{t,˜y} via α_{t,˜y}. The general recipe to overcome this problem is to first use (9) to write β_{t,˜y} as a function of α_{t,˜y}. Plugging this back into (12) yields an equation in α_{t,˜y} alone, which can sometimes be solved efficiently. We now elucidate the details for some well-known loss functions.

Square Loss. In this case, k((x_t, y_t), ·) = k(x_t, ·); that is, the kernel does not depend on the value of y. Furthermore, we assume that Y = R, and write

$$L(x_t, y_t, f) = \frac{1}{2}\left(f(x_t) - y_t\right)^2 = \frac{1}{2}\left(\langle f(\cdot), k(x_t, \cdot)\rangle_H - y_t\right)^2,$$

which yields

$$\partial_f L(x_t, y_t, f) = \left(f(x_t) - y_t\right) k(x_t, \cdot). \tag{13}$$

Substituting into (12) and using (9) we have α_t = −(1−τ)C((1−τ)f_t(x_t) + α_t k(x_t, x_t) − y_t). After some straightforward algebraic manipulation we obtain the solution

$$\alpha_t = \frac{C(1-\tau)\left(y_t - (1-\tau)\, f_t(x_t)\right)}{1 + C(1-\tau)\, k(x_t, x_t)}.$$

Binary Hinge Loss. As before, we assume k((x_t, y_t), ·) = k(x_t, ·), and set Y = {±1}. The hinge loss for binary classification can be written as

$$L(x_t, y_t, f) = \left(\rho - y_t f(x_t)\right)_+ = \left(\rho - y_t \langle f, k(x_t, \cdot)\rangle_H\right)_+, \tag{14}$$

where ρ > 0 is the margin parameter and (·)_+ := max(0, ·). Recall that the subgradient is a set, and the function is said to be differentiable at a point if this set is a singleton [4]. The binary hinge loss is not differentiable at the hinge point, but its subgradient exists everywhere. Writing ∂_f L(x_t, y_t, f) = β_t k(x_t, ·) we have:

$$y_t f(x_t) > \rho \implies \beta_t = 0; \tag{15a}$$
$$y_t f(x_t) = \rho \implies \beta_t \in [0, -y_t]; \tag{15b}$$
$$y_t f(x_t) < \rho \implies \beta_t = -y_t. \tag{15c}$$

We need to balance two conflicting requirements while computing α_t. On the one hand, we want the loss to be zero, which can be achieved by setting ρ − y_t f_{t+1}(x_t) = 0. On the other hand, the gradient of the loss at the new point, ∂_f L(x_t, y_t, f_{t+1}), must satisfy (15). We satisfy both constraints by appropriately clipping the optimal estimate of α_t. Let ˆα_t denote the optimal estimate of α_t which leads to ρ − y_t f_{t+1}(x_t) = 0.
Using (9) we have ρ − y_t ((1−τ) f_t(x_t) + ˆα_t k(x_t, x_t)) = 0, which yields

$$\hat\alpha_t = \frac{\rho - (1-\tau)\, y_t f_t(x_t)}{y_t\, k(x_t, x_t)} = \frac{y_t\left(\rho - (1-\tau)\, y_t f_t(x_t)\right)}{k(x_t, x_t)}.$$

On the other hand, by using (15) and (12) we have α_t y_t ∈ [0, (1−τ)C]. By combining the two scenarios, we arrive at the final update

$$\alpha_t = \begin{cases} \hat\alpha_t & \text{if } y_t \hat\alpha_t \in [0, (1-\tau)C]; \\ 0 & \text{if } y_t \hat\alpha_t < 0; \\ y_t (1-\tau) C & \text{if } y_t \hat\alpha_t > (1-\tau)C. \end{cases} \tag{16}$$

The updates for the hinge loss used in novelty detection are very similar.

Graph Structured Loss. The graph-structured loss on a label domain can be written as

$$L(x_t, y_t, f) = \left(-f(x_t, y_t) + \max_{\tilde y \neq y_t}\left(\Delta(y_t, \tilde y) + f(x_t, \tilde y)\right)\right)_+. \tag{17}$$

Here, the margin of separation between labels is given by Δ(y_t, ˜y), which in turn depends on the graph structure of the output space. This is a very general loss, which includes the binary and multiclass hinge losses as special cases (see e.g. [3]). We briefly summarize the update equations for this case. Let y* = argmax_{˜y ≠ y_t} {Δ(y_t, ˜y) + f_t(x_t, ˜y)} denote the best runner-up label for the current instance x_t. Then set α_{t,y_t} = −α_{t,y*} = α_t, use k_t(y, y′) to denote k((x_t, y), (x_t, y′)), and write

$$\hat\alpha_t = \frac{-(1-\tau)\, f_t(x_t, y_t) + \Delta(y_t, y^*) + (1-\tau)\, f_t(x_t, y^*)}{k_t(y_t, y_t) + k_t(y^*, y^*) - 2\, k_t(y_t, y^*)}.$$

The updates are now given by

$$\alpha_t = \begin{cases} 0 & \text{if } \hat\alpha_t < 0; \\ \hat\alpha_t & \text{if } \hat\alpha_t \in [0, (1-\tau)C]; \\ (1-\tau)C & \text{if } \hat\alpha_t > (1-\tau)C. \end{cases} \tag{18}$$

Logistic Regression Loss. The logistic regression loss and its gradient can be written as

$$L(x_t, y_t, f) = \log\left(1 + \exp(-y_t f(x_t))\right), \qquad \partial_f L(x_t, y_t, f) = \frac{-y_t\, k(x_t, \cdot)}{1 + \exp(y_t f(x_t))},$$

respectively. Using (9) and (12), we obtain

$$\alpha_t = \frac{(1-\tau)\, C\, y_t}{1 + \exp\left(y_t (1-\tau) f_t(x_t) + \alpha_t y_t k(x_t, x_t)\right)}.$$

Although this equation does not admit a closed-form solution, the value of α_t can still be obtained with a numerical root-finding routine, such as those described in [5].

2.1 ILK and SILK Algorithms

We refer to the algorithm that performs implicit updates as ILK, for "implicit online learning with kernels". The update equations of ILK enjoy certain advantages.
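The per-loss coefficient computations derived above can be collected into small helpers. Function names are ours, and the bisection below simply stands in for whatever root-finder one prefers:

```python
import math

def ilk_square_alpha(C, tau, k_xx, f_t_x, y):
    """Closed-form implicit coefficient for the square loss."""
    return C * (1 - tau) * (y - (1 - tau) * f_t_x) / (1 + C * (1 - tau) * k_xx)

def ilk_hinge_alpha(C, tau, k_xx, f_t_x, y, rho=1.0):
    """Clipped implicit coefficient for the binary hinge loss, Eq. (16)."""
    a_hat = y * (rho - (1 - tau) * y * f_t_x) / k_xx
    if y * a_hat < 0.0:
        return 0.0
    if y * a_hat > (1 - tau) * C:
        return y * (1 - tau) * C
    return a_hat

def ilk_logistic_alpha(C, tau, k_xx, f_t_x, y, tol=1e-12):
    """Implicit logistic coefficient via bisection; the fixed point lies
    between 0 and (1 - tau)*C*y since the gradient factor is in (0, 1)."""
    g = lambda a: a - (1 - tau) * C * y / (
        1 + math.exp(y * ((1 - tau) * f_t_x + a * k_xx)))
    lo, hi = sorted((0.0, (1 - tau) * C * y))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

In an online loop, each step would decay the stored coefficients by (1 − τ), evaluate f_t at the new point, and append the coefficient returned by the helper matching the chosen loss.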
For example, using (11) it is easy to see that an exponential decay term is naturally incorporated to down-weight past observations:

$$f_{t+1} = \sum_{i=1}^{t} \sum_{\tilde y \in Y} (1-\tau)^{t-i}\, \alpha_{i,\tilde y}\, k((x_i, \tilde y), \cdot). \tag{19}$$

Intuitively, the parameter τ ∈ (0, 1) (determined by λ and η) trades off between the regularizer and the loss on the current sample. In the case of hinge losses, both binary and graph-structured, the weight |α_t| is always upper bounded by (1−τ)C, which ensures limited influence from outliers (cf. (16) and (18)). A major drawback of the ILK algorithm described above is that the size of the kernel expansion grows linearly with the number of data points up to time t (see (10)). In many practical domains where real-time prediction is important (for example, video surveillance), storing all the past observations and their coefficients is prohibitively expensive. Therefore, following Kivinen et al. [2] and Vishwanathan et al. [3], one can truncate the function expansion by storing only a few relevant past observations. We call this version of our algorithm SILK, for "sparse ILK". Specifically, the SILK algorithm maintains a buffer of size ω. Each new point is inserted into the buffer with coefficient α_t. Once the buffer limit ω is exceeded, the point with the lowest coefficient value is discarded to maintain a bound on memory usage. This scheme is more effective than the straightforward least recently used (LRU) strategy proposed in Kivinen et al. [2] and Vishwanathan et al. [3]. It is relatively straightforward to show that the difference between the true predictor and its truncated version obtained by storing only ω expansion coefficients decreases exponentially as the buffer size ω increases [2].

3 Theoretical Analysis

In this section we primarily focus on analyzing the graph-structured loss (17), establishing relative loss bounds and analyzing the rate of convergence of ILK and SILK. Our proof techniques adopt those of Kivinen et al. [2].
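Returning briefly to the SILK buffer described above, its policy (decay past coefficients, insert the new point, evict the smallest-magnitude coefficient once the budget ω is exceeded) can be sketched as follows; the class name and representation are ours:

```python
class SilkBuffer:
    """Bounded-memory kernel expansion: at most `omega` (x, alpha) pairs.
    Coefficients decay by (1 - tau) each step, as in Eq. (19), and the
    entry with the smallest |alpha| is evicted when the buffer overflows."""
    def __init__(self, omega, tau):
        self.omega, self.tau = omega, tau
        self.entries = []                      # list of [x, alpha]

    def insert(self, x, alpha):
        for e in self.entries:                 # decay of past coefficients
            e[1] *= (1.0 - self.tau)
        self.entries.append([x, alpha])
        if len(self.entries) > self.omega:     # evict lowest-magnitude coefficient
            self.entries.remove(min(self.entries, key=lambda e: abs(e[1])))
```

Because old coefficients decay geometrically, the evicted entry is typically either an old, heavily decayed point or a new point that barely violated its margin, which is what makes this policy preferable to plain LRU.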
Due to the space constraints, we leave some details and analysis to the full version of the paper. Although the bounds we obtain are similar to those obtained in [2], our experimental results clearly show that ILK and SILK are stronger than the NORMA strategy of [2] and its truncated variant.

3.1 Mistake Bound

We begin with a technical definition.

Definition 1 A sequence of hypotheses {(f1, . . . , fT) : ft ∈ H} is said to be (T, B, D1, D2) bounded if it satisfies ||ft||²_H ≤ B² for all t ∈ {1, . . . , T}, Σ_t ||ft − f_{t+1}||_H ≤ D1, and Σ_t ||ft − f_{t+1}||²_H ≤ D2 for some B, D1, D2 ≥ 0. The set of all (T, B, D1, D2) bounded hypothesis sequences is denoted as F(T, B, D1, D2).

Given a fixed sequence of observations {(x1, y1), . . . , (xT, yT)} and a sequence of hypotheses {(f1, . . . , fT) ∈ F}, the number of errors M is defined as M := |{t : ∆f(xt, yt, y*_t) ≤ 0}|, where ∆f(xt, yt, y*_t) = f(xt, yt) − f(xt, y*_t) and y*_t is the best runner-up label. To keep the equations succinct, we denote ∆kt((yt, y), ·) := k((xt, yt), ·) − k((xt, y), ·), and ∆kt((yt, y), (yt, y)) := ||∆kt((yt, y), ·)||²_H = kt(yt, yt) − 2kt(yt, y) + kt(y, y). In the following we bound the number of mistakes M made by ILK by the cumulative loss of an arbitrary sequence of hypotheses from F(T, B, D1, D2).

Theorem 2 Let {(x1, y1), . . . , (xT, yT)} be an arbitrary sequence of observations such that ∆kt((yt, y), (yt, y)) ≤ X² holds for any t, any y, and for some X > 0. For an arbitrary sequence of hypotheses (g1, . . . , gT) ∈ F(T, B, D1, D2) with average margin µ = (1/|E|) Σ_{t∈E} [∆(yt, yg*_t) − ∆(yt, y*_t)] and bounded cumulative loss K := Σ_t L(xt, yt, gt), the number of mistakes of the sequence of hypotheses (f1, . . . , fT) generated by ILK with learning rate ηt = η and λ = (1/(Bη)) √(D2/T) is upper-bounded by

M ≤ K/µ + 2S/µ² + 2 (K/µ + S/µ²)^(1/2) (S/µ²)^(1/2), (20)

where S = (X²/4)(B² + BD1 + B√(TD2)), µ > 0, and yg*_t denotes the best runner-up label with hypothesis gt.
When considering the stationary distribution in a separable (noiseless) scenario, this theorem allows us to obtain a mistake bound that is reminiscent of the Perceptron convergence theorem. In particular, if we assume the sequence of hypotheses (g1, . . . , gT) ∈ F(T, B, D1 = 0, D2 = 0) and the cumulative loss K = 0, we obtain a bound on the number of mistakes

M ≤ B²X²/µ². (21)

3.2 Convergence Analysis

The following theorem asserts that under mild assumptions, the cumulative risk Σ_{t=1}^T R(xt, yt, ft) of the hypothesis sequence produced by ILK converges to the minimum risk of the batch learning counterpart g* := argmin_{g∈H} Σ_{t=1}^T R(xt, yt, g) at a rate of O(T^(−1/2)).

Theorem 3 Let {(x1, y1), . . . , (xT, yT)} be an arbitrary sequence of observations such that ∆kt((yt, y), (yt, y)) ≤ X² holds for any t, any y. Denote by (f1, . . . , fT) the sequence of hypotheses produced by ILK with learning rate ηt = η t^(−1/2), by Σ_{t=1}^T R(xt, yt, ft) the cumulative risk of this sequence, and by Σ_{t=1}^T R(xt, yt, g) the batch cumulative risk of (g, . . . , g), for any g ∈ H. Then

Σ_{t=1}^T R(xt, yt, ft) ≤ Σ_{t=1}^T R(xt, yt, g) + aT^(1/2) + b,

where U = CX/λ, a = 4ηC²X² + 2U²/η, and b = U²/(2η) are constants. In particular, if g* = argmin_{g∈H} Σ_{t=1}^T R(xt, yt, g), we obtain

(1/T) Σ_{t=1}^T R(xt, yt, ft) ≤ (1/T) Σ_{t=1}^T R(xt, yt, g*) + O(T^(−1/2)). (22)

Essentially the same theorem holds for SILK, but now with a slightly larger constant a = 4η(1 + 2/λ)C²X² + 2U²/η. In addition, denote by g* the minimizer of the batch cumulative risk Σ_t R(xt, yt, g), and by f* the minimizer of the expected risk, R(f*) := min_f E_{(x,y)∼P(x,y)} R(x, y, f). As stated in [6] for the structured risk minimization framework, as the sample size T grows, T → ∞, we obtain g* → f* in probability. This subsequently guarantees the convergence of the average regularized risk of ILK and SILK to R(f*). The upper bound in the above theorem can be directly plugged into Corollary 2 of Cesa-Bianchi et al. [7] to obtain bounds on the generalization error of ILK. Let f̄ denote the average hypothesis produced by averaging over all hypotheses f1, . . . , fT. Then for any δ ∈ (0, 1), with probability at least 1 − δ, the expected risk of f̄ is upper bounded by the risk of the best hypothesis chosen in hindsight plus a term which grows as O(√(1/T)).

Figure 1: The left panel depicts a synthetic data sequence containing two classes (blue crosses and red diamonds, see the zoomed-in portion in the bottom-left corner), with each class being sampled from a mixture of two drifting Gaussian distributions. Performance comparison of ILK vs NORMA and truncated NORMA on this data: average cumulative error over 100 trials (middle; algorithms shown: ILK, SILK, ILK(0), NORMA, truncated NORMA, NORMA(0)), and average cumulative error for each trial, mistakes of ILK vs. mistakes of NORMA (right).

4 Experiments

We evaluate the performance of ILK and SILK by comparing them to NORMA [2] and its truncated variant. On OCR data, we also compare our algorithms to SVMD, a sophisticated step-size adaptation algorithm in RKHS presented in [3]. For a fair comparison we tuned the parameters of each algorithm separately and report the best results. In addition, we fixed the margin to ρ = 1 for all our loss functions.

Binary Classification on Synthetic Sequences The aim here is to demonstrate that ILK is better than NORMA in coping with non-stationary distributions. Each trial of our experiment works with 2000 two-dimensional instances sampled from a non-stationary distribution (see Figure 1) and the task is to classify the sampled points into one of two classes. The central panel of Figure 1 compares the number of errors made by various algorithms, averaged over 100 trials.
Here, ILK and SILK make fewer mistakes than NORMA and truncated NORMA. We also tested two other algorithms, ILK(0) obtained by setting the decay factor λ to zero, and similarly for NORMA(0). As expected, both these variants make more mistakes because they are unable to forget the past, which is crucial for obtaining good performance in a non-stationary environment. To further compare the performance of ILK and NORMA we plot the relative errors of these two algorithms in the right panel of Figure 1. As can be seen, ILK out-performs NORMA on this simple non-stationary problem. Novelty Detection on Video Sequences As a significant application, we applied SILK to a background subtraction problem in video data analysis. The goal is to detect the moving foreground objects (such as cars, persons, etc) from relatively static background scenes in real time. The challenge in this application is to be able to cope with variations in lighting as well as jitter due to shaking of the camera. We formulate the problem as a novelty detection task using a network of classifiers, one for each pixel. For this task we compare the performance of SILK vs. truncated NORMA. (The ILK and NORMA algorithms are not suitable since their storage requirements grow linearly). A constant buffer size ω = 20 is used for both algorithms in this application. We report further implementation details in the full version of this paper. The first task is to identify people, under varying lighting conditions, in an indoor video sequence taken with a static camera. The left hand panel of Figure 2 plots the ROC curves of NORMA and SILK, which demonstrates the overall better performance of SILK. We sampled one of the initial frames after the light was switched off and back on. The results are shown in the right panel of Figure 2. As can be seen, SILK is able to recover from the change in lighting condition better than NORMA, and is able to identify foreground objects reasonably close to the ground truth. 
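The bounded buffer (ω = 20 here) is what makes SILK practical for real-time video processing. The buffer-maintenance rule from Section 2.1 (insert the new point; if the limit ω is exceeded, discard the entry with the smallest coefficient magnitude) can be sketched as follows. This is our own bookkeeping sketch with the kernel machinery abstracted away.

```python
def silk_insert(buffer, x_t, alpha_t, omega):
    """Insert (x_t, alpha_t) into the SILK expansion buffer.

    buffer: list of (x, alpha) pairs with len(buffer) <= omega.
    When the buffer would exceed omega entries, the pair with the
    smallest |alpha| is discarded to bound memory usage.
    """
    buffer.append((x_t, alpha_t))
    if len(buffer) > omega:
        # Find and drop the expansion term with the least influence.
        weakest = min(range(len(buffer)), key=lambda i: abs(buffer[i][1]))
        buffer.pop(weakest)
    return buffer
```

Note that the newly inserted point itself is discarded if its coefficient is the smallest in magnitude, which is how outliers with near-zero updates are kept from crowding out useful basis points.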
Figure 2: Performance comparison of SILK vs truncated NORMA on a background subtraction (moving object detection) task, with varying lighting conditions. ROC curve (left; false positive rate vs. true positive rate) and a comparison of the algorithms immediately after the lights have been switched off and on (right; ground truth, frame 1353, SILK, and NORMA).

Figure 3: Performance of SILK on a road traffic sequence (moving car detection) task, with a jittery camera. Two random frames and the performance of SILK on those frames are depicted.

Our second experiment is a traffic sequence taken by a camera that shakes irregularly, which creates a challenging problem for any novelty detection algorithm. As seen from the randomly chosen frames plotted in Figure 3, SILK manages to obtain a visually plausible detection result. We cannot report a quantitative comparison with other methods in this case, due to the lack of manually labeled ground-truth data.

Binary and Multiclass Classification on OCR data We present two sets of experiments on the MNIST dataset. The aim of the first set of experiments is to show that SILK is competitive with NORMA and SVMD on a simple binary task. The data is split into two classes comprising the digits 0-4 and 5-9, respectively. A polynomial kernel of degree 9 and a buffer size of ω = 128 is employed for all three algorithms. Figure 4 (a) plots the current average error rate, i.e., the total number of errors on the examples seen so far divided by the iteration number. As can be seen, after the initial oscillations have died out, SILK consistently outperforms SVMD and NORMA, achieving a lower average error after one pass through the dataset. Figure 4 (b) examines the effect of buffer size on SILK. As expected, smaller buffer sizes result in larger truncation error and hence worse performance. With increasing buffer size the asymptotic average error decreases.
For the 10-way multiclass classification task we set ω = 128, and used a Gaussian kernel following [3]. Figure 4 (c) shows that SILK consistently outperforms NORMA and SVMD, while the trend with increasing buffer size is repeated, as shown in Figure 4 (d). In both experiments, we used the parameters for NORMA and SVMD reported in [3], and set τ = 0.00005 and C = 100 for SILK.

Figure 4: Performance comparison of different algorithms over one run of the MNIST dataset. (a) Online binary classification. (b) Performance of SILK using different buffer sizes. (c) Online 10-way multiclass classification. (d) Performance of SILK on three different buffer sizes.

5 Outlook and Discussion

In this paper we presented a general recipe for performing implicit online updates in an RKHS. Specifically, we showed that for many popular loss functions these updates can be computed efficiently. We then presented a sparse version of our algorithm which uses limited basis expansions to approximate the function. For the graph-structured loss we also showed loss bounds and rates of convergence. Experiments on real-life datasets demonstrate that our algorithm is able to track non-stationary targets, and outperforms existing algorithms. For the binary hinge loss, when τ = 0 the proposed update formula for αt (16) reduces to the PA-I algorithm of Crammer et al. [8]. Curiously enough, the motivation for the updates in both cases seems completely different. While we use an implicit update formula, Crammer et al. [8] use a Lagrangian formulation and a passive-aggressive strategy. Furthermore, the loss functions they handle are generally linear (hinge loss and its various generalizations) while our updates can handle other non-linear losses such as quadratic or logistic loss. Our analysis of loss bounds is admittedly straightforward given current results. The use of more sophisticated analysis, and the extension of our bounds to other non-linear loss functions, is ongoing.
We are also applying our techniques to video analysis applications by exploiting the structure of the output space.

Acknowledgements

We thank Xinhua Zhang, Simon Guenter, Nic Schraudolph and Bob Williamson for carefully proofreading the paper, pointing us to many references, and helping us improve the presentation style. National ICT Australia is funded by the Australian Government's Department of Communications, Information Technology and the Arts and the Australian Research Council through Backing Australia's Ability and the ICT Center of Excellence program. This work is supported by the IST Program of the European Community, under the Pascal Network of Excellence, IST-2002-506778.

References

[1] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1-64, 1997.
[2] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52(8), 2004.
[3] S. V. N. Vishwanathan, N. N. Schraudolph, and A. J. Smola. Step size adaptation in reproducing kernel Hilbert space. Journal of Machine Learning Research, 7, 2006.
[4] R. T. Rockafellar. Convex Analysis, volume 28 of Princeton Mathematics Series. Princeton University Press, 1970.
[5] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C: The Art of Scientific Computing (2nd ed.). Cambridge University Press, Cambridge, 1992. ISBN 0-521-43108-5.
[6] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
[7] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Trans. Information Theory, 50(9):2050-2057, 2004.
[8] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551-585, 2006.
PG-means: learning the number of clusters in data Yu Feng Greg Hamerly Computer Science Department Baylor University Waco, Texas 76798 {yu feng, greg hamerly}@baylor.edu Abstract We present a novel algorithm called PG-means which is able to learn the number of clusters in a classical Gaussian mixture model. Our method is robust and efficient; it uses statistical hypothesis tests on one-dimensional projections of the data and model to determine if the examples are well represented by the model. In so doing, we are applying a statistical test for the entire model at once, not just on a per-cluster basis. We show that our method works well in difficult cases such as non-Gaussian data, overlapping clusters, eccentric clusters, high dimension, and many true clusters. Further, our new method provides a much more stable estimate of the number of clusters than existing methods. 1 Introduction The task of data clustering is important in many fields such as artificial intelligence, data mining, data compression, computer vision, and others. Many different clustering algorithms have been developed. However, most of them require that the user know the number of clusters (k) beforehand, while an appropriate value for k is not always clear. It is best to choose k based on prior knowledge about the data, but this information is often not available. Without prior knowledge it can be especially difficult to choose k when the data have high dimension, making exploratory data analysis difficult. In this paper, we present an algorithm called PG-means (PG stands for projected Gaussian) which is able to discover an appropriate number of Gaussian clusters and their locations and orientations. Our method is a wrapper around the standard and widely used Gaussian mixture model. The paper’s primary contribution is a novel method of determining if a whole mixture model fits its data well, based on projections and statistical tests. 
We show that the new approach works well not only in simple cases in which the clusters are well separated, but also in situations where the clusters are overlapping, eccentric, in high dimension, or even non-Gaussian. We show that where some other methods tend to severely overfit, our method does not, and that our method is comparable to, but much faster than, a recent variational Bayes-based approach for learning k.

2 Related work

Several algorithms have been proposed to determine k automatically. Most of these algorithms wrap around either k-means or Expectation-Maximization for fixed k. As they proceed, they use splitting or merging rules to increase or decrease k until a proper value is reached. Pelleg and Moore [9] proposed the X-means algorithm, which is a regularization framework for learning k with k-means. This algorithm tries many values for k and obtains a model for each k value. Then X-means uses the Bayesian Information Criterion (BIC) to score each model [5, 12], and chooses the model with the highest BIC score. Besides the BIC, other scoring criteria could also be applied, such as the Akaike Information Criterion [1] or the Minimum Description Length [10]. One drawback of the X-means algorithm is that the cluster covariances are all assumed to be spherical and of the same width. This can cause X-means to overfit when it encounters data that arise from non-spherical clusters. Hamerly and Elkan [4] proposed the G-means algorithm, a wrapper around the k-means algorithm. G-means uses projection and a statistical test for the hypothesis that the data in a cluster come from a Gaussian distribution. The algorithm grows k starting with a small number of centers. It applies a statistical test to each cluster, and those which are not accepted as Gaussian are split into two clusters. Interleaved with k-means, this procedure repeats until every cluster's data are accepted as Gaussian.
While this method does not assume spherical clusters and works well if the true clusters are well separated, it has difficulties when true clusters overlap, since the hard assignment of k-means can clip data into subsets that look non-Gaussian. Sand and Moore [11] proposed an approach based on repairing faults in a Gaussian mixture model. Their approach modifies the learned model in regions where the residual is large between the model's predicted density and the empirical density. Each modification adds or removes a cluster center. They use a hill-climbing algorithm to seek a model which maximizes a model fitness scoring function. However, calculating the empirical density and comparing it to the model density is difficult, especially in high dimension. Tibshirani et al. [13] proposed the Gap statistic, which compares the likelihood of a learned model with the distribution of the likelihood of models trained on data drawn from a null distribution. Our experience has shown that this method works well for finding a small number of clusters, but has difficulty as the true k increases. Welling and Kurihara [15] proposed Bayesian k-means, which uses Maximization-Expectation (ME) to learn a mixture model. ME maximizes over the hidden variables (assignment of examples to clusters), and computes an expectation over model parameters (center locations and covariances). It is a special case of variational Bayesian methods. Bayesian k-means works well but is slower than our method. None of these prior approaches perform well in all situations; they tend to overfit, underfit, or are too computationally costly. These issues form the motivation for our new approach.

3 Methodology

Our approach is called PG-means, where PG stands for projected Gaussian and refers to the fact that the method applies projections to the clustering model as well as the data before performing each hypothesis test for model fitness.
PG-means uses the standard Gaussian mixture model with Expectation-Maximization training, but any underlying algorithm for training a Gaussian mixture might be used. Our algorithm starts with a simple model and increases k by one at each iteration until it finds a model that fits the data well. Each iteration of PG-means uses the EM algorithm to learn a model containing k centers. Each time EM learning converges, PG-means projects both the dataset and the learned model to one dimension, and then applies the Kolmogorov-Smirnov (KS) test to determine whether the projected model fits the projected data. PG-means repeats this projection and test step several times for a single learned model. If any test rejects the null hypothesis that the data follow the model's distribution, then it adds one cluster and starts again with EM learning. If every test accepts the null hypothesis for a given model, then the algorithm terminates. Algorithm 1 describes the algorithm more formally.

When adding a new cluster PG-means preserves the k clusters it has learned and adds a new cluster. This preservation helps EM converge more quickly on the new model. To find the best new model, PG-means runs EM 10 times each time it adds a cluster, with a different initial location for the new cluster each time. The mean of each new cluster is chosen from a set of randomly chosen examples, and also from points with low model-assigned probability density. The initial covariance of the new cluster is based on the average of the existing clusters' covariances, the new cluster prior is assigned 1/k, and all priors are re-normalized. More than 10 EM applications could be used, as well as deterministic annealing [14], to ensure finding the best new model. In our tests, deterministic annealing did not improve the results of PG-means.

Algorithm 1 PG-means (dataset X, confidence α, number of projections p)
1: Let k ← 1. Initialize the cluster with the mean and covariance of X.
2: for i = 1 . . . p do
3:   Project X and the model to one dimension with the same projection.
4:   Use the KS test at significance level α to test if the projected model fits the projected dataset.
5:   If the test rejects the null hypothesis, then break out of the loop.
6: end for
7: if any test rejected the null hypothesis then
8:   for i = 1 . . . 10 do
9:     Initialize k + 1 clusters as the k previously learned plus one new cluster.
10:    Run EM on the k + 1 clusters.
11:   end for
12:   Retain the model of k + 1 clusters with the best likelihood.
13:   Let k ← k + 1, and go to step 2.
14: end if
15: Every test accepts the null hypothesis; stop and return the model.

As stated earlier, any training algorithm (not just EM) may be used to fit a particular set of k Gaussian models. For example, one might use k-means if more speed is desired.

3.1 Projection of the model and the dataset

PG-means is novel because it applies projection to the learned model as well as to the dataset prior to testing for model fitness. There are several reasons to project both the examples and the model. First, a mixture of Gaussians remains a mixture of Gaussians after being linearly projected. Second, there are many effective and efficient tests for model fitness in one dimension, but in higher dimensions such testing is more difficult. Assume some data X is sampled from a single Gaussian cluster with distribution X ∼ N(µ, Σ) in d dimensions, so µ = E[X] is the d × 1 mean vector and Σ = Cov[X] is the d × d covariance matrix. Given a d × 1 projection vector P of unit length (||P|| = 1), we can project X along P as X′ = P^T X. Then X′ ∼ N(µ′, σ²), where µ′ = P^T µ and σ² = P^T Σ P. We can project each cluster model to obtain a one-dimensional projection of an entire mixture along P. Then we wish to test whether the projected model fits the projected data. The G-means and X-means algorithms both perform statistical tests for each cluster individually.
This makes sense because each algorithm is a wrapper around k-means, and k-means uses hard assignment (each example has membership in only one cluster). However, this approach is problematic when clusters overlap, since the hard assignment results in 'clipped' clusters, making them appear very non-Gaussian. PG-means tests all clusters and all data at once. Then if two true clusters overlap, the additive probability of the learned Gaussians representing those clusters will correctly model the increased density in the overlapping region.

3.2 The Kolmogorov-Smirnov test and critical values

After projection, PG-means uses the univariate Kolmogorov-Smirnov [7] test for model fitness. The KS test statistic is D = max_X |F(X) − S(X)|, the maximum absolute difference between the true CDF F(X) and the sample CDF S(X). The KS test is only applicable if F(X) is fully specified; however, PG-means estimates the model with EM, so F(X) cannot be specified a priori. The best we can do is use the parameter estimates, but this will cause us to accept the model too readily. In other words, the probability of a Type I error will be too low and PG-means will tend to choose models with too few clusters. Lilliefors [6] gave a table of smaller critical values for the KS test which correct for estimated parameters of a single univariate Gaussian. These values come from Monte Carlo calculations. Along this vein, we create our own test critical values for a mixture of univariate Gaussians. To generate the critical values for the KS test statistic, we use the learned, projected one-dimensional model to generate many different datasets, and then measure the KS test statistic for each dataset. Then we find the KS test statistic that corresponds to the desired significance level α, which is the critical value we want. Fortunately, this can be done efficiently and does not dominate the running time of our algorithm.
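To make the fitness test concrete, here is a small stdlib-only sketch of ours of the KS statistic D = max_X |F(X) − S(X)| for a projected model: F is the CDF of the one-dimensional Gaussian mixture and S is the empirical CDF of the projected sample. (The critical-value machinery described above is omitted; function names are ours.)

```python
import math

def mixture_cdf(x, weights, means, sigmas):
    """CDF of a one-dimensional Gaussian mixture at x."""
    return sum(w * 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))
               for w, m, s in zip(weights, means, sigmas))

def ks_statistic(sample, weights, means, sigmas):
    """KS distance between a 1-D sample and a 1-D Gaussian mixture model."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = mixture_cdf(x, weights, means, sigmas)
        # The empirical CDF jumps at x: compare F against both i/n and (i+1)/n.
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d
```

In PG-means the model would reject whenever this statistic exceeds the Monte Carlo critical value for the given α.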
It is much more efficient than if we were to generate datasets from the full-dimensional data and project these to obtain the statistic distribution, yet the two approaches are equivalent. Further optimization is possible when we follow Lilliefors' observation that the critical value decreases approximately as 1/√n, for sufficiently large n, which we have also observed in our simulations with mixtures of Gaussians. Therefore, we can use Monte Carlo simulations with n′ ≪ n points, and scale the chosen critical value by √(n′/n). A more accurate scaling given by Dallal and Wilkinson [2] did not offer additional benefit in our tests. We use at most n′ = 3/α, which is 3000 points for α = 0.001. The Monte Carlo simulations can be easily parallelized, and our implementation uses two computational threads.

3.3 Number of projections

We wish to use a small but sufficient number of projections and tests to discover when a model does not fit the data well. Each projection provides a different view of model fitness along that projection's direction. However, a projection can cause the data from two or more true clusters to be collapsed together, so that the test cannot see that multiple densities should be used to model them. Therefore multiple projections are necessary to see these model and data discrepancies. We can choose the projections in several different ways. Random projection [3] provides a useful framework, which is what we use in this paper. Other possible methods include using the leading directions from principal components analysis, which gives a stable set of vectors which can be re-used, or choosing k − 1 vectors that span the same subspace spanned by the k cluster centers. Consider two cluster centers µ1 and µ2 in d dimensions and the vector which connects them, m = µ2 − µ1. We assume for simplicity that the two clusters have the same spherical covariance Σ and are c-separated, that is, ||m|| ≥ c √(trace(Σ)).
We follow Dasgupta's conclusion that c-separation is the natural measure for Gaussians [3]. Now consider the projection of m along some randomly chosen vector P ∼ N(0, (1/d)I). We use this distribution because in high dimension P will be approximately unit-length. The probability that P is a 'good' projection, i.e. that it maintains c-separation between the cluster means when projected, is

Pr(|P^T m| ≥ c √(P^T Σ P)) > 1 − Erf(c √(d P^T Σ P / (2c² trace(Σ)))) = 1 − Erf(√(1/2)),

where Erf is the standard Gaussian error function. Here we have used the relation P^T Σ P = trace(Σ)/d when Σ is spherical and ||P|| = 1. If Σ is not spherical, then this is true in an expected sense, i.e. E[P^T Σ P] = trace(Σ)/d when ||P|| = 1. If we perform p random projections, we wish the probability that all p projections are 'bad' to be less than some ε:

Pr(p bad projections) = Erf(√(1/2))^p < ε.

Therefore we need approximately p ≥ log(ε)/log(Erf(√(1/2))) ≈ −2.6198 log(ε) projections to find one that keeps the two cluster means c-separated. For ε = 0.01, this is only 12 projections, and for ε = 0.001, this is only 18 projections.

3.4 Algorithm complexity

PG-means converges as fast as EM on any given k, and it repeats EM every time it adds a cluster. Let K be the final learned number of clusters on n data points. PG-means runs in O(K²nd²l + Kn log(n)) time, where l is the number of iterations required for EM convergence. The n log(n) term comes from the sort required for each KS test, and the d² comes from using full covariance matrices. PG-means uses a fixed number of projections for each model, and each projection is linear in n, d, and k; therefore the projections do not increase the algorithm's asymptotic run time. Note also that EM starts with k learned centers and one new randomly initialized center, so EM convergence is much faster in practice than if all k + 1 clusters were randomly initialized.
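The projection-count estimate from Section 3.3 is easy to check numerically; Erf(√(1/2)) ≈ 0.6827 is the probability mass of a standard Gaussian within one standard deviation. The following sketch (ours; rounding matches the approximate counts quoted in the text) reproduces the values 12 and 18.

```python
import math

def projections_needed(eps):
    """Approximate p with Erf(sqrt(1/2))**p < eps, i.e. all p random
    projections are 'bad' with probability below eps (Section 3.3)."""
    p_bad = math.erf(math.sqrt(0.5))  # probability one projection is 'bad'
    # log(eps) and log(p_bad) are both negative, so the ratio is positive.
    return round(math.log(eps) / math.log(p_bad))
```

Note that a stricter guarantee would take the ceiling rather than rounding; the text uses the rounded approximation.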
We must also factor in the cost of the Monte Carlo simulations for determining the KS test critical value, which are O(Kd²n log(n)/α) for each simulation. For fixed α, this does not increase the runtime significantly, and in practice the simulations are a minor part of the running time.

Figure 1: Each point represents the average number of clusters learned for various types of synthetic datasets (dimension 2-16; c-separation 2, 4, or 6; eccentricity 1 or 4). The true number of clusters is 20. The error bars denote the standard errors for the experiments (except for BKM, which was run once for each dataset type).

Figure 2: Each point represents the average VI metric comparing the learned clustering to the correct labels for various types of synthetic datasets. Lower values are better. For each algorithm except BKM we provide standard error bars (BKM was run once for each dataset type).

4 Experimental evaluation

We perform several experiments on synthetic and real-world datasets to illustrate the utility of PG-means and compare it with G-means, X-means, and Bayesian k-means (BKM). For synthetic datasets, we experiment with Gaussian and non-Gaussian data. We use α = 0.001 for both PG-means and G-means. For each model, PG-means uses 12 projections and tests, corresponding to an error rate of ε < 0.01 that it incorrectly accepts. All our experiments use MATLAB on Linux 2.4 on a dual-processor dual-hyperthreaded Intel Xeon 3.06 GHz computer with 2 gigabytes of memory.
Figure 1 shows the number of clusters found by running PG-means, G-means, X-means and BKM on many synthetic datasets. Each of these datasets has 4000 points in d = 2, 4, 8 and 16 dimensions.

Figure 3: The leftmost dataset has 10 true clusters with significant overlap (c = 1). Though PG-means finds only 4 clusters, the model is very reasonable. On the right are the results for PG-means, G-means, and X-means on a dataset with 5 true eccentric and overlapping clusters. PG-means finds the correct model, while the others overfit with 15 and 19 clusters.

All of the data are drawn from a mixture of 20 true Gaussians. The centers of the clusters in each dataset are chosen randomly, and each cluster generates the same number of points. Each Gaussian mixture dataset is specified by the average c-separation between each cluster center and its nearest neighbor (either 2, 4 or 6) and each cluster's eccentricity (either 1 or 4). The eccentricity is defined as Ecc = √(λmax/λmin), where λmax and λmin are the maximum and minimum eigenvalues of the cluster covariance. An eccentricity of 1 indicates a spherical Gaussian. We generate 10 datasets of each type and run PG-means, G-means and X-means on each, and we run BKM on only one of them due to the running time of BKM. Each algorithm starts with one center, and we do not place an upper bound on the number of clusters. It is clear that PG-means performs better than G-means and X-means when the data are eccentric (Ecc = 4), especially when the clusters overlap (c = 2). In this situation G-means and X-means tend to overestimate the number of clusters. The rightmost plots in Figure 3 further illustrate this overfitting. PG-means is much more stable in its estimate of the number of clusters, unlike G-means and X-means which can dramatically overfit depending on the type of data. BKM generally does very well, but is less efficient than PG-means.
For example, on a set of 24 different datasets, each having 4000 points from 10 clusters, 2-16 dimensions and varying separations/eccentricities, PG-means was three times faster than BKM. Figure 1 only gives information about the learned number of clusters, which is not enough to measure the true quality of the learned models. In order to better evaluate the approaches, we use Meila's VI (Variation of Information) metric [8] to compare the induced clustering to the true labels. The VI metric is non-negative and lower values are better. It is zero when the two compared clusterings are identical (modulo clusters being relabeled). Figure 2 shows the average VI metric obtained by running PG-means, G-means, X-means, and BKM on the same synthetic datasets as in Figure 1. PG-means does about as well as the other algorithms when the data are spherical and well-separated (see the top-right plot). However, the top-left plot shows that PG-means does not perform as well as G-means, X-means and BKM for spherical and overlapping data. The reason is that when two spherical clusters overlap, they can look like a single eccentric cluster. Since PG-means can capture eccentric clusters effectively, it will accept these two overlapped spherical clusters as one cluster. But for the same case, G-means and X-means will probably recognize them as two different clusters. Therefore, although PG-means gives fewer clusters for spherical and overlapping data, the models it learns are reasonable. Figure 3 shows how 10 true overlapping clusters can look like far fewer clusters, and that PG-means can find an appropriate model with only 4 clusters. High-dimensional data of any finite-variance distribution looks more Gaussian when linearly projected to a randomly chosen lower-dimensional space. A projection is a weighted sum of the original dimensions, and the sum of many random variables with finite variance tends to be Gaussian, according to the central limit theorem.
Thus PG-means should be useful for high-dimensional data which are not Gaussian. To test this, we perform experiments on high-dimensional non-Gaussian synthetic datasets. These datasets are generated in a similar way to our synthetic Gaussian datasets, except that each true cluster has a uniform distribution. Each cluster is not necessarily axis-aligned or square; it is scaled for eccentricity and rotated. Each dataset has 4000 points in 8 dimensions equally distributed among 20 clusters. The eccentricity and c-separation values for the datasets are both 4. We run PG-means, G-means and X-means on 10 different datasets, and BKM on one of them.

Table 1: Results for synthetic non-Gaussian data and the handwritten digits dataset. Each non-Gaussian dataset contains 4000 points in 8 dimensions sampled from 20 true clusters, each having a uniform distribution. The eccentricity and c-separation are both 4. We run each algorithm except BKM on ten such datasets, and BKM on one. The digits dataset consists of 10 classes and 9298 examples.

              Non-Gaussian datasets (20 true clusters)    Handwritten digits dataset (10 true classes)
  Algorithm   Learned k       VI metric                   Learned k   VI metric
  PG-means    20 ± 0          0 ± 0                       14          2.045
  G-means     42.2 ± 3.67     0.673 ± 0.071               48          3.174
  X-means     27.7 ± 1.28     0.355 ± 0.059               29          2.921
  BKM         20              0                           15          1.980

The results are shown in the left part of Table 1. G-means and X-means overfit the non-Gaussian datasets, while PG-means and BKM both perform excellently in the number of clusters learned and in recovering the true labels according to the VI metric. We tested all of these algorithms on the U.S. Postal Service handwritten digits dataset (both the train and test portions, obtained from http://www-stat.stanford.edu/~tibs/ElemStatLearn/data.html). Each example is a grayscale image of a handwritten digit. There are 9298 examples in the dataset, and each example has 256 pixels (16 pixels on a side). The dataset has 10 true classes (digits 0-9).
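The VI scores reported above come from Meila's metric, which has a short definition: VI(A, B) = H(A) + H(B) - 2 I(A; B). A minimal pure-Python version for hard label assignments (our own sketch, not the paper's code):

```python
import math
from collections import Counter

def variation_of_information(labels_a, labels_b):
    """Meila's VI between two hard clusterings: H(A) + H(B) - 2 I(A; B)."""
    n = len(labels_a)
    pa, pb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    entropy = lambda cnt: -sum((c / n) * math.log(c / n) for c in cnt.values())
    mutual = sum((c / n) * math.log(n * c / (pa[i] * pb[j]))
                 for (i, j), c in joint.items())
    return entropy(pa) + entropy(pb) - 2.0 * mutual

# Identical clusterings (up to relabeling) score 0; lower is better.
print(variation_of_information([0, 0, 1, 1], [1, 1, 0, 0]))
```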
Our goal is to cluster the dataset without knowing the true labels and analyze the result to find out how well PG-means captures the true classes. We use a random linear projection to project the dataset to 16 dimensions and run PG-means, G-means, X-means, and BKM on it. The results are shown in the right side of Table 1. PG-means gives 14 centers, which is closest to the true value. It also obtains nearly the best VI metric score. On the other hand, G-means and X-means find many more classes than the truth, which hurts their scores on the VI metric, and BKM takes over twice as long as PG-means.

5 Conclusions and future work

We presented a new algorithm called PG-means for learning the number of Gaussian clusters k in data. Starting with one center, it grows k gradually. For each k, it learns a model using Expectation-Maximization. Then it projects both the model and the dataset to one dimension and tests for model fitness with the Kolmogorov-Smirnov test and its own critical values. It performs multiple projections and tests per model, to avoid being fooled by a poorly chosen projection. If the model does not fit well, PG-means adds an additional cluster. This procedure repeats until one model is accepted by all tests. We proved that only a small number of these fast tests are required to have good performance at finding model differences. In the future we will investigate methods of finding better projections for our task. We also hope to develop approximations to the critical values of the KS test on Gaussian mixtures, to avoid the cost of Monte Carlo simulations. PG-means finds better models than G-means and X-means when the true clusters are eccentric or overlap, especially in low dimensions. On high-dimensional data PG-means also performs very well. PG-means gives far more stable estimates of the number of clusters than the other two methods over many different types of data.
Compared with Bayesian k-means, we showed that PG-means performs comparably, though PG-means is several times faster in our tests and uses less memory. Though PG-means looks for general Gaussian clusters, we showed that it works well on high-dimensional non-Gaussian data, due to the central limit theorem and our use of projection. Our techniques would also be applicable as a wrapper around the k-means algorithm, which is really just a mixture of spherical Gaussians, or any other mixture of Gaussians with limited covariance. On the real-world handwritten digits dataset PG-means finds a very good clustering with nearly the correct number of classes, and among the algorithms we tested, PG-means and BKM come closest to recovering the original labels. We believe that the project-and-test procedure that PG-means uses is a useful method for determining the fitness of a given mixture of Gaussians. However, the underlying standard EM clustering algorithm dominates the runtime and is difficult to initialize well, which are well-known problems. The project-and-test framework of PG-means does not depend on EM in any way, and could be wrapped around any other, better method of finding a Gaussian mixture.

Acknowledgements: We thank Dennis Johnston, Sanjoy Dasgupta, Charles Elkan, and the anonymous reviewers for helpful suggestions. We also thank Dan Pelleg and Ken Kurihara for sending us their source code.

References
[1] Hirotugu Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19:716–723, 1974.
[2] Gerard E. Dallal and Leland Wilkinson. An analytic approximation to the distribution of Lilliefors' test for normality. The American Statistician, 40:294–296, 1986.
[3] Sanjoy Dasgupta. Experiments with random projection. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI-2000), pages 143–151. Morgan Kaufmann Publishers, 2000.
[4] Greg Hamerly and Charles Elkan. Learning the k in k-means.
In Proceedings of the Seventeenth Annual Conference on Neural Information Processing Systems (NIPS), pages 281–288, 2003.
[5] Robert E. Kass and Larry Wasserman. A reference Bayesian test for nested hypotheses and its relationship to the Schwarz criterion. Journal of the American Statistical Association, 90(431):928–934, 1995.
[6] Hubert W. Lilliefors. On the Kolmogorov-Smirnov test for normality with mean and variance unknown. Journal of the American Statistical Association, 62(318):399–402, 1967.
[7] Frank J. Massey, Jr. The Kolmogorov-Smirnov test for goodness of fit. Journal of the American Statistical Association, 46(253):68–78, 1951.
[8] Marina Meila. Comparing clusterings by the variation of information. In COLT, pages 173–187, 2003.
[9] Dan Pelleg and Andrew Moore. X-means: Extending k-means with efficient estimation of the number of clusters. In Proceedings of the 17th International Conf. on Machine Learning, pages 727–734. Morgan Kaufmann, 2000.
[10] Jorma Rissanen. Modeling by shortest data description. Automatica, 14:465–471, 1978.
[11] Peter Sand and Andrew W. Moore. Repairing faulty mixture models using density estimation. In Proceedings of the 18th International Conf. on Machine Learning, pages 457–464, 2001.
[12] Gideon Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6(2):461–464, 1978.
[13] Robert Tibshirani, Guenther Walther, and Trevor Hastie. Estimating the number of clusters in a dataset via the Gap statistic. Journal of the Royal Statistical Society B, 63:411–423, 2001.
[14] Naonori Ueda and Ryohei Nakano. Deterministic annealing EM algorithm. Neural Networks, 11(2):271–282, 1998.
[15] Max Welling and Kenichi Kurihara. Bayesian k-means as a 'maximization-expectation' algorithm. In SIAM Conference on Data Mining (SDM06), 2006.
Conditional Random Sampling: A Sketch-based Sampling Technique for Sparse Data

Ping Li, Department of Statistics, Stanford University, Stanford, CA 94305, pingli@stat.stanford.edu
Kenneth W. Church, Microsoft Research, One Microsoft Way, Redmond, WA 98052, church@microsoft.com
Trevor J. Hastie, Department of Statistics, Stanford University, Stanford, CA 94305, hastie@stanford.edu

Abstract

We develop Conditional Random Sampling (CRS), a technique particularly suitable for sparse data. In large-scale applications, the data are often highly sparse. CRS combines sketching and sampling in that it converts sketches of the data into conditional random samples online in the estimation stage, with the sample size determined retrospectively. This paper focuses on approximating pairwise l2 and l1 distances and comparing CRS with random projections. For boolean (0/1) data, CRS is provably better than random projections. We show using real-world data that CRS often outperforms random projections. This technique can be applied in learning, data mining, information retrieval, and database query optimizations.

1 Introduction

Conditional Random Sampling (CRS) is a sketch-based sampling technique that effectively exploits data sparsity. In modern applications in learning, data mining, and information retrieval, the datasets are often very large and also highly sparse. For example, the term-document matrix is often more than 99% sparse [7]. Sampling large-scale sparse data is challenging. Conventional random sampling (i.e., randomly picking a small fraction) often performs poorly when most of the samples are zeros. Also, in heavy-tailed data, the estimation errors of random sampling could be very large. As alternatives to random sampling, various sketching algorithms have become popular, e.g., random projections [17] and min-wise sketches [6]. Sketching algorithms are designed for approximating specific summary statistics.
For a specific task, a sketching algorithm often outperforms random sampling. On the other hand, random sampling is much more flexible. For example, we can use the same set of random samples to estimate any lp pairwise distances and multi-way associations. Conditional Random Sampling (CRS) combines the advantages of both sketching and random sampling. Many important applications concern only the pairwise distances, e.g., distance-based clustering and classification, multi-dimensional scaling, kernels. For a large training set (e.g., at Web scale), computing pairwise distances exactly is often too time-consuming or even infeasible. Let A be a data matrix of n rows and D columns. For example, A can be the term-document matrix with n as the total number of word types and D as the total number of documents. In modern search engines, n ≈ 10^6 to 10^7 and D ≈ 10^10 to 10^11. In general, n is the number of data points and D is the number of features. Computing all pairwise associations AA^T, also called the Gram matrix in machine learning, costs O(n^2 D), which could be daunting for large n and D. Various sampling methods have been proposed for approximating Gram matrices and kernels [2,8]. For example, using (normal) random projections [17], we approximate AA^T by (AR)(AR)^T, where the entries of R ∈ R^(D×k) are i.i.d. N(0, 1). This reduces the cost down to O(nDk + n^2 k), where k ≪ min(n, D).

Footnote 1: The full version [13]: www.stanford.edu/~pingli98/publications/CRS tr.pdf

Sampling techniques can be critical in databases and information retrieval. For example, the database query optimizer seeks highly efficient techniques to estimate the intermediate join sizes in order to choose an "optimum" execution path for multi-way joins. Conditional Random Sampling (CRS) can be applied to estimating pairwise distances (in any norm) as well as multi-way associations. CRS can also be used for estimating joint histograms (two-way and multi-way).
While this paper focuses on estimating pairwise l2 and l1 distances and inner products, we refer readers to the technical report [13] for estimating joint histograms. Our early work [11,12] concerned estimating two-way and multi-way associations in boolean (0/1) data. We will compare CRS with normal random projections for approximating l2 distances and inner products, and with Cauchy random projections for approximating l1 distances. In boolean data, CRS bears some similarity to Broder's sketches [6], with some important distinctions. [12] showed that in boolean data, CRS improves Broder's sketches by roughly halving the estimation variances.

2 The Procedures of CRS

Conditional Random Sampling is a two-stage procedure. In the sketching stage, we scan the data matrix once and store a fraction of the non-zero elements in each data point, as "sketches." In the estimation stage, we generate conditional random samples online pairwise (for two-way) or group-wise (for multi-way); hence we name our algorithm Conditional Random Sampling (CRS).

2.1 The Sampling/Sketching Procedure

[Figure 1: four panels, (a) Original, (b) Permuted, (c) Postings, (d) Sketches, depicting an n x D data matrix at each step.]
Figure 1: A global view of the sketching stage.

Figure 1 provides a global view of the sketching stage. The columns of a sparse data matrix (a) are first randomly permuted (b). Then only the non-zero entries are considered, called postings (c). Sketches are simply the front of postings (d). Note that in the actual implementation, we only need to maintain a permutation mapping on the column IDs.
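The sketching stage just described reduces to: permute the column IDs once globally, sort each row's postings by permuted ID, and keep the front. A minimal sketch in Python (function name and toy data are ours; as noted above, a real implementation stores only the permutation mapping rather than permuting the matrix):

```python
import random

def build_sketches(rows, k, D, seed=0):
    """CRS sketching stage: one global random permutation of column IDs,
    then keep each row's k non-zero entries with the smallest permuted IDs."""
    rng = random.Random(seed)
    perm = list(range(1, D + 1))       # new 1-based IDs for the D columns
    rng.shuffle(perm)
    sketches = []
    for row in rows:
        postings = sorted((perm[j], v) for j, v in enumerate(row) if v != 0)
        sketches.append(postings[:k])  # the "front" of the sorted postings
    return sketches

rows = [[0, 3, 0, 2, 0, 1, 0, 0, 1, 2],
        [1, 0, 2, 0, 0, 1, 0, 2, 0, 0]]
K1, K2 = build_sketches(rows, k=3, D=10)
```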
[Figure 2: (a) a two-row data matrix with D = 15 columns, with the first Ds = 10 columns marked as a random sample; (b) the postings P1 and P2 as "ID (Value)" tuples; (c) the sketches K1 and K2.]
Figure 2: (a): A data matrix with two rows and D = 15. If the column IDs are random, the first Ds = 10 columns constitute a random sample. ui denotes the ith row. (b): Postings consist of tuples "ID (Value)." (c): Sketches are the first ki entries of postings sorted ascending by IDs. In this example, k1 = 5, k2 = 6, Ds = min(10, 11) = 10. Excluding 11(3) in K2, we obtain the same samples as if we directly sampled the first Ds = 10 columns in the data matrix.

Sketches are apparently not uniformly random samples, which may make the estimation task difficult. We show, in Figure 2, that sketches are almost random samples pairwise (or group-wise). Figure 2(a) constructs conventional random samples from a data matrix, and we show one can generate (retrospectively) the same random samples from the sketches in Figure 2(b)(c). In Figure 2(a), when the columns are randomly permuted, we can construct random samples by simply taking the first Ds columns from the data matrix of D columns (Ds ≪ D in real applications). For sparse data, we only store the non-zero elements in the form of tuples "ID (Value)," a structure called postings. We denote the postings by Pi for each row ui. Figure 2(b) shows the postings for the same data matrix in Figure 2(a). The tuples are sorted ascending by their IDs. A sketch, Ki, of postings Pi, is the first ki entries (i.e., the smallest ki IDs) of Pi, as shown in Figure 2(c).
The central observation is that if we exclude all elements of sketches whose IDs are larger than

    D_s = \min(\max(ID(K_1)), \max(ID(K_2))),    (1)

we obtain exactly the same samples as if we directly sampled the first Ds columns from the data matrix in Figure 2(a). This way, we convert sketches into random samples by conditioning on Ds, which differs pairwise and which we do not know beforehand.

2.2 The Estimation Procedure

The estimation task for CRS can be extremely simple. After we construct the conditional random samples from sketches K1 and K2 with the effective sample size Ds, we can compute any distances (l2, l1, or inner products) from the samples and multiply them by D/Ds to estimate the corresponding quantities in the original space. (Later, we will show how to improve the estimates by taking advantage of the marginal information.) We use ũ1,j and ũ2,j (j = 1 to Ds) to denote the conditional random samples (of size Ds) obtained by CRS. For example, in Figure 2, we have Ds = 10, and the non-zero ũ1,j and ũ2,j are

    ũ1,2 = 3, ũ1,4 = 2, ũ1,6 = 1, ũ1,9 = 1, ũ1,10 = 2,
    ũ2,1 = 1, ũ2,2 = 3, ũ2,5 = 1, ũ2,6 = 2, ũ2,8 = 1.

Denote the inner product, squared l2 distance, and l1 distance by a, d^(2), and d^(1), respectively:

    a = \sum_{i=1}^{D} u_{1,i} u_{2,i}, \quad d^{(2)} = \sum_{i=1}^{D} |u_{1,i} - u_{2,i}|^2, \quad d^{(1)} = \sum_{i=1}^{D} |u_{1,i} - u_{2,i}|.    (2)

Once we have the random samples, we can then use the following simple linear estimators:

    \hat{a}_{MF} = \frac{D}{D_s}\sum_{j=1}^{D_s} \tilde{u}_{1,j}\tilde{u}_{2,j}, \quad \hat{d}^{(2)}_{MF} = \frac{D}{D_s}\sum_{j=1}^{D_s} (\tilde{u}_{1,j} - \tilde{u}_{2,j})^2, \quad \hat{d}^{(1)}_{MF} = \frac{D}{D_s}\sum_{j=1}^{D_s} |\tilde{u}_{1,j} - \tilde{u}_{2,j}|.    (3)

2.3 The Computational Cost

The sketching stage requires generating a random permutation mapping of length D and a linear scan of all the non-zeros. Therefore, generating sketches for A ∈ R^(n×D) costs O(\sum_{i=1}^{n} f_i), where f_i is the number of non-zeros in the ith row, i.e., f_i = |P_i|. In the estimation stage, we need to linearly scan the sketches.
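Putting Eqs. (1) and (3) together, the estimation stage is a few lines of code. The toy sketches below are our own (IDs assumed already randomly permuted), not the example of Figure 2:

```python
def crs_estimates(K1, K2, D):
    """CRS estimation stage: form the conditional random sample (Eq. (1))
    and scale the sample statistics by D/Ds (Eq. (3))."""
    Ds = min(max(i for i, _ in K1), max(i for i, _ in K2))  # Eq. (1)
    s1 = {i: v for i, v in K1 if i <= Ds}   # drop entries beyond column Ds
    s2 = {i: v for i, v in K2 if i <= Ds}
    ids = set(s1) | set(s2)                 # all-zero sample columns contribute nothing
    scale = D / Ds
    a_hat = scale * sum(s1.get(i, 0) * s2.get(i, 0) for i in ids)
    d2_hat = scale * sum((s1.get(i, 0) - s2.get(i, 0)) ** 2 for i in ids)
    d1_hat = scale * sum(abs(s1.get(i, 0) - s2.get(i, 0)) for i in ids)
    return Ds, a_hat, d2_hat, d1_hat

# Toy sketches with D = 15, k1 = 4, k2 = 5; here Ds = min(9, 12) = 9.
K1 = [(1, 2), (3, 1), (7, 3), (9, 1)]
K2 = [(2, 1), (3, 2), (7, 1), (10, 2), (12, 1)]
Ds, a_hat, d2_hat, d1_hat = crs_estimates(K1, K2, 15)
```

In this toy case the sample statistics over the first 9 columns (inner product 5, squared l2 distance 11, l1 distance 7) are each scaled by D/Ds = 15/9.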
While the conditional sample size Ds might be large, the cost of estimating the distance between one pair of data points is only O(k1 + k2) instead of O(Ds).

3 The Theoretical Variance Analysis of CRS

We give some theoretical analysis of the variances of CRS. For simplicity, we ignore the "finite population correction factor," (D - Ds)/(D - 1), due to sampling without replacement. We first consider \hat{a}_{MF} = \frac{D}{D_s}\sum_{j=1}^{D_s}\tilde{u}_{1,j}\tilde{u}_{2,j}. By assuming sampling with replacement, the samples (\tilde{u}_{1,j}\tilde{u}_{2,j}), j = 1 to Ds, are i.i.d. conditional on Ds. Thus,

    Var(\hat{a}_{MF} \mid D_s) = \left(\frac{D}{D_s}\right)^2 D_s\, Var(\tilde{u}_{1,1}\tilde{u}_{2,1}) = \frac{D}{D_s} D \left( E[(\tilde{u}_{1,1}\tilde{u}_{2,1})^2] - E^2[\tilde{u}_{1,1}\tilde{u}_{2,1}] \right),    (4)

    E[\tilde{u}_{1,1}\tilde{u}_{2,1}] = \frac{1}{D}\sum_{i=1}^{D} u_{1,i}u_{2,i} = \frac{a}{D}, \quad E[(\tilde{u}_{1,1}\tilde{u}_{2,1})^2] = \frac{1}{D}\sum_{i=1}^{D} (u_{1,i}u_{2,i})^2,    (5)

    Var(\hat{a}_{MF} \mid D_s) = \frac{D}{D_s} D \left( \frac{1}{D}\sum_{i=1}^{D}(u_{1,i}u_{2,i})^2 - \left(\frac{a}{D}\right)^2 \right) = \frac{D}{D_s}\left( \sum_{i=1}^{D} u_{1,i}^2 u_{2,i}^2 - \frac{a^2}{D} \right).    (6)

The unconditional variance is then simply

    Var(\hat{a}_{MF}) = E[Var(\hat{a}_{MF} \mid D_s)] = E\!\left[\frac{D}{D_s}\right]\left( \sum_{i=1}^{D} u_{1,i}^2 u_{2,i}^2 - \frac{a^2}{D} \right),

since Var(\hat{X}) = E[Var(\hat{X} \mid D_s)] + Var(E[\hat{X} \mid D_s]) = E[Var(\hat{X} \mid D_s)] when \hat{X} is conditionally unbiased. No closed-form expression is known for E[D/D_s], but we know E[D/D_s] \ge \max(f_1/k_1, f_2/k_2) (similar to Jensen's inequality). Asymptotically (as k_1 and k_2 increase), the inequality becomes an approximate equality:

    E\!\left[\frac{D}{D_s}\right] \approx \max\!\left(\frac{f_1+1}{k_1}, \frac{f_2+1}{k_2}\right) \approx \max\!\left(\frac{f_1}{k_1}, \frac{f_2}{k_2}\right),    (7)

where f_1 and f_2 are the numbers of non-zeros in u_1 and u_2, respectively. See [13] for the proof. Extensive simulations in [13] verify that the errors of (7) are usually within 5% when k_1, k_2 > 20. We similarly derive the variances for \hat{d}^{(2)}_{MF} and \hat{d}^{(1)}_{MF}. In summary, we obtain (when k_1 = k_2 = k):

    Var(\hat{a}_{MF}) = E\!\left[\frac{D}{D_s}\right]\left( \sum_{i=1}^{D} u_{1,i}^2 u_{2,i}^2 - \frac{a^2}{D} \right) \approx \frac{\max(f_1, f_2)}{D}\,\frac{1}{k}\left( D\sum_{i=1}^{D} u_{1,i}^2 u_{2,i}^2 - a^2 \right),    (8)

    Var(\hat{d}^{(2)}_{MF}) = E\!\left[\frac{D}{D_s}\right]\left( d^{(4)} - \frac{[d^{(2)}]^2}{D} \right) \approx \frac{\max(f_1, f_2)}{D}\,\frac{1}{k}\left( D d^{(4)} - [d^{(2)}]^2 \right),    (9)

    Var(\hat{d}^{(1)}_{MF}) = E\!\left[\frac{D}{D_s}\right]\left( d^{(2)} - \frac{[d^{(1)}]^2}{D} \right) \approx \frac{\max(f_1, f_2)}{D}\,\frac{1}{k}\left( D d^{(2)} - [d^{(1)}]^2 \right),    (10)

where we denote d^{(4)} = \sum_{i=1}^{D} (u_{1,i} - u_{2,i})^4.
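The approximation in Eq. (7) is easy to check by simulation: Ds is the minimum of the two rows' k-th smallest (randomly permuted) non-zero column IDs, so only the positions of the non-zeros matter. A small Monte Carlo sketch with parameters of our own choosing; at k1 = 20 the approximation should be accurate to within a few percent, consistent with the 5% figure quoted above:

```python
import random

def mean_D_over_Ds(D, f1, f2, k1, k2, trials=2000, seed=0):
    """Monte Carlo estimate of E[D/Ds], where Ds is the minimum of the
    k-th smallest (randomly permuted) non-zero column IDs of the two rows."""
    rng = random.Random(seed)
    ids = range(1, D + 1)
    total = 0.0
    for _ in range(trials):
        x1 = sorted(rng.sample(ids, f1))[k1 - 1]  # k1-th smallest ID in row 1
        x2 = sorted(rng.sample(ids, f2))[k2 - 1]  # k2-th smallest ID in row 2
        total += D / min(x1, x2)
    return total / trials

est = mean_D_over_Ds(D=1000, f1=200, f2=50, k1=20, k2=25)
approx = max((200 + 1) / 20, (50 + 1) / 25)  # Eq. (7): about 10.05
```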
The sparsity term max(f_1, f_2)/D reduces the variances significantly. If max(f_1, f_2)/D = 0.01, the variances can be reduced by a factor of 100, compared to conventional random coordinate sampling.

4 A Brief Introduction to Random Projections

We give a brief introduction to random projections, with which we compare CRS. (Normal) random projections [17] are widely used in learning and data mining [2-4]. Random projections multiply the data matrix A ∈ R^(n×D) with a random matrix R ∈ R^(D×k) to generate a compact representation B = AR ∈ R^(n×k). For estimating l2 distances, R typically consists of i.i.d. entries in N(0, 1); hence we call this normal random projections. For l1, R consists of i.i.d. Cauchy C(0, 1) entries [9]. However, a recent impossibility result [5] has ruled out estimators that could be metrics for dimension reduction in l1. Denote by v_1, v_2 ∈ R^k the two rows in B corresponding to the original data points u_1, u_2 ∈ R^D. We also introduce the notation for the marginal l2 norms: m_1 = ||u_1||^2, m_2 = ||u_2||^2.

4.1 Normal Random Projections

In this case, R consists of i.i.d. N(0, 1) entries. It is easy to show that the following linear estimators of the inner product a and the squared l2 distance d^(2) are unbiased:

    \hat{a}_{NRP,MF} = \frac{1}{k} v_1^T v_2, \quad \hat{d}^{(2)}_{NRP,MF} = \frac{1}{k}\|v_1 - v_2\|^2,    (11)

with variances [15,17]

    Var(\hat{a}_{NRP,MF}) = \frac{1}{k}(m_1 m_2 + a^2), \quad Var(\hat{d}^{(2)}_{NRP,MF}) = \frac{2[d^{(2)}]^2}{k}.    (12)

Assuming that the margins m_1 = ||u_1||^2 and m_2 = ||u_2||^2 are known, [15] provides a maximum likelihood estimator, denoted by \hat{a}_{NRP,MLE}, whose (asymptotic) variance is

    Var(\hat{a}_{NRP,MLE}) = \frac{1}{k}\,\frac{(m_1 m_2 - a^2)^2}{m_1 m_2 + a^2} + O(k^{-2}).    (13)

4.2 Cauchy Random Projections for Dimension Reduction in l1

In this case, R consists of i.i.d. entries in Cauchy C(0, 1). [9] proposed an estimator based on the absolute sample median. Recently, [14] proposed a variety of nonlinear estimators, including a bias-corrected sample median estimator, a bias-corrected geometric mean estimator, and a bias-corrected maximum likelihood estimator.
An analog of the Johnson-Lindenstrauss (JL) lemma for dimension reduction in l1 is also proved in [14], based on the bias-corrected geometric mean estimator. We only list the maximum likelihood estimator derived in [14], because it is the most accurate one:

    \hat{d}^{(1)}_{CRP,MLE,c} = \left(1 - \frac{1}{k}\right)\hat{d}^{(1)}_{CRP,MLE},    (14)

where \hat{d}^{(1)}_{CRP,MLE} solves the nonlinear MLE equation

    -\frac{k}{\hat{d}^{(1)}_{CRP,MLE}} + \sum_{j=1}^{k} \frac{2\,\hat{d}^{(1)}_{CRP,MLE}}{(v_{1,j} - v_{2,j})^2 + (\hat{d}^{(1)}_{CRP,MLE})^2} = 0.    (15)

[14] shows that

    Var(\hat{d}^{(1)}_{CRP,MLE,c}) = \frac{2[d^{(1)}]^2}{k} + \frac{3[d^{(1)}]^2}{k^2} + O\!\left(\frac{1}{k^3}\right).    (16)

4.3 General Stable Random Projections for Dimension Reduction in lp (0 < p <= 2)

[10] generalized the bias-corrected geometric mean estimator to general stable random projections for dimension reduction in lp (0 < p <= 2), and provided the theoretical variances and exponential tail bounds. Of course, CRS can also be applied to approximating any lp distance.

5 Improving CRS Using Marginal Information

It is often reasonable to assume that we know marginal information such as the marginal l2 norms, the numbers of non-zeros, or even the marginal histograms. This often leads to (much) sharper estimates, obtained by maximizing the likelihood under marginal constraints. In the boolean data case, we can express the MLE solution explicitly and derive a closed-form (asymptotic) variance. In general real-valued data, the joint likelihood is not available; we propose an approximate MLE solution.

5.1 Boolean (0/1) Data

In 0/1 data, estimating the inner product becomes estimating a two-way contingency table, which has four cells. Because of the margin constraints, there is only one degree of freedom. Therefore, it is not hard to show that the MLE of a is the solution, denoted by \hat{a}_{0/1,MLE}, to a cubic equation

    \frac{s_{11}}{a} - \frac{s_{10}}{f_1 - a} - \frac{s_{01}}{f_2 - a} + \frac{s_{00}}{D - f_1 - f_2 + a} = 0,    (17)

where s_11 = #{j : ũ1,j = ũ2,j = 1}, s_10 = #{j : ũ1,j = 1, ũ2,j = 0}, s_01 = #{j : ũ1,j = 0, ũ2,j = 1}, s_00 = #{j : ũ1,j = 0, ũ2,j = 0}, for j = 1, 2, ..., Ds.
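Equation (17) can be solved numerically without the cubic formula: its left-hand side is strictly decreasing in a on the feasible interval, so bisection works. A minimal sketch with made-up counts, chosen so that the exact solution is a = 10:

```python
def mle_inner_product_01(s11, s10, s01, s00, f1, f2, D, iters=200):
    """Solve the margin-constrained boolean MLE, Eq. (17), by bisection.
    The score is strictly decreasing in a on (max(0, f1+f2-D), min(f1, f2))."""
    def score(a):
        return (s11 / a - s10 / (f1 - a) - s01 / (f2 - a)
                + s00 / (D - f1 - f2 + a))
    lo = max(0.0, float(f1 + f2 - D)) + 1e-9
    hi = min(f1, f2) - 1e-9
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            lo = mid   # score positive: the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Counts proportional to their expectations when a = 10, f1 = 30, f2 = 20,
# D = 100, Ds = 40: s11 = 4, s10 = 8, s01 = 4, s00 = 24.
a_hat = mle_inner_product_01(4, 8, 4, 24, f1=30, f2=20, D=100)
```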
The (asymptotic) variance of \hat{a}_{0/1,MLE} is proved in [11-13] to be

    Var(\hat{a}_{0/1,MLE}) = E\!\left[\frac{D}{D_s}\right] \frac{1}{\frac{1}{a} + \frac{1}{f_1 - a} + \frac{1}{f_2 - a} + \frac{1}{D - f_1 - f_2 + a}}.    (18)

5.2 Real-valued Data

A practical solution is to assume some parametric form of the (bivariate) data distribution based on prior knowledge, and then solve an MLE subject to the various constraints. Suppose the samples (ũ1,j, ũ2,j) are i.i.d. bivariate normal with moments determined by the population moments, i.e.,

    \begin{bmatrix} \tilde{v}_{1,j} \\ \tilde{v}_{2,j} \end{bmatrix} = \begin{bmatrix} \tilde{u}_{1,j} - \bar{u}_1 \\ \tilde{u}_{2,j} - \bar{u}_2 \end{bmatrix} \sim N\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \tilde{\Sigma} \right),    (19)

    \tilde{\Sigma} = \frac{1}{D_s}\,\frac{D_s}{D} \begin{bmatrix} \|u_1\|^2 - D\bar{u}_1^2 & u_1^T u_2 - D\bar{u}_1\bar{u}_2 \\ u_1^T u_2 - D\bar{u}_1\bar{u}_2 & \|u_2\|^2 - D\bar{u}_2^2 \end{bmatrix} = \frac{1}{D_s} \begin{bmatrix} \ddot{m}_1 & \ddot{a} \\ \ddot{a} & \ddot{m}_2 \end{bmatrix},    (20)

where \bar{u}_1 = \sum_{i=1}^{D} u_{1,i}/D and \bar{u}_2 = \sum_{i=1}^{D} u_{2,i}/D are the population means, and

    \ddot{m}_1 = \frac{D_s}{D}\left(\|u_1\|^2 - D\bar{u}_1^2\right), \quad \ddot{m}_2 = \frac{D_s}{D}\left(\|u_2\|^2 - D\bar{u}_2^2\right), \quad \ddot{a} = \frac{D_s}{D}\left(u_1^T u_2 - D\bar{u}_1\bar{u}_2\right).

Suppose that \bar{u}_1, \bar{u}_2, m_1 = ||u_1||^2 and m_2 = ||u_2||^2 are known. An MLE for a = u_1^T u_2, denoted by \hat{a}_{MLE,N}, is

    \hat{a}_{MLE,N} = \frac{D}{D_s}\,\hat{\ddot{a}} + D\bar{u}_1\bar{u}_2,    (21)

where, similar to Lemma 2 of [15], \hat{\ddot{a}} is the solution to a cubic equation:

    \ddot{a}^3 - \ddot{a}^2\,\tilde{v}_1^T\tilde{v}_2 + \ddot{a}\left( -\ddot{m}_1\ddot{m}_2 + \ddot{m}_1\|\tilde{v}_2\|^2 + \ddot{m}_2\|\tilde{v}_1\|^2 \right) - \ddot{m}_1\ddot{m}_2\,\tilde{v}_1^T\tilde{v}_2 = 0.    (22)

\hat{a}_{MLE,N} is fairly robust, although sometimes we observe noticeable biases. In general, this is a good bias-variance trade-off (especially when k is not too large). Intuitively, the reason why this (seemingly crude) assumption of bivariate normality works well is that, once we have fixed the margins, we have removed to a large extent the non-normal component of the data.

6 Theoretical Comparisons of CRS With Random Projections

As reflected by their variances, for general data types, whether CRS is better than random projections depends on two competing factors: data sparsity and data heavy-tailedness. However, in the following two important scenarios, CRS outperforms random projections.

6.1 Boolean (0/1) data

In this case, the marginal norms are the same as the numbers of non-zeros, i.e., m_i = ||u_i||^2 = f_i.
Figure 3 plots the ratio Var(\hat{a}_{MF})/Var(\hat{a}_{NRP,MF}), verifying that CRS is (considerably) more accurate:

    \frac{Var(\hat{a}_{MF})}{Var(\hat{a}_{NRP,MF})} = \frac{\max(f_1, f_2)}{f_1 f_2 + a^2}\,\frac{1}{\frac{1}{a} + \frac{1}{D - a}} \le \frac{\max(f_1, f_2)\,a}{f_1 f_2 + a^2} \le 1.

Figure 4 plots Var(\hat{a}_{0/1,MLE})/Var(\hat{a}_{NRP,MLE}). Over most of the possible range of the data, this ratio is less than 1. When u_1 and u_2 are very close (e.g., a ≈ f_2 ≈ f_1), random projections appear more accurate. However, when this does occur, the absolute variances are so small (even zero) that their ratio does not matter.

[Figure 3: four panels plotting the variance ratio against a/f2 for f2/f1 = 0.2, 0.5, 0.8, 1, with one curve per f1 from 0.05D to 0.95D.]
Figure 3: The variance ratios Var(\hat{a}_{MF})/Var(\hat{a}_{NRP,MF}) show that CRS has smaller variances than random projections when no marginal information is used. We let f_1 >= f_2 and f_2 = α f_1 with α = 0.2, 0.5, 0.8, 1.0. For each α, we plot from f_1 = 0.05D to f_1 = 0.95D, spaced at 0.05D.

[Figure 4: four panels plotting the variance ratio against a/f2 for the same settings as Figure 3.]
Figure 4: The ratios Var(\hat{a}_{0/1,MLE})/Var(\hat{a}_{NRP,MLE}) show that CRS usually has smaller variances than random projections, except when f_1 ≈ f_2 ≈ a.

6.2 Nearly Independent Data

Suppose two data points u_1 and u_2 are independent (or, less strictly, uncorrelated to the second order). Then it is easy to show that the variance of CRS is always smaller:

    Var(\hat{a}_{MF}) \le \frac{\max(f_1, f_2)}{D}\,\frac{m_1 m_2}{k} \le Var(\hat{a}_{NRP,MF}) = \frac{m_1 m_2 + a^2}{k},    (23)

even if we ignore the data sparsity. Therefore, CRS will be much better for estimating inner products in nearly independent data.
Once we have obtained the inner products, we can infer the l2 distances easily via d^(2) = m_1 + m_2 - 2a, since the margins m_1 and m_2 are easy to obtain exactly. In high dimensions, it is often the case that most of the data points are only very weakly correlated.

6.3 Comparing the Computational Efficiency

As previously mentioned, the cost of constructing sketches for A ∈ R^(n×D) is O(nD) (or, more precisely, O(\sum_{i=1}^{n} f_i)). The cost of (normal) random projections is O(nDk), which can be reduced to O(nDk/3) using sparse random projections [1]. Therefore, it is possible that CRS is considerably more efficient than random projections in the sampling stage (Footnote 2). In the estimation stage, CRS costs O(2k) to compute the sample distance for each pair. This cost is only O(k) in random projections. Since k is very small, the difference should not be a concern.

7 Empirical Evaluations

We compare CRS with random projections (RP) using real data, including n = 100 randomly sampled documents from the NSF data [7] (sparsity ≈ 1%), n = 100 documents from the NEWSGROUP data [4] (sparsity ≈ 1%), and one class of the COREL image data (n = 80, sparsity ≈ 5%). We estimate all pairwise inner products, l1 and l2 distances, using both CRS and RP. For each pair, we perform 50 runs and average the absolute errors. We compare the median errors and the percentage of pairs on which CRS does better than random projections. The results are presented in Figures 5, 6, 7. In each panel, the dashed curve indicates that we sample each data point with equal sample size (k). For CRS, we can adjust the sample size according to the sparsity, reflected by the solid curves. We adjust sample sizes only roughly. The data points are divided into 3 groups according to sparsity. Data in different groups are assigned different sample sizes for CRS. For random projections, we use the average sample size.
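The RP baseline used in these comparisons is just Eq. (11) with a shared N(0,1) projection matrix. A toy, dense-loop sketch on synthetic vectors (not the NSF/NEWSGROUP/COREL data; function name and inputs are ours):

```python
import random

def normal_rp_estimates(u1, u2, k, seed=0):
    """Normal random projections (Sec. 4.1): project both vectors with the
    same i.i.d. N(0,1) matrix, then apply the linear estimators of Eq. (11)."""
    rng = random.Random(seed)
    v1, v2 = [0.0] * k, [0.0] * k
    for x1, x2 in zip(u1, u2):
        if x1 == 0 and x2 == 0:
            continue  # an all-zero coordinate contributes nothing
        for j in range(k):
            r = rng.gauss(0.0, 1.0)  # shared projection entry for both vectors
            v1[j] += x1 * r
            v2[j] += x2 * r
    a_hat = sum(x * y for x, y in zip(v1, v2)) / k
    d2_hat = sum((x - y) ** 2 for x, y in zip(v1, v2)) / k
    return a_hat, d2_hat

# For these vectors a = 5 and d(2) = 10; k controls the variance per Eq. (12).
a_hat, d2_hat = normal_rp_estimates([1, 0, 2, 0, 3], [2, 1, 0, 0, 1], k=20000)
```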
For both NSF and NEWSGROUP data, CRS overwhelmingly outperforms RP for estimating inner products and l2 distances (both using the marginal information). CRS also outperforms RP for approximating l1 and l2 distances (without using the margins). For the COREL data, CRS still outperforms RP for approximating inner products and l2 distances (using the margins). However, RP considerably outperforms CRS for approximating l1 distances and l2 distances (without using the margins). Note that the COREL image data are not too sparse and are considerably more heavy-tailed than the NSF and NEWSGROUP data [13].

[Figure 5: eight panels plotting the ratio of median errors (upper row) and the percentage of pairs (lower row) against sample size k, for inner product, l1 distance, l2 distance, and l2 distance with margins.]
Figure 5: NSF data. Upper four panels: ratios (CRS over RP (random projections)) of the median absolute errors; values < 1 indicate that CRS does better. Bottom four panels: percentage of pairs for which CRS has smaller errors than RP; values > 0.5 indicate that CRS does better. Dashed curves correspond to fixed sample sizes while solid curves indicate that we (crudely) adjust sketch sizes in CRS according to data sparsity. In this case, CRS is overwhelmingly better than RP for approximating inner products and l2 distances (both using margins).

8 Conclusion

There are many applications of l1 and l2 distances on large sparse datasets.
We propose a new sketch-based method, Conditional Random Sampling (CRS), which is provably better than random projections, at least for the important special cases of boolean data and nearly independent data. In general non-boolean data, CRS compares favorably, both theoretically and empirically, especially when we take advantage of the margins (which are easier to compute than distances). [Footnote 2: [16] proposed very sparse random projections to reduce the cost O(nDk) down to O(n√D k).] [Figure omitted: eight panels of curves plotted against sample size k, as in Figure 5.] Figure 6: NEWSGROUP data. The results are quite similar to those in Figure 5 for the NSF data. In this case, it is more obvious that adjusting sketch sizes helps CRS. [Figure omitted: eight panels of curves plotted against sample size k, as in Figure 5.] Figure 7: COREL image data. Acknowledgment We thank Chris Burges, David Heckerman, Chris Meek, Andrew Ng, Art Owen, and Robert Tibshirani for various helpful conversations, comments, and discussions. We thank Ella Bingham, Inderjit Dhillon, and Matthias Hein for the datasets. References [1] D. Achlioptas.
Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66(4):671–687, 2003. [2] D. Achlioptas, F. McSherry, and B. Schölkopf. Sampling techniques for kernel methods. In NIPS, pages 335–342, 2001. [3] R. Arriaga and S. Vempala. An algorithmic theory of learning: Robust concepts and random projection. Machine Learning, 63(2):161–182, 2006. [4] E. Bingham and H. Mannila. Random projection in dimensionality reduction: Applications to image and text data. In KDD, pages 245–250, 2001. [5] B. Brinkman and M. Charikar. On the impossibility of dimension reduction in l1. Journal of the ACM, 52(2):766–788, 2005. [6] A. Broder. On the resemblance and containment of documents. In the Compression and Complexity of Sequences, pages 21–29, 1997. [7] I. Dhillon and D. Modha. Concept decompositions for large sparse text data using clustering. Machine Learning, 42(1-2):143–175, 2001. [8] P. Drineas and M. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research, 6(Dec):2153–2175, 2005. [9] P. Indyk. Stable distributions, pseudorandom generators, embeddings and data stream computation. In FOCS, pages 189–197, 2000. [10] P. Li. Very sparse stable random projections, estimators and tail bounds for stable random projections. Technical report, http://arxiv.org/PS_cache/cs/pdf/0611/0611114.pdf, 2006. [11] P. Li and K. Church. Using sketches to estimate associations. In HLT/EMNLP, pages 708–715, 2005. [12] P. Li and K. Church. A sketch algorithm for estimating two-way and multi-way associations. Computational Linguistics, to appear. [13] P. Li, K. Church, and T. Hastie. Conditional random sampling: A sketch-based sampling technique for sparse data. Technical Report 2006-08, Department of Statistics, Stanford University, 2006. [14] P. Li, K. Church, and T. Hastie.
Nonlinear estimators and tail bounds for dimensional reduction in l1 using Cauchy random projections. (http://arxiv.org/PS_cache/cs/pdf/0610/0610155.pdf), 2006. [15] P. Li, T. Hastie, and K. Church. Improving random projections using marginal information. In COLT, pages 635–649, 2006. [16] P. Li, T. Hastie, and K. Church. Very sparse random projections. In KDD, pages 287–296, 2006. [17] S. Vempala. The Random Projection Method. American Mathematical Society, Providence, RI, 2004.
|
2006
|
136
|
2,962
|
Multi-Robot Negotiation: Approximating the Set of Subgame Perfect Equilibria in General-Sum Stochastic Games Chris Murray Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Geoffrey J. Gordon Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Abstract In real-world planning problems, we must reason not only about our own goals, but about the goals of other agents with which we may interact. Often these agents’ goals are neither completely aligned with our own nor directly opposed to them. Instead there are opportunities for cooperation: by joining forces, the agents can all achieve higher utility than they could separately. But, in order to cooperate, the agents must negotiate a mutually acceptable plan from among the many possible ones, and each agent must trust that the others will follow their parts of the deal. Research in multi-agent planning has often avoided the problem of making sure that all agents have an incentive to follow a proposed joint plan. On the other hand, while game theoretic algorithms handle incentives correctly, they often don’t scale to large planning problems. In this paper we attempt to bridge the gap between these two lines of research: we present an efficient game-theoretic approximate planning algorithm, along with a negotiation protocol which encourages agents to compute and agree on joint plans that are fair and optimal in a sense defined below. We demonstrate our algorithm and protocol on two simple robotic planning problems.1 1 INTRODUCTION We model the multi-agent planning problem as a general-sum stochastic game with cheap talk: the agents observe the state of the world, discuss their plans with each other, and then simultaneously select their actions. The state and actions determine a one-step reward for each player and a distribution over the world’s next state, and the process repeats. 
While talking allows the agents to coordinate their actions, it cannot by itself solve the problem of trust: the agents might lie or make false promises. So, we are interested in planning algorithms that find subgame-perfect Nash equilibria. In a subgame-perfect equilibrium, every deviation from the plan is deterred by the threat of a suitable punishment, and every threatened punishment is believable. To find these equilibria, planners must reason about their own and other agents’ incentives to deviate: if other agents have incentives to deviate then I can’t trust them, while if I have an incentive to deviate, they can’t trust me. In a given game there may be many subgame-perfect equilibria with widely differing payoffs: some will be better for some agents, and others will be better for other agents. It is generally not feasible to compute all equilibria [1], and even if it were, there would be no obvious way to select one to implement. [Footnote 1: We gratefully acknowledge help and comments from Ron Parr on this research. This work was supported in part by DARPA contracts HR0011-06-0023 (the CS2P program) and 55-00069 (the RADAR program). All opinions, conclusions, and errors are our own.] It does not make sense for the agents to select an equilibrium without consulting one another: there is no reason that agent A’s part of one joint plan would be compatible with agent B’s part of another joint plan. Instead the agents must negotiate, computing and proposing equilibria until they find one which is acceptable to all parties. This paper describes a planning algorithm and a negotiation protocol which work together to ensure that the agents compute and select a subgame-perfect Nash equilibrium which is both approximately Pareto-optimal (that is, its value to any single agent cannot be improved very much without lowering the value to another agent) and approximately fair (that is, near the so-called Nash bargaining point).
Neither the algorithm nor the protocol is guaranteed to work in all games; however, they are guaranteed correct when they are applicable, and applicability is easy to check. In addition, our experiments show that they work well in some realistic situations. Together, these properties of fairness, enforceability, and Pareto optimality form a strong solution concept for a stochastic game. The use of this definition is one characteristic that distinguishes our work from previous research: ours is the first efficient algorithm that we know of to use such a strong solution concept for stochastic games. Our planning algorithm performs dynamic programming on a set-based value function: for P players, at a state s, V ∈ V(s) ⊂ R^P is an estimate of the value the players can achieve. We represent V(s) by sampling points on its convex hull. This representation is conservative, i.e., guarantees that we find a subset of the true V*(s). Based on the sampled points we can efficiently compute one-step backups by checking which joint actions are enforceable in an equilibrium. Our negotiation protocol is based on a multi-player version of Rubinstein’s bargaining game. Players together enumerate a set of equilibria, and then take turns proposing an equilibrium from the set. Until the players agree, the protocol ends with a small probability ϵ after each step and defaults to a low-payoff equilibrium; the fear of this outcome forces players to make reasonable offers. 2 BACKGROUND 2.1 STOCHASTIC GAMES A stochastic game represents a multi-agent planning problem in the same way that a Markov Decision Process [2] represents a single-agent planning problem. As in an MDP, transitions in a stochastic game depend on the current state and action. Unlike MDPs, the current (joint) action is a vector of individual actions, one for each player. More formally, a general-sum stochastic game G is a tuple (S, s_start, P, A, T, R, γ). S is a set of states, and s_start ∈ S is the start state.
P is the number of players. A = A_1 × A_2 × … × A_P is the finite set of joint actions. We deal with fully observable stochastic games with perfect monitoring, where all players can observe previous joint actions. T : S × A → P(S) is the transition function, where P(S) is the set of probability distributions over S. R : S × A → R^P is the reward function. We will write R_p(s, a) for the pth component of R(s, a). γ ∈ [0, 1) is the discount factor. Player p wants to maximize her discounted total value for the observed sequence of states and joint actions s_1, a_1, s_2, a_2, …, namely V_p = Σ_{t=1}^∞ γ^{t−1} R_p(s_t, a_t). A stationary policy for player p is a function π_p : S → P(A_p). A stationary joint policy is a vector of policies π = (π_1, …, π_P), one for each player. A nonstationary policy for player p is a function π_p : (∪_{t=0}^∞ (S × A)^t × S) → P(A_p) which takes a history of states and joint actions and produces a distribution over player p’s actions; we can define a nonstationary joint policy analogously. For any nonstationary joint policy, there is a stationary policy that achieves the same value at every state [3]. The value function V^π_p : S → R gives expected values for player p under joint policy π. The value vector at state s, V^π(s), is the vector with components V^π_p(s). (For a nonstationary policy π we will define V^π_p(s) to be the value if s were the start state, and V^π_p(h) to be the value after observing history h.) A vector V is feasible at state s if there is a π for which V^π(s) = V, and we will say that π achieves V. We will assume public randomization: the agents can sample from a desired joint action distribution in such a way that everyone can verify the outcome. If public randomization is not directly available, there are cryptographic protocols which can simulate it [4]. This assumption means that the set of feasible value vectors is convex, since we can roll a die at the first time step to choose from a set of feasible policies.
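For concreteness, the discounted total value V_p = Σ_{t=1}^∞ γ^{t−1} R_p(s_t, a_t) defined above can be computed from an observed reward sequence as follows (an illustrative sketch; the reward sequence is hypothetical):

```python
def discounted_value(rewards, gamma):
    """V_p = sum_{t>=1} gamma^(t-1) R_p(s_t, a_t) for one player's
    observed sequence of one-step rewards."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# A constant reward of 1 per step approaches 1 / (1 - gamma):
v = discounted_value([1.0] * 1000, gamma=0.9)  # close to 10
```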
2.2 EQUILIBRIA While optimal policies for MDPs can be determined exactly via various algorithms such as linear programming [2], it isn’t clear what it means to find an optimal policy for a general-sum stochastic game. So, rather than trying to determine a unique optimal policy, we will define a set of reasonable policies: the Pareto-dominant subgame-perfect Nash equilibria. A (possibly nonstationary) joint policy π is a Nash equilibrium if, for each individual player, no unilateral deviation from the policy would increase that player’s expected value for playing the game. Nash equilibria can contain incredible threats, that is, threats which the agents have no intention of following through on. To remove this possibility, we can define the subgame-perfect Nash equilibria. A policy π is a subgame-perfect Nash equilibrium if it is a Nash equilibrium in every possible subgame: that is, if there is no incentive for any player to deviate after observing any history of joint actions. Finally, consider two policies π and φ. If V^π_p(s_start) ≥ V^φ_p(s_start) for all players p, and if V^π_p(s_start) > V^φ_p(s_start) for at least one p, then we will say that π Pareto dominates φ. A policy which is not Pareto dominated by any other policy is Pareto optimal. 2.3 RELATED WORK Littman and Stone [5] give an algorithm for finding Nash equilibria in two-player repeated games. Hansen et al. [6] show how to eliminate very-weakly-dominated strategies in partially observable stochastic games. Doraszelski and Judd [7] show how to compute Markov perfect equilibria in continuous-time stochastic games. The above papers use solution concepts much weaker than Pareto-dominant subgame-perfect equilibrium, and do not address negotiation and coordination. Perhaps the closest work to the current paper is by Brafman and Tennenholtz [8]: they present learning algorithms which, in repeated self-play, find Pareto-dominant (but not subgame-perfect) Nash equilibria in matrix and stochastic games.
By contrast, we consider a single play of our game, but allow “cheap talk” beforehand. And, our protocol encourages arbitrary algorithms to agree on Pareto-dominant equilibria, while their result depends strongly on the self-play assumption. 2.3.1 FOLK THEOREMS In any game, each player can guarantee herself an expected discounted value regardless of what actions the other players take. We call this value the safety value. Suppose that there is a stationary subgame-perfect equilibrium which achieves the safety value for both players; call this the safety equilibrium policy. Suppose that, in a repeated game, some stationary policy π is better for both players than the safety equilibrium policy. Then we can build a subgame-perfect equilibrium with the same payoff as π: start playing π, and if someone deviates, switch to the safety equilibrium policy. So long as γ is sufficiently large, no rational player will want to deviate. This is the folk theorem for repeated games: any feasible value vector which is strictly better than the safety values corresponds to a subgame-perfect Nash equilibrium [9]. (The proof is slightly more complicated if there is no safety equilibrium policy, but the theorem holds for any repeated game.) There is also a folk theorem for general stochastic games [3]. This theorem, while useful, is not strong enough for our purposes: it only covers discount factors γ which are so close to 1 that the players don’t care which state they wind up in after a possible deviation. In most practical stochastic games, discount factors this high are unreasonably patient. When γ is significantly less than 1, the set of equilibrium vectors can change in strange ways as we change γ [10]. [Figure omitted: a plot of value to player 2 against value to player 1.] Figure 1: Equilibria of a Rubinstein game with γ = 0.8. Shaded area shows feasible value vectors (U1(x), U2(x)) for outcomes x.
Right-hand circle corresponds to equilibrium when player 1 moves first, left-hand circle when player 2 moves first. The Nash point is at 3. 2.3.2 RUBINSTEIN’S GAME Rubinstein [11] considered a game where two players divide a slice of pie. The first player offers a division x, 1 − x to the second; the second player either accepts the division, or refuses and offers her own division 1 − y, y. The game repeats until some player accepts an offer or until either player gives up. In the latter case neither player gets any pie. Rubinstein showed that if player p’s utility for receiving a fraction x at time t is U_p(x, t) = γ^t U_p(x) for a discount factor 0 ≤ γ < 1 and an appropriate time-independent utility function U_p(x) ≥ 0, then rational players will agree on a division near the so-called Nash bargaining point. This is the point which maximizes the product of the utilities that the players gain by cooperating, U_1(x) U_2(1 − x). As γ ↑ 1, the equilibrium will approach the Nash point. See Fig. 1 for an illustration. For three or more players, a similar result holds where agents take turns proposing multi-way divisions of the pie [12]. See the technical report [13] for more detail on the multi-player version of Rubinstein’s game and the Nash bargaining point. 3 NEGOTIATION PROTOCOL The Rubinstein game implicitly assumes that the result of a failure to cooperate is known to all players: nobody gets any pie. The multi-player version of the game assumes in addition that giving one player a share of the pie doesn’t force us to give a share to any other player. Neither of these properties holds for general stochastic games. They are, however, easy to check, and often hold or can be made to hold for planning domains of interest. So, we will assume that the players have agreed beforehand on a subgame-perfect equilibrium π^dis, called the disagreement policy, that they will follow in the event of a negotiation failure.
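To make the Nash bargaining point concrete, here is a hypothetical sketch that searches a finite sample of feasible excess-utility vectors for the one maximizing the product of the players' gains (the function and the data are illustrative, not from the paper):

```python
import numpy as np

def nash_point(excess_vectors):
    """Among feasible excess vectors u >= 0 (utility gained over the
    disagreement outcome), return the one maximizing the Nash product
    prod_p u_p."""
    return max(excess_vectors,
               key=lambda u: float(np.prod(np.clip(u, 0.0, None))))

# Rubinstein's pie with linear utilities U1(x) = x, U2(x) = 1 - x:
candidates = [np.array([x, 1.0 - x]) for x in np.linspace(0.0, 1.0, 101)]
u_star = nash_point(candidates)  # x(1 - x) is maximized at the even split
```

With asymmetric (e.g. concave) utility functions the maximizer of the product shifts away from the even split, which is exactly what the circles in Fig. 1 illustrate.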
In addition, for games with three or more players, we will assume that each player can unilaterally reduce her own utility by any desired amount without affecting other players’ utilities. Given these assumptions, our protocol proceeds in two phases (pseudocode is given in the technical report [13]). In the first phase agents compute subgame-perfect equilibria and take turns revealing them. On an agent’s turn she either reveals an equilibrium or passes; if all agents pass consecutively, the protocol proceeds to the second phase. When an agent states a policy π, the other agents verify that π is a subgame-perfect equilibrium and calculate its payoff vector V^π(s_start); players who state non-equilibrium policies miss their turn. At the end of the first phase, suppose the players have revealed a set Π of policies. Define X_p(π) = V^π_p(s_start) − V^dis_p(s_start), U = convhull {X(π) | π ∈ Π}, and Ū = {u ≥ 0 | ∃v ∈ U: u ≤ v}, where V^dis is the value function of π^dis, X_p(π) is the excess of policy π for player p, and Ū is the set of feasible excess vectors. In the second phase, players take turns proposing points u ∈ Ū along with policies or mixtures of policies in Π that achieve them. After each proposal, all agents except the proposer decide whether to accept or reject. If everyone accepts, the proposal is implemented: everyone starts executing the agreed equilibrium. Otherwise, the players who accepted are removed from future negotiation and have their utilities fixed at the proposed levels. Fixing player p’s utility at u_p means that all future proposals must give p exactly u_p. To achieve this, the proposal may require p to voluntarily lower her own utility; this requirement is enforced by the threat that all players will revert to π^dis if p fails to act as required. Invalid proposals cause the proposer to lose her turn. If at some point we hit the ϵ chance of having the current round of communication end, all remaining players are assigned their disagreement values.
The players execute the last proposed policy π (or π^dis if there has been no valid proposal), and any player p for whom V^π_p(s_start) is greater than her assigned utility u_p voluntarily lowers her utility to the correct level. (Again, failure to do so results in all players reverting to π^dis.) Under the above protocol, players’ preferences are the same as in a Rubinstein game with utility set U: because we have assumed that negotiation ends with probability ϵ after each message, agreeing on u after t additional steps is exactly as good as agreeing on u(1 − ϵ)^t now. So with ϵ sufficiently small, the Rubinstein or Krishna-Serrano results show that rational players will agree on a vector u ∈ U which is close to the Nash point argmax_{u ∈ U} Π_p u_p. 4 COMPUTING EQUILIBRIA In order to use the protocol of Sec. 3 for bargaining in a stochastic game, the players must be able to compute some subgame-perfect equilibria. Computing equilibria is a hard problem, so we cannot expect real agents to find the entire set of equilibria. Fortunately, each player will want to find the equilibria which are most advantageous to herself to influence the negotiation process in her favor. But equilibria which offer other players reasonably high reward have a higher chance of being accepted in negotiation. So, self-interest will naturally distribute the computational burden among all the players. In this section we describe an efficient dynamic-programming algorithm for computing equilibria. The algorithm takes some low-payoff equilibria as input and (usually) outputs higher-payoff equilibria. It is based on the intuition that we can use low-payoff equilibria as enforcement tools: by threatening to switch to an equilibrium that has low value to player p, we can deter p from deviating from a cooperative policy. In more detail, we will assume that we are given P different equilibria π^pun_1, …, π^pun_P; we will use π^pun_p to punish player p if she deviates.
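The deviation values that drive this enforcement idea can be sketched for a tiny tabular game. This is a hypothetical illustration, not the paper's implementation: all container names (R, T, V_pun, actions) are assumed data structures, with small finite state and joint-action sets and precomputed punishment values.

```python
def deviation_value(p, s, a, R, T, V_pun, actions, gamma):
    """Best value player p gets by unilaterally deviating from the
    recommended joint action a in state s and then being punished
    forever: max over alternatives a'_p of the one-step reward plus
    the discounted punishment value."""
    best = float("-inf")
    for ap in actions[p]:
        a_dev = tuple(ap if q == p else a[q] for q in range(len(a)))
        q_dev = R[(s, a_dev)][p] + gamma * sum(
            prob * V_pun[p][s2] for s2, prob in T[(s, a_dev)].items())
        best = max(best, q_dev)
    return best

# One-state prisoner's-dilemma-like game; punishment is worth 0 forever.
R = {(0, ('c', 'c')): (3, 3), (0, ('c', 'd')): (0, 4),
     (0, ('d', 'c')): (4, 0), (0, ('d', 'd')): (1, 1)}
T = {key: {0: 1.0} for key in R}          # all actions stay in state 0
V_pun = [{0: 0.0}, {0: 0.0}]
v_dev = deviation_value(0, 0, ('c', 'c'), R, T, V_pun,
                        [['c', 'd'], ['c', 'd']], gamma=0.9)
# deviating to 'd' pays 4 now and 0 forever after, so v_dev = 4.0
```

In the dynamic program that follows, a recommended joint action can only survive a backup if every player's continuation value is at least her deviation value.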
We can set π^pun_p = π^dis for all p if π^dis is the only equilibrium we know; or, we can use any other equilibrium policies that we happen to have discovered. The algorithm will be most effective when the value of π^pun_p to player p is as low as possible in all states. We will then search for cooperative policies that we can enforce with the given threats π^pun_p. We will first present an algorithm which pretends that we can efficiently take direct sums and convex hulls of arbitrary sets. This algorithm is impractical, but finds all enforceable value vectors. We will then turn it into an approximate algorithm which uses finite data structures to represent the set-valued variables. As we allow more and more storage for each set, the approximate algorithm will approach the exact one; and in any case the result will be a set of equilibria which the agents can execute. 4.1 THE EXACT ALGORITHM Our algorithm maintains a set of value vectors V(s) for each state s. It initializes V(s) to a set which we know contains the value vectors for all equilibrium policies. It then refines V by dynamic programming: it repeatedly attempts to improve the set of values at each state by backing up all of the joint actions, excluding joint actions from which some agent has an incentive to deviate. In more detail, we will compute V^dis_p(s) ≡ V^{π^dis}_p(s) for all s and p and use the vector V^dis(s) in our initialization. (Recall that we have defined V^π_p(s) for a nonstationary policy π as the value of π if s were the start state.) We also need the values of the punishment policies for their corresponding players, V^pun_p(s) ≡ V^{π^pun_p}_p(s) for all p and s. Given these values, define

Q^dev_p(s, a) = R_p(s, a) + γ Σ_{s′ ∈ S} T(s, a)(s′) V^pun_p(s′)   (1)

to be the value to player p of playing joint action a from state s and then following π^pun_p forever after. From the above Q^dev_p values we can compute player p’s value for deviating from an equilibrium which recommends action a in state s: it is Q^dev_p(s, a′) for the best possible deviation a′, since p will get the one-step payoff for a′ but be punished by the rest of the players starting on the following time step. That is,

V^dev_p(s, a) = max_{a′_p ∈ A_p} Q^dev_p(s, a_1 × … × a′_p × … × a_P)   (2)

V^dev_p(s, a) is the value we must achieve for player p in state s if we are planning to recommend action a and punish deviations with π^pun_p: if we do not achieve this value, player p would rather deviate and be punished. Our algorithm is shown in Fig. 2.

Figure 2: Dynamic programming using exact operations on sets of value vectors.
  Initialization:
    for s ∈ S
      V(s) ← {V | V^dis_p(s) ≤ V_p ≤ R_max/(1 − γ)}
    end
  Repeat until converged, for iteration ← 1, 2, …:
    for s ∈ S
      (compute the value vector set for each joint action, then throw away unenforceable vectors)
      for a ∈ A
        Q(s, a) ← {R(s, a)} + γ Σ_{s′ ∈ S} T(s, a)(s′) V(s′)
        Q(s, a) ← {Q ∈ Q(s, a) | Q ≥ V^dev(s, a)}
      end
      (we can now randomize among joint actions)
      V(s) ← convhull ∪_a Q(s, a)
    end

After k iterations, each vector in V(s) corresponds to a k-step policy in which no agent ever has an incentive to deviate. In the (k+1)st iteration, the first assignment to Q(s, a) computes the value of performing action a followed by any k-step policy. The second assignment throws out the pairs (a, π) for which some agent would want to deviate from a given that the agents plan to follow π in the future. And the convex hull accounts for the fact that, on reaching state s, we can select an action a and future policy π at random from the feasible pairs.2 Proofs of convergence and correctness of the exact algorithm are in the technical report [13]. Of course, we cannot actually implement the algorithm of Fig.
2, since it requires variables whose values are convex sets of vectors. But, we can approximate V(s) by choosing a finite set of witness vectors W ⊂ R^P and storing V(s, w) = argmax_{v ∈ V(s)} (v·w) for each w ∈ W. V(s) is then approximated by the convex hull of {V(s, w) | w ∈ W}. If W samples the P-dimensional unit hypersphere densely enough, the maximum possible approximation error will be small. (In practice, each agent will probably want to pick W differently, to focus her computation on policies in the portion of the Pareto frontier where her own utility is relatively high.) As |W| increases, the error introduced at each step will go to zero. The approximate algorithm is given in more detail in the technical report [13]. [Footnote 2: It is important for this randomization to occur after reaching state s to avoid introducing incentives to deviate, and it is also important for the randomization to be public.] [Figure omitted: three execution-trace panels over goal landmarks P1_1, P1_2, P2_1, P2_2.] Figure 3: Execution traces for our motion planning example. Left and center: with 2 witness vectors, the agents randomize between two selfish paths. Right: with 4–32 witnesses, the agents find a cooperative path. Steps where either player gets a goal are marked with ×. [Figure omitted: left panel shows the line of locations (warehouses A–E and the shop); right panel plots value to player 2 against value to player 1.] Figure 4: Supply chain management problem. In the left figure, Player 1 is about to deliver part D to the shop, while player 2 is at the warehouse which sells B. The right figure shows the tradeoff between accuracy and computation time. The solid curve is the Pareto frontier for s_start, as computed using 8 witnesses per state. The dashed and dotted lines were computed using 2 and 4 witnesses, respectively. Dots indicate computed value vectors; × marks indicate the Nash points. 5 EXPERIMENTS We tested our value iteration algorithm and negotiation procedure on two robotic planning domains: a joint motion planning problem and a supply-chain management problem.
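The experiments below vary the number of witness vectors per state. As a hypothetical sketch of the witness approximation from Sec. 4.1 (approximating a convex set, given as a finite point cloud, by its support points in random unit directions; names and data are illustrative):

```python
import numpy as np

def witness_approximation(points, num_witnesses, seed=0):
    """For each witness direction w on the unit sphere, keep the point
    maximizing v . w; the convex hull of the kept points is a
    conservative (inner) approximation of the hull of the input set."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(num_witnesses, points.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    keep = np.unique(np.argmax(points @ W.T, axis=0))
    return points[keep]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.25, 0.25]])
approx = witness_approximation(pts, num_witnesses=64)
# the interior point [0.25, 0.25] supports no direction, so it is dropped
```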
In our motion planning problem (Fig. 3), two players together control a two-wheeled robot, with each player picking the rotational velocity for one wheel. Each player has a list of goal landmarks which she wants to cycle through, but the two players can have different lists of goals. We discretized states based on X, Y, θ and the current goals, and discretized actions into stop, slow (0.45 m/s), and fast (0.9 m/s), for 9 joint actions and about 25,000 states. We discretized time at ∆t = 1 s, and set γ = 0.99. For both the disagreement policy and all punishment policies, we used “always stop,” since by keeping her wheel stopped either player can prevent the robot from moving. Planning took a few hours of wall clock time on a desktop workstation for 32 witnesses per state. Based on the planner’s output, we ran our negotiation protocol to select an equilibrium. Fig. 3 shows the results: with limited computation the players pick two selfish paths and randomize equally between them, while with more computation they find the cooperative path. Our experiments also showed that limiting the computation available to one player allows the unrestricted player to reveal only some of the equilibria she knows about, tilting the outcome of the negotiation in her favor (see the technical report [13] for details). For our second experiment we examined a more realistic supply-chain problem. Here each player is a parts supplier competing for the business of an engine manufacturer. The manufacturer doesn’t store items and will only pay for parts which can be used immediately. Each player controls a truck which moves parts from warehouses to the assembly shop; she pays for parts when she picks them up, and receives payment on delivery. Each player gets parts from different locations at different prices and no one player can provide all of the parts the manufacturer needs.
Each player’s truck can be at six locations along a line: four warehouse locations (each of which provides a different type of part), one empty location, and the assembly shop. Building an engine requires five parts, delivered in the order A, {B, C}, D, E (parts B and C can arrive in either order). After E, the manufacturer needs A again. Players can move left or right along the line at a small cost, or wait for free. They can also buy parts at a warehouse (dropping any previous cargo), or sell their cargo if they are at the shop and the manufacturer wants it. Each player can only carry one part at a time and only one player can make a delivery at a time. Finally, any player can retire and sell her truck; in this case the game ends and all players get the value of their truck plus any cargo. The disagreement policy is for all players to retire at all states. Fig. 4 shows the computed sets V(sstart) for various numbers of witnesses. The more witnesses we use, the more accurately we represent the frontier, and the closer our final policy is to the true Nash point. All of the policies computed are “intelligent” and “cooperative”: a human observer would not see obvious ways to improve them, and in fact would say that they look similar despite their differing payoffs. Players coordinate their motions, so that one player will drive out to buy part E while the other delivers part D. They sit idle only in order to delay the purchase of a part which would otherwise be delivered too soon. 6 CONCLUSION Real-world planning problems involve negotiation among multiple agents with varying goals. To take all agents incentives into account, the agents should find and agree on Paretodominant subgame-perfect Nash equilibria. For this purpose, we presented efficient planning and negotiation algorithms for general-sum stochastic games, and tested them on two robotic planning problems. References [1] V. Conitzer and T. Sandholm. Complexity results about Nash equilibria. 
Technical Report CMU-CS-02-135, School of Computer Science, Carnegie-Mellon University, 2002. [2] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, Massachusetts, 1995. [3] Prajit K. Dutta. A folk theorem for stochastic games. Journal of Economic Theory, 66:1–32, 1995. [4] Yevgeniy Dodis, Shai Halevi, and Tal Rabin. A cryptographic solution to a game theoretic problem. In Lecture Notes in Computer Science, volume 1880, page 112. Springer, Berlin, 2000. [5] Michael L. Littman and Peter Stone. A polynomial-time Nash equilibrium algorithm for repeated games. In ACM Conference on Electronic Commerce, pages 48–54. ACM, 2003. [6] E. Hansen, D. Bernstein, and S. Zilberstein. Dynamic programming for partially observable stochastic games. In Proceedings of the Nineteenth National Conference on Artificial Intelligence, pages 709–715, 2004. [7] Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. NBER Technical Working Paper No. 304, January 2005. [8] R. Brafman and M. Tennenholtz. Efficient learning equilibrium. Artificial Intelligence, 2004. [9] D. Fudenberg and E. Maskin. The folk theorem in repeated games with discounting or with incomplete information. Econometrica, 1986. [10] David Levine. The castle on the hill. Review of Economic Dynamics, 3(2):330–337, 2000. [11] Ariel Rubinstein. Perfect equilibrium in a bargaining model. Econometrica, 50(1):97–109, 1982. [12] V. Krishna and R. Serrano. Multilateral bargaining. Review of Economic Studies, 1996. [13] Chris Murray and Geoffrey J. Gordon. Multi-robot negotiation: approximating the set of subgame perfect equilibria in general-sum stochastic games. Technical Report CMU-ML-06114, Carnegie Mellon University, 2006.
2006
Optimal Single-Class Classification Strategies Ran El-Yaniv Department of Computer Science Technion- Israel Institute of Technology Technion, Israel 32000 rani@cs.technion.ac.il Mordechai Nisenson Department of Computer Science Technion - Israel Institute of Technology Technion, Israel 32000 motin@cs.technion.ac.il Abstract We consider single-class classification (SCC) as a two-person game between the learner and an adversary. In this game the target distribution is completely known to the learner and the learner’s goal is to construct a classifier capable of guaranteeing a given tolerance for the false-positive error while minimizing the false negative error. We identify both “hard” and “soft” optimal classification strategies for different types of games and demonstrate that soft classification can provide a significant advantage. Our optimal strategies and bounds provide worst-case lower bounds for standard, finite-sample SCC and also motivate new approaches to solving SCC. 1 Introduction In Single-Class Classification (SCC) the learner observes a training set of examples sampled from one target class. The goal is to create a classifier that can distinguish the target class from other classes, unknown to the learner during training. This problem is the essence of a great many applications such as intrusion, fault and novelty detection. SCC has been receiving much research attention in the machine learning and pattern recognition communities (for example, the survey papers [7, 8, 4] cite, altogether, over 100 papers). The extensive body of work on SCC, which encompasses mainly empirical studies of heuristic approaches, suffers from a lack of theoretical contributions and few principled (empirical) comparative studies of the proposed solutions. Thus, despite the extent of the existing literature, some of the very basic questions have remained unresolved. Let P(x) be the underlying distribution of the target class, defined over some space Ω. 
We call P the target distribution. Let 0 < δ < 1 be a given tolerance parameter. The learner observes a training set sampled from P and should then construct a classifier capable of distinguishing the target class. We view the SCC problem as a game between the learner and an adversary. The adversary selects another distribution Q over Ω and then a new element of Ω is drawn from γP + (1 − γ)Q, where γ is a switching parameter (unknown to the learner). The goal of the learner is to minimize the false negative error, while guaranteeing that the false positive error will be at most δ. The main consideration in previous SCC studies has been statistical: how can we guarantee a prescribed false positive rate (δ) given a finite sample from P? This question led to many solutions, almost all revolving around the idea of low-density rejection. The proposed approaches are typically generative or discriminative. Generative solutions range from full density estimation [2], to partial density estimation such as quantile estimation [5], level set estimation [1, 9] or local density estimation [3]. In discriminative methods one attempts to generate a decision boundary appropriately enclosing the high density regions of the training set [11]. In this paper we abstract away the statistical estimation component of the problem and model a setting where the learner has a very large sample from the target class. In fact, we assume that the learner knows the target distribution P precisely. While this assumption would render almost the entire body of SCC literature superfluous, it turns out that a significant, decision-theoretic component of the SCC problem remains – one that has so far been overlooked. In any case, the results we obtain here immediately apply to other SCC instances as lower bounds. The fundamental question arising in our setting is: What are optimal strategies for the learner? In particular, is the popular low-density rejection strategy optimal?
While most or all SCC papers adopted this strategy, nowhere in the literature could we find a formal justification. The partially good news is that low-density rejection is worst-case optimal, but only if the learner is confined to “hard” decision strategies. In general, the worst-case optimal learner strategy should be “soft”; that is, the learner should play a randomized strategy, which could result in a very significant gain. We first identify a monotonicity property of optimal SCC strategies and use it to establish the optimality of low-density rejection in the “hard” case. We then show an equivalence between low-density rejection and a constrained two-class classification problem where the other class is the uniform distribution over Ω. This equivalence motivates a new approach to solving SCC problems. We next turn our attention to the power of the adversary, an issue that has been overlooked in the literature but has crucial impact on the relevancy of SCC solutions in applications. For example, when considering an intrusion detection application (see, e.g., [6]), it is necessary to assume that the “attacking distribution” has some worst-case characteristics and it is important to quantify precisely what the adversary knows or can do. The simple observation in this setting is that an omniscient and unlimited adversary, who knows all parameters of the game including the learner’s strategy, would completely demolish the learner who uses hard strategies. By using a soft strategy, however, the learner can achieve on average the biased coin false negative rate of 1 −δ. We then analyze the case of an omniscient but limited adversary, who must select a sufficiently distant Q satisfying DKL(Q||P) ≥Λ, for some known parameter Λ. One of our main contributions is a complete analysis of this game, including identification of the optimal strategy for the learner and the adversary, as well as the best achievable false negative rate. 
The optimal learner strategy and best achievable rate are obtained via a solution of a linear program specified in terms of the problem parameters. These results are immediately applicable as lower bounds for standard (finite-sample) SCC problems, but may also be used to inspire new types of algorithms for standard SCC. While we do not have a closed form expression for the best achievable false-negative rate, we provide a few numerical examples demonstrating and comparing the optimal “hard” and “soft” performance. 2 Problem Formulation The single-class classification (SCC) problem is defined as a game between the learner and an adversary. The learner receives a training sample of examples from a target distribution P defined over some space Ω. On the basis of this training sample, the learner should select a rejection function r : Ω → [0, 1], where for each ω ∈ Ω, rω = r(ω) is the probability with which the learner will reject ω. On the basis of any knowledge of P and/or r(·), the adversary selects an attacking distribution Q, defined over Ω. Then, a new example is drawn from γP + (1 − γ)Q, where 0 < γ < 1 is a switching probability unknown to the learner. The rejection rate of the learner, using a rejection function r, with respect to any distribution D (over Ω), is ρ(D) = ρ(r, D) ≜ E_D{r(ω)}. For notational convenience, whenever we decorate r (e.g., r′, r∗), the corresponding ρ will be decorated accordingly (e.g., ρ′, ρ∗). The two main quantities of interest here are the false positive rate (type I error) ρ(P), and the false negative rate (type II error) 1 − ρ(Q). Before the start of the game, the learner receives a tolerance parameter 0 < δ < 1, giving the maximally allowed false positive rate. A rejection function r(·) is valid if its false positive rate satisfies ρ(P) ≤ δ. A valid rejection function (strategy) is optimal if it guarantees the smallest false negative rate amongst all valid strategies.
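These quantities are easy to compute over a finite support. A minimal sketch, with illustrative values for P, Q, and r (ours, not taken from the paper):

```python
# Sketch of the basic quantities over a finite support Omega = {0, ..., N-1}.
# P, Q, and r below are illustrative values only.

def rho(r, D):
    """Rejection rate rho(r, D) = E_D[r(omega)]."""
    return sum(d_i * r_i for r_i, d_i in zip(r, D))

P = [0.1, 0.3, 0.6]      # target distribution
Q = [0.5, 0.4, 0.1]      # an attacking distribution
r = [0.5, 0.1, 0.0]      # a soft rejection function

delta = 0.1
false_positive = rho(r, P)       # type I error rate: 0.05 + 0.03 = 0.08
false_negative = 1 - rho(r, Q)   # type II error rate: 1 - 0.29 = 0.71
assert false_positive <= delta   # r is delta-valid
```

Note that validity constrains only ρ(P); the adversary then picks Q to make 1 − ρ(Q) as large as possible.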
We consider a model where the learner knows the target distribution P exactly, thus focusing on the decision-theoretic component in SCC. Clearly, our model approximates a setting where the learner has a very large training set, but the results we obtain immediately apply, in any case, as lower bounds to other SCC instances. This SCC game is a two-person zero sum game where the payoff to the learner is ρ(Q). The set Rδ(P) △= {r : ρ(P) ≤δ} of valid rejection functions is the learner’s strategy space. Let Q be the strategy space of the adversary, consisting of all allowable distributions Q that can be selected by the adversary. We are concerned with optimal learner strategies for game variants distinguished by the adversary’s knowledge of the learner’s strategy, P and/or of δ and by other limitations on Q. We distinguish a special type of this game, which we call the hard setting, where the learner must deterministically reject or accept new events; that is, r : Ω→{0, 1}, and such rejection functions are termed “hard.” The more general game defined above (with “soft” functions) is called the soft setting. As far as we know, only the hard setting has been considered in the SCC literature thus far. In the soft setting, given any rejection function, the learner can reduce the type II error by rejecting more (i.e., by increasing r(·)). Therefore, for an optimal r(·) we have ρ(P) = δ (rather than ρ(P) ≤δ). It follows that the switching parameter γ is immaterial to the selection of an optimal strategy. Specifically, the combined error of an optimal strategy is γρ(P) + (1 −γ)(1 −ρ(Q)) = γδ + (1 −γ)(1 −ρ(Q)), which is minimized by minimizing the type II error, 1 −ρ(Q). We assume throughout this paper a finite support of size N; that is, Ω= {1, . . . , N} and P △= {p1, . . . , pN} and Q △= {q1, . . . , qN} are probability mass functions. Additionally, a “probability distribution” refers to a distribution over the fixed support set Ω. 
Note that this assumption still leaves us with an infinite game because the learner’s pure strategy space, Rδ(P), is infinite.1 3 Characterizing Monotone Rejection Functions In this section we characterize the structure of optimal learner strategies. Intuitively, it seems plausible that the learner should not assign higher rejection values to higher probability events under P. That is, one may expect that a reasonable rejection function r(·) would be monotonically decreasing with probability values (i.e., if pj ≤ pk then rj ≥ rk). Such monotonicity is a key justification for a very large body of SCC work, which is based on low-density rejection strategies. Surprisingly, optimal monotone strategies are not always guaranteed, as shown in the following example. Example 3.1 (Non-Monotone Optimality) In the hard setting, take N = 3, P = (0.06, 0.09, 0.85) and δ = 0.1. The two δ-valid hard rejection functions are r′ = (1, 0, 0) and r′′ = (0, 1, 0). Let Q = {Q = (0.01, 0.02, 0.97)}. Clearly ρ′(Q) = 0.01 and ρ′′(Q) = 0.02, and therefore r′′(·) is optimal despite breaking monotonicity. More generally, this example holds if Q = {Q : q2 − q1 ≥ ε} for any 0 < ε ≤ 1. In the soft setting, let N = 2, P = (0.2, 0.8), and δ = 0.1. We note that Rδ(P) = {rε = (0.1 + 4ε, 0.1 − ε)}, for ε ∈ [−0.025, 0.1]. We take Q = {Q = (0.1, 0.9)}. Then ρε(Q) = 0.1 + 0.4ε − 0.9ε = 0.1 − 0.5ε. This is clearly maximized when we minimize ε by taking ε = −0.025, and then the optimal rejection function is (0, 0.125), which clearly breaks monotonicity. This example also holds for Q = {Q : q2 ≥ cq1} for any c > 4. Fix P and δ. For any adversary strategy space, Q, let R∗δ(P) be the set of optimal valid rejection functions, R∗δ ≜ {r ∈ Rδ(P) : minQ∈Q ρ(Q) = maxr′∈Rδ(P) minQ∈Q ρ′(Q)}.2 We note that R∗δ is never empty in the cases we consider. A simple observation is that for any r ∈ R∗δ there exists r′ ∈ R∗δ such that r′(i) = r(i) for all i such that pi > 0 and, for zero probabilities pj = 0, r′(j) = 1.
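The arithmetic in Example 3.1 can be checked mechanically. The snippet below (a verification sketch using only the numbers from the example) confirms both the hard and the soft cases:

```python
def rho(r, D):
    # rejection rate rho(r, D) = E_D[r(omega)]
    return sum(d_i * r_i for r_i, d_i in zip(r, D))

# Hard setting: N = 3, P = (0.06, 0.09, 0.85), delta = 0.1
Q_hard = [0.01, 0.02, 0.97]
r1 = [1, 0, 0]
r2 = [0, 1, 0]
assert rho(r1, Q_hard) == 0.01 and rho(r2, Q_hard) == 0.02
# r2 rejects more of Q, so it is optimal despite breaking monotonicity.

# Soft setting: N = 2, P = (0.2, 0.8), delta = 0.1, Q = (0.1, 0.9)
Q_soft = [0.1, 0.9]
def r_eps(eps):
    return [0.1 + 4 * eps, 0.1 - eps]

# rho(Q) = 0.1 - 0.5*eps decreases in eps, so eps = -0.025 is optimal,
# giving the non-monotone rejection function (0, 0.125).
r_opt = r_eps(-0.025)
assert abs(r_opt[0]) < 1e-15 and abs(r_opt[1] - 0.125) < 1e-15
assert abs(rho(r_opt, Q_soft) - 0.1125) < 1e-12
```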
The following property ensures that R∗ δ will include a monotone (optimal) hard strategy, which means that the search space for the learner can be conveniently confined to monotone strategies. While the set of all distributions satisfies this property, later on we will consider limited strategic adversary spaces where this property still holds.3 1The game is conveniently described in extensive form (i.e., game tree) where in the first move the learner selects a rejection function, followed by a chance move to determine the source (either P or Q) of the test example (with probability γ). In the case where Q is selected, the adversary chooses (randomly using Q) the test example. In this game the choice of Q depends on knowledge of P and r(·). 2For certain strategy spaces, Q, it may be necessary to consider the infimum rather than the minimum. In such cases it may be necessary to replace ‘Q ∈Q’ (in definitions, theorems, etc.) with ‘Q ∈cl(Q)’, where cl(Q) is the closure of Q. 3All properties defined in this paper could be made weaker for the purposes of the proofs, but this would needlessly complicate them. Indeed, the way they are currently defined is sufficient for most “reasonable” Q. Definition 3.2 (Property A) Let P be a distribution. A set of distributions Q has Property A w.r.t. P if for all j, k and Q ∈Q such that pj < pk and qj < qk, there exists Q′ ∈Q such that q′ k ≤qj, q′ j ≥qk and for all i ̸= j, k, we have q′ i = qi. Theorem 3.3 (Monotone Hard Decisions) When the learner is restricted to hard-decisions and Q satisfies Property A w.r.t. P, then ∃r ∈R∗ δ such that pj < pk ⇒r(j) ≥r(k).4 Proof: Let us assume by contradiction that no such rejection function exists in R∗ δ. Let r ∈R∗ δ. Let j be such that pj = minω:r(ω)=0 pω. Then, there must exist k, such that pj < pk and r(k) = 1 (otherwise r is monotone). Define r∗to be r with the values of j and k swapped; that is, r∗(j) = 1, r∗(k) = 0 and for all other i, r∗(i) = r(i). 
We note that ρ∗(P) = ρ(P) + pj −pk < ρ(P) ≤ δ. Let Q∗∈Q be such that minQ ρ∗(Q) = ρ∗(Q∗) = ρ(Q∗) + q∗ j −q∗ k. Thus, if q∗ j ≥q∗ k, ρ∗(Q∗) ≥ρ(Q∗). Otherwise, there exists Q∗′ as in Property A and in particular, q∗′ k ≤q∗ j . As a result, ρ∗(Q∗) = ρ(Q∗′) + q∗ j −q∗′ k ≥ρ(Q∗′). Therefore, there always exists Q ∈Q such that ρ∗(Q∗) ≥ρ(Q) (either Q = Q∗or Q = Q∗′). Consequently, minQ ρ∗(Q) ≥minQ ρ(Q), and thus, r∗∈R∗ δ. As long as there are more j, k pairs which need to have their rejection levels fixed, we label r = r∗and repeat the above procedure. Since the only changes are made to r∗(j) and r∗(k), and since j is the non-rejected event with minimal probability, the procedure will be repeated at most N times. The final r∗is in R∗ δ and satisfies pj < pk ⇒r(j) ≥r(k). Contradiction. □ Theorem 3.3 provides a formal justification for the low-density rejection strategy (LDRS), popular in the SCC literature. Specifically, assume w.l.o.g. p1 ≤p2 ≤· · · ≤pN. The corresponding δ-valid low density rejection function places rj = 1 iff Pj i=1 pi ≤δ. Our discussion on soft decisions is facilitated by Property B and Theorem 3.5 that follow. Definition 3.4 (Property B) Let P be a distribution. A set of distributions Q has Property B w.r.t. P if for all j, k and Q ∈Q such that 0 < pj ≤pk and qj pj < qk pk , there exists Q′ ∈Q such that q′ j pj ≥q′ k pk and for all i ̸= j, k, q′ i = qi. The rather technical proof of the following theorem is omitted for lack of space (and appears in the adjoining, supplementary appendix). Theorem 3.5 (Monotone Soft Decisions) If Q satisfies Property B w.r.t. P, then ∃r ∈R∗ δ such that: (i)pi = 0 ⇒r(i) = 1; (ii) pj < pk ⇒r(j) ≥r(k); and (iii) pj = pk ⇒r(j) = r(k). 4 Low-Density Rejection and Two-Class Classification In this section we focus on the hard setting. We show that the low-density rejection strategy (LDRS - defined in Section 3) is optimal. 
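Concretely, a δ-valid LDRS can be constructed greedily: sort events by ascending probability and reject as long as the cumulative rejected mass stays within δ. A minimal sketch (the function name `ldrs` is ours, not the paper's):

```python
def ldrs(P, delta):
    """Greedy low-density rejection: set r_j = 1 along the ascending-probability
    order while the cumulative rejected probability stays <= delta."""
    order = sorted(range(len(P)), key=lambda i: P[i])
    r = [0] * len(P)
    rejected_mass = 0.0
    for i in order:
        if rejected_mass + P[i] > delta:
            break
        r[i] = 1
        rejected_mass += P[i]
    return r

# P and delta from Example 3.1: the LDRS rejects only the least likely event.
assert ldrs([0.06, 0.09, 0.85], 0.1) == [1, 0, 0]
```

When several events tie in probability, more than one δ-valid LDRS exists; this greedy construction returns one of them (the tie broken by index order).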
Moreover we show that the optimal hard performance can be obtained by solving a constrained two-class classification problem where the other class is the uniform distribution over Ω. The results here consider families Q that satisfy the following property. Definition 4.1 (Property C) Let P be a distribution. A set of distributions Q has Property C w.r.t. P if for all j, k and Q ∈Q such that pj = pk there exists Q′ ∈Q such that q′ k = qj, q′ j = qk and for all i ̸= j, k, q′ i = qi. We state without proof the following lemma (the proof can be found in the appendix). Lemma 4.2 Let r∗be a δ-valid low-density rejection function (LDRS). Let r be any monotone δvalid rejection function. Then minQ∈Q ρ∗(Q) ≥minQ∈Q ρ(Q) for any Q satisfying Property C. Example 4.3 (Violation of Property C) We illustrate here that violating Property C may result in a violation of Lemma 4.2. Let N = 5, P = (0.02, 0.03, 0.05, 0.05, 0.85), and δ = 0.1. Then the two δ-valid LDRS rejection functions are r = (1, 1, 1, 0, 0) and r′ = (1, 1, 0, 1, 0). Let Q = {Q : q3 −q4 > ε} for some 0 < ε < 1. Then, for any Q ∈Q, ρ(Q) −ρ′(Q) = q3 −q4 > ε, and therefore, for the LDRS, r′, there exists a monotone r such that minQ∈Q ρ′(Q) < minQ∈Q ρ(Q). 4Here we must consider a weaker notion of monotonicity for hard strategies to be both valid and optimal. When Q satisfies Property A, then by Theorem 3.3 there exists a monotone optimal rejection function. Therefore, the following corollary of Lemma 4.2 establishes the optimality of any LDRS. Corollary 4.4 Any δ-valid LDRS is optimal if Q satisfies both Property A and Property C. Thus, any LDRS strategy is indeed worst-case optimal when the learner is willing to be confined to hard rejection functions and when the adversary’s space satisfies Property A and Property C. We now show that an (optimal) LDRS solution is equivalent to an optimal solution of the following constrained Bayesian two-class decision problem. 
Let the first class c1 have distribution P(x) and the second class, c2, have the uniform distribution U(x) = 1/N. Let 0 < c < 1 and 0 < ε < (Nδc+1−c)/Nδc. The classes have priors Pr{c1} = c and Pr{c2} = 1 − c. The loss function λij, giving the cost of deciding ci instead of cj (i, j = 1, 2), is λ11 = λ22 = 0, λ12 = (Nc+1−c)/(1−c) and λ21 = ε. The goal is to construct a classifier C(x) ∈ {c1, c2} that minimizes the total Bayesian risk under the constraint that, for a given δ, Σx:C(x)=c2 P(x) ≤ δ. We term this problem “the Bayesian binary problem.” Theorem 4.5 An optimal binary classifier for the Bayesian binary problem induces an optimal (hard) solution to the SCC problem (an LDRS) when Q satisfies properties A and C. Proof Sketch: Let C∗(·) be an optimal classifier for the Bayesian binary problem. Any classifier C(·) induces a hard rejection function r(·) by taking r(x) = 1 ⇔ C(x) = c2. Therefore, the set of feasible classifiers (satisfying the constraint) clearly induces Rδ(P). Let Mi(C) ≜ {x : C(x) = ci}. Note that the constraint is equivalent to Σx∈M2(C) P(x) ≤ δ. The Bayes risk for classifying x as ci is Ri(x) ≜ λii Pr{ci|x} + λi(3−i) Pr{c3−i|x} = λi(3−i) Pr{c3−i|x}. The total Bayes risk is R(C) ≜ Σx∈M1(C) R1(x) + Σx∈M2(C) R2(x), which is minimized at C∗(·). It is not difficult to show that R1(·) and R2(·) are monotonically decreasing and increasing, respectively. It therefore follows that x ∈ M1(C∗), y ∈ M2(C∗) ⇒ P(x) ≥ P(y) (otherwise, by swapping C∗(x) and C∗(y), the constraint can be maintained and R(C∗) decreased). It is also not difficult to show that R1(x) ≥ 1 > R2(x) for any x. Thus, it follows that Σy∈M2(C∗) P(y) + minx∈M1(C∗) P(x) > δ (otherwise, some x could be transferred from M1(C∗) to M2(C∗), reducing R(C∗)). Together, these two properties immediately imply that C∗(·) induces a δ-valid LDRS.
□ Theorem 4.5 motivates a different approach to SCC in which we sample from the uniform distribution over Ωand then attempt to approximate the optimal Bayes solution to the constrained binary problem. It also justifies certain heuristics found in the literature [10, 11]. 5 The Omniscient Adversary: Games, Strategies and Bounds 5.1 Unrestricted Adversary In the first game we analyze an adversary who is completely unrestricted. This means that Q is the set of all distributions. Unsurprisingly, this game leaves little opportunity for the learner. For any rejection function r(·), define rmin △= mini r(i) and Imin(r) △= {i : r(i) = rmin}. For any distribution D, ρ(D) = PN i=1 dir(i) ≥PN i=1 dirmin = rmin, in particular, δ = ρ(P) ≥rmin and minQ ρ(Q) ≥rmin. By choosing Q such that qi = 1 for some i ∈Imin(r), the adversary can achieve ρ(Q) = rmin (the same rejection rate is achieved by taking any Q with qi = 0 for all i ̸∈Imin(r)). In the soft setting, minQ ρ(Q) is maximized by the rejection function rδ(i) △= δ for all pi > 0 (rδ(i) △= 1 for all pi = 0) This is equivalent to flipping a δ-biased coin for non-null events (under P). The best achievable Type II Error is 1 −δ. In the hard setting, clearly rmin = 0 (otherwise 1 > δ ≥1), and the best achievable Type II Error is precisely 1. That is, absolutely nothing can be achieved. This simple analysis shows the futility of the SCC game when the adversary is too powerful. In order to consider SCC problems at all one must consider reasonable restrictions on the adversary that lead to more useful games. One type of restriction would be to limit the adversary’s knowledge of r(·), P and/or of δ. Another type would be to directly limit the strategic choices available to the adversary. In the next section we focus on the latter type. 
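Before moving on, the unrestricted game is simple enough to simulate directly. The sketch below (our illustration; the strategies shown are made up, and all pi are assumed positive) contrasts a hard strategy, which a point-mass adversary drives to a type II error of 1, with the soft strategy rδ:

```python
# Against an unrestricted adversary, the best response to any r is a point
# mass on an event minimizing r, so the worst-case rejection rate is min(r).
def worst_case_rho(r):
    return min(r)

delta = 0.05
N = 4
r_hard = [1, 0, 0, 0]    # a hard strategy rejecting one low-probability event
r_soft = [delta] * N     # r_delta: flip a delta-biased coin (all p_i > 0)

assert worst_case_rho(r_hard) == 0      # type II error = 1: nothing achieved
assert worst_case_rho(r_soft) == delta  # type II error = 1 - delta
```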
5.2 A Constrained Adversary In seeking a quantifiable constraint on Q it is helpful to recall that the essence of the SCC problem is to try to distinguish between two probability distributions (albeit one of them unknown). A natural constraint is a lower bound on the “distance” between these distributions. Following similar results in hypothesis testing, we would like to consider games in which the adversary must select Q such that D(P||Q) ≥Λ, for some constant Λ > 0, where D(·||·) is the KL-divergence. Unfortunately, this constraint is vacuous since D(P||Q) explodes when qi ≪pi (for any i). In this case the adversary can optimally play the same strategy as in the unrestricted game while meeting the KL-divergence constraint. Fortunately, by taking D(Q||P) ≥Λ, we can effectively constrain the adversary. We note, as usual, that the learner can (and should) reject with probability 1 any null events under P. Thus, an adversary would be foolish to choose a distribution Q that has any probability for these events. Therefore, we henceforth assume w.l.o.g. that Ω= Ω(P) △= {ω : pω > 0}. Taking D(Q||P) △= PN i=1 qi log(qi/pi), we then define Q = QΛ △= {Q : D(Q||P) ≥Λ}. We note that QΛ possesses properties A, B and C w.r.t. P,5 and by Theorems 3.3 and 3.5 there exists a monotone r ∈R∗ δ (in both the hard and soft settings) and by Corollary 4.4, any δ-valid LDRS is hard-optimal. If maxi pi ≤2−Λ, then any Q which is concentrated on a single event meets the constraint D(Q||P) ≥Λ. Then, the adversary can play the same strategy as in the unrestricted game, and the learner should select rδ as before. For the game to be non-trivial it is thus required that Λ > log(1/ maxi pi). Similarly, if the optimal r is such that there exists j ∈Imin(r) (that is r(j) = rmin) and pj ≤2−Λ, then a distribution Q that is completely concentrated on j has D(Q||P) ≥Λ and achieves ρ(Q) = rmin as in the unrestricted game. Therefore, r = rδ, and so maximizes rmin. 
We thus assume that the optimal r has no such j. We begin our analysis of the game by identifying some useful characteristics of optimal adversary strategies in Lemma 5.1. Then Theorem 5.2 shows that the effective support of an optimal Q has a size of two at most. Based on these properties, we provide in Theorem 5.3 a linear program that computes the optimal rejection function. The following lemma is stated without its (technical) proof. Lemma 5.1 If Q minimizes ρ(Q) and meets the constraint D(Q||P) ≥Λ then: (i) D(Q||P) = Λ; (ii) pj < pk and qk > 0 ⇒r(j) > r(k); (iii) pj < pk and qj > 0 ⇒qj log qj pj + qk log qk pk > (qj + qk) log qj+qk pk ; (iv) pj < pk and qj > 0 ⇒qj pj > qk pk ; and (v) qj, qk > 0 ⇒pj ̸= pk. Theorem 5.2 Any optimal adversarial strategy Q has an effective support of size at most two. Proof Sketch: Assume by contradiction that an optimal Q∗has an effective support of size J ≥3. W.l.o.g. we rename events such that the first J events are the effective support of Q∗(i.e., q∗ i > 0, i = 1, . . . , J). From part (i) of Lemma 5.1, Q∗is a global minimizer of ρ(Q) subject to the constraints PJ i=1 qi log qi pi = Λ, qi > 0 (i = 1, . . . , J) and PJ i=1 qi = 1. The Lagrangian of this problem is L(Q, λ) = J X i=1 r(i)qi + λ1 J X i=1 qi log qi pi −Λ ! + λ2 J X i=1 qi −1 ! . (1) It is not hard to show, using parts (iv) and (v) of Lemma 5.1, that Q∗is an extremum point of (1). Taking the partial derivatives of (1) we have: ∂L(Q∗,λ) ∂qi = r(i)+λ1 log q∗ i pi + 1 +λ2 = 0. Solving ∂L(Q∗,λ) ∂q1 = ∂L(Q∗,λ) ∂q2 for λ1, we get λ1 = (r(2) −r(1))/(log q∗ 1 p1 −log q∗ 2 p2 ). If we assume (w.l.o.g.) that p1 < p2, then, from parts (ii) and (iv) of Lemma 5.1, r(2) < r(1) and q∗ 1/p1 > q∗ 2/p2. Thus λ1 < 0. Therefore, for all i, ∂2L(Q,λ) ∂q2 i = λ1 qi < 0, and (1) is strictly concave. Therefore, since Q∗is an extremum of the (strictly concave) Lagrangian function, it is the unique global maximum. By part (iv) of Lemma 5.1, the smooth function fP,Λ(q1, q2, . . . 
, qJ−1) △= D(Q||P) −Λ has a root at Q∗where no partial derivative is zero. Therefore, it has an infinite number of roots in any convex 5For any pair j, k such that pj ≤pk, D(Q||P) does not decrease by transferring all the probability from k to j in Q: qj log qj pj + qk log qk pk ≤(qj + qk) log qj +qk pj . domain where Q∗is an internal point. Thus, there exists another distribution, ˜Q ̸= Q∗, where ˜qi > 0 for i = 1, . . . , J, which meets the equality criteria of the Lagrangian. Since Q∗is the unique global maximum of L(Q, λ): ρ( ˜Q) = L( ˜Q, λ) < L(Q∗, λ) = ρ(Q∗). Contradiction. □ We now turn our attention to the learner’s selection of r(·). As already noted, it is sufficient for the learner to consider only monotone rejection functions. Since for these functions pj = pk ⇒r(j) = r(k), the learner can partition Ωinto K = K(P) event subsets, which correspond, by probability, to “level sets”, S1, S2, . . . , SK (all events in a level set S have probability PS). We re-index these subsets such that 0 < PS1 < PS2 < · · · < PSK. Define K variables r1, r2, . . . , rK, representing the rejection rate assigned to each of the K level sets (∀ω ∈Si, r(ω) = ri). We group our level sets by probability: L = {S : PS < 2−Λ}, M = {S : PS = 2−Λ}, and H = {S : PS > 2−Λ}. By Theorem 5.2, the optimal Q which the adversary selects will have an effective support of size 2 at most. If it has an effective support of size 1, then the event ω for which qω = 1 cannot be from a level set in L or H (otherwise, part (i) of Lemma 5.1 would be violated). Therefore it must belong to the single level set in M. Thus, if M = {Sm} (for some index m), then there are feasible solutions Q such that qω = 1 (for ω ∈Sm), all of which have ρ(Q) = rm. 
If, on the other hand, Q has an effective support of size 2, then it is not hard to show that one of the two events must be from a level set Sl ∈ L, and the other from a level set Sh ∈ H (since all other combinations result in a violation of either part (i) or part (iii) of Lemma 5.1). Then, there is a single solution to ql log(ql/PSl) + (1 − ql) log((1 − ql)/PSh) = Λ, where ql and 1 − ql are the probabilities that Q assigns to the events from Sl and Sh, respectively. For such a distribution, ρ(Q) = ql rl + (1 − ql) rh. Therefore, the adversary’s choice of an optimal distribution, Q, must have one of |L||H| + |M| ≤ ⌈K²/4⌉ (possibly different) rejection rates. Each of these rates, ρ1, ρ2, . . . , ρ|L||H|+|M|, is a linear combination of at most two variables, ri and rj. We introduce an additional variable, z, to represent the max-min rejection rate. We thus have: Theorem 5.3 An optimal soft rejection function and the lower bound on the Type II Error, 1 − z, is obtained by solving the following linear program:6 maximize z over r1, r2, . . . , rK, z, subject to: Σ_{i=1}^{K} ri|Si|PSi = δ, 1 ≥ r1 ≥ r2 ≥ · · · ≥ rK ≥ 0, and ρi ≥ z for i ∈ {1, 2, . . . , |L||H| + |M|}. 5.2.1 Numerical Examples We now compare the performance of hard and soft rejection strategies for this constrained game (D(Q||P) ≥ Λ) for various values of Λ, and two different families of target distributions P over support N = 50. The families are arbitrary probability mass functions over N events and discretized Gaussians (over N bins). For each Λ we generated 50 random distributions P for each of the families.7 For each such P we solved for the optimal hard and soft strategies and computed the corresponding worst-case optimal type II error, 1 − ρ(Q). The results for δ = 0.05 are shown in Figure 1. Other results (not presented) for a wide variety of the problem parameters (e.g., N, δ) are qualitatively the same. It is evident that both the soft and hard strategies are ineffective for small Λ.
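For intuition about the linear program of Theorem 5.3, consider a minimal instance (our toy example, not from the paper's experiments): N = 2 with p1 < 2^−Λ < p2, so there is one level set in L and one in H. The equality constraint pins r2 to r1, the adversary's candidates reduce to a point mass on the low-probability event plus the unique two-point Q with D(Q||P) = Λ, and z is a minimum of linear functions of r1, so a bisection plus a grid search can stand in for an LP solver:

```python
import math

# Toy instance: P = (0.2, 0.8), delta = 0.1, Lambda = 1.0, so 2^-Lambda = 0.5
# separates the two level sets. All divergences are in bits.
p1, p2, delta, Lam = 0.2, 0.8, 0.1, 1.0

def div_two_point(q):
    """D(Q||P) for Q = (q, 1-q)."""
    d = 0.0
    if q > 0:
        d += q * math.log2(q / p1)
    if q < 1:
        d += (1 - q) * math.log2((1 - q) / p2)
    return d

# Unique q in (0,1) with D(Q||P) = Lambda (bisection; D < Lam at q=0, > at q=1).
lo, hi = 1e-12, 1 - 1e-12
for _ in range(200):
    mid = (lo + hi) / 2
    if div_two_point(mid) < Lam:
        lo = mid
    else:
        hi = mid
q = (lo + hi) / 2

best_z, best_r = -1.0, None
steps = 4000
for i in range(steps + 1):
    r1 = 0.1 + (0.5 - 0.1) * i / steps   # feasible range for r1 in this instance
    r2 = (delta - p1 * r1) / p2          # equality constraint rho(P) = delta
    if not (0 <= r2 <= r1 <= 1):
        continue
    # Adversary candidates: point mass on event 1 (D = log2(1/p1) >= Lambda),
    # and the two-point Q found above.
    z = min(r1, q * r1 + (1 - q) * r2)
    if z > best_z:
        best_z, best_r = z, (r1, r2)

# The soft optimum beats both the hard LDRS (z = 0 here, since p1 > delta)
# and the naive coin r_delta (z = delta).
assert best_z > delta
```

On this instance the search drives r1 to its upper end and r2 to zero; the guaranteed worst-case rejection rate exceeds δ, illustrating the gain of soft strategies reported in the numerical examples.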
Clearly, the soft approach has significantly lower error than the hard approach (until Λ becomes “sufficiently large”). 6Let r∗ be the solution to the linear program. Our derivation of the linear program depends on the assumption that there is no event j ∈ Imin(r∗) such that pj ≤ 2−Λ (see the discussion preceding Lemma 5.1). If r∗ contradicts this assumption then, as discussed, the optimal strategy is rδ. It is not hard to prove that in this case r∗ = rδ anyway, and thus the solution to the linear program is always optimal. 7Since maxQ D(Q||P) = log(1/ mini pi), it is necessary that mini pi ≤ 2−Λ when generating P (to ensure that a Λ-distant Q exists). Distributions in the first family of arbitrary random distributions (a) are generated by sampling a point (p1) uniformly in (0, 2−Λ]. The other N − 1 points are drawn i.i.d. ∼ U(0, 1], and then normalized so that their sum is 1 − p1. The second family (b) are Gaussians centered at 0 and discretized over N evenly spaced bins in the range [−10, 10]. A (discretized) random Gaussian N(0, σ) is selected by choosing σ uniformly in some range [σmin, σmax]. σmin is set to the minimum σ ensuring that the first/last bin will not have “zero” probability (due to limited precision). σmax was set so that the cumulative probability in the first/last bin will be 2−Λ, if possible (otherwise σmax is arbitrarily set to 10σmin). [Two plots of worst-case Type II Error vs. Λ, panels (a) Arbitrary and (b) Gaussians, each comparing the Soft and Hard strategies.] Figure 1: Type II Error vs. Λ, for N = 50 and δ = 0.05. 50 distributions were generated for each value of Λ (Λ = 0.5, 1.0, . . . , 12.5). Error bars depict standard error of the mean (SEM). 6 Concluding Remarks We have introduced a game-theoretic approach to the SCC problem.
This approach lends itself well to analysis, allowing us to prove under what conditions low-density rejection is hard-optimal and if an optimal monotone rejection function is guaranteed to exist. Our analysis introduces soft decision strategies, which allow for significantly better performance. Observing the learner’s futility when facing an omniscient and unlimited adversary, we considered restricted adversaries and provided full analysis of an interesting family of constrained games. This work opens up many new avenues for future research. We believe that our results could be useful for inspiring new algorithms for finite-sample SCC problems. For example, the equivalence of low-density rejection to the Bayesian binary problem as shown in Section 4 obviously motivates a new approach. Clearly, the utilization of randomized strategies should be carried over to the finite sample case as well. Our approach can be extended and developed in several ways. A very interesting setting to consider is one in which the adversary has partial knowledge of the problem parameters and the learner’s strategy. For example, the adversary may only know that P is in some subspace. Additionally, it is desirable to extend our analysis to infinite and continuous event spaces. Finally, it would be very nice to determine an explicit expression for the lower bound obtained by the linear program of Theorem 5.3. References [1] S. Ben-David and M. Lindenbaum. Learning distributions by their density-levels - a paradigm for learning without a teacher. In EuroCOLT, pages 53–68, 1995. [2] C.M. Bishop. Novelty detection and neural network validation. IEE Proceedings - Vision, Image, and Signal Processing, 141(4):217–222, 1994. [3] M.M. Breunig, H.P. Kriegel, R.T. Ng, and J. Sander. LOF: Identifying density-based local outliers. In SIGMOD Conference, pages 93–104, 2000. [4] V. Hodge and J. Austin. A survey of outlier detection methodologies. Artificial Intelligence Review, 22(2):85–126, 2004.
[5] G.R.G. Lanckriet, L. El Ghaoui, and M.I. Jordan. Robust novelty detection with single-class mpm. In NIPS, pages 905–912, 2002. [6] A. Lazarevic, L. Ert¨oz, V. Kumar, A. Ozgur, and J. Srivastava. A comparative study of anomaly detection schemes in network intrusion detection. In SDM, 2003. [7] M. Markou and S. Singh. Novelty detection: a review – part 1: statistical approaches. Signal Processing, 83(12):2481–2497, 2003. [8] M. Markou and S. Singh. Novelty detection: a review – part 2: neural network based approaches. Signal Processing, 83(12):2499–2521, 2003. [9] I. Steinwart, D. Hush, and C. Scovel. A classification framework for anomaly detection. Journal of Machine Learning Research, 6, 2005. [10] David M. J. Tax and Robert P. W. Duin. Uniform object generation for optimizing one-class classifiers. Journal of Machine Learning Research, 2:155–173, 2002. [11] H. Yu. Single-class classification with mapping convergence. Machine Learning, 61(1-3):49–69, 2005.
| 2006 | 138 | 2,964 |
Learning to Rank with Nonsmooth Cost Functions Christopher J.C. Burges Microsoft Research One Microsoft Way Redmond, WA 98052, USA cburges@microsoft.com Robert Ragno Microsoft Research One Microsoft Way Redmond, WA 98052, USA rragno@microsoft.com Quoc Viet Le Statistical Machine Learning Program NICTA, ACT 2601, Australia quoc.le@anu.edu.au Abstract The quality measures used in information retrieval are particularly difficult to optimize directly, since they depend on the model scores only through the sorted order of the documents returned for a given query. Thus, the derivatives of the cost with respect to the model parameters are either zero, or are undefined. In this paper, we propose a class of simple, flexible algorithms, called LambdaRank, which avoids these difficulties by working with implicit cost functions. We describe LambdaRank using neural network models, although the idea applies to any differentiable function class. We give necessary and sufficient conditions for the resulting implicit cost function to be convex, and we show that the general method has a simple mechanical interpretation. We demonstrate significantly improved accuracy, over a state-of-the-art ranking algorithm, on several datasets. We also show that LambdaRank provides a method for significantly speeding up the training phase of that ranking algorithm. Although this paper is directed towards ranking, the proposed method can be extended to any non-smooth and multivariate cost functions. 1 Introduction In many inference tasks, the cost function1 used to assess the final quality of the system is not the one used during training. For example for classification tasks, an error rate for a binary SVM classifier might be reported, although the cost function used to train the SVM only very loosely models the number of errors on the training set, and similarly neural net training uses smooth costs, such as MSE or cross entropy. 
Thus often in machine learning tasks, there are actually two cost functions: the desired cost, and the one used in the optimization process. For brevity we will call the former the ‘target’ cost, and the latter the ‘optimization’ cost. The optimization cost plays two roles: it is chosen to make the optimization task tractable (smooth, convex etc.), and it should approximate the desired cost well. This mismatch between target and optimization costs is not limited to classification tasks, and is particularly acute for information retrieval. For example, [10] lists nine target quality measures that are commonly used in information retrieval, all of which depend only on the sorted order of the documents2 and their labeled relevance. The target costs are usually averaged over a large number of queries to arrive at a single cost that can be used to assess the algorithm. These target costs present severe challenges to machine learning: they are either flat (have zero gradient with respect to the model scores), or are discontinuous, everywhere. It is very likely that a significant mismatch between the target and optimization costs will have a substantial adverse impact on the accuracy of the algorithm. 1Throughout this paper, we will use the terms “cost function” and “quality measure” interchangeably, with the understanding that the cost function is some monotonic decreasing function of the corresponding quality measure. 2For concreteness we will use the term ‘documents’ for the items returned for a given query, although the returned items can be more general (e.g. multimedia items). In this paper, we propose one method for attacking this problem. Perhaps the first approach that comes to mind would be to design smoothed versions of the cost function, but the inherent ‘sort’ makes this very challenging. Our method bypasses the problems introduced by the sort, by defining a virtual gradient on each item after the sort.
The method is simple and very general: it can be used for any target cost function. However, in this paper we restrict ourselves to the information retrieval domain. We show that the method gives significant benefits (for both training speed, and accuracy) for applications of commercial interest. Notation: for the search problem, we denote the score of the ranking function by sij, where i = 1, . . . , NQ indexes the query, and j = 1, . . . , ni indexes the documents returned for that query. The general cost function is denoted C({sij}, {lij}), where the curly braces denote sets of cardinality ni, and where lij is the label of the j’th document returned for the i’th query, where j indexes the documents sorted by score. We will drop the query index i when the meaning is clear. Ranked lists are indexed from the top, which is convenient when list length varies, and to conform with the notion that high rank means closer to the top of the list, we will take “higher rank” to mean “lower rank index”. Terminology: for neural networks, we will use ‘fprop’ and ‘backprop’ as abbreviations for a forward pass, and for a weight-updating backward pass, respectively. Throughout this paper we also use the term “smooth” to denote C1 (i.e. with first derivatives everywhere defined). 2 Common Quality Measures Used in Information Retrieval We list some commonly used quality measures for information retrieval tasks: see [10] and references therein for details. We distinguish between binary and multilevel measures: for binary measures, we assume labels in {0, 1}, with 1 meaning relevant and 0 meaning not. Average Precision is a binary measure where for each relevant document, the precision is computed at its position in the ordered list, and these precisions are then averaged over all relevant documents. The corresponding quantity averaged over queries is called ‘Mean Average Precision’.
Mean Reciprocal Rank (MRR) is also a binary measure: if ri is the rank of the highest ranking relevant document for the i’th query, then the MRR is just the reciprocal rank, averaged over queries: $\mathrm{MRR} = \frac{1}{N_Q} \sum_{i=1}^{N_Q} 1/r_i$. MRR was used, for example, in TREC evaluations of Question Answering systems, before 2002 [14]. Winner Takes All (WTA) is a binary measure for which, if the top ranked document for a given query is relevant, the WTA cost is zero, otherwise it is one. WTA is used, for example, in TREC evaluations of Question Answering systems, after 2002 [14]. Pair-wise Correct is a multilevel measure that counts the number of pairs that are in the correct order, as a fraction of the maximum possible number of such pairs, for a given query. In fact for binary classification tasks, the pair-wise correct is the same as the AUC, which has led to work exploring optimizing the AUC using ranking algorithms [15, 3]. bpref biases the pairwise correct to the top part of the ranking by choosing a subset of documents from which to compute the pairs [1, 10]. The Normalized Discounted Cumulative Gain (NDCG) is a cumulative, multilevel measure of ranking quality that is usually truncated at a particular rank level [6]. For a given query Qi the NDCG is computed as $N_i \equiv \mathcal{N}_i \sum_{j=1}^{L} (2^{r(j)} - 1)/\log(1 + j)$ (1) where r(j) is the relevance level of the j’th document, and where the normalization constant $\mathcal{N}_i$ is chosen so that a perfect ordering would result in Ni = 1. Here L is the ranking truncation level at which the NDCG is computed. The Ni are then averaged over the query set. NDCG is particularly well suited to Web search applications because it is multilevel and because the truncation level can be chosen to reflect how many documents are shown to the user. For this reason we will use the NDCG measure in this paper. 3 Previous Work The ranking task is the task of finding a sort on a set, and as such is related to the task of learning structured outputs.
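Two of these measures translate directly into code. The sketch below (our own naming, not from the paper) implements NDCG per Eq. (1) and the pair-wise correct measure, which for binary labels equals the AUC:

```python
import math

def ndcg(relevances, L):
    """NDCG truncated at level L per Eq. (1): gain 2^r(j) - 1 and
    discount 1/log(1 + j) for 1-based rank j, normalized so that a
    perfect ordering scores exactly 1."""
    def dcg(rels):
        return sum((2 ** r - 1) / math.log(1 + j)
                   for j, r in enumerate(rels[:L], start=1))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

def pairwise_correct(scores, labels):
    """Fraction of (relevant, non-relevant) pairs ordered correctly by
    score, ties counting 1/2; for binary labels this is exactly the AUC
    (the Wilcoxon-Mann-Whitney statistic)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Note that NDCG depends on the scores only through the induced sort, which is exactly why it is flat or discontinuous in the scores.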
Our approach is very different, however, from recent work on structured outputs, such as the large margin methods of [12, 13]. There, structures are also mapped to the reals (through choice of a suitable inner product), but the best output is found by estimating the argmax over all possible outputs. The ranking problem also maps outputs (documents) to the reals, but solves a much simpler problem in that the number of documents to be sorted is tractable. Our focus is on a very different aspect of the problem, namely, finding ways to directly optimize the cost that the user ultimately cares about. As in [7], we handle cost functions that are multivariate, in the sense that the number of documents returned for a given query can itself vary, but the key challenge we address in this paper is how to work with costs that are everywhere either flat or non-differentiable. However, we emphasize that the method also handles the case of multivariate costs that cannot be represented as a sum of terms, each depending on the output for a single feature vector and its label. We call such functions irreducible (such costs are also considered by [7]). Most cost functions used in machine learning are instead reducible (for example, MSE, cross entropy, log likelihood, and the costs commonly used in kernel methods). The ranking problem itself has attracted increasing attention recently (see for example [4, 2, 8]), and in this paper we will use the RankNet algorithm of [2] as a baseline, since it is both easy to implement and performs well on large retrieval tasks. 4 LambdaRank One approach to working with a nonsmooth target cost function would be to search for an optimization function which is a good approximation to the target cost, but which is also smooth. However, the sort required by information retrieval cost functions makes this problematic. 
Even if the target cost depends on only the top few ranked positions after sorting, the sort itself depends on all documents returned for the query, and that set can be very large; and since the target costs depend on only the rank order and the labels, the target cost functions are either flat or discontinuous in the scores of all the returned documents. We therefore consider a different approach. We illustrate the idea with an example which also demonstrates the perils introduced by a target / optimization cost mismatch. Let the target cost be WTA and let the chosen optimization cost be a smooth approximation to pairwise error. Suppose that a ranking algorithm A is being trained, and that at some iteration, for a query for which there are only two relevant documents D1 and D2, A gives D1 rank one and D2 rank n. Then on this query, A has WTA cost zero, but a pairwise error cost of n − 2. If the parameters of A are adjusted so that D1 has rank two, and D2 rank three, then the WTA error is now maximized, but the number of pairwise errors has been reduced by n − 4. Now suppose that at the next iteration, D1 is at rank two, and D2 at rank n ≫ 1. The change in D1’s score that is required to move it to top position is clearly less (possibly much less) than the change in D2’s score required to move it to top position. Roughly speaking, we would prefer A to spend a little capacity moving D1 up by one position, than have it spend a lot of capacity moving D2 up by n − 1 positions. If j1 and j2 are the rank indices of D1, D2 respectively, then instead of pairwise error, we would prefer an optimization cost C that has the property that $\left| \frac{\partial C}{\partial s_{j_1}} \right| \gg \left| \frac{\partial C}{\partial s_{j_2}} \right|$ (2) whenever j2 ≫ j1.
This illustrates the two key intuitions behind LambdaRank: first, it is usually much easier to specify rules determining how we would like the rank order of documents to change, after sorting them by score for a given query, than to construct a general, smooth optimization cost that has the desired properties for all orderings. By only having to specify rules for a given ordering, we are defining the gradients of an implicit cost function C only at the particular points in which we are interested. Second, the rules can encode our intuition of the limited capacity of the learning algorithm, as illustrated by Eq. (2). Let us write the gradient of C with respect to the score of the document at rank position j, for the i’th query, as $\frac{\partial C}{\partial s_j} = -\lambda_j(s_1, l_1, \ldots, s_{n_i}, l_{n_i})$ (3) The sign is chosen so that positive λj means that the document must move up the ranked list to reduce the cost. Thus, in this framework choosing an implicit cost function amounts to choosing suitable λj, which themselves are specified by rules that can depend on the ranked order (and scores) of all the documents. We will call these choices the λ functions. At this point two questions naturally arise: first, given a choice for the λ functions, when does there exist a function C for which Eq. (3) holds; and second, given that it exists, when is C convex? We have the following result from multilinear algebra (see e.g. [11]): Theorem (Poincaré Lemma): If S ⊂ Rn is an open set that is star-shaped with respect to the origin, then every closed form on S is exact. Note that since every exact form is closed, it follows that on an open set that is star-shaped with respect to the origin, a form is closed if and only if it is exact. Now for a given query Qi and corresponding set of returned Dij, the ni λ’s are functions of the scores sij, parameterized by the (fixed) labels lij.
Let dxj be a basis of 1-forms on Rn and define the 1-form $\lambda \equiv \sum_j \lambda_j \, dx_j$ (4) Then assuming that the scores are defined over Rn, the conditions for the theorem are satisfied and λ = dC for some function C if and only if dλ = 0 everywhere. Using classical notation, this amounts to requiring that $\frac{\partial \lambda_j}{\partial s_k} = \frac{\partial \lambda_k}{\partial s_j} \quad \forall j, k \in \{1, \ldots, n_i\}$ (5) This provides a simple test on the λ’s to determine if there exists a cost function for which they are the derivatives: the Jacobian (that is, the matrix $J_{jk} \equiv \partial \lambda_j / \partial s_k$) must be symmetric. Furthermore, given that such a cost function C does exist, then since its Hessian is just the above Jacobian, the condition that C be convex is that the Jacobian be positive semidefinite everywhere. Under these constraints, the Jacobian looks rather like a kernel matrix, except that while an entry of a kernel matrix depends on two elements of a vector space, an entry of the Jacobian can depend on all of the scores sj. Note that for constant λ’s, the above two conditions are trivially satisfied, and that for other choices that give rise to symmetric J, positive definiteness can be imposed by adding diagonal regularization terms of the form $\lambda_j \mapsto \lambda_j + \alpha_j s_j$, αj > 0. LambdaRank has a clear physical analogy. Think of the documents returned for a given query as point masses. λj then corresponds to a force on the point mass Dj. If the conditions of Eq. (5) are met, then the forces in the model are conservative, that is, they may be viewed as arising from a potential energy function, which in our case is the implicit cost function C. For example, if the λ’s are linear in the outputs s, then this corresponds to a spring model, with springs that are either compressed or extended.
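The symmetry condition of Eq. (5) can be checked numerically for any candidate λ function. The sketch below (our own construction, not from the paper) uses finite differences on a spring-like λ that is linear in the scores: each document is pulled toward the mean score, the Jacobian comes out symmetric (so an implicit cost exists), and the forces sum to zero as the physical analogy requires:

```python
import numpy as np

def jacobian(lam, s, eps=1e-6):
    """Central-difference estimate of J[j, k] = d lambda_j / d s_k."""
    n = len(s)
    J = np.empty((n, n))
    for k in range(n):
        sp, sm = s.copy(), s.copy()
        sp[k] += eps
        sm[k] -= eps
        J[:, k] = (lam(sp) - lam(sm)) / (2 * eps)
    return J

# Spring-like lambda: each document is pulled toward the mean score.
spring = lambda s: s.mean() - s

s = np.array([0.3, -1.2, 0.7])
J = jacobian(spring, s)
```

For this choice the implicit potential is the variance-like cost $C = \frac{1}{2}\sum_j (s_j - \bar{s})^2$, so the conservative-force interpretation holds exactly.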
The requirement that the Jacobian is positive semidefinite amounts to the requirement that the system of springs have a unique global minimum of the potential energy, which can be found from any initial conditions by gradient descent (this is not true in general, for arbitrary systems of springs). The physical analogy provides useful guidance in choosing λ functions. For example, for a given query, the forces (λ’s) should sum to zero, since otherwise the overall system (mean score) will accelerate either up or down. Similarly if a contribution to a document A’s λ is computed based on its position with respect to document B, then B’s λ should be incremented by an equal and opposite amount, to prevent the pair itself from accelerating (Newton’s third law, [9]). Finally, we emphasize that LambdaRank is a very simple method. It requires only that one provide rules for the derivatives of the implicit cost for any given sorted order of the documents, and as we will show, such rules are easy to come up with. 5 A Speedup for RankNet Learning RankNet [2] uses a neural net as its function class. Feature vectors are computed for each query/document pair. RankNet is trained on those pairs of feature vectors, for a given query, for which the corresponding documents have different labels. At runtime, single feature vectors are fpropped through the net, and the documents are ordered by the resulting scores. The RankNet cost consists of a sigmoid (to map the outputs to [0, 1]) followed by a pair-based cross entropy cost, and takes the form given in Eq. (8) below. Training times for RankNet thus scale quadratically with the mean number of pairs per query, and linearly with the number of queries. The ideas proposed in Section 4 suggest a simple method for significantly speeding up RankNet training, making it also approximately linear in the number of labeled documents per query, rather than in the number of pairs per query. This is a very significant benefit for large training sets. 
In fact the method works for any ranking method that uses gradient descent and for which the cost depends on pairs of items for each query. Most neural net training, RankNet included, uses a stochastic gradient update, which is known to give faster convergence. However here we will use batch learning per query (that is, the weights are updated for each query). We present the idea for a general ranking function $f : \mathbb{R}^n \to \mathbb{R}$ with optimization cost $C : \mathbb{R}^2 \to \mathbb{R}$. It is important to note that adopting batch training alone does not give a speedup: to compute the cost and its gradients we would still need to fprop each pair. Consider a single query for which n documents have been returned. Let the output scores of the ranker be sj, j = 1, . . . , n, the model parameters be wk ∈ R, and let the set of pairs of document indices used for training be P. The total cost is $C_T \equiv \sum_{\{i,j\} \in P} C(s_i, s_j)$ and its derivative with respect to wk is $\frac{\partial C_T}{\partial w_k} = \sum_{\{i,j\} \in P} \left[ \frac{\partial C(s_i, s_j)}{\partial s_i} \frac{\partial s_i}{\partial w_k} + \frac{\partial C(s_i, s_j)}{\partial s_j} \frac{\partial s_j}{\partial w_k} \right]$ (6) It is convenient to refactor the sum: let Pi be the set of indices j for which {i, j} is a valid pair, and let D be the set of document indices. Then we can write the first term as $\frac{\partial C_T}{\partial w_k} = \sum_{i \in D} \frac{\partial s_i}{\partial w_k} \sum_{j \in P_i} \frac{\partial C(s_i, s_j)}{\partial s_i}$ (7) and similarly for the second. The algorithm is as follows: instead of backpropping each pair, first n fprops are performed to compute the si (and for the general LambdaRank algorithm, this would also be where the sort on the scores is performed); then for each i = 1, . . . , n the $\lambda_i \equiv \sum_{j \in P_i} \frac{\partial C(s_i, s_j)}{\partial s_i}$ are computed; then to compute the gradients $\frac{\partial s_i}{\partial w_k}$, n fprops are performed, and finally the n backprops are done. The key point is that although the overall computation still has an n2 dependence arising from the second sum in (7), computing the terms $\frac{\partial C(s_i, s_j)}{\partial s_i} = \frac{-1}{1 + e^{s_i - s_j}}$ is far cheaper than the computation required to perform the 2n fprops and n backprops. Thus we have effectively replaced an $O(n^2)$ algorithm with an $O(n)$ one3.
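The per-query gradient accumulation at the heart of the speedup can be sketched as follows (a minimal version of ours, not the paper's implementation, for the RankNet pair cost of a single query; `pairs` lists (i, j) with document i labeled more relevant than document j):

```python
import numpy as np

def ranknet_score_gradients(scores, pairs):
    """grad[i] = dC_T/ds_i for C_T summed over pairs, with the RankNet
    pair cost C(s_i, s_j) = s_j - s_i + log(1 + exp(s_i - s_j)).
    For each pair, dC/ds_i = -1/(1 + exp(s_i - s_j)) and dC/ds_j is
    equal and opposite, so one pass over the pairs replaces a
    backprop per pair."""
    grad = np.zeros_like(scores, dtype=float)
    for i, j in pairs:
        g = 1.0 / (1.0 + np.exp(scores[i] - scores[j]))
        grad[i] -= g   # pull the more relevant document up
        grad[j] += g   # equal and opposite contribution
    return grad
```

Each pair contributes equal and opposite amounts to the two gradients, which is also what keeps the later λ forces balanced per query.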
6 Experiments We performed experiments to (1) demonstrate the training speedup for RankNet, and (2) assess whether LambdaRank improves the NDCG test performance. For the latter, we used RankNet as a baseline. Even though the RankNet optimization cost is not NDCG, RankNet is still very effective at optimizing NDCG, using the method proposed in [2]: after each epoch, compute the NDCG on a validation set, and after training, choose the net for which the validation NDCG is highest. Rather than attempt to derive from first principles the optimal λ function for the NDCG target cost (and for a given dataset), which is beyond the scope of this paper, we wrote several plausible λ functions and tested them on the Web search data. We then picked the single λ function that gave the best results on that particular validation set, and then used that λ function for all of our experiments; this is described below. 6.1 RankNet Speedup Results Here the training scheme is exactly LambdaRank training, but with the RankNet gradients, and with no sort: we call the corresponding λ function G. We will refer to the original RankNet training as V1 and the LambdaRank speedup as V2. We compared V1 and V2 in two sets of experiments. In the first we used 1000 queries taken from the Web data described below, and in the second we varied the number of documents for a given query, using the artificial data described below. Experiments were run on a 2.2GHz 32 bit Opteron machine. We compared V1 to V2 for 1 layer and 2 layer (with 10 hidden nodes) nets. V1 was also run using batch update per query, to clearly show the gain (the convergence as a function of epoch was found to be similar for batch and non-batch updates; furthermore running time for batch and non-batch is almost identical).
For the single layer net, on the Web data, LambdaRank with G was measured to be 5.1 times faster, and for two layer, 8.0 times faster: the left panel of Figure 1 shows the results (where max validation NDCG is plotted). Each point on the graph is one epoch. Results for the two layer nets were similar. The right panel shows a log-log plot of training time versus number of documents, as the number of documents per query varies from 4,000 to 512,000 in the artificial set. Fitting the curves using linear regression gives the slopes of V1 and V2 to be 1.943 and 1.185 respectively. Thus V1 is close to quadratic (but not exactly, due to the fact that only a subset of pairs is used, namely, those with documents whose labels differ), and V2 is close to linear, as expected. 3Two further speedups are possible, and are not explored here: first, only the first n fprops need be performed if the node activations are stored, since those stored activations could then be used during the n backprops; second, the $e^{s_i}$ could be precomputed before the pairwise sum is done. Figure 1: Speeding up RankNet training. Left: linear nets. Right: two layer nets. 6.2 λ-function Chosen for Ranking Experiments To implement LambdaRank training, we must first choose the λ function (Eq. (3)), and then substitute in Eq. (5). Using the physical analogy, specifying a λ function amounts to specifying rules for the ‘force’ on a document given its neighbors in the ranked list. We tried two kinds of λ function: those where a document’s λ gets a contribution from all pairs with different labels (for a given query), and those where its λ depends only on its nearest neighbors in the sorted list.
All λ functions were designed with the NDCG cost function in mind, and most had a margin built in (that is, a force is exerted between two documents even if they are in the correct order, until their difference in scores exceeds that margin). We investigated step potentials, where the step sizes are proportional to the NDCG gain found by swapping the pair; spring models; models that estimated the NDCG gradient using finite differences; and models where the cost was estimated as the gradient of a smooth, pairwise cost, also scaled by the NDCG gain from swapping the two documents. We tried ten different λ functions in all. Due to space limitations we will not give results on all these functions here: instead we will use the one that worked best on the Web validation data for all experiments. This function used the RankNet cost, scaled by the NDCG gain found by swapping the two documents in question. The RankNet cost combines a sigmoid output and the cross entropy cost, and is similar to the negative binomial log-likelihood cost [5], except that it is based on pairs of items: if document i is to be ranked higher than document j, then the RankNet cost is [2]: $C^R_{i,j} = s_j - s_i + \log(1 + e^{s_i - s_j})$ (8) and if the corresponding document ranks are ri and rj, then taking derivatives of Eq. (8) and combining with Eq. (1) gives $\lambda = \mathcal{N} \, \frac{1}{1 + e^{s_i - s_j}} \left( 2^{l_i} - 2^{l_j} \right) \left( \frac{1}{\log(1 + r_i)} - \frac{1}{\log(1 + r_j)} \right)$ (9) where $\mathcal{N}$ is the reciprocal max DCG for the query. Thus for each pair, after the sort, we increment each document’s force by ±λ, where the more relevant document gets the positive increment. 6.3 Ranking for Search Experiments We performed experiments on three datasets: artificial, web search, and intranet search data. The data are labeled from 0 to M, in order of increasing relevance: the Web search and artificial data have M = 4, and the intranet search data, M = 3. The corresponding NDCG gains (the numerators in Eq. (1)) were therefore 0, 3, 7, 15 and 31.
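Reading off the factors of Eq. (9), the chosen λ function is a one-liner. A sketch with our own naming (ranks are the 1-based positions in the sorted list, and the last argument is the reciprocal max DCG for the query):

```python
import math

def lambda_ndcg_swap(s_i, s_j, l_i, l_j, r_i, r_j, N):
    """Eq. (9): the RankNet pair gradient magnitude 1/(1 + exp(s_i - s_j)),
    scaled by the NDCG change obtained by swapping documents i and j:
    (2^l_i - 2^l_j) * (1/log(1 + r_i) - 1/log(1 + r_j))."""
    ranknet_term = 1.0 / (1.0 + math.exp(s_i - s_j))
    swap_gain = (2 ** l_i - 2 ** l_j) * (
        1.0 / math.log(1 + r_i) - 1.0 / math.log(1 + r_j))
    return N * ranknet_term * swap_gain
```

As the text describes, after the sort each document of the pair is incremented by ±λ, the more relevant one receiving the positive increment; the RankNet term shrinks the force once the pair's score gap is large, giving the margin-like behavior mentioned above.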
In all graphs, 95% confidence intervals are shown. In all experiments, we varied the learning rate from as low as 1e-7 to as high as 1e-2, and for each experiment we picked the rate that gave the best validation results. For all training, the learning rate was reduced by a factor of 0.8 if the training cost (Eq. (8), for RankNet, and the NDCG at truncation level 10, for LambdaRank) increased over the value for the previous epoch. Training was done for 300 epochs for the artificial and Web search data, and for 200 epochs for the intranet data, and training was restarted (with random weights) if the cost did not reduce for 50 iterations. 6.3.1 Artificial Data We used artificial data to remove any variance stemming from the quality of the features or of the labeling. We followed the prescription given in [2] for generating random cubic polynomial data. However, here we use five levels of relevance instead of six, a label distribution corresponding to real datasets, and more data, all to more realistically approximate a Web search application. We used 50 dimensional data, 50 documents per query, and 10K/5K/10K queries for train/valid/test respectively. We report the NDCG results in Figure 2 for ten NDCG truncation levels. In this clean dataset, LambdaRank clearly outperforms RankNet. Note that the gap increases at higher relevance levels, as one might expect due to the more direct optimization of NDCG. Figure 2: Left: Cubic polynomial data. Right: Intranet search data. 6.3.2 Intranet Search Data This data has dimension 87, and only 400 queries in all were available. The average number of documents per query is 59.4. We used 5 fold cross validation, with 2+2+1 splits between train/validation/test sets.
We found that it was important for such a small dataset to use a relatively large validation set to reduce variance. The results for the linear nets are shown in Figure 2: although LambdaRank gave uniformly better mean NDCGs, the overlapping error bars indicate that on this set, LambdaRank does not give statistically significantly better results than RankNet at 95% confidence. For the two layer nets the NDCG means are even closer. This is an example of a case where larger datasets are needed to see the difference between two algorithms (although it’s possible that more powerful statistical tests would find a difference here also). 6.4 Web Search Data This data is from a commercial search engine and has 367 dimensions, with on average 26.1 documents per query. The data was created by shuffling a larger dataset and then dividing into train, validation and test sets of size 10K/5K/10K queries, respectively. In Figure 3, we report the NDCG scores on the dataset at truncation levels from 1 to 10. We show separate plots to clearly show the differences: in fact, the linear LambdaRank results lie on top of the two layer RankNet results, for the larger truncation values. Figure 3: NDCG for RankNet and LambdaRank. Left: linear nets. Right: two layer nets. 7 Conclusions We have demonstrated a simple and effective method for learning non-smooth target costs. LambdaRank is a general approach: in particular, it can be used to implement RankNet training, and it furnishes a significant training speedup there. We studied LambdaRank in the context of the NDCG target cost for neural network models, but the same ideas apply to any non-smooth target cost, and to any differentiable function class.
It would be interesting to investigate using the same method starting with other classifiers such as boosted trees. Acknowledgments We thank M. Taylor, J. Platt, A. Laucius, P. Simard and D. Meyerzon for useful discussions and for providing data. References [1] C. Buckley and E. Voorhees. Evaluating evaluation measure stability. In SIGIR, pages 33–40, 2000. [2] C.J.C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to Rank using Gradient Descent. In ICML 22, Bonn, Germany, 2005. [3] C. Cortes and M. Mohri. Confidence Intervals for the Area Under the ROC Curve. In NIPS 18. MIT Press, 2005. [4] Y. Freund, R. Iyer, R.E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933–969, 2003. [5] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337–374, 2000. [6] K. Järvelin and J. Kekäläinen. IR evaluation methods for retrieving highly relevant documents. In SIGIR 23. ACM, 2000. [7] T. Joachims. A support vector method for multivariate performance measures. In ICML 22, 2005. [8] I. Matveeva, C. Burges, T. Burkard, A. Lauscius, and L. Wong. High accuracy retrieval with multiple nested rankers. In SIGIR, 2006. [9] I. Newton. Philosophiae Naturalis Principia Mathematica. The Royal Society, 1687. [10] S. Robertson and H. Zaragoza. On rank-based effectiveness measures and optimisation. Technical Report MSR-TR-2006-61, Microsoft Research, 2006. [11] M. Spivak. Calculus on Manifolds. Addison-Wesley, 1965. [12] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: A large margin approach. In ICML 22, Bonn, Germany, 2005. [13] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML 24, 2004. [14] E.M. Voorhees.
Overview of the TREC 2001/2002 Question Answering Track. In TREC, 2001,2002. [15] L. Yan, R. Dodlier, M.C. Mozer, and R. Wolniewicz. Optimizing Classifier Performance via an Approximation to the Wilcoxon-Mann-Whitney Statistic. In ICML 20, 2003.
| 2006 | 139 | 2,965 |
Geometric entropy minimization (GEM) for anomaly detection and localization Alfred O Hero, III University of Michigan Ann Arbor, MI 48109-2122 hero@umich.edu Abstract We introduce a novel adaptive non-parametric anomaly detection approach, called GEM, that is based on the minimal covering properties of K-point entropic graphs when constructed on N training samples from a nominal probability distribution. Such graphs have the property that as N → ∞ their span recovers the entropy minimizing set that supports at least a fraction ρ = K/N (i.e., 100ρ%) of the mass of the Lebesgue part of the distribution. When a test sample falls outside of the entropy minimizing set an anomaly can be declared at a statistical level of significance α = 1 − ρ. A method for implementing this non-parametric anomaly detector is proposed that approximates this minimum entropy set by the influence region of a K-point entropic graph built on the training data. By implementing an incremental leave-one-out k-nearest neighbor graph on resampled subsets of the training data GEM can efficiently detect outliers at a given level of significance and compute their empirical p-values. We illustrate GEM for several simulated and real data sets in high dimensional feature spaces. 1 Introduction Anomaly detection and localization are important but notoriously difficult problems. In such problems it is crucial to identify a nominal or baseline feature distribution with respect to which statistically significant deviations can be reliably detected. However, in most applications there is seldom enough information to specify the nominal density accurately, especially in high dimensional feature spaces for which the baseline shifts over time. In such cases standard methods that involve estimation of the multivariate feature density from a fixed training sample are inapplicable (high dimension) or unreliable (shifting baseline).
In this paper we propose an adaptive non-parametric method that is based on a class of entropic graphs [1] called K-point minimal spanning trees [2] and overcomes the limitations of high dimensional feature spaces and baseline shift. This method detects outliers by comparing them to the most concentrated subset of points in the training sample. It follows from [2] that this most concentrated set converges to the minimum entropy set of probability ρ as N → ∞ and K/N → ρ. Thus we call this approach to anomaly detection the geometric entropy minimization (GEM) method. Several approaches to anomaly detection have been previously proposed. Parametric approaches such as the generalized likelihood ratio test lead to simple and classical algorithms such as the Student t-test for testing deviation of a Gaussian test sample from a nominal mean value and the Fisher F-test for testing deviation of a Gaussian test sample from a nominal variance. These methods fall under the statistical nomenclature of the classical slippage problem [3] and have been applied to detecting abrupt changes in dynamical systems, image segmentation, and general fault detection applications [4]. The main drawback of these algorithms is that they rely on a family of parametrically defined nominal (no-fault) distributions. An alternative to parametric methods of anomaly detection is the class of novelty detection algorithms, which includes the GEM approach described herein. Scholkopf and Smola introduced a kernel-based novelty detection scheme that relies on unsupervised support vector machines (SVM) [5]. The single class minimax probability machine of Lanckriet et al. [6] derives minimax linear decision regions that are robust to unknown anomalous densities. More closely related to our GEM approach is that of Scott and Nowak [7], who derive multiscale approximations of minimum-volume-sets to estimate a particular level set of the unknown nominal multivariate density from training samples.
For a simple comparative study of several of these methods in the context of detecting network intrusions the reader is referred to [8]. The GEM method introduced here has several features that are summarized below. (1) Unlike the MPM method of Lanckriet et al. [6], the GEM anomaly detector is not restricted to linear or even convex decision regions. This translates to higher power for a specified false alarm level. (2) GEM's computational complexity scales linearly in dimension and can be applied to level set estimation in feature spaces of unprecedented (high) dimensionality. (3) GEM has no complicated tuning parameters or function approximation classes that must be chosen by the user. (4) Like the method of Scott and Nowak [7], GEM is completely non-parametric, learning the structure of the nominal distribution without assumptions of linearity, smoothness or continuity of the level set boundaries. (5) Like Scott and Nowak's method, GEM is provably optimal, indeed uniformly most powerful at a specified level, for the case that the anomaly density is a mixture of the nominal and a uniform density. (6) GEM easily adapts to local structure, e.g. changes in local dimensionality of the support of the nominal density. We introduce an incremental leave-one-out (L1O) kNNG as a particularly versatile and fast anomaly detector in the GEM class. Despite the similarity in nomenclature, the L1O kNNG is different from the k nearest neighbor (kNN) anomaly detection of [9]. The kNN anomaly detector is based on thresholding the distance from the test point to the k-th nearest neighbor. The L1O kNNG detector computes the change in the topology of the entire kNN graph due to the addition of a test sample and does not use a decision threshold. Furthermore, the parent GEM anomaly detection methodology has proven theoretical properties, e.g. the (restricted) optimality property for uniform mixtures and general consistency properties.
We introduce the statistical framework for anomaly detection in the next section. We then describe the GEM approach in Section 3. Several simulations are presented in Section 4. 2 Statistical framework The setup is the following. Assume that a training sample $\mathcal{X}_n = \{X_1, \ldots, X_n\}$ of d-dimensional vectors $X_i$ is available. Given a new sample $X$ the objective is to declare $X$ to be a "nominal" sample consistent with $\mathcal{X}_n$ or an "anomalous" sample that is significantly different from $\mathcal{X}_n$. This declaration is to be constrained to give as few false positives as possible. To formulate this problem we adopt the standard statistical framework for testing composite hypotheses. Assume that $\mathcal{X}_n$ is an independent identically distributed (i.i.d.) sample from a multivariate density $f_0(x)$ supported on the unit d-dimensional cube $[0,1]^d$. Let $X$ have density $f(x)$. Anomaly detection can be formulated as testing the hypotheses $H_0 : f = f_0$ versus $H_1 : f \neq f_0$ at a prescribed level $\alpha$ of significance, $P(\text{declare } H_1 \mid H_0) \leq \alpha$. The minimum-volume-set of level $\alpha$ is defined as a set $\Omega_\alpha$ in $\mathbb{R}^d$ which minimizes the volume $|\Omega_\alpha| = \int_{\Omega_\alpha} dx$ subject to the constraint $\int_{\Omega_\alpha} f_0(x)\,dx \geq 1 - \alpha$. The minimum-entropy-set of level $\alpha$ is defined as a set $\Lambda_\alpha$ in $\mathbb{R}^d$ which minimizes the Rényi entropy $H_\nu(\Lambda_\alpha) = \frac{1}{1-\nu} \ln \int_{\Lambda_\alpha} f^\nu(x)\,dx$ subject to the constraint $\int_{\Lambda_\alpha} f_0(x)\,dx \geq 1 - \alpha$. Here $\nu$ is any real valued parameter satisfying $0 < \nu < 1$. When $f$ is a Lebesgue density in $\mathbb{R}^d$ it is easy to show that these sets are identical almost everywhere. The test "decide anomaly if $X \notin \Omega_\alpha$" is equivalent to implementing the test function $\phi(x) = 1$ if $x \notin \Omega_\alpha$, and $\phi(x) = 0$ otherwise. This test has a strong optimality property: when $f_0$ is Lebesgue continuous it is a uniformly most powerful (UMP) test of level $\alpha$ for testing anomalies that follow a uniform mixture distribution. Specifically, let $X$ have density $f(x) = (1-\epsilon) f_0(x) + \epsilon U(x)$, where $U(x)$ is the uniform density over $[0,1]^d$ and $\epsilon \in [0,1]$.
Consider testing the hypotheses $H_0 : \epsilon = 0$ (1) versus $H_1 : \epsilon > 0$ (2). Proposition 1 Assume that under $H_0$ the random vector $X$ has a Lebesgue continuous density $f_0$ and that $Z = f_0(X)$ is also a continuous random variable. Then the level-set test of level $\alpha$ is uniformly most powerful for testing (1) against (2). Furthermore, its power function $\beta = P(X \notin \Omega_\alpha \mid H_1)$ is given by $\beta = (1-\epsilon)\alpha + \epsilon(1 - |\Omega_\alpha|)$. A sufficient condition for the random variable $Z$ above to be continuous is that the density $f_0(x)$ have no flat spots over its support set $\{f_0(x) > 0\}$. The proof of this proposition is omitted. There are two difficulties with implementing the level set test. First, for known $f_0$ the level set may be very difficult if not impossible to determine in high dimensions $d \gg 2$. Second, when only a training sample from $f_0$ is available and $f_0$ is unknown, the level sets have to be learned from the training data. There are many approaches to doing this for minimum volume tests and these are reviewed in [7]. These methods can be divided into two main approaches: (1) density estimation followed by plug-in estimation of $\Omega_\alpha$ via variational methods; and (2) direct estimation of the level set using function approximation and non-parametric estimation. Since both approaches involve explicit approximation of high dimensional quantities, e.g. the multivariate density or the boundary of the set $\Omega_\alpha$, these methods are difficult to apply in high dimensional problems, i.e. $d > 2$. The GEM method we propose in the next section overcomes these difficulties. 3 GEM and entropic graphs GEM is a method that directly estimates the critical region for detecting anomalies using minimum coverings of subsets of points in a nominal training sample. These coverings are obtained by constructing minimal graphs, e.g. a MST or kNNG, covering a K-point subset that is a given proportion of the training sample. Points not covered by these K-point minimal graphs are identified as tail events and allow one to adaptively set a p-value for the detector.
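Before introducing the graph constructions, Proposition 1's power formula can be checked numerically in a simple case. The sketch below (our own, with illustrative parameter values) assumes a 2D spherical Gaussian nominal with standard deviation σ; unlike the mean-zero example in Section 4, we center it at (0.5, 0.5) so that the minimum-volume set, a disk of radius $\sqrt{2\sigma^2 \ln(1/\alpha)}$ around the mean (the closed form used later in the paper), lies entirely inside the unit square where $U$ is supported:

```python
import numpy as np

# Monte Carlo check of Proposition 1's power formula for a 2D spherical
# Gaussian nominal centered at (0.5, 0.5), so that Omega_alpha (a disk)
# lies inside the unit square. All parameter values are illustrative.
rng = np.random.default_rng(0)
sigma, alpha, eps, n = 0.1, 0.1, 0.2, 200_000
r2 = 2 * sigma**2 * np.log(1 / alpha)   # squared radius of the disk Omega_alpha
vol = np.pi * r2                        # |Omega_alpha| = 2*pi*sigma^2*ln(1/alpha)

# Draw X ~ (1 - eps) f0 + eps * Uniform([0,1]^2), then apply the level-set test
from_unif = rng.random(n) < eps
X = rng.normal(loc=0.5, scale=sigma, size=(n, 2))
X[from_unif] = rng.random((int(from_unif.sum()), 2))
beta_mc = (((X - 0.5) ** 2).sum(axis=1) > r2).mean()  # reject iff X outside disk

beta_closed = (1 - eps) * alpha + eps * (1 - vol)     # Proposition 1
assert abs(beta_mc - beta_closed) < 0.01
```

Under $H_0$ ($\epsilon = 0$) the rejection rate is exactly $\alpha$, since the squared distance to the mean is exponentially distributed for a spherical Gaussian.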
For a set of n points $\mathcal{X}_n$ in $\mathbb{R}^d$, a graph $G$ over $\mathcal{X}_n$ is a pair $(V, E)$ where $V = \mathcal{X}_n$ is the set of vertices and $E = \{e\}$ is the set of edges of the graph. The total power weighted length, or, more simply, the length, of $G$ is $L(\mathcal{X}_n) = \sum_{e \in E} |e|^\gamma$, where $\gamma > 0$ is a specified edge exponent parameter. 3.1 K-point MST The MST with power weighting $\gamma$ is defined as the graph that spans $\mathcal{X}_n$ with minimum total length: $L_{MST}(\mathcal{X}_n) = \min_{T \in \mathcal{T}} \sum_{e \in T} |e|^\gamma$, where $\mathcal{T}$ is the set of all trees spanning $\mathcal{X}_n$. Definition 1 (K-point MST) Let $\mathcal{X}_{n,K}$ denote one of the $\binom{n}{K}$ subsets of $K$ distinct points from $\mathcal{X}_n$. Among all of the MSTs spanning these sets, the K-MST is defined as the one having minimal length $\min_{\mathcal{X}_{n,K} \subset \mathcal{X}_n} L_{MST}(\mathcal{X}_{n,K})$. The K-MST thus specifies the minimal subset of K points in addition to specifying the minimum length. This subset of points, which we call a minimal graph covering of $\mathcal{X}_n$ of size K, can be viewed as capturing the densest region of $\mathcal{X}_n$. Furthermore, if $\mathcal{X}_n$ is an i.i.d. sample from a multivariate density $f(x)$, if $\lim_{K,n \to \infty} K/n = \rho$, and a greedy version of the K-MST is implemented, this set converges a.s. to the minimum $\nu$-entropy set containing a proportion of at least $\rho = K/n$ of the mass of the (Lebesgue component of) $f(x)$, where $\nu = (d-\gamma)/d$. This fact was used in [2] to motivate the greedy K-MST as an outlier resistant estimator of entropy for finite $n, K$. Define the K-point subset $\mathcal{X}^*_{n,K} = \arg\min_{\mathcal{X}_{n,K} \subset \mathcal{X}_n} L_{MST}(\mathcal{X}_{n,K})$ selected by the greedy K-MST. As the minimum entropy set and minimum volume set are identical, this suggests the following minimal-volume-set anomaly detection algorithm, which we call the "K-MST anomaly detector." K-MST anomaly detection algorithm 1. Process training sample: Given a level of significance $\alpha$ and a training sample $\mathcal{X}_n = \{X_1, \ldots, X_n\}$, construct the greedy K-MST and retain its vertex set $\mathcal{X}^*_{n,K}$.
2. Process test sample: Given a test sample $X$, run the K-MST on the merged training-test sample $\mathcal{X}_{n+1} = \mathcal{X}_n \cup \{X\}$ and store the minimal set of points $\mathcal{X}^*_{n+1,K}$. 3. Make decision: Using the test function $\phi$ defined below, decide $H_1$ if $\phi(X) = 1$ and decide $H_0$ if $\phi(X) = 0$, where $\phi(X) = 1$ if $X \notin \mathcal{X}^*_{n+1,K}$ and $\phi(X) = 0$ otherwise. When the density $f_0$ generating the training sample is Lebesgue continuous, it follows from [2, Theorem 2] that as $K, n \to \infty$ the K-MST anomaly detector has false alarm probability that converges to $\alpha = 1 - K/n$ and power that converges to that of the minimum-volume-set test of level $\alpha$. When the density $f_0$ is not Lebesgue continuous some optimality properties of the K-MST anomaly detector still hold. Let this nominal density have the decomposition $f_0 = \lambda_0 + \delta_0$, where $\lambda_0$ is Lebesgue continuous and $\delta_0$ is singular. Then, according to [2, Theorem 2], the K-MST anomaly detector will have false alarm probability that converges to $(1-\psi)\alpha$, where $\psi$ is the mass of the singular component of $f_0$, and it is a uniformly most powerful test for anomalies in the continuous component, i.e. for the test of $H_0 : \lambda = \lambda_0, \delta = \delta_0$ against $H_1 : \lambda = (1-\epsilon)\lambda_0 + \epsilon U(x), \delta = \delta_0$. It is well known that the K-MST construction is of exponential complexity in $n$ [10]. In fact, even for $K = n-1$, a case one can call the leave-one-out MST, there is no simple fast algorithm for computation. However, the leave-one-out kNNG, described below, admits a fast incremental algorithm. 3.2 K-point kNNG Let $\mathcal{X}_n = \{X_1, \ldots, X_n\}$ be a set of n points. The k nearest neighbors (kNN) $\{X_{i(1)}, \ldots, X_{i(k)}\}$ of a point $X_i \in \mathcal{X}_n$ are the k closest points to $X_i$ in $\mathcal{X}_n - \{X_i\}$. Here the measure of closeness is the Euclidean distance. Let $\{e_{i(1)}, \ldots, e_{i(k)}\}$ be the set of edges between $X_i$ and its k nearest neighbors. The kNN graph (kNNG) over $\mathcal{X}_n$ is defined as the union of all of the kNN edges $\{e_{i(1)}, \ldots, e_{i(k)}\}_{i=1}^n$ and the total power weighted edge length of the kNN graph is $L_{kNN}(\mathcal{X}_n) = \sum_{i=1}^n \sum_{l=1}^k |e_{i(l)}|^\gamma$.
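The kNNG total length above is straightforward to compute directly. The following sketch does so with brute-force pairwise distances (the function name and test points are ours, not the paper's):

```python
import numpy as np

def knn_graph_length(X, k, gamma=1.0):
    """Total power-weighted kNNG length L_kNN = sum_i sum_l |e_i(l)|^gamma,
    where e_i(l) is the edge from X_i to its l-th nearest neighbor (Euclidean)."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(D, np.inf)             # a point is not its own neighbor
    knn_dists = np.sort(D, axis=1)[:, :k]   # distances to the k nearest neighbors
    return float((knn_dists ** gamma).sum())

# Four collinear points spaced one apart: each point's nearest neighbor lies
# at distance 1, so for k=1 and gamma=1 the total length is 4.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
assert knn_graph_length(X, k=1) == 4.0
```

Note that the graph is directed in the sense that each point contributes its own k edges, so an edge shared by two mutual neighbors is counted twice, exactly as in the double sum above.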
Definition 2 (K-point kNNG) Let $\mathcal{X}_{n,K}$ denote one of the $\binom{n}{K}$ subsets of $K$ distinct points from $\mathcal{X}_n$. Among all of the kNNGs over each of these sets, the K-kNNG is defined as the one having minimal length $\min_{\mathcal{X}_{n,K} \subset \mathcal{X}_n} L_{kNN}(\mathcal{X}_{n,K})$. As the kNNG length is also a quasi-additive continuous functional [11], the asymptotic K-MST theory of [2] extends to the K-point kNNG. Of course, computation of the K-point kNNG also has exponential complexity. However, the same type of greedy approximation introduced by Ravi [10] for the K-MST can be implemented to reduce complexity of the K-point kNNG. This approximation to the K-point kNNG will satisfy the tightly coverable graph property of [2, Defn. 2]. We have the following result that justifies the use of such an approximation as an anomaly detector of level $\alpha = 1 - \rho$, where $\rho = K/n$: Proposition 2 Let $\mathcal{X}^*_{n,K}$ be the set of points in $\mathcal{X}_n$ that results from any approximation to the K-point kNNG that satisfies the property [2, Defn. 2]. Then $\lim_{n \to \infty} P_0(\mathcal{X}^*_{n,K} \subset \Omega_\alpha) = 1$ and $\lim_{n \to \infty} P_0(\mathcal{X}^*_{n,K} \cap \bar{\Omega}_\alpha \neq \emptyset) = 0$, where $K = K(n) = \lfloor \rho n \rfloor$, $\Omega_\alpha$ is a minimum-volume-set of level $\alpha = 1 - \rho$, and $\bar{\Omega}_\alpha = [0,1]^d - \Omega_\alpha$. Proof: We provide a rough sketch using the terminology of [2]. Recall that a set $B_m \subset [0,1]^d$ of resolution $1/m$ is representable by a union of elements of the uniform partition of $[0,1]^d$ into hypercubes of volume $1/m^d$. Lemma 3 of [2] asserts that there exists an $M$ such that for $m > M$ the limits claimed in Proposition 2 hold with $\Omega_\alpha$ replaced by $A^m_\alpha$, a minimum volume set of resolution $1/m$ that contains $\Omega_\alpha$. As $\lim_{m \to \infty} A^m_\alpha = \Omega_\alpha$, this establishes the proposition. Figures 1-2 illustrate the use of the K-point kNNG as an anomaly detection algorithm. Figure 1: Left: level sets of the nominal bivariate mixture density used to illustrate the K-point kNNG anomaly detection algorithms.
Right: K-point kNNG over N=200 random training samples drawn from the nominal bivariate mixture at left. Here k=5 and K=180, corresponding to a significance level of $\alpha = 0.1$. Figure 2: Left: The test point '*' is declared anomalous at level $\alpha = 0.1$ as it is not captured by the K-point kNNG (K=180) constructed over the combined test sample and the training samples drawn from the nominal bivariate mixture shown in Fig. 1. Right: A different test point '*' is declared non-anomalous as it is captured by this K-point kNNG. 3.3 Leave-one-out kNNG (L1O-kNNG) The theoretical equivalence between the K-point kNNG and the level set anomaly detector motivates a low complexity anomaly detection scheme, which we call the leave-one-out kNNG, discussed in this section and adopted for the experiments below. As before, assume a single test sample $X = X_{n+1}$ and a training sample $\mathcal{X}_n$. Fix k and assume that the kNNG over the set $\mathcal{X}_n$ has been computed. To determine the kNNG over the combined sample $\mathcal{X}_{n+1} = \mathcal{X}_n \cup \{X_{n+1}\}$ one can execute the following algorithm: L1O kNNG anomaly detection algorithm 1. For each $X_i \in \mathcal{X}_{n+1}$, $i = 1, \ldots, n+1$, compute the kNNG total length difference $\Delta_i L_{kNN} = L_{kNN}(\mathcal{X}_{n+1}) - L_{kNN}(\mathcal{X}_{n+1} - \{X_i\})$ by the following steps. For each i: (a) Find the k edges $E^k_{i \to *}$ to all of the kNNs of $X_i$. (b) Find the edges $E^k_{* \to i}$ of other points in $\mathcal{X}_{n+1} - \{X_i\}$ that have $X_i$ as one of their kNNs. For these points find the edges $E^{k+1}_*$ to their respective (k+1)-st NN point. (c) Compute $\Delta_i L_{kNN} = \sum_{e \in E^k_{i \to *}} |e|^\gamma + \sum_{e \in E^k_{* \to i}} |e|^\gamma - \sum_{e \in E^{k+1}_*} |e|^\gamma$. 2. Define the kNNG most "outlying point" as $X^o = \arg\max_{i=1,\ldots,n+1} \Delta_i L_{kNN}$. 3. Declare the test sample $X_{n+1}$ an anomaly if $X_{n+1} = X^o$. This algorithm will detect anomalies with a false alarm level of approximately $1/(n+1)$.
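A brute-force rendering of steps 1-3 makes the decision rule concrete. Instead of the incremental updates (a)-(c), this sketch (our own, with illustrative data) recomputes the full kNNG length with and without each point, which yields the same $\Delta_i L_{kNN}$ values:

```python
import numpy as np

def knn_length(X, k, gamma=1.0):
    # L_kNN: sum over points of power-weighted distances to their k NNs
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(D, np.inf)
    return float((np.sort(D, axis=1)[:, :k] ** gamma).sum())

def l1o_knng_is_anomaly(train, x_test, k=2, gamma=1.0):
    """Steps 1-3 of the L1O kNNG algorithm: x_test is anomalous iff it is
    the most outlying point, i.e. removing it shrinks the total kNNG length
    more than removing any other point (brute force, not incremental)."""
    X = np.vstack([train, np.atleast_2d(x_test)])
    total = knn_length(X, k, gamma)
    deltas = [total - knn_length(np.delete(X, i, axis=0), k, gamma)
              for i in range(len(X))]
    return int(np.argmax(deltas)) == len(X) - 1

# Illustrative data: a tight square of nominal points
corners = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
assert l1o_knng_is_anomaly(corners, [5.0, 5.0]) is True     # far point flagged
assert l1o_knng_is_anomaly(corners, [0.05, 0.05]) is False  # center not flagged
```

The incremental steps (a)-(c) in the paper compute the same differences while touching only the edges affected by the removed point, which is what makes the fast version practical.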
Thus larger sizes n of the training sample will correspond to more stringent false alarm constraints. Furthermore, the p-value of each test point $X_i$ is easily computed by recursing over the size n of the training sample. In particular, let $n'$ vary from k to n and define $n^*$ as the minimum value of $n'$ for which $X_i$ is declared an anomaly. Then the p-value of $X_i$ is approximately $1/(n^* + 1)$. A useful relative influence coefficient $\eta$ can be defined for each point $X_i$ in the combined sample $\mathcal{X}_{n+1}$: $\eta(X_i) = \Delta_i L_{kNN} / \max_j \Delta_j L_{kNN}$. (3) The coefficient $\eta(X_{n+1}) = 1$ when the test point $X_{n+1}$ is declared an anomaly. Using Matlab's matrix sort algorithm, step 1 of this algorithm can be computed an order of magnitude faster than the K-point MST ($O(N^2 \log N)$ vs. $O(N^3 \log N)$). For example, the experiments below have shown that the above algorithm can find and determine the p-value of 10 outliers among 1000 test samples in a few seconds on a Dell 2GHz processor running Matlab 7.1. 4 Illustrative examples Here we focus on the L1O kNNG algorithm due to its computational speed. We show a few representative experiments for simple Gaussian and Gaussian mixture nominal densities $f_0$.
[Figure 3 panels: anomalies detected at iterations 20, 203, 246, 294, 307, 334, 574, 712, and 791, each with p-value approximately 0.001.] Figure 3: Left: The plot of the anomaly curve for the L1O kNNG anomaly detector for detecting deviations from a nominal 2D Gaussian density with mean (0,0) and correlation coefficient -0.5. The boxes on peaks of the curve correspond to positions of detected anomalies and the heights of the boxes are equal to one minus the computed p-value. Anomalies were generated (on the average) every 100 samples and drawn from a 2D Gaussian with correlation coefficient 0.8. The parameter $\rho$ is equal to $1 - \alpha$, where $\alpha$ is the user defined false alarm rate. Right: the resampled nominal distribution ("•") and anomalous points detected ("*") at the iterations indicated at left. First we illustrate the L1O kNNG algorithm for detection of non-uniformly distributed anomalies from training samples following a bivariate Gaussian nominal density. Specifically, a 2D Gaussian density with mean (0,0) and correlation coefficient -0.5 was used to train the L1O kNNG detector. The test sample consisted of a mixture of this nominal and a zero mean 2D Gaussian with correlation coefficient 0.8, with mixture coefficient $\epsilon = 0.01$. In Fig. 3 the results of a simulation with a training sample of 2000 samples and 1000 test samples are shown. Fig. 3 is a plot of the relative influence curve (3) over the test samples as compared to the most outlying point in the (resampled) training sample.
When the relative influence curve is equal to 1 the corresponding test sample is the most outlying point and is declared an anomaly. The 9 detected anomalies in Fig. 3 have p-values of approximately 0.001 and therefore one would expect an average of only one false alarm at this level of significance. In the right panel of Fig. 3 the detected anomalies (asterisks) are shown along with the training sample (dots) used to grow the L1O kNNG for that particular iteration - note that to protect against bias the training sample is resampled at each iteration. Next we compare the performance of the L1O kNNG detector to that of the UMP test for the hypotheses (2). We again trained on a bivariate Gaussian $f_0$ with mean zero, but this time with identical component standard deviations $\sigma = 0.1$. This distribution has essential support on the unit square. For this simple case the minimum volume set of level $\alpha$ is a disk centered at the origin with radius $\sqrt{2\sigma^2 \ln(1/\alpha)}$, and the power of the UMP test can be computed in closed form: $\beta = (1-\epsilon)\alpha + \epsilon(1 - 2\pi\sigma^2 \ln(1/\alpha))$. We implemented the GEM anomaly detector with the incremental leave-one-out kNNG using k = 5. The training set consisted of 1000 samples from $f_0$ and the test set consisted of 1000 samples from the mixture of a uniform density and $f_0$ with parameter $\epsilon$ ranging from 0 to 0.2. Figure 4 shows the empirical ROC curves obtained using the GEM test vs. the theoretical curves (labeled "clairvoyant") for several different values of the mixing parameter. Note the good agreement between theoretical prediction and the GEM implementation of the UMP using the kNNG. [Figure 4 plot: $\beta$ vs. $\alpha$ ROC curves for the Gaussian+uniform mixture, k=5, N=1000, Nrep=10, for $\epsilon$ = 0, 0.1, 0.3, 0.5; L1O-kNN vs. clairvoyant.] Figure 4: ROC curves for the leave-one-out kNNG anomaly detector described in Sec. 3.3. The labeled "clairvoyant" curve is the ROC of the UMP anomaly detector.
The training sample is a zero mean 2D spherical Gaussian distribution with standard deviation 0.1 and the test sample is drawn from a mixture of this 2D Gaussian and a uniform-$[0,1]^2$ density. The plot is for various values of the mixture parameter $\epsilon$. 5 Conclusions A new and versatile anomaly detection method has been introduced that uses geometric entropy minimization (GEM) to extract minimal set coverings that can be used to detect anomalies from a set of training samples. This method can be implemented through the K-point minimal spanning tree (MST) or the K-point nearest neighbor graph (kNNG). The L1O kNNG is significantly less computationally demanding than the K-point MST. We illustrated the L1O kNNG method on simulated data containing anomalies and showed that it comes close to achieving the optimal performance of the UMP detector for testing the nominal against a uniform mixture with unknown mixing parameter. As the L1O kNNG computes p-values on detected anomalies it can be easily extended to account for false discovery rate constraints. By using a sliding window, the methodology derived in this paper is easily extendible to on-line applications and has been applied to non-parametric intruder detection using our Crossbow sensor network testbed (reported elsewhere). Acknowledgments This work was partially supported by NSF under Collaborative ITR grant CCR-0325571. References [1] A. Hero, B. Ma, O. Michel, and J. Gorman, "Applications of entropic spanning graphs," IEEE Signal Processing Magazine, vol. 19, pp. 85–95, Sept. 2002. www.eecs.umich.edu/~hero/imag_proc.html. [2] A. Hero and O. Michel, "Asymptotic theory of greedy approximations to minimal k-point random graphs," IEEE Trans. on Inform. Theory, vol. IT-45, no. 6, pp. 1921–1939, Sept. 1999. [3] T. S. Ferguson, Mathematical Statistics - A Decision Theoretic Approach. Academic Press, Orlando FL, 1967. [4] I. V. Nikiforov and M. Basseville, Detection of abrupt changes: theory and applications.
Prentice-Hall, Englewood-Cliffs, NJ, 1993. [5] B. Scholkopf, R. Williamson, A. Smola, J. Shawe-Taylor, and J. Platt, “Support vector method for novelty detection,” in Advances in Neural Information Processing Systems (NIPS), vol. 13, 2000. [6] G. R. G. Lanckriet, L. El Ghaoui, and M. I. Jordan, “Robust novelty detection with single-class mpm,” in Advances in Neural Information Processing Systems (NIPS), vol. 15, 2002. [7] C. Scott and R. Nowak, “Learning minimum volume sets,” Journal of Machine Learning Research, vol. 7, pp. 665–704, April 2006. [8] A. Lazarevic, A. Ozgur, L. Ertoz, J. Srivastava, and V. Kumar, “A comparative study of anomaly detection schemes in network intrusion detection,” in SIAM Conference on data mining, 2003. [9] S. Ramaswamy, R. Rastogi, and K. Shim, “Efficient algorithms for mining outliers from large data sets,” in Proceedings of the ACM SIGMOD Conference, 2000. [10] R. Ravi, M. Marathe, D. Rosenkrantz, and S. Ravi, “Spanning trees short or small,” in Proc. 5th Annual ACM-SIAM Symposium on Discrete Algorithms, (Arlington, VA), pp. 546–555, 1994. [11] J. E. Yukich, Probability theory of classical Euclidean optimization, vol. 1675 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1998.
|
2006
|
14
|
2,966
|
Analysis of Representations for Domain Adaptation Shai Ben-David School of Computer Science University of Waterloo shai@cs.uwaterloo.ca John Blitzer, Koby Crammer, and Fernando Pereira Department of Computer and Information Science University of Pennsylvania {blitzer, crammer, pereira}@cis.upenn.edu Abstract Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. In many situations, though, we have labeled training data for a source domain, and we wish to learn a classifier which performs well on a target domain with a different distribution. Under what conditions can we adapt a classifier trained on the source domain for use in the target domain? Intuitively, a good feature representation is a crucial factor in the success of domain adaptation. We formalize this intuition theoretically with a generalization bound for domain adaptation. Our theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model. It also points toward a promising new model for domain adaptation: one which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set. 1 Introduction We are all familiar with the situation in which someone learns to perform a task on training examples drawn from some domain (the source domain), but then needs to perform the same task on a related domain (the target domain). In this situation, we expect the task performance in the target domain to depend on both the performance in the source domain and the similarity between the two domains. This situation arises often in machine learning. For example, we might want to adapt for a new user (the target domain) a spam filter trained on the email of a group of previous users (the source domain), under the assumption that users generally agree on what is spam and what is not.
Then, the challenge is that the distributions of emails for the first set of users and for the new user are different. Intuitively, one might expect that the closer the two distributions are, the better the filter trained on the source domain will do on the target domain. Many other instances of this situation arise in natural language processing. In general, labeled data for tasks like part-of-speech tagging, parsing, or information extraction are drawn from a limited set of document types and genres in a given language because of availability, cost, and project goals. However, applications for the trained systems often involve somewhat different document types and genres. Nevertheless, part-of-speech, syntactic structure, or entity mention decisions are to a large extent stable across different types and genres since they depend on general properties of the language under consideration. Discriminative learning methods for classification are based on the assumption that training and test data are drawn from the same distribution. This assumption underlies both theoretical estimates of generalization error and the many experimental evaluations of learning methods. However, the assumption does not hold for domain adaptation [5, 7, 13, 6]. For the situations we outlined above, the challenge is the difference in instance distribution between the source and target domains. We will approach this challenge by investigating how a common representation between the two domains can make the two domains appear to have similar distributions, enabling effective domain adaptation. We formalize this intuition with a bound on the target generalization error of a classifier trained from labeled data in the source domain. The bound is stated in terms of a representation function, and it shows that a representation function should be designed to minimize domain divergence, as well as classifier error. 
While many authors have analyzed adaptation from multiple sets of labeled training data [3, 5, 7, 13], our theory applies to the setting in which the target domain has no labeled training data, but plentiful unlabeled data exists for both target and source domains. As we suggested above, this setting realistically captures the problems widely encountered in real-world applications of machine learning. Indeed, recent empirical work in natural language processing [11, 6] has been targeted at exactly this setting. We show experimentally that the heuristic choices made by the recently proposed structural correspondence learning algorithm [6] do lead to lower values of the relevant quantities in our theoretical analysis, providing insight as to why this algorithm achieves its empirical success. Our theory also points to an interesting new algorithm for domain adaptation: one which directly minimizes a tradeoff between source-target similarity and source training error. The remainder of this paper is structured as follows: In the next section we formally define domain adaptation. Section 3 gives our main theoretical results. We discuss how to compute the bound in section 4. Section 5 shows how the bound behaves for the structural correspondence learning representation [6] on natural language data. We discuss our findings, including a new algorithm for domain adaptation based on our theory, in section 6 and conclude in section 7. 2 Background and Problem Setup Let X be an instance set. In the case of [6], this could be all English words, together with the possible contexts in which they occur. Let Z be a feature space ($\mathbb{R}^d$ is a typical choice) and {0, 1} be the label set for binary classification¹. A learning problem is specified by two parameters: a distribution D over X and a (stochastic) target function $f : X \to [0, 1]$. The value of f(x) corresponds to the probability that the label of x is 1.
A representation function R is a function which maps instances to features, $R : X \to Z$. A representation R induces a distribution over Z and a (stochastic) target function from Z to [0, 1] as follows: $\Pr_{\tilde{D}}[B] \stackrel{\text{def}}{=} \Pr_D[R^{-1}(B)]$ and $\tilde{f}(z) \stackrel{\text{def}}{=} E_D[f(x) \mid R(x) = z]$, for any $B \subseteq Z$ such that $R^{-1}(B)$ is D-measurable. In words, the probability of an event B under $\tilde{D}$ is the probability of the inverse image of B under R according to D, and the probability that the label of z is 1 according to $\tilde{f}$ is the mean of the probabilities of instances x that z represents. Note that $\tilde{f}(z)$ may be a stochastic function even if f(x) is not. This is because the function R can map two instances with different f-labels to the same feature representation. In summary, our learning setting is defined by fixed but unknown D and f, and our choice of representation function R and hypothesis class $H \subseteq \{g : Z \to \{0, 1\}\}$ of deterministic hypotheses to be used to approximate the function f. 2.1 Domain Adaptation We now formalize the problem of domain adaptation. A domain is a distribution D on the instance set X. Note that this is not the domain of a function. To avoid confusion, we will always mean a specific distribution over the instance set when we say domain. Unlike in inductive transfer, where the tasks we wish to perform may be related but different, in domain adaptation we perform the same task in multiple domains. This is quite common in natural language processing, where we might be performing the same syntactic analysis task, such as tagging or parsing, but on domains with very different vocabularies [6, 11]. ¹The same type of analysis holds for multiclass classification, but for simplicity we analyze the binary case. We assume two domains, a source domain and a target domain. We denote by $D_S$ the source distribution of instances and $\tilde{D}_S$ the induced distribution over the feature space Z. We use parallel notation, $D_T$, $\tilde{D}_T$, for the target domain.
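The induced quantities above are easy to compute exactly on a finite instance set, and doing so makes the key observation concrete: a representation that collapses instances with different labels turns a deterministic f into a stochastic $\tilde{f}$. A toy sketch with hypothetical numbers:

```python
from collections import defaultdict

# Toy illustration (hypothetical values): finite instance set with
# distribution D, deterministic labels f, and a representation R that
# collapses x1 and x2 into the same feature z1.
D = {"x1": 0.25, "x2": 0.25, "x3": 0.5}     # distribution over X
f = {"x1": 1.0,  "x2": 0.0,  "x3": 1.0}     # labeling rule f : X -> [0, 1]
R = {"x1": "z1", "x2": "z1", "x3": "z2"}    # representation R : X -> Z

D_tilde = defaultdict(float)   # Pr_Dtilde[z] = Pr_D[R^{-1}(z)]
num = defaultdict(float)       # accumulates p * f(x) per feature
for x, p in D.items():
    D_tilde[R[x]] += p
    num[R[x]] += p * f[x]
f_tilde = {z: num[z] / D_tilde[z] for z in D_tilde}  # E_D[f(x) | R(x) = z]

assert D_tilde["z1"] == 0.5
assert f_tilde["z1"] == 0.5   # x1 (label 1) and x2 (label 0) collapse to z1
assert f_tilde["z2"] == 1.0   # z2 keeps a deterministic label
```

Even a perfect hypothesis on Z must err with probability 0.25 on z1 here, which is exactly the kind of representation-induced error the theory accounts for.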
$f : X \to [0, 1]$ is the labeling rule, common to both domains, and $\tilde{f}$ is the induced image of f under R. A predictor is a function h from the feature space Z to [0, 1]. We denote the probability, according to the distribution $D_S$, that a predictor h disagrees with f by $\epsilon_S(h) = E_{z \sim \tilde{D}_S}\left[E_{y \sim \tilde{f}(z)}[y \neq h(z)]\right] = E_{z \sim \tilde{D}_S}\left|\tilde{f}(z) - h(z)\right|$. Similarly, $\epsilon_T(h)$ denotes the expected error of h with respect to $D_T$. 3 Generalization Bounds for Domain Adaptation We now proceed to develop a bound on the target domain generalization performance of a classifier trained in the source domain. As we alluded to in section 1, the bound consists of two terms. The first term bounds the performance of the classifier on the source domain. The second term is a measure of the divergence between the induced source marginal $\tilde{D}_S$ and the induced target marginal $\tilde{D}_T$. A natural measure of divergence for distributions is the $L_1$ or variational distance. This is defined as $d_{L_1}(D, D') = 2 \sup_{B \in \mathcal{B}} |\Pr_D[B] - \Pr_{D'}[B]|$, where $\mathcal{B}$ is the set of measurable subsets under D and D'. Unfortunately the variational distance between real-valued distributions cannot be computed from finite samples [2, 9] and therefore is not useful to us when investigating representations for domain adaptation on real-world data. A key part of our theory is the observation that in many realistic domain adaptation scenarios, we do not need such a powerful measure as variational distance. Instead we can restrict our notion of domain distance to be measured only with respect to functions in our hypothesis class. 3.1 The A-distance and labeling function complexity We make use of a special measure of distance between probability distributions, the A-distance, as introduced in [9]. Given a domain X and a collection A of subsets of X, let D, D' be probability distributions over X, such that every set in A is measurable with respect to both distributions.
The A-distance between such distributions is defined as

$d_{\mathcal{A}}(D, D') = 2 \sup_{A \in \mathcal{A}} \left|\Pr_D[A] - \Pr_{D'}[A]\right|$.

In order to use the A-distance, we need to limit the complexity of the true function f in terms of our hypothesis class H. We say that a function ˜f : Z → [0, 1] is λ-close to a function class H with respect to distributions ˜DS and ˜DT if

$\inf_{h \in H} \left[\epsilon_S(h) + \epsilon_T(h)\right] \leq \lambda$.

A function ˜f is λ-close to H when there is a single hypothesis h ∈ H which performs well on both domains. This embodies our domain adaptation assumption, and we will assume that our induced labeling function ˜f is λ-close to our hypothesis class H for a small λ. We briefly note that in standard learning theory it is possible to achieve bounds with no explicit assumption on labeling function complexity. If H has bounded capacity (e.g., a finite VC dimension), then uniform convergence theory tells us that whenever ˜f is not λ-close to H, large training samples have poor empirical error for every h ∈ H. This is not the case for domain adaptation. If the training data is generated by some DS and we wish to use some H as a family of predictors for labels in the target domain T, then one can construct a function which agrees with some h ∈ H with respect to ˜DS and yet is far from H with respect to ˜DT. Nonetheless we believe that such examples do not occur for realistic domain adaptation problems when the hypothesis class H is sufficiently rich, since for most domain adaptation problems of interest the labeling function is 'similarly simple' for both the source and target domains.

3.2 Bound on the target domain error

We require one last piece of notation before we state and prove the main theorems of this work: the correspondence between functions and characteristic subsets. For a binary-valued function g(z), we let Zg ⊆ Z be the subset whose characteristic function is g:

$Z_g = \{z \in Z : g(z) = 1\}$.
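When the collection A is small and finite, the supremum in the A-distance definition can be evaluated exactly by brute force. The toy setup below (integer-valued features with one-sided threshold sets, and two tiny samples) is invented for illustration and is not the paper's data:

```python
def a_distance(sample_s, sample_t, thresholds):
    """d_A = 2 * sup_{A in A} |Pr_S[A] - Pr_T[A]| over the collection of
    threshold sets A_t = {z : z >= t}, estimated on two finite samples."""
    def prob(sample, t):
        # empirical probability of the event {z >= t}
        return sum(z >= t for z in sample) / len(sample)
    return 2 * max(abs(prob(sample_s, t) - prob(sample_t, t))
                   for t in thresholds)

S = [1, 1, 2, 3]          # toy source sample
T = [2, 3, 3, 4]          # toy target sample
d = a_distance(S, T, thresholds=range(0, 6))
print(d)  # 1.0 -- the set {z >= 2} discriminates best: Pr_S = 0.5 vs Pr_T = 1.0
```

Identical samples give distance 0, and a threshold that perfectly separates the samples would give the maximum value of 2.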
In a slight abuse of notation, for a binary function class H we will write dH(·, ·) to indicate the A-distance on the class of subsets whose characteristic functions are functions in H. Now we can state our main theoretical result.

Theorem 1. Let R be a fixed representation function from X to Z and H be a hypothesis space of VC dimension d. If a random labeled sample of size m is generated by applying R to a DS-i.i.d. sample labeled according to f, then with probability at least 1 − δ, for every h ∈ H:

$\epsilon_T(h) \leq \hat{\epsilon}_S(h) + \sqrt{\frac{4}{m}\left(d \log \frac{2em}{d} + \log \frac{4}{\delta}\right)} + d_H(\tilde{D}_S, \tilde{D}_T) + \lambda$

where e is the base of the natural logarithm.

Proof: Let $h^* = \arg\min_{h \in H} (\epsilon_T(h) + \epsilon_S(h))$, and let λT and λS be the errors of h∗ with respect to DT and DS respectively. Notice that λ = λT + λS.

$\epsilon_T(h) \leq \lambda_T + \Pr_{\tilde{D}_T}[Z_h \Delta Z_{h^*}]$
$\leq \lambda_T + \Pr_{\tilde{D}_S}[Z_h \Delta Z_{h^*}] + \left|\Pr_{\tilde{D}_S}[Z_h \Delta Z_{h^*}] - \Pr_{\tilde{D}_T}[Z_h \Delta Z_{h^*}]\right|$
$\leq \lambda_T + \Pr_{\tilde{D}_S}[Z_h \Delta Z_{h^*}] + d_H(\tilde{D}_S, \tilde{D}_T)$
$\leq \lambda_T + \lambda_S + \epsilon_S(h) + d_H(\tilde{D}_S, \tilde{D}_T)$
$\leq \lambda + \epsilon_S(h) + d_H(\tilde{D}_S, \tilde{D}_T)$

The theorem now follows by a standard application of Vapnik–Chervonenkis theory [14] to bound the true ǫS(h) by its empirical estimate ˆǫS(h). Namely, if S is an i.i.d. sample of size m, then with probability exceeding 1 − δ,

$\epsilon_S(h) \leq \hat{\epsilon}_S(h) + \sqrt{\frac{4}{m}\left(d \log \frac{2em}{d} + \log \frac{4}{\delta}\right)}$.

The bound depends on the quantity dH(˜DS, ˜DT). We chose the A-distance, however, precisely because we can measure it from finite samples from the distributions ˜DS and ˜DT [9]. Combining Theorem 1 with Theorem 3.2 of [9], we can state a computable bound for the error on the target domain.

Theorem 2. Let R be a fixed representation function from X to Z and H be a hypothesis space of VC dimension d. If a random labeled sample of size m is generated by applying R to a DS-i.i.d.
sample labeled according to f, and ˜US, ˜UT are unlabeled samples of size m′ each, drawn from ˜DS and ˜DT respectively, then with probability at least 1 − δ (over the choice of the samples), for every h ∈ H:

$\epsilon_T(h) \leq \hat{\epsilon}_S(h) + \sqrt{\frac{4}{m}\left(d \log \frac{2em}{d} + \log \frac{4}{\delta}\right)} + \lambda + d_H(\tilde{U}_S, \tilde{U}_T) + 4\sqrt{\frac{d \log(2m') + \log \frac{4}{\delta}}{m'}}$

Let us briefly examine the bound from Theorem 2, with an eye toward feature representations R. Under the assumption of subsection 3.1, λ is small for reasonable R. Thus the two main terms of interest are the first and fourth terms, since the representation R directly affects them. The first term is the empirical training error. The fourth term is the sample A-distance between domains for the hypothesis class H. Looking at the two terms, we see that a good representation R is one which achieves low values for both training error and domain A-distance simultaneously.

4 Computing the A-distance for Signed Linear Classifiers

In this section we discuss practical considerations in computing the A-distance on real data. Ben-David et al. [9] show that the A-distance can be approximated arbitrarily well with increasing sample size. Recalling the relationship between sets and their characteristic functions, it should be clear that computing the A-distance is closely related to learning a classifier. In fact they are identical. The set Ah which maximizes the H-distance between ˜DS and ˜DT has a characteristic function h ∈ H, and h is the classifier which achieves minimum error on the binary classification problem of discriminating between points generated by the two distributions. To see this, suppose we have two samples ˜US and ˜UT, each of size m′, from ˜DS and ˜DT respectively. Define the error of a classifier h on the task of discriminating between points sampled from different distributions as

$\text{err}(h) = \frac{1}{2m'} \sum_{i=1}^{2m'} \left|h(z_i) - I_{z_i \in \tilde{U}_S}\right|$,

where $I_{z_i \in \tilde{U}_S}$ is the indicator function for points lying in the sample ˜US.
In this case, it is straightforward to show that

$d_{\mathcal{A}}(\tilde{U}_S, \tilde{U}_T) = 2\left(1 - 2 \min_{h' \in H} \text{err}(h')\right)$.

Unfortunately, it is a known NP-hard problem even to approximate the error of the optimal hyperplane classifier for arbitrary distributions [4]. We choose to approximate the optimal hyperplane classifier by minimizing a convex upper bound on the error, as is standard in classification. It is important to note that this does not provide us with a valid upper bound on the target error, but as we will see it nonetheless provides us with useful insights about representations for domain adaptation. In the subsequent experiments section, we train a linear classifier to discriminate between points sampled from different domains as a proxy for the A-distance. We minimize a modified Huber loss using stochastic gradient descent, described more completely in [15].

5 Natural Language Experiments

In this section we use our theory to analyze different representations for the task of adapting a part of speech tagger from the financial to the biomedical domain [6]. The experiments illustrate the utility of the bound, and all of them have the same flavor. First, we choose a representation R. Then we train a classifier using R and measure the different terms of the bound. As we shall see, representations which minimize both relevant terms of the bound also have small empirical error.

Part of speech (PoS) tagging is the task of labeling a word in context with its grammatical function. For instance, in the previous sentence the tag for "speech" is singular common noun, the tag for "labeling" is gerund, and so on. PoS tagging is a common preprocessing step in many pipelined natural language processing systems and is described in more detail in [6]. Blitzer et al. empirically investigate methods for adapting a part of speech tagger from financial news (the Wall Street Journal, henceforth also WSJ) to biomedical abstracts (MEDLINE) [6].
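The recipe of this section — train a linear domain discriminator and convert its error into a proxy A-distance via d_A ≈ 2(1 − 2·err) — can be sketched as follows. This toy version substitutes plain logistic loss and batch gradient descent for the modified Huber loss and SGD of [15], and the Gaussian samples are invented stand-ins for projected features:

```python
import numpy as np

def proxy_a_distance(zs, zt, epochs=300, lr=0.1):
    """Train a linear discriminator between samples zs ~ D_S and zt ~ D_T,
    then return the proxy d_A ~= 2 * (1 - 2 * err) from Section 4."""
    X = np.vstack([zs, zt])
    X = np.hstack([X, np.ones((len(X), 1))])        # append a bias feature
    y = np.r_[np.ones(len(zs)), np.zeros(len(zt))]  # label 1 iff z in U_S
    w = np.zeros(X.shape[1])
    for _ in range(epochs):                          # batch gradient descent
        p = 1.0 / (1.0 + np.exp(-X @ w))             # logistic-loss stand-in
        w -= lr * X.T @ (p - y) / len(y)
    err = np.mean((X @ w > 0).astype(float) != y)    # discrimination error
    return 2.0 * (1.0 - 2.0 * err)

rng = np.random.default_rng(0)
far = proxy_a_distance(rng.normal(-3, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
near = proxy_a_distance(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
print(far > near)  # well-separated domains give a larger proxy distance
```

Easily separable domains drive the proxy toward its maximum of 2, while samples from one distribution leave the discriminator near chance and the proxy near 0.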
We have obtained their data, and we will use it throughout this section. As in their investigation, we treat the financial data as our source, for which we have labeled training data, and the biomedical abstracts as our target, for which we have no labeled training data. The representations we consider in this section are all linear projections of the original feature space into Rd. For PoS tagging, the original feature space consists of high-dimensional, sparse binary vectors [6]. In all of our experiments we choose d to be 200. At train time we apply the projection to the binary feature vector representation of each instance and learn a linear classifier in the d-dimensional projected space. At test time we apply the projection to the binary feature vector representation and classify in the d-dimensional projected space.

5.1 Random Projections

If our original feature space is of dimension d′, our random projection matrix is a matrix P ∈ Rd×d′. The entries of P are drawn i.i.d. from N(0, 1). The Johnson–Lindenstrauss lemma [8] guarantees that random projections approximate distances in the original high-dimensional space well, as long as d is sufficiently large. Arriaga and Vempala [1] show that one can achieve good prediction with random projections as long as the margin is sufficiently large.

Figure 1: 2D plots of SCL representations for the (a) A-distance and (b) empirical risk parts of Theorem 2. (a) SCL representation for financial (squares) vs. biomedical (circles) instances. (b) SCL representation for nouns (diamonds) vs. verbs (triangles).

5.2 Structural Correspondence Learning

Blitzer et al. [6] describe a heuristic method for domain adaptation that they call structural correspondence learning (henceforth also SCL). SCL uses unlabeled data from both domains to induce correspondences among features in the two domains.
Its first step is to identify a small set of domain-independent "pivot" features which occur frequently in the unlabeled data of both domains. Other features are then represented using their relative co-occurrence counts with these pivot features. Finally, a low-rank approximation to the co-occurrence count matrix is used as a projection matrix P. The intuition is that by capturing these important correlations, features from the source and target domains which behave similarly for PoS tagging will be represented similarly in the projected space.

5.3 Results

We use as our source data set 100 sentences (about 2500 words) of PoS-tagged Wall Street Journal text. The target domain test set is the same set as in [6]. We use one million words (500 thousand from each domain) of unlabeled data to estimate the A-distance between the financial and biomedical domains. The results in this section are intended to illustrate the different parts of Theorem 2 and how they can affect the target domain generalization error. We give two types of results. The first are pictorial and appear in figures 1(a), 1(b) and 2(a); they are intended to illustrate either the A-distance (figures 1(a) and 2(a)) or the empirical error (figure 1(b)) for different representations. The second type are empirical and appear in figure 2(b); in this case we use the Huber loss as a proxy for the empirical training error. Figure 1(a) shows one hundred random instances projected onto the space spanned by the best two discriminating projections from the SCL projection matrix, for part of the financial and biomedical data. Instances from the WSJ are depicted as filled red squares, whereas those from MEDLINE are depicted as empty blue circles. An approximating linear discriminator is also shown. Note, however, that the discriminator performs poorly, and recall that if the best discriminator performs poorly the A-distance is low.
On the other hand, figure 1(b) shows the best two discriminating components for the task of discriminating between nouns and verbs. Note that in this case a good discriminating divider is easy to find, even in such a low-dimensional space. Thus these pictures lead us to believe that SCL finds a representation which results both in small empirical classification error and in small A-distance. In this case Theorem 2 predicts good performance.

Figure 2: (a) 2D plot of the random projection representation for financial (squares) vs. biomedical (circles) instances. (b) Results summary on large data: comparison of bound terms vs. target domain error for different choices of representation. Representations are linear projections of the original feature space. Huber loss is the labeled training loss after training, the A-distance is approximated as described in the previous subsection, and Error refers to tagging error for the full tagset on the target domain.

Representation   Huber loss   A-distance   Error
Identity         0.003        1.796        0.253
Random Proj      0.254        0.223        0.561
SCL              0.07         0.211        0.216

Figure 2(a) shows one hundred random instances projected onto the best two discriminating projections for WSJ vs. MEDLINE from a random matrix of 200 projections. These domains also seem difficult to separate, but the random projections do not reveal any useful structure for learning either. Not shown is the corresponding noun vs. verb plot for random projections; it looks identical to figure 2(a). Thus Theorem 2 predicts that using random projections as a representation will perform poorly, since this representation minimizes only the A-distance and not the empirical error. Figure 2(b) gives results on a large training and test set showing how the value of the bound can affect results. The identity representation achieves very low Huber loss (corresponding to empirical error).
The original feature set, however, consists of 3 million binary-valued features, and it is quite easy to separate the two domains using these features: the approximate A-distance is near the maximum possible value. The random projections method achieves low A-distance but high Huber loss, and the classifier which uses this representation achieves error rates much higher than a classifier which uses the identity representation. Finally, the structural correspondence learning representation achieves both low Huber loss and low A-distance, and its error rate is the lowest of the three representations.

6 Discussion and Future Work

Our theory demonstrates an important tradeoff inherent in designing good representations for domain adaptation. A good representation enables achieving a low error rate on the source domain while also minimizing the A-distance between the induced marginal distributions of the two domains. The previous section demonstrates empirically that the heuristic choices of the SCL algorithm [6] do achieve low values for each of these terms. Our theory is closely related to that of Sugiyama and Mueller on covariate shift in regression models [12]. Like us, they consider the case where the prediction functions are identical but the input data (covariates) have different distributions. Unlike their work, though, we bound the target domain error using a finite source domain labeled sample and finite source and target domain unlabeled samples. Our experiments illustrate the utility of our bound on target domain error, but they do not explore the accuracy of our approximate H-distance. This is an important area of exploration for future work. Finally, our theory points toward an interesting new direction for domain adaptation. Rather than heuristically choosing a representation, as previous research has done [6], we can try to learn a representation which directly minimizes a combination of the terms in theorem 2.
If we learn mappings from some parametric family (linear projections, for example), we can give a bound on the error in terms of the complexity of this family. This may do better than the current heuristics, and we are also investigating theory and algorithms for this.

7 Conclusions

We presented an analysis of representations for domain adaptation. It is reasonable to think that a good representation is the key to effective domain adaptation, and our theory backs up that intuition. Theorem 2 gives an upper bound on the generalization of a classifier trained on a source domain and applied in a target domain. The bound depends on the representation and explicitly demonstrates the tradeoff between low empirical source domain error and a small difference between distributions. Under the assumption that the labeling function ˜f is close to our hypothesis class H, we can compute the bound from finite samples. The relevant distributional divergence term can be written as the A-distance of Kifer et al. [9]. Computing the A-distance is equivalent to finding the minimum-error classifier. For hyperplane classifiers in Rd this is an NP-hard problem, but we give experimental evidence that minimizing a convex upper bound on the error, as in normal classification, can give a reasonable approximation to the A-distance. Our experiments indicate that the heuristic structural correspondence learning method [6] does in fact simultaneously achieve a low A-distance as well as a low margin-based loss. This provides a justification for the heuristic choices of SCL "pivots". Finally, we note that our theory points to an interesting new algorithm for domain adaptation: instead of making heuristic choices, we are investigating algorithms which directly minimize a combination of the A-distance and the empirical training margin.

References

[1] R. Arriaga and S. Vempala. An algorithmic theory of learning: Robust concepts and random projection. In FOCS, volume 40, 1999.
[2] T. Batu, L. Fortnow, R.
Rubinfeld, W. Smith, and P. White. Testing that distributions are close. In FOCS, volume 41, pages 259–269, 2000.
[3] J. Baxter. Learning internal representations. In COLT '95: Proceedings of the Eighth Annual Conference on Computational Learning Theory, pages 311–320, New York, NY, USA, 1995.
[4] S. Ben-David, N. Eiron, and P. Long. On the difficulty of approximately maximizing agreements. Journal of Computer and System Sciences, 66:496–514, 2003.
[5] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. In COLT 2003: Proceedings of the Sixteenth Annual Conference on Computational Learning Theory, 2003.
[6] J. Blitzer, R. McDonald, and F. Pereira. Domain adaptation with structural correspondence learning. In EMNLP, 2006.
[7] K. Crammer, M. Kearns, and J. Wortman. Learning from data of variable quality. In Neural Information Processing Systems (NIPS), Vancouver, Canada, 2005.
[8] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26:189–206, 1984.
[9] D. Kifer, S. Ben-David, and J. Gehrke. Detecting change in data streams. In Very Large Databases (VLDB), 2004.
[10] C. Manning. Foundations of Statistical Natural Language Processing. MIT Press, Boston, 1999.
[11] D. McClosky, E. Charniak, and M. Johnson. Reranking and self-training for parser adaptation. In ACL, 2006.
[12] M. Sugiyama and K. Mueller. Generalization error estimation under covariate shift. In Workshop on Information-Based Induction Sciences, 2005.
[13] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Sharing clusters among related groups: Hierarchical Dirichlet processes. In Advances in Neural Information Processing Systems, volume 17, 2005.
[14] V. Vapnik. Statistical Learning Theory. John Wiley, New York, 1998.
[15] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In ICML, 2004.
Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation

Jason M. Samonds, Center for the Neural Basis of Cognition (CNBC), Carnegie Mellon University, Pittsburgh, PA 15213, samondjm@cnbc.cmu.edu
Brian R. Potetz, CNBC and Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, bpotetz@cs.cmu.edu
Tai Sing Lee, CNBC and Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, tai@cnbc.cmu.edu

Abstract

Although there has been substantial progress in understanding the neurophysiological mechanisms of stereopsis, how neurons interact in a network during stereo computation remains unclear. Computational models of stereopsis suggest local competition and long-range cooperation are important for resolving ambiguity during stereo matching. To test these predictions, we simultaneously recorded from multiple neurons in V1 of awake, behaving macaques while presenting surfaces of different depths rendered in dynamic random dot stereograms. We found that the interaction between pairs of neurons was a function of similarity in receptive fields, as well as of the input stimulus. Neurons coding the same depth experienced common inhibition early in their responses for stimuli presented at their nonpreferred disparities. They experienced mutual facilitation later in their responses for stimulation at their preferred disparity. These findings are consistent with a local competition mechanism that first removes gross mismatches, and a global cooperative mechanism that further refines depth estimates.

1 Introduction

The human visual system is able to extract three-dimensional (3D) structures in random noise stereograms even when such images evoke no perceptible patterns when viewed monocularly [1]. Bela Julesz proposed that this is accomplished by a stereopsis mechanism that detects correlated shifts in 2D noise patterns between the two eyes.
He also suggested that this mechanism likely involves cooperative neural processing early in the visual system. Marr and Poggio formalized the computational constraints for solving stereo matching (Fig. 1a) and devised an algorithm that can discover the underlying 3D structures in a variety of random dot stereogram patterns [2]. Their algorithm was based on two rules: (1) each element or feature is unique (i.e., can be assigned only one disparity) and (2) surfaces of objects are cohesive (i.e., depth changes gradually across space). To describe their algorithm in neurophysiological terms, we can consider neurons in primary visual cortex as simple element or feature detectors. The first rule is implemented by introducing competitive interactions (mutual inhibition) among neurons of different disparity tuning at each location (Fig. 1b, blue solid horizontal or vertical lines), allowing only one disparity to be detected at each location. The second rule is implemented by introducing cooperative interactions (mutual facilitation) among neurons tuned to the same depth (image disparity) across different spatial locations (Fig. 1b, along the red dashed diagonal lines). In other words, a disparity estimate at one location is more likely to be correct if neighboring locations have similar disparity estimates. A dynamic system under such constraints can relax to a stable global disparity map. Here, we present neurophysiological evidence of interactions between disparity-tuned neurons in the primary visual cortex that is consistent with this general approach. We sampled from a variety of spatially distributed disparity-tuned neurons (see electrodes, Fig. 1b) while displaying dynamic random dot stereogram (DRDS) stimuli defined at various disparities (see stimulus, Fig. 1b). We then measured the dynamics of interactions by assessing the temporal evolution of correlation in neural responses. Figure 1: (a) Left and right images of a random dot stereogram (right image has been shifted to the right).
(b) 1D graphical depiction of competition (blue solid lines) and cooperation (red dashed lines) among disparity-tuned neurons with respect to space, as defined by Marr and Poggio's stereo algorithm [2].

2 Methods

2.1 Recording and stimulation

Recordings were made in V1 of two awake, behaving macaques. We simultaneously recorded from 4-8 electrodes, providing data from up to 10 neurons in a single recording session (some electrodes recorded from as many as 3 neurons). We collected data from 112 neurons that provided 224 pairs for cross-correlation analysis. For stimuli, we used 12 Hz dynamic random dot stereograms (DRDS; 25% density black and white pixels on a mean luminance background) presented in a 3.5-degree aperture. Liquid crystal shutter goggles were used to present random dot patterns to each eye separately. Eleven horizontal disparities between the two eyes, ranging over ±0.9 degrees, were tested. Seventy-four neurons (66%) had significant disparity tuning, and 99 pairs (44%) consisted of neurons that both had significant disparity tuning (1-way ANOVA, p<0.05).

Figure 2: (a) Example recording session from five electrodes in V1. (b) Receptive field (white box—arrow represents direction preference) and random dot stereogram locations for the same recording session (small red square is the fixation spot).

2.2 Data analysis

Interaction between neurons was described as "effective connectivity", defined by cross-correlation methods [3]. First, the probability of all joint spikes (x and y) between the two neurons was calculated for all times from stimulus onset (t1 and t2), including all possible lag times (t1 - t2) between the two neurons (2D joint peristimulus time histogram—JPSTH).
Next, the cross-product of each neuron's PSTH (joint probabilities expected from chance) was subtracted from the JPSTH; this difference is referred to as the cross-covariance histogram. Finally, the cross-covariance histogram was normalized by the geometric mean of the auto-covariance histograms:

$C_{x,y}(t_1, t_2) = \frac{\langle x(t_1)\, y(t_2) \rangle - \langle x(t_1) \rangle \langle y(t_2) \rangle}{\sqrt{\left(\langle x(t_1)^2 \rangle - \langle x(t_1) \rangle^2\right)\left(\langle y(t_2)^2 \rangle - \langle y(t_2) \rangle^2\right)}} \qquad (1)$

where ⟨·⟩ denotes the average over trials. This normalized cross-covariance histogram is a 2D matrix of Pearson's correlation coefficients between the two neurons, where the axes represent time from stimulus onset (Figure 3). The principal diagonal also represents time from stimulus onset for correlation, and the opposite diagonal represents lag time between the two neurons. We derived three measurements from this matrix to describe the "effective connectivity" between neuron pairs. Using bootstrapped samples of stimulus trials, we estimated 95% confidence intervals for these three measurements [4]. We first integrated along the principal diagonal to produce correlation versus lag time (i.e., the traditional cross-correlation histogram—CCH). We used CCHs to find significant correlation at or near 0 ms lag times (suggesting synaptic connectivity between the neurons). Second, we integrated under the half-height full bandwidth of significant correlation peaks to quantify effective connectivity. Figure 4 shows the population average of normalized CCHs (n = 27) and 95% confidence intervals. Finally, we repeated this integration along the principal diagonal to obtain the temporal evolution of effective connectivity (computed with a running 100 ms window).

Figure 3: Example normalized cross-covariance histogram.

In computing effective connectivity with Equation 1, we assume trial-to-trial stationarity. If this is not true (e.g., due to differences in attentional effort across trials), correlation peaks can emerge that are not due to effective connectivity [5].
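Equation 1 can be sketched in a few lines of NumPy. The trial count, bin count, and Bernoulli spike trains below are invented for illustration; spike trains are binned per trial so that the angle-bracket averages run over trials:

```python
import numpy as np

def normalized_cross_covariance(x, y):
    """Equation 1: trial-averaged covariance between x(t1) and y(t2),
    normalized by the geometric mean of the auto-covariances.
    x, y: (n_trials, n_bins) arrays of binned spike counts."""
    n = x.shape[0]
    xc = x - x.mean(axis=0)                # center each time bin across trials
    yc = y - y.mean(axis=0)
    cov = xc.T @ yc / n                    # <x(t1) y(t2)> - <x(t1)><y(t2)>
    sx = np.sqrt((xc ** 2).mean(axis=0))   # sqrt of auto-covariance of x
    sy = np.sqrt((yc ** 2).mean(axis=0))   # sqrt of auto-covariance of y
    return cov / np.outer(sx, sy)          # matrix of Pearson coefficients

rng = np.random.default_rng(1)
spikes = rng.integers(0, 2, size=(200, 50)).astype(float)
C = normalized_cross_covariance(spikes, spikes)
print(np.allclose(np.diag(C), 1.0))  # identical trains: correlation 1 at zero lag
```

Summing C along the anti-diagonals of this matrix would then give the traditional CCH described in the text.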
We applied a correction to Equation 1 [5,6] based on the average firing rate for each trial. However, no significant difference in correlation peaks was observed. In addition, changes in DRDS properties other than disparity did not cause significant changes to correlation peak properties. Finally, alternative cross-correlation methods (CCG) [7], using responses to the same exact random dot pattern to predict the correlation expected from chance, again led to no significant difference in correlation peak properties. These observations justify our assumption that the effective connectivity computed in our case does not arise from trial-to-trial non-stationarity.

Figure 4: (a) Population average CCH for 27 neuron pairs with a significant correlation peak. (b) Same as (a), but zoomed into ±50 ms lag times with statistics of peak properties (mean ± s.e.m.).

3 Interaction depends on tuning properties

The primary indicator of whether or not a neuron pair had a significant correlation peak at or near a 0 ms lag time, for this class of stimuli, was similarity in disparity tuning between the two neurons. Neuron pairs with significant correlation peaks (n = 27; 27%) tended to have more similar disparity peaks, bandwidths, and frequencies (determined from fitted Gabor functions) than neuron pairs that did not have significant correlation peaks. We quantified similarity in tuning using the similarity index (SI), which is Pearson's product-moment correlation [8]:

$SI = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2 \sum_{i=1}^{n} (y_i - \bar{y})^2}} \qquad (2)$

where i indexes the points on the disparity tuning curve, xi and yi are the firing rates at each point for each neuron, and x̄ and ȳ are the mean firing rates across the tuning curve. Figures 5a and 5b clearly show that both the probability of correlation and the strength of correlation increase with greater SI (n = 27 pairs). This relationship is limited to long-range interactions among neurons because our electrodes were all at least 1 mm apart.
This suggests they are likely mediated by the well-known long-range intracortical connections in V1 that link neurons of similar orientation across space [9]. Our results suggest that these connections might also serve to link neurons with similar disparity tuning. Because connectivity also depended on orientation (Figure 5c), V1 connectivity among neurons appears to depend on similarity across multiple cue dimensions.

Figure 5: (a) Likelihood of a significant correlation peak with respect to similarity in disparity tuning. (b) Strength of correlation increases with similarity (r = 0.49, p = 0.01). (c) Correlation is also more likely if orientation preference is similar (r = -0.40, p = 0.04).

From the 12 pairs of neurons recorded on a single electrode, correlation was observed among neuron pairs with very similar disparity tuning as well as among neurons with nearly opposite disparity tuning (see also [8]). This suggests that antagonistic disparity-tuned neurons tend to spatially coexist, and their interactions are likely competitive.

4 Interaction is stimulus-dependent

The interaction between pairs of neurons was not simply a function of the similarity between their receptive field properties but was also a function of the input stimuli (or stimulus disparity in our case). The effective connectivity was significantly modulated (1-way ANOVA, p<0.05) by the stimulus disparity for 25 out of the 27 pairs.
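The similarity index of Equation 2 is ordinary Pearson correlation applied to two tuning curves. A minimal version, with hypothetical tuning curves invented for illustration:

```python
import numpy as np

def similarity_index(x, y):
    """Equation 2: Pearson correlation between two disparity tuning curves."""
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))

# Hypothetical tuning curves (firing rate at each tested disparity, spikes/s)
tuned = np.array([5.0, 12.0, 30.0, 12.0, 5.0])
same_shape = 2.0 * tuned + 3.0        # same preference, different gain/baseline
inverted = tuned.max() - tuned        # nearly opposite preference

print(round(similarity_index(tuned, same_shape), 6))   # 1.0
print(round(similarity_index(tuned, inverted), 6))     # -1.0
```

Because the mean is subtracted and the result is variance-normalized, SI captures the shape of the tuning curve while ignoring overall gain and baseline differences, matching its use in the text.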
We are not suggesting synaptic connections physically change, but rather that the effectiveness of those connections can change depending on the spiking activity and therefore the stimulus input. For neuron pairs with similar disparity tuning, the strongest correlation was observed at their shared preferred disparity, i.e., the peak of the disparity tuning curves based on firing rate (as shown in Figure 6). This suggests facilitation is strongest when a fronto-parallel plane activated these neurons simultaneously at their preferred depth. As the stimulus plane moved away from this depth, the effective connectivity between the neurons became weaker. This was observed in 10 pairs (e.g., Figure 6c). For the other 17 pairs (e.g., Figure 6d), the correlation or effective connectivity was again strongest at the neuron pair's shared preferred disparity. However, these pairs in addition exhibited secondary correlation peaks for disparity stimuli that produced the lowest firing rates (even below the baseline for DRDSs).

Figure 6: Top row: disparity tuning curves based on firing rates (mean ± s.e.m.). Bottom row: disparity tuning curves based on correlation for the corresponding pairs of neurons in the top row. Error bars are 95% confidence intervals and dashed lines represent 95% confidence of the mean correlation.

Cross-correlation peaks are interpreted as a result of effective circuits that may represent any combination of a variety of synaptic connections, which may have a bias in direction (one neuron drives the other) or may not (zero lag time; both neurons receive a common drive) [10]. As correlation peaks become broader, as in our case (mean = 42 ms), this interpretation becomes more ambiguous (more possible circuits). Broader positive correlation peaks can even be caused by common inhibitory circuitry. One way to potentially disambiguate our interpretations is to consider firing rate behavior.
The positive correlation measured at the preferred disparity suggests that the interaction was likely facilitatory in nature, based on the increased firing of the neurons. The positive correlation measured at the disparity where both neurons' firing rates were depressed, i.e. at the valley of the firing rate-based disparity tuning curves, suggests that the correlation likely arose from common inhibition (presumably from neurons that preferred that disparity).

5 Temporal dynamics of interaction

We can compare the temporal dynamics of the correlation with the temporal dynamics of the firing rate of the neurons to gain more insight into the possible underlying circuitry. We computed the correlation every 1 ms over a 100 ms running window, and found that the correlation peak at the preferred disparity (based on firing rates) occurred at a later time (250-350 ms post-stimulus onset) than the correlation peaks at the non-preferred disparity (100-200 ms). Figure 7 illustrates the temporal dynamics of correlation for the example neuron pair shown in Figure 6b and 6d. The distinct interval in which correlation emerged at the preferred and the non-preferred disparities was consistently observed for all 27 pairs of neurons. Even for the example shown in Figure 6c, there were peaks in correlation in the early part of the response at the most non-preferred disparities. The timing of these two phases of correlation was also rather consistent over the population of pairs.

Figure 7: Temporal dynamics of correlation for the example neuron pair shown in Figure 6, right. From left to right: Correlation versus time for preferred (red) and non-preferred (blue) disparities. Contour map of correlation versus time and disparity.
Disparity tuning based on correlation for the early (blue) and late (red) portion of the response (95% confidence intervals). Correlation was calculated every 1 ms over 100 ms windows. By examining the interplay between firing rate and correlation, we were able to gain even greater insight into the interactions among neuron pairs. To summarize this interplay across our population, we compared the temporal evolution of the correlation at three distinct disparities with the temporal evolution of the firing rates at the same disparities (also smoothed with 100 ms time windows). The first disparity, the preferred disparity A, was where we measured the strongest correlation and was at or near the highest firing rate measured in individual neurons (see Figure 8, left). The second important disparity, the most non-preferred disparity C, was where we measured secondary correlation peaks and coincided with the lowest firing rates observed in individual neurons. Lastly, we looked at a disparity B that was in between disparities A and C. Figure 8 shows that neurons responded better to their preferred disparity over other disparities very early, resulting in immediate moderate firing rate-based disparity tuning. Then shortly after (100 ms), a correlation peak emerges at the least preferred disparity C. This coincides with suppression of firing rate for all disparities (Figure 8, blue dashed line). However, the suppression in firing rate is much stronger for C, where the firing rate diverges downward from the firing rates for A and B, sharpening the disparity tuning (Figure 8, blue arrow; see also [11]). The strong correlation coupled with the decrease in firing suggests strong common inhibition.
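The running-window correlation described above can be sketched as follows. This is a simplified illustration with synthetic spike trains, not the authors' analysis code (which also corrects for stimulus-locked rate covariation); it shows only the basic sliding-window computation:

```python
import numpy as np

def windowed_correlation(train_a, train_b, win=100):
    """Correlation between two 1 ms-binned spike trains in a sliding
    `win`-bin window, one value per 1 ms step. Simplified sketch: raw
    correlation, without the rate-correction used in the study."""
    a = np.asarray(train_a, dtype=float)
    b = np.asarray(train_b, dtype=float)
    out = np.full(len(a) - win + 1, np.nan)
    for t in range(len(out)):
        wa, wb = a[t:t + win], b[t:t + win]
        if wa.std() > 0 and wb.std() > 0:
            out[t] = np.corrcoef(wa, wb)[0, 1]
    return out

# Synthetic 1 ms bins: two trains sharing a common drive are positively
# correlated, while two independent trains hover around zero.
rng = np.random.default_rng(0)
drive = rng.random(500) < 0.05
a = drive | (rng.random(500) < 0.02)
b = drive | (rng.random(500) < 0.02)
corr_shared = windowed_correlation(a, b)
corr_indep = windowed_correlation(rng.random(500) < 0.05,
                                  rng.random(500) < 0.05)
```

Windows in which either train is silent yield no estimate (NaN), which is why the study's 100 ms windows are long relative to the 1 ms bin width.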
Figure 8: Population average of normalized correlation versus time (top) for the three disparities shown on the left (n = 27 pairs). Population average of normalized PSTHs for the same three disparities (bottom; n = 32 cells). Both correlation and firing rates were calculated every 1 ms over 100 ms windows.

Once the correlation peak at C subsided (200 ms), the correlation increased for A (red dashed line). When the correlation for A peaked, the correlation decreased for B and C, leading to very sharp correlation-based disparity tuning (see also Figure 7). This correlation-based tuning can facilitate depth estimates by changing how effectively these signals are integrated downstream as a function of disparity [12]. Our interpretation is that the initial firing rate bias leads to antagonistic disparity-tuned neurons generating common inhibition that suppresses firing at non-preferred disparities, removing potential mismatches. The immediacy suggests that mutual inhibition was local, which is consistent with our observation that many opposing disparity-tuned neurons spatially coexisted. The slower correlation peak at the preferred disparity A is indicative of mutual facilitation that occurred when the depth estimates of spatially distinct neurons matched. This facilitation leads to a more precise estimate of depth.

6 Discussion and conclusions

The findings from this study provide support for Julesz's proposal that cooperative and competitive mechanisms in primary visual cortex are utilized for estimating global depth in random dot stereograms [1], which was later described formally by Marr and Poggio [2]. More recent cooperative stereo computation models allow excitatory interaction between neurons of different disparities separated by long distances. This is used to accommodate the computation of slanted surfaces [13,14].
In this experiment, we only tested frontal parallel planes; thus, we cannot answer whether or not effective connections and facilitation exist between neurons with larger disparity differences over long distances. This will require further experiments using planes with disparity gradients. The observation that initial correlation peaks occurred at disparities that evoked the lowest firing rates in neurons suggests that correlation peaks emerged from common inhibition for non-preferred disparities. The observation that later correlation occurred at disparities that evoked the highest firing rates suggests that neurons were mutually exciting each other at their preferred disparity. Our neurophysiological data reveal interesting dynamics between network-based (effective connectivity) and firing rate-based encoding of depth estimates. The observation that inhibition precedes facilitation suggests that competition is local (recalling that neurons at the same electrode tend to have opposite disparity tuning) and cooperation is more global (mediated through long-range connectivity). Local competition between neurons encoding different depths is consistent with the uniqueness principle of Marr and Poggio's algorithm [2]. In addition, cooperation among neurons encoding the same depth across space was predicted by the second rule of their algorithm: matter is cohesive. These two interactions are robust at removing potential ambiguity during stereo matching and depth inference. Previous neurophysiological data had suggested that intracortical connectivity in primary visual cortex underlies competitive [15] and cooperative [16] mechanisms for improving estimates of orientation. Our data suggest similar circuitry might also play a role in stereo matching [17].
However, this study is distinct in that it provides detailed empirical support for computational algorithms for solving stereo matching. It thus highlights the importance of computational algorithms in generating hypotheses to guide future neurophysiological studies.

Acknowledgments

We thank George Gerstein and Jeff Keating for JPSTH software. Supported by NIMH IBSC MH64445 and NSF CISE IIS-0413211 grants.

References

[1] Julesz, B. (1971) Foundations of cyclopean perception. Chicago: University of Chicago Press. [2] Marr, D. & Poggio, T. (1976) Cooperative computation of stereo disparity. Science 194(4262):283-287. [3] Aertsen, A.M., Gerstein, G.L., Habib, M.K. & Palm, G. (1989) Dynamics of neuronal firing correlation: modulation of "effective connectivity". Journal of Neurophysiology 61(5):900-917. [4] Efron, B. & Tibshirani, R. (1993) An Introduction to the Bootstrap. New York: Chapman & Hall. [5] Brody, C.D. (1999) Correlations without synchrony. Neural Computation 11(7):1537-1551. [6] Gerstein, G.L. & Kirkland, K.L. (2001) Neural assemblies: technical issues, analysis, and modeling. Neural Networks 14(6-7):589-598. [7] Kohn, A. & Smith, M.A. (2005) Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. Journal of Neuroscience 25(14):3661-3673. [8] Menz, M. & Freeman, R.D. (2004) Functional connectivity of disparity-tuned neurons in the visual cortex. Journal of Neurophysiology 91(4):1794-1807. [9] Gilbert, C.D. & Wiesel, T.N. (1989) Columnar specificity of intrinsic horizontal and corticocortical connections in cat visual cortex. Journal of Neuroscience 9(7):2432-2442. [10] Moore, G.P., Segundo, J.P., Perkel, D.H. & Levitan, H. (1970) Statistical signs of synaptic interaction in neurons. Biophysical Journal 10(9):876-900. [11] Menz, M. & Freeman, R.D. (2003) Stereoscopic depth processing in the visual cortex: a coarse-to-fine mechanism. Nature Neuroscience 6(1):59-65. [12] Bruno, R.M. & Sakmann, B. 
(2006) Cortex is driven by weak but synchronously active thalamocortical synapses. Science 312(5780):1622-1627. [13] Prazdny, K. (1985) Detection of binocular disparities. Biological Cybernetics 52(2):93-99. [14] Pollard, S.B., Mayhew, J.E., & Frisby, J.P. (1985) PMF: a stereo correspondence algorithm using a disparity gradient limit. Perception 14(4):449-470. [15] Ringach, D.L., Hawken, M.J. & Shapley, R. (1997) Dynamics of orientation tuning in macaque primary visual cortex. Nature 387(6630):281-284. [16] Samonds, J.M., Allison, J.D., Brown, H.A. & Bonds, A.B. (2004) Cooperative synchronized assemblies enhance orientation discrimination. Proceedings of the National Academy of Sciences USA 101(17):6722-6727. [17] Ben-Shahar, O., Huggins, P.S., Izo, T. & Zucker, S.W. (2003) Cortical connections and early visual function: intra- and inter-columnar processing. Journal of Physiology (Paris) 97(2-3):191-208.
Sparse Multinomial Logistic Regression via Bayesian L1 Regularisation Gavin C. Cawley School of Computing Sciences University of East Anglia Norwich, Norfolk, NR4 7TJ, U.K. gcc@cmp.uea.ac.uk Nicola L. C. Talbot School of Computing Sciences University of East Anglia Norwich, Norfolk, NR4 7TJ, U.K. nlct@cmp.uea.ac.uk Mark Girolami Department of Computing Science University of Glasgow Glasgow, Scotland, G12 8QQ, U.K. girolami@dcs.gla.ac.uk

Abstract

Multinomial logistic regression provides the standard penalised maximum-likelihood solution to multi-class pattern recognition problems. More recently, the development of sparse multinomial logistic regression models has found application in text processing and microarray classification, where explicit identification of the most informative features is of value. In this paper, we propose a sparse multinomial logistic regression method, in which the sparsity arises from the use of a Laplace prior, but where the usual regularisation parameter is integrated out analytically. Evaluation over a range of benchmark datasets reveals this approach results in similar generalisation performance to that obtained using cross-validation, but at greatly reduced computational expense.

1 Introduction

Multinomial logistic and probit regression are perhaps the classic statistical methods for multi-class pattern recognition problems (for a detailed introduction, see e.g. [1, 2]). The output of a multinomial logistic regression model can be interpreted as an a-posteriori estimate of the probability that a pattern belongs to each of c disjoint classes. The probabilistic nature of the multinomial logistic regression model affords many practical advantages, such as the ability to set rejection thresholds [3], to accommodate unequal relative class frequencies in the training set and in operation [4], or to apply an appropriate loss matrix in making predictions that minimise the expected risk [5]. 
As a result, these models have been adopted in a diverse range of applications, including cancer classification [6, 7], text categorisation [8], analysis of DNA binding sites [9] and call routing. More recently, the focus of research has been on methods for inducing sparsity in (multinomial) logistic or probit regression models. In some applications, the identification of salient input features is of itself a valuable activity; for instance in cancer classification from micro-array gene expression data, the identification of biomarker genes, the pattern of expression of which is diagnostic of a particular form of cancer, may provide insight into the ætiology of the condition. In other applications, these methods are used to select a small number of basis functions to form a compact non-parametric classifier, from a set that may contain many thousands of candidate functions. In this case the sparsity is desirable for the purposes of computational expediency, rather than as an aid to understanding the data. A variety of methods have been explored that aim to introduce sparsity in non-parametric regression models through the incorporation of a penalty or regularisation term within the training criterion. In the context of least-squares regression using Radial Basis Function (RBF) networks, Orr [10] proposes the use of local regularisation, in which a weight-decay regularisation term is used with distinct regularisation parameters for each weight. The optimisation of the Generalised Cross-Validation (GCV) score typically leads to the regularisation parameters for redundant basis functions achieving very high values, allowing them to be identified and pruned from the network (c.f. [11, 12]). The computational efficiency of this approach can be further improved via the use of Recursive Orthogonal Least Squares (ROLS). The relevance vector machine (RVM) [13] implements a form of Bayesian automatic relevance determination (ARD), using a separable Gaussian prior. 
In this case, the regularisation parameter for each weight is adjusted so as to maximise the marginal likelihood, also known as the Bayesian evidence for the model. An efficient component-wise training algorithm is given in [14]. An alternative approach, known as the LASSO [15], seeks to minimise the negative log-likelihood of the sample, subject to an upper bound on the sum of the absolute values of the weights (see also [16] for a practical training procedure). This strategy is equivalent to the use of a Laplace prior over the model parameters [17], which has been demonstrated to control over-fitting and induce sparsity in the weights of multi-layer perceptron networks [18]. The equivalence of the Laplace prior and a separable Gaussian prior (with appropriate choice of regularisation parameters) has been established by Grandvalet [11, 12], unifying these strands of research. In this paper, we demonstrate that, in the case of the Laplace prior, the regularisation parameters can be integrated out analytically, obviating the need for a lengthy cross-validation based model selection stage. The resulting sparse multinomial logistic regression algorithm with Bayesian regularisation (SBMLR) is then fully automated and, having storage requirements that scale only linearly with the number of model parameters, is well suited to relatively large-scale applications. The remainder of this paper is set out as follows: the sparse multinomial logistic regression procedure with Bayesian regularisation is presented in Section 2; the proposed algorithm is then evaluated against competing approaches over a range of benchmark learning problems in Section 3; finally, the work is summarised and conclusions drawn in Section 5.

2 Method

Let $\mathcal{D} = \{(\mathbf{x}_n, \mathbf{t}_n)\}_{n=1}^{\ell}$ represent the training sample, where $\mathbf{x}_n \in \mathcal{X} \subset \mathbb{R}^d$ is the vector of input features for the $n$-th example, and $\mathbf{t}_n \in \mathcal{T} = \{\mathbf{t} \mid \mathbf{t} \in \{0,1\}^c, \ \|\mathbf{t}\|_1 = 1\}$ is the corresponding vector of desired outputs, using the usual 1-of-c coding scheme. 
Multinomial logistic regression constructs a generalised linear model [1] with a softmax inverse link function [19], allowing the outputs to be interpreted as a-posteriori estimates of the probabilities of class membership,

$$p(t_i^n \mid \mathbf{x}_n) = y_i^n = \frac{\exp\{a_i^n\}}{\sum_{j=1}^{c} \exp\{a_j^n\}} \quad \text{where} \quad a_i^n = \sum_{j=1}^{d} w_{ij} x_j^n. \qquad (1)$$

Assuming that $\mathcal{D}$ represents an i.i.d. sample from a conditional multinomial distribution, the negative log-likelihood, used as a measure of the data misfit, can be written as

$$E_D = \sum_{n=1}^{\ell} E_D^n = -\sum_{n=1}^{\ell} \sum_{i=1}^{c} t_i^n \log\{y_i^n\}.$$

The parameters $\mathbf{w}$ of the multinomial logistic regression model are given by the minimiser of a penalised maximum-likelihood training criterion,

$$L = E_D + \alpha E_W \quad \text{where} \quad E_W = \sum_{i=1}^{c} \sum_{j=1}^{d} |w_{ij}|, \qquad (2)$$

and $\alpha$ is a regularisation parameter [20] controlling the bias-variance trade-off [21]. At a minimum of $L$, the partial derivatives of $L$ with respect to the model parameters will be uniformly zero, giving

$$\left|\frac{\partial E_D}{\partial w_{ij}}\right| = \alpha \ \text{if} \ |w_{ij}| > 0 \qquad \text{and} \qquad \left|\frac{\partial E_D}{\partial w_{ij}}\right| < \alpha \ \text{if} \ w_{ij} = 0.$$

This implies that if the sensitivity of the negative log-likelihood with respect to a model parameter, $w_{ij}$, falls below $\alpha$, then the value of that parameter will be set exactly to zero and the corresponding input feature can be pruned from the model.

2.1 Eliminating the Regularisation Parameters

Minimisation of (2) has a straightforward Bayesian interpretation; the posterior distribution for $\mathbf{w}$, the parameters of the model given by (1), can be written as $p(\mathbf{w}|\mathcal{D}) \propto P(\mathcal{D}|\mathbf{w}) P(\mathbf{w})$. $L$ is then, up to an additive constant, the negative logarithm of the posterior density. The prior over model parameters, $\mathbf{w}$, is then given by a separable Laplace distribution,

$$P(\mathbf{w}) = \left(\frac{\alpha}{2}\right)^W \exp\{-\alpha E_W\} = \prod_{i=1}^{W} \frac{\alpha}{2} \exp\{-\alpha |w_i|\}, \qquad (3)$$

where $W$ is the number of active (non-zero) model parameters. A good value for the regularisation parameter $\alpha$ can be estimated, within a Bayesian framework, by maximising the evidence [22], or alternatively it may be integrated out analytically [17, 23]. 
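For concreteness, the model of equation (1) and the penalised criterion of equation (2) can be written in a few lines of NumPy. This is a sketch, not the authors' implementation; the variable names are our own:

```python
import numpy as np

def softmax(A):
    """Row-wise softmax of the activations a_i^n (eq. 1), computed stably."""
    A = A - A.max(axis=1, keepdims=True)
    e = np.exp(A)
    return e / e.sum(axis=1, keepdims=True)

def penalised_criterion(W, X, T, alpha):
    """L = E_D + alpha * E_W (eq. 2) for multinomial logistic regression.
    W: (c, d) weights; X: (n, d) inputs; T: (n, c) 1-of-c targets."""
    Y = softmax(X @ W.T)
    E_D = -np.sum(T * np.log(Y))
    E_W = np.abs(W).sum()
    return E_D + alpha * E_W

# With all-zero weights the model predicts uniform class probabilities,
# so E_D = n * log(c) and the L1 penalty term vanishes.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
T = np.eye(2)[[0, 1, 0, 1]]
print(penalised_criterion(np.zeros((2, 3)), X, T, alpha=0.1))
```

Subtracting the row maximum before exponentiating leaves the softmax unchanged but avoids overflow for large activations.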
Here we take the latter approach, where the prior distribution over model parameters is given by marginalising over $\alpha$,

$$p(\mathbf{w}) = \int p(\mathbf{w}|\alpha)\, p(\alpha)\, d\alpha.$$

As $\alpha$ is a scale parameter, an appropriate ignorance prior is given by the improper Jeffreys prior, $p(\alpha) \propto 1/\alpha$, corresponding to a uniform prior over $\log \alpha$. Substituting equation (3) and noting that $\alpha$ is strictly positive,

$$p(\mathbf{w}) = \frac{1}{2^W} \int_0^\infty \alpha^{W-1} \exp\{-\alpha E_W\}\, d\alpha.$$

Using the Gamma integral, $\int_0^\infty x^{\nu-1} e^{-\mu x}\, dx = \Gamma(\nu)/\mu^{\nu}$ [24, equation 3.384], we obtain

$$p(\mathbf{w}) = \frac{1}{2^W} \frac{\Gamma(W)}{E_W^W} \implies -\log p(\mathbf{w}) \propto W \log E_W,$$

giving a revised optimisation criterion for sparse logistic regression with Bayesian regularisation,

$$M = E_D + W \log E_W, \qquad (4)$$

in which the regularisation parameter has been eliminated; for further details and theoretical justification, see [17]. Note that we integrate out the regularisation parameter and optimise the model parameters, which is unusual in that most Bayesian approaches, such as the relevance vector machine [13], optimise the regularisation parameters and integrate over the weights.

2.1.1 Practical Implementation

The training criterion incorporating a fully Bayesian regularisation term can be minimised via a simple modification of existing cyclic co-ordinate descent algorithms for sparse regression using a Laplace prior (e.g. [25, 26]). Differentiating the original and modified training criteria, (2) and (4) respectively, we have that

$$\nabla L = \nabla E_D + \alpha \nabla E_W \quad \text{and} \quad \nabla M = \nabla E_D + \tilde{\alpha} \nabla E_W \quad \text{where} \quad \frac{1}{\tilde{\alpha}} = \frac{1}{W} \sum_{i=1}^{W} |w_i|. \qquad (5)$$

From a gradient descent perspective, minimising $M$ effectively becomes equivalent to minimising $L$, assuming that the regularisation parameter, $\alpha$, is continuously updated according to (5) following every change in the vector of model parameters, $\mathbf{w}$ [17]. This requires only a very minor modification of the existing training algorithm, whilst eliminating the only training parameter and hence the need for a model selection procedure in fitting the model. 
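The equivalence in (5) is easy to verify numerically: the gradient of the $W \log E_W$ penalty with respect to an active weight is exactly $\tilde{\alpha}\,\mathrm{sign}(w_k)$ with $\tilde{\alpha} = W / \sum_i |w_i|$. A small check with illustrative weight values:

```python
import numpy as np

def alpha_tilde(w):
    """Effective regularisation parameter of eq. (5):
    1/alpha_tilde = mean |w_i| over the W active (non-zero) weights."""
    w = np.asarray(w, dtype=float)
    active = w[w != 0.0]
    return len(active) / np.abs(active).sum()

def M(w, E_D):
    """Bayesian criterion M = E_D + W log E_W (eq. 4)."""
    w = np.asarray(w, dtype=float)
    return E_D + np.count_nonzero(w) * np.log(np.abs(w).sum())

# Central-difference check that d/dw_k of the penalty is alpha_tilde * sign(w_k)
w = np.array([0.5, -1.5, 2.0, 0.0])
k, eps = 2, 1e-6
w_hi, w_lo = w.copy(), w.copy()
w_hi[k] += eps
w_lo[k] -= eps
numeric = (M(w_hi, 0.0) - M(w_lo, 0.0)) / (2 * eps)
analytic = alpha_tilde(w) * np.sign(w[k])
print(numeric, analytic)
```

Here $W = 3$, $E_W = 4$, so both quantities equal $3/4$.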
2.1.2 Equivalence of Marginalisation and Optimisation under the Evidence Framework

Williams [17] notes that, at least in the case of the Laplace prior, integrating out the regularisation parameter analytically is equivalent to its optimisation under the evidence framework of MacKay [22]. The argument provided by Williams can be summarised as follows: the evidence framework sets the value of the regularisation parameter so as to optimise the marginal likelihood,

$$P(\mathcal{D}) = \int P(\mathcal{D}|\mathbf{w})\, P(\mathbf{w})\, d\mathbf{w},$$

also known as the evidence for the model. The Bayesian interpretation of the regularised objective function gives

$$P(\mathcal{D}) = \frac{1}{Z_W} \int \exp\{-L\}\, d\mathbf{w},$$

where $Z_W$ is a normalising constant for the prior over the model parameters; for the Laplace prior, $Z_W = (2/\alpha)^W$. In the case of multinomial logistic regression, $E_D$ represents the negative logarithm of a normalised distribution, and so the corresponding normalising constant for the data-misfit term is redundant. Unfortunately this integral is analytically intractable, and so we adopt the Laplace approximation, corresponding to a Gaussian posterior distribution for the model parameters, centred on their most probable value, $\mathbf{w}_{MP}$,

$$L(\mathbf{w}) = L(\mathbf{w}_{MP}) + \frac{1}{2} (\mathbf{w} - \mathbf{w}_{MP})^T \mathbf{A} (\mathbf{w} - \mathbf{w}_{MP}),$$

where $\mathbf{A} = \nabla\nabla L$ is the Hessian of the regularised objective function. The regulariser corresponding to the Laplace prior is locally a hyper-plane, and so does not contribute to the Hessian; hence $\mathbf{A} = \nabla\nabla E_D$. The negative logarithm of the evidence can then be written as

$$-\log P(\mathcal{D}) = E_D + \alpha E_W + \frac{1}{2} \log |\mathbf{A}| + \log Z_W + \text{constant}.$$

Setting the derivative of the evidence with respect to $\alpha$ to zero gives rise to a simple update rule for the regularisation parameter,

$$\frac{1}{\tilde{\alpha}} = \frac{1}{W} \sum_{j=1}^{W} |w_j|,$$

which is equivalent to the update rule obtained using the integrate-out approach. Maximising the evidence for the model also provides a convenient means for model selection. 
Using the Laplace approximation, the evidence for a multinomial logistic regression model under the proposed Bayesian regularisation scheme is given by

$$-\log P(\mathcal{D}) = E_D + W \log E_W - \log \frac{\Gamma(W)}{2^W} + \frac{1}{2} \log |\mathbf{A}| + \text{constant} \quad \text{where} \quad \mathbf{A} = \nabla\nabla L.$$

2.2 A Simple but Efficient Training Algorithm

In this study, we adopt a simplified version of the efficient component-wise training algorithm of Shevade and Keerthi [25], adapted for multinomial, rather than binomial, logistic regression. The principal advantage of a component-wise optimisation algorithm is that the Hessian matrix is not required, but only the first and second partial derivatives of the regularised training criterion. The first partial derivatives of the data-misfit term are given by

$$\frac{\partial E_D^n}{\partial a_j^n} = \sum_{i=1}^{c} \frac{\partial E_D^n}{\partial y_i^n} \frac{\partial y_i^n}{\partial a_j^n} \quad \text{where} \quad \frac{\partial E_D^n}{\partial y_i^n} = -\frac{t_i^n}{y_i^n}, \qquad \frac{\partial y_i^n}{\partial a_j^n} = y_i \delta_{ij} - y_i y_j,$$

and $\delta_{ij} = 1$ if $i = j$ and otherwise $\delta_{ij} = 0$. Substituting, we obtain

$$\frac{\partial E_D}{\partial a_i} = \sum_{n=1}^{\ell} \left[y_i^n - t_i^n\right] \implies \frac{\partial E_D}{\partial w_{ij}} = \sum_{n=1}^{\ell} \left[y_i^n - t_i^n\right] x_j^n = \sum_{n=1}^{\ell} y_i^n x_j^n - \sum_{n=1}^{\ell} t_i^n x_j^n.$$

Similarly, the second partial derivatives are given by

$$\frac{\partial^2 E_D}{\partial w_{ij}^2} = \sum_{n=1}^{\ell} x_j^n \frac{\partial y_i^n}{\partial w_{ij}} = \sum_{n=1}^{\ell} y_i^n \left(1 - y_i^n\right) \left(x_j^n\right)^2.$$

The Laplace regulariser is locally a hyperplane, with the magnitude of the gradient given by the regularisation parameter, $\alpha$,

$$\frac{\partial\, \alpha E_W}{\partial w_{ij}} = \mathrm{sign}\{w_{ij}\}\, \alpha \quad \text{and} \quad \frac{\partial^2\, \alpha E_W}{\partial w_{ij}^2} = 0.$$

The partial derivatives of the regularisation term are not defined at the origin, and so we define the effective gradient of the regularised loss function as follows:

$$\frac{\partial L}{\partial w_{ij}} = \begin{cases} \frac{\partial E_D}{\partial w_{ij}} + \alpha & \text{if } w_{ij} > 0 \\[2pt] \frac{\partial E_D}{\partial w_{ij}} - \alpha & \text{if } w_{ij} < 0 \\[2pt] \frac{\partial E_D}{\partial w_{ij}} + \alpha & \text{if } w_{ij} = 0 \text{ and } \frac{\partial E_D}{\partial w_{ij}} + \alpha < 0 \\[2pt] \frac{\partial E_D}{\partial w_{ij}} - \alpha & \text{if } w_{ij} = 0 \text{ and } \frac{\partial E_D}{\partial w_{ij}} - \alpha > 0 \\[2pt] 0 & \text{otherwise} \end{cases}$$

Note that the value of a weight may be stable at zero if the derivative of the regularisation term dominates the derivative of the data misfit. The parameters of the model may then be optimised using Newton's method, i.e.

$$w_{ij} \leftarrow w_{ij} - \frac{\partial L}{\partial w_{ij}} \left[\frac{\partial^2 E_D}{\partial w_{ij}^2}\right]^{-1}.$$
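A minimal sketch of this component-wise update in NumPy. This is illustrative only: the toy data and names are our own, and the sweep heuristics over parameters are omitted; only the Newton step on a single weight, with the effective gradient and the sign-change truncation, is shown:

```python
import numpy as np

def softmax(A):
    A = A - A.max(axis=1, keepdims=True)
    e = np.exp(A)
    return e / e.sum(axis=1, keepdims=True)

def update_weight(W, X, T, i, j, alpha):
    """One component-wise Newton step on w_ij for the Laplace-regularised
    multinomial logistic model. Steps that change the sign of w_ij are
    truncated to zero (a Shevade-Keerthi-style update)."""
    Y = softmax(X @ W.T)
    g_D = np.dot(Y[:, i] - T[:, i], X[:, j])             # dE_D / dw_ij
    h = np.dot(Y[:, i] * (1.0 - Y[:, i]), X[:, j] ** 2)  # d2E_D / dw_ij^2
    w = W[i, j]
    # effective gradient of the regularised loss at w_ij
    if w > 0:
        g = g_D + alpha
    elif w < 0:
        g = g_D - alpha
    elif g_D + alpha < 0:
        g = g_D + alpha
    elif g_D - alpha > 0:
        g = g_D - alpha
    else:
        return W                                         # stable at zero
    w_new = w - g / h
    if w != 0.0 and np.sign(w_new) != np.sign(w):
        w_new = 0.0                                      # truncate sign change
    W[i, j] = w_new
    return W

# Toy problem: one feature that separates two classes.
X = np.array([[1.0], [1.0], [-1.0], [-1.0]])
T = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
W = update_weight(np.zeros((2, 1)), X, T, 0, 0, alpha=0.5)
print(W[0, 0])  # moves away from zero: the data gradient beats the penalty
```

With a large enough penalty (here, alpha greater than the magnitude of the data gradient) the same weight remains exactly zero, which is the pruning mechanism described in Section 2.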
Any step that causes a change of sign in a model parameter is truncated and that parameter set to zero. All that remains is to decide on a heuristic used to select the parameter to be optimised in each step. In this study, we adopt the heuristic chosen by Shevade and Keerthi, in which the parameter having the steepest gradient is selected in each iteration. The optimisation proceeds using two nested loops: in the inner loop, only active parameters are considered. If no further progress can be made by optimising active parameters, the search is extended to parameters that are currently set to zero. An optimisation strategy based on scaled conjugate gradient descent [27] has also been found to be effective.

3 Results

The proposed sparse multinomial logistic regression method incorporating Bayesian regularisation using a Laplace prior (SBMLR) was evaluated over a suite of well-known benchmark datasets, against sparse multinomial logistic regression with five-fold cross-validation based optimisation of the regularisation parameter using a simple line search (SMLR). Table 1 shows the test error rate and cross-entropy statistics for SMLR and SBMLR methods over these datasets. Clearly, there is little reason to prefer either model over the other in terms of generalisation performance, as neither consistently dominates the other, either in terms of error rate or cross-entropy. Table 1 also shows that the Bayesian regularisation scheme results in models with a slightly higher degree of sparsity (i.e. the proportion of weights pruned from the model). However, the most striking aspect of the comparison is that the Bayesian regularisation scheme is typically around two orders of magnitude faster than the cross-validation based approach, with SBMLR being approximately five times faster in the worst case (COVTYPE).

3.1 The Value of Probabilistic Classification

Probabilistic classifiers, i.e. 
those providing an a-posteriori estimate of the probability of class membership, can be used in minimum-risk classification, using an appropriate loss matrix to account for the relative costs of different types of error. Probabilistic classifiers allow rejection thresholds to be set in a straight-forward manner. This is particularly useful in a medical setting, where it may be prudent to refer a patient for further tests if the diagnosis is uncertain. Finally, the output of a probabilistic classifier can be adjusted after training to compensate for a difference between the relative class frequencies in the training set and those observed in operation. Saerens [4] provides a simple expectation-maximisation (EM) based procedure for estimating unknown operational a-priori probabilities from the output of a probabilistic classifier (c.f. [28]).

Table 1: Evaluation of linear sparse multinomial logistic regression methods over a set of nine benchmark datasets. The final column shows the logarithm of the ratio of the training times for SMLR and SBMLR, such that a value of 2 would indicate that SBMLR is 100 times faster than SMLR for a given benchmark dataset.

Benchmark   Error Rate        Cross Entropy     Sparsity          log10(T_SMLR/T_SBMLR)
            SBMLR    SMLR     SBMLR    SMLR     SBMLR    SMLR
Covtype     0.4051   0.4041   0.9590   0.9733   0.4312   0.3069   0.6965
Crabs       0.0350   0.0500   0.1075   0.0891   0.2708   0.0635   2.7949
Glass       0.3318   0.3224   0.9398   0.9912   0.4400   0.4700   1.9445
Iris        0.0267   0.0267   0.0792   0.0867   0.4067   0.4067   1.9802
Isolet      0.0475   0.0513   0.1858   0.2641   0.9311   0.8598   1.3110
Satimage    0.1610   0.1600   0.3717   0.3708   0.3694   0.2747   1.3083
Viruses     0.0328   0.0328   0.1670   0.1168   0.8118   0.7632   2.1118
Waveform    0.1290   0.1302   0.3124   0.3131   0.3712   0.3939   1.8133
Wine        0.0225   0.0281   0.0827   0.0825   0.6071   0.5524   2.5541
Let $p_t(C_i)$ represent the a-priori probability of class $C_i$ in the training set and $p_t(C_i|\mathbf{x}_n)$ represent the raw output of the classifier for the $n$-th pattern of the test data (representing operational conditions). The operational a-priori probabilities, $p_o(C_i)$, can then be updated iteratively via

$$p_o^{(s)}(C_i|\mathbf{x}_n) = \frac{\frac{p_o^{(s)}(C_i)}{p_t(C_i)}\, p_t(C_i|\mathbf{x}_n)}{\sum_{j=1}^{c} \frac{p_o^{(s)}(C_j)}{p_t(C_j)}\, p_t(C_j|\mathbf{x}_n)} \quad \text{and} \quad p_o^{(s+1)}(C_i) = \frac{1}{N} \sum_{n=1}^{N} p_o^{(s)}(C_i|\mathbf{x}_n), \qquad (6)$$

where $N$ is the number of test patterns, beginning with $p_o^{(0)}(C_i) = p_t(C_i)$. Note that the labels of the test examples are not required for this procedure. The adjusted estimates of a-posteriori probability are then given by the first part of equation (6). The training and validation sets of the COVTYPE benchmark have been artificially balanced, by random sampling, so that each class is represented by the same number of examples. The test set consists of the unused patterns, and so the test set a-priori probabilities are both highly disparate and very different from the training set a-priori probabilities. Figure 1 and Table 2 summarise the results obtained using the raw and corrected outputs of a linear SBMLR model on this dataset, clearly demonstrating a key advantage of probabilistic classifiers over purely discriminative methods, for example the support vector machine (note that the same procedure could be applied to the SMLR model with similar results).

Table 2: Error rate and average cross-entropy score for linear SBMLR models of the COVTYPE benchmark, using the raw and corrected outputs.

Statistic       Raw      Corrected
Error Rate      40.51%   28.57%
Cross-Entropy   0.9590   0.6567

Figure 1: Training set, test set and estimated a-priori probabilities for the COVTYPE benchmark.
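Equation (6) is straightforward to implement. The sketch below (our own toy setup, not from the paper) checks that the procedure recovers skewed operational priors from posteriors computed under balanced training priors, using a two-Gaussian generative model:

```python
import numpy as np

def adjust_priors(post, train_priors, n_iter=200):
    """EM re-estimation of operational class priors from classifier outputs
    on unlabelled test data, following eq. (6); sketch only.
    post: (N, c) a-posteriori estimates computed under the training priors."""
    p_o = train_priors.astype(float).copy()
    for _ in range(n_iter):
        w = post * (p_o / train_priors)    # reweight by the prior ratio
        w /= w.sum(axis=1, keepdims=True)  # renormalise (first part of eq. 6)
        p_o = w.mean(axis=0)               # update the priors (second part)
    return p_o, w

# Synthetic check: two Gaussian classes, balanced in training, 80/20 in
# "operation"; the true operational priors should be approximately recovered.
rng = np.random.default_rng(1)
N = 20000
labels = rng.random(N) < 0.2               # True -> class 1 (20% of test data)
x = rng.normal(np.where(labels, -1.0, 1.0), 1.0)
lik = np.stack([np.exp(-0.5 * (x - 1.0) ** 2),
                np.exp(-0.5 * (x + 1.0) ** 2)], axis=1)
post = lik / lik.sum(axis=1, keepdims=True)  # posteriors under 50/50 priors
priors, adjusted = adjust_priors(post, np.array([0.5, 0.5]))
print(priors)  # close to [0.8, 0.2]
```

No test labels are used anywhere in `adjust_priors`, which is the key property of the procedure.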
4 Relationship to Existing Work

The sparsity-inducing Laplace density has been utilized previously in [15, 25, 26, 29-31] and emerges as the marginal of a scale-mixture-of-Gaussians where the corresponding prior is an Exponential, such that

$$\int \mathcal{N}_w(0, \tau)\, \mathcal{E}_\tau(\gamma)\, d\tau = \frac{\alpha}{2} \exp(-\alpha |w|),$$

where $\mathcal{E}_\tau(\gamma)$ is an Exponential distribution over $\tau$ with parameter $\gamma$ and $\alpha = \sqrt{\gamma}$. In [29] this hierarchical representation of the Laplace prior is utilized to develop an EM-style sparse binomial probit regression algorithm. The hyper-parameter $\alpha$ is selected via cross-validation, but in an attempt to circumvent this requirement a Jeffreys prior is placed on $\tau$ and is used to replace the Exponential distribution in the above integral. This yields an improper parameter-free prior distribution over $w$ which removes the explicit requirement to perform any cross-validation. However, the method developed in [29] is restricted to binary classification and has compute scaling $\mathcal{O}(d^3)$, which prohibits its use on moderately high-dimensional problems. Likewise in [13] the RVM employs a similar scale mixture for the prior, where now the Exponential distribution is replaced by a Gamma distribution whose marginal yields a Student prior distribution. No attempt is made to estimate the associated hyper-parameters and these are typically set to zero, producing, as in [29], a sparsity-inducing improper prior. As with [29], the original scaling of [13] is, at worst, $\mathcal{O}(d^3)$, though more efficient methods have been developed in [14]. However, the analysis holds only for a binary classifier and it would be non-trivial to extend this to the multi-class domain. A similar multinomial logistic regression model to the one proposed in this paper is employed in [26], where the algorithm is applied to large-scale classification problems, and yet they, as with [25], have to resort to cross-validation in obtaining a value for the hyper-parameters of the Laplace prior. 
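The scale-mixture identity can be checked numerically. Note that Exponential rate conventions vary: the stated result $\alpha = \sqrt{\gamma}$ holds when the Exponential over the variance $\tau$ has density $(\gamma/2)\exp(-\gamma\tau/2)$, which is the convention assumed in this sketch:

```python
import numpy as np

# Numerically marginalise the Gaussian variance tau under an Exponential
# prior and compare against the Laplace density with alpha = sqrt(gamma).
gamma = 4.0
alpha = np.sqrt(gamma)   # = 2.0
w = 0.7                  # evaluate the marginal density at this weight value

tau = np.linspace(1e-8, 60.0, 500_001)
gauss = np.exp(-w ** 2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)
expo = (gamma / 2.0) * np.exp(-gamma * tau / 2.0)   # density (gamma/2)e^{-gamma tau/2}
f = gauss * expo
marginal = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau)))  # trapezoid rule
laplace = (alpha / 2.0) * np.exp(-alpha * abs(w))
print(marginal, laplace)  # the two densities should agree closely
```

The integrand vanishes at both ends of the grid, so simple trapezoid quadrature suffices here.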
5 Summary

In this paper we have demonstrated that the regularisation parameter used in sparse multinomial logistic regression with a Laplace prior can be integrated out analytically, giving generalisation performance similar to that obtained using extensive cross-validation based model selection, but at a greatly reduced computational expense. It is interesting to note that SBMLR implements a strategy that is exactly the opposite of the relevance vector machine (RVM) [13], in that it integrates over the hyper-parameter and optimises the weights, rather than marginalising the model parameters and optimising the hyper-parameters. It seems reasonable to suggest that this approach is feasible in the case of the Laplace prior as the pruning action of this prior ensures that the values of all of the weights are strongly determined by the data-misfit term. A similar strategy has already proved effective in cancer classification based on gene expression microarray data in a binomial setting [32], and we plan to extend this work to multi-class cancer classification in the near future.

Acknowledgements

The authors thank the anonymous reviewers for their helpful and constructive comments. MG is supported by EPSRC grant EP/C010620/1.

References

[1] P. McCullagh and J. A. Nelder. Generalized linear models, volume 37 of Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, second edition, 1989. [2] D. W. Hosmer and S. Lemeshow. Applied logistic regression. Wiley, second edition, 2000. [3] C. K. Chow. On optimum recognition error and reject tradeoff. IEEE Transactions on Information Theory, 16(1):41–46, January 1970. [4] M. Saerens, P. Latinne, and C. Decaestecker. Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure. Neural Computation, 14(1):21–41, 2001. [5] J. O. Berger. Statistical decision theory and Bayesian analysis. Springer Series in Statistics. Springer, second edition, 1985. [6] J. Zhu and T. Hastie. 
Classification of gene microarrays by penalized logistic regression. Biostatistics, 5(3):427–443, 2004. [7] X. Zhou, X. Wang, and E. R. Dougherty. Multi-class cancer classification using multinomial probit regression with Bayesian gene selection. IEE Proceedings - Systems Biology, 153(2):70–76, March 2006. [8] T. Zhang and F. J. Oles. Text categorization based on regularised linear classification methods. Information Retrieval, 4(1):5–31, April 2001. [9] L. Narlikar and A. J. Hartemink. Sequence features of DNA binding sites reveal structural class of associated transcription factor. Bioinformatics, 22(2):157–163, 2006. [10] M. J. L. Orr. Regularisation in the selection of radial basis function centres. Neural Computation, 7(3):606–623, 1995. [11] Y. Grandvalet. Least absolute shrinkage is equivalent to quadratic penalisation. In L. Niklasson, M. Bodén, and T. Ziemke, editors, Proceedings of the International Conference on Artificial Neural Networks, Perspectives in Neural Computing, pages 201–206, Skövde, Sweden, September 2–4 1998. Springer. [12] Y. Grandvalet and S. Canu. Outcomes of the equivalence of adaptive ridge with least absolute shrinkage. In Advances in Neural Information Processing Systems, volume 11. MIT Press, 1999. [13] M. E. Tipping. Sparse Bayesian learning and the Relevance Vector Machine. Journal of Machine Learning Research, 1:211–244, 2001. [14] A. C. Faul and M. E. Tipping. Fast marginal likelihood maximisation for sparse Bayesian models. In C. M. Bishop and B. J. Frey, editors, Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 3–6 January 2003. [15] R. Tibshirani. Regression shrinkage and selection via the LASSO. Journal of the Royal Statistical Society - Series B, 58:267–288, 1996. [16] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004. [17] P. M. Williams.
Bayesian regularization and pruning using a Laplace prior. Neural Computation, 7(1):117–143, 1995. [18] C. M. Bishop. Neural networks for pattern recognition. Oxford University Press, 1995. [19] J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In F. Fogelman Soulié and J. Hérault, editors, Neurocomputing: Algorithms, architectures and applications, pages 227–236. Springer-Verlag, New York, 1990. [20] A. N. Tikhonov and V. Y. Arsenin. Solutions of ill-posed problems. John Wiley, New York, 1977. [21] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1–58, 1992. [22] D. J. C. MacKay. The evidence framework applied to classification networks. Neural Computation, 4(5):720–736, 1992. [23] W. L. Buntine and A. S. Weigend. Bayesian back-propagation. Complex Systems, 5:603–643, 1991. [24] I. S. Gradshteyn and I. M. Ryzhik. Table of Integrals, Series and Products. Academic Press, fifth edition, 1994. [25] S. K. Shevade and S. S. Keerthi. A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics, 19(17):2246–2253, 2003. [26] D. Madigan, A. Genkin, D. D. Lewis, and D. Fradkin. Bayesian multinomial logistic regression for author identification. In AIP Conference Proceedings, volume 803, pages 509–516, 2005. [27] P. M. Williams. A Marquardt algorithm for choosing the step size in backpropagation learning with conjugate gradients. Technical Report CSRP-229, University of Sussex, February 1991. [28] G. J. McLachlan. Discriminant analysis and statistical pattern recognition. Wiley, 1992. [29] M. Figueiredo. Adaptive sparseness for supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1150–1159, September 2003. [30] B. Krishnapuram, L. Carin, M. A. T. Figueiredo, and A. J. Hartemink.
Sparse multinomial logistic regression: Fast algorithms and generalisation bounds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(6):957–968, June 2005. [31] J. M. Bioucas-Dias, M. A. T. Figueiredo, and J. P. Oliveira. Adaptive total variation image deconvolution: A majorization-minimization approach. In Proceedings of the European Signal Processing Conference (EUSIPCO’2006), Florence, Italy, September 2006. [32] G. C. Cawley and N. L. C. Talbot. Gene selection in cancer classification using sparse logistic regression with Bayesian regularisation. Bioinformatics, 22(19):2348–2355, October 2006.
|
2006
|
142
|
2,969
|
Gaussian and Wishart Hyperkernels

Risi Kondor, Tony Jebara
Computer Science Department, Columbia University
1214 Amsterdam Avenue, New York, NY 10027, U.S.A.
{risi,jebara}@cs.columbia.edu

Abstract

We propose a new method for constructing hyperkernels and define two promising special cases that can be computed in closed form. These we call the Gaussian and Wishart hyperkernels. The former is especially attractive in that it has an interpretable regularization scheme reminiscent of that of the Gaussian RBF kernel. We discuss how kernel learning can be used not just for improving the performance of classification and regression methods, but also as a stand-alone algorithm for dimensionality reduction and relational or metric learning.

1 Introduction

The performance of kernel methods, such as Support Vector Machines, Gaussian Processes, etc., depends critically on the choice of kernel. Conceptually, the kernel captures our prior knowledge of the data domain. There is a small number of popular kernels expressible in closed form, such as the Gaussian RBF kernel k(x, x′) = exp(−‖x − x′‖²/(2σ²)), which boasts attractive and unique properties from an abstract function approximation point of view. In real world problems, however, and especially when the data is heterogeneous or discrete, engineering an appropriate kernel is a major part of the modelling process. It is natural to ask whether instead it might be possible to learn the kernel itself from the data. Recent years have seen the development of several approaches to kernel learning [5][1]. Arguably the most principled method proposed to date is the hyperkernels idea introduced by Ong, Smola and Williamson [8][7][9]. The current paper is a continuation of this work, introducing a new family of hyperkernels with attractive properties.
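As a concrete reminder of what positive definiteness buys here: any Gram matrix built from the Gaussian RBF kernel is positive semi-definite. A few lines of NumPy can spot-check this (an illustrative sketch, not from the paper):

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gram matrix K[i, j] = exp(−‖x_i − x_j‖² / (2σ²))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
K = rbf_gram(X, sigma=0.8)
min_eig = np.linalg.eigvalsh(K).min()   # should be ≥ 0 up to round-off
```

The minimum eigenvalue stays non-negative (up to floating-point noise) for any point set, which is exactly the property that guarantees an RKHS exists.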
Most work on kernel learning has focused on finding a kernel which is subsequently used in a conventional kernel machine, turning learning into an essentially two-stage process: first learn the kernel, then use it in a conventional algorithm such as an SVM to solve a classification or regression task. Recently there has been increasing interest in using the kernel in its own right to answer relational questions about the dataset. Instead of predicting individual labels, a kernel characterizes which pairs of labels are likely to be the same, or related. Kernel learning can be used to infer the network structure underlying data. A different application is to use the learnt kernel to produce a low dimensional embedding via kernel PCA. In this sense, kernel learning can also be regarded as a dimensionality reduction or metric learning algorithm.

2 Hyperkernels

We begin with a brief review of the kernel and hyperkernel formalism. Let X be the input space, Y the output space, and {(x1, y1), (x2, y2), . . . , (xm, ym)} the training data. By kernel we mean a symmetric function k : X × X → R that is positive definite on X. Whenever we refer to a function being positive definite, we assume that it is also symmetric. Positive definiteness guarantees that k induces a Reproducing Kernel Hilbert Space (RKHS) F, which is a vector space of functions spanned by { k_x(·) = k(x, ·) | x ∈ X } and endowed with an inner product satisfying ⟨k_x, k_x′⟩ = k(x, x′). Kernel-based learning algorithms find a hypothesis f̂ ∈ F by solving some variant of the Regularized Risk Minimization problem

f̂ = argmin_{f∈F} [ (1/m) Σ_{i=1}^m L(f(x_i), y_i) + (1/2) ‖f‖²_F ]

where L is a loss function of our choice. By the Representer Theorem [2], f̂ is expressible in the form f̂(x) = Σ_{i=1}^m α_i k(x_i, x) for some α_1, α_2, . . . , α_m ∈ R. The idea expounded in [8] is to set up an analogous optimization problem for finding k itself in the RKHS of a hyperkernel K : X̄ × X̄ → R, where X̄ = X².
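For the squared loss the regularized risk problem above has a closed form: writing f = Σ_i α_i k(x_i, ·) so that ‖f‖²_F = αᵀKα, minimizing (1/m)‖Kα − y‖² + (1/2)αᵀKα over α gives α = (K + (m/2)I)⁻¹y. This is standard kernel ridge algebra rather than anything specific to the paper; the sketch below checks the optimality of that α numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 25
X = rng.normal(size=(m, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=m)

# RBF Gram matrix
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)

def J(a):
    """Regularized risk with squared loss: (1/m)‖Kα − y‖² + ½ αᵀKα."""
    r = K @ a - y
    return r @ r / m + 0.5 * a @ K @ a

# setting the gradient (2/m)K(Kα − y) + Kα to zero gives
# α = (K + (m/2) I)^{-1} y
alpha = np.linalg.solve(K + (m / 2) * np.eye(m), y)

# the closed-form α should beat random nearby perturbations
gaps = [J(alpha + 1e-2 * rng.normal(size=m)) - J(alpha) for _ in range(20)]
```

Because J is a strictly convex quadratic (for distinct points the RBF Gram matrix is positive definite), every perturbation strictly increases the objective.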
We will sometimes view K as a function of four arguments, K((x1, x′1), (x2, x′2)), and sometimes as a function of two pairs, K(x̄1, x̄2), with x̄1 = (x1, x′1) and x̄2 = (x2, x′2). To induce an RKHS, K must be positive definite in the latter sense. Additionally, we have to ensure that the solution of our regularized risk minimization problem is itself a kernel. To this end, we require that the functions K_{x1,x′1}(x2, x′2) that we get by fixing the first two arguments of K((x1, x′1), (x2, x′2)) be symmetric, positive definite kernels in the remaining two arguments.

Definition 1. Let X be a nonempty set, X̄ = X × X and K : X̄ × X̄ → R with K_x̄(·) = K(x̄, ·) = K(·, x̄). Then K is called a hyperkernel on X if and only if 1. K is positive definite on X̄ and 2. for any x̄ ∈ X̄, K_x̄ is positive definite on X.

Denoting the RKHS of K by K, potential kernels lie in the cone K_pd = { k ∈ K | k is pos. def. }. Unfortunately, there is no simple way of restricting kernel learning algorithms to K_pd. Instead, we will restrict ourselves to the positive quadrant K₊ = { k ∈ K | ⟨k, K_x̄⟩ ≥ 0 ∀ x̄ ∈ X̄ }, which is a subcone of K_pd. The actual learning procedure involved in finding k is very similar to conventional kernel methods, except that now regularized risk minimization is to be performed over all pairs of data points:

k̂ = argmin_{k∈K*} Q(X, Y, k) + (1/2) ‖k‖²_K ,   (1)

where Q is a quality functional describing how well k fits the training data and K* = K₊. Several candidates for Q are described in [8]. If K* has the property that for any S ⊂ X̄ the orthogonal projection of any k ∈ K* to the subspace spanned by { K_x̄ | x̄ ∈ S } remains in K*, then k̂ is expressible as

k̂(x, x′) = Σ_{i,j=1}^m α_ij K_{(x_i,x_j)}(x, x′) = Σ_{i,j=1}^m α_ij K((x_i, x_j), (x, x′))   (2)

for some real coefficients (α_ij)_{i,j}. In other words, we have a hyper-representer theorem. It is easy to see that for K* = K₊ this condition is satisfied provided that K((x1, x′1), (x2, x′2)) ≥ 0 for all x1, x′1, x2, x′2 ∈ X.
Thus, in this case, to solve (1) it is sufficient to optimize the variables (α_ij)_{i,j=1}^m, introducing the additional constraints α_ij ≥ 0 to enforce k̂ ∈ K₊. Finding functions that satisfy Definition 1 and also make sense in terms of regularization theory or practical problem domains is not trivial. Some potential choices are presented in [8]. In this paper we propose some new families of hyperkernels. The key tool we use is the following simple lemma.

Lemma 1. Let {g_z : X → R} be a family of functions indexed by z ∈ Z and let h : Z × Z → R be a kernel. Then

k(x, x′) = ∫∫ g_z(x) h(z, z′) g_{z′}(x′) dz dz′   (3)

is a kernel on X. Furthermore, if h is pointwise positive (h(z, z′) ≥ 0) and { g_z : X × X → R } is a family of pointwise positive kernels, then

K((x1, x′1), (x2, x′2)) = ∫∫ g_{z1}(x1, x′1) h(z1, z2) g_{z2}(x2, x′2) dz1 dz2   (4)

is a hyperkernel on X, and it satisfies K((x1, x′1), (x2, x′2)) ≥ 0 for all x1, x′1, x2, x′2 ∈ X.

3 Convolution hyperkernels

One interpretation of a kernel k(x, x′) is that it quantifies some notion of similarity between points x and x′. For the Gaussian RBF kernel, and heat kernels in general, this similarity can be regarded as induced by a diffusion process in the ambient space [4]. Just as physical substances diffuse in space, the similarity between x and x′ is mediated by intermediate points, in the sense that by virtue of x being similar to some x0 and x0 being similar to x′, x and x′ themselves become similar to each other. This captures the natural transitivity of similarity. Specifically, the normalized Gaussian kernel on Rⁿ of variance 2t = σ²,

k_t(x, x′) = (4πt)^{−n/2} e^{−‖x−x′‖²/(4t)},

satisfies the well known convolution property

k_t(x, x′) = ∫ k_{t/2}(x, x0) k_{t/2}(x0, x′) dx0 .   (5)

Such kernels are by definition homogeneous and isotropic in the ambient space. What we hope for from the hyperkernel formalism is to be able to adapt to the inhomogeneous and anisotropic nature of training data, while retaining the transitivity idea in some form.
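The convolution property (5) is easy to confirm numerically in one dimension (a quick sanity check, not from the paper):

```python
import numpy as np

def k_heat(x, xp, t):
    """Normalized 1-D heat kernel: (4πt)^(−1/2) exp(−(x − x')² / (4t))."""
    return np.exp(-(x - xp) ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)

t, x, xp = 0.8, 0.3, -1.1
x0 = np.linspace(-15, 15, 60_001)          # integration grid for the mediator x0
f = k_heat(x, x0, t / 2) * k_heat(x0, xp, t / 2)
dx = x0[1] - x0[0]
lhs = k_heat(x, xp, t)                     # left-hand side of (5)
rhs = dx * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])   # trapezoid of the r.h.s.
```

The trapezoid rule on a rapidly decaying analytic integrand is essentially exact here, so the two sides agree to near machine precision.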
Hyperkernels achieve this by weighting the integrand of (5) in relation to what is “on the other side” of the hyperkernel. Specifically, we define convolution hyperkernels by setting g_z(x, x′) = r(x, z) r(x′, z) in (4) for some r : X × X → R. By (3), the resulting hyperkernel always satisfies the conditions of Definition 1.

Definition 2. Given functions r : X × X → R and h : X × X → R where h is positive definite, the convolution hyperkernel induced by r and h is

K((x1, x′1), (x2, x′2)) = ∫∫ r(x1, z1) r(x′1, z1) h(z1, z2) r(x2, z2) r(x′2, z2) dz1 dz2 .   (6)

A good way to visualize the structure of convolution hyperkernels is to note that (6) is proportional to the likelihood of the graphical model in the figure to the right. The only requirements on the graphical model are to have the same potential function ψ1 at each of the extremities and to have a positive definite potential function ψ2 at the core.

3.1 The Gaussian hyperkernel

To make the foregoing more concrete we now investigate the case where r(x, x′) and h(z, z′) are Gaussians. To simplify the notation we use the shorthand

⟨x, x′⟩_{σ²} = (2πσ²)^{−n/2} e^{−‖x−x′‖²/(2σ²)}.

The Gaussian hyperkernel on X = Rⁿ is then defined as

K((x1, x′1), (x2, x′2)) = ∫_X ∫_X ⟨x1, z⟩_{σ²} ⟨z, x′1⟩_{σ²} ⟨z, z′⟩_{σ_h²} ⟨x2, z′⟩_{σ²} ⟨z′, x′2⟩_{σ²} dz dz′.   (7)

Completing the square in z we have

⟨x1, z⟩_{σ²} ⟨z, x′1⟩_{σ²} = (2πσ²)^{−n} exp( −(‖z − x1‖² + ‖z − x′1‖²)/(2σ²) )
 = (2πσ²)^{−n} exp( −‖z − (x1 + x′1)/2‖²/σ² − ‖x1 − x′1‖²/(4σ²) ) = ⟨x1, x′1⟩_{2σ²} ⟨z, x̄1⟩_{σ²/2} ,

where x̄i = (xi + x′i)/2. By the convolution property of Gaussians it follows that

K((x1, x′1), (x2, x′2)) = ⟨x1, x′1⟩_{2σ²} ⟨x2, x′2⟩_{2σ²} ∫_X ∫_X ⟨x̄1, z⟩_{σ²/2} ⟨z, z′⟩_{σ_h²} ⟨z′, x̄2⟩_{σ²/2} dz dz′
 = ⟨x1, x′1⟩_{2σ²} ⟨x2, x′2⟩_{2σ²} ⟨x̄1, x̄2⟩_{σ²+σ_h²} .   (8)

It is an important property of the Gaussian hyperkernel that it can be evaluated in closed form. A noteworthy special case is when h(x, x′) = δ(x, x′), corresponding to σ_h² → 0.
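The closed form (8) can be checked against a brute-force evaluation of the double integral (7) in one dimension. The sketch below is illustrative only; it discretizes the integral on a truncated grid:

```python
import numpy as np

def g(a, b, var):
    """⟨a, b⟩_var = (2π var)^(−1/2) exp(−(a − b)² / (2 var)) in one dimension."""
    return np.exp(-(a - b) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

sig2, sigh2 = 1.0, 0.49                  # σ² and σ_h²
x1, x1p, x2, x2p = 0.2, -0.4, 1.0, 1.5

z = np.linspace(-8, 8, 801)
dz = z[1] - z[0]
# integrand of (7) on the (z, z') grid
f1 = g(x1, z, sig2) * g(z, x1p, sig2)    # factors depending on z
f2 = g(x2, z, sig2) * g(z, x2p, sig2)    # factors depending on z'
H = g(z[:, None], z[None, :], sigh2)     # ⟨z, z'⟩_{σ_h²}
numeric = dz * dz * (f1[:, None] * H * f2[None, :]).sum()

xbar1, xbar2 = (x1 + x1p) / 2, (x2 + x2p) / 2
closed = g(x1, x1p, 2 * sig2) * g(x2, x2p, 2 * sig2) * g(xbar1, xbar2, sig2 + sigh2)
```

The grid sum reproduces the closed form to high relative accuracy, since the integrand decays to negligible values well inside the truncation boundary.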
At the opposite extreme, in the limit σ_h² → ∞, the hyperkernel decouples into the product of two RBF kernels. Since the hyperkernel expansion (2) is a sum over hyperkernel evaluations with one pair of arguments fixed, it is worth examining what these functions look like:

K_{x1,x′1}(x2, x′2) ∝ exp( −‖x̄1 − x̄2‖² / (2(σ² + σ_h²)) ) exp( −‖x2 − x′2‖² / (2σ′²) )   (9)

with σ′ = √2 σ. This is really a conventional Gaussian kernel between x2 and x′2 multiplied by a spatially varying Gaussian intensity factor depending on how close the mean of x2 and x′2 is to the mean of the training pair. This can be regarded as a localized Gaussian, and the full kernel (2) will be a sum of such terms with positive weights. As x2 and x′2 move around in X, whichever localized Gaussians are centered close to their mean will dominate the sum. By changing the (α_ij) weights, the kernel learning algorithm can choose k from a highly flexible class of potential kernels. The close relationship of K to the ordinary Gaussian RBF kernel is further borne out by changing coordinates to x̂ = (x + x′)/√2 and x̃ = (x − x′)/√2, which factorizes the hyperkernel in the form

K((x̂1, x̃1), (x̂2, x̃2)) = K̂(x̂1, x̂2) K̃(x̃1, x̃2) = ⟨x̂1, x̂2⟩_{2(σ²+σ_h²)} ⟨x̃1, 0⟩_{σ²} ⟨x̃2, 0⟩_{σ²} .

Omitting details for brevity, the consequences of this include that K = K̂ × K̃, where K̂ is the RKHS of a Gaussian kernel over X, while K̃ is the one-dimensional space generated by ⟨x̃, 0⟩_{σ²}: each k ∈ K can be written as k(x̂, x̃) = k̂(x̂) ⟨x̃, 0⟩_{σ²}. Furthermore, the regularization operator Υ (defined by ⟨k, k′⟩_K = ⟨Υk, Υk′⟩_{L2} [10]) will be

⟨x̃, 0⟩_{σ²} ∫ κ̂(ω) e^{iωx̂} dω  ↦  ⟨x̃, 0⟩_{σ²} ∫ e^{(σ²+σ_h²)‖ω‖²/2} κ̂(ω) e^{iωx̂} dω

where κ̂(ω) is the Fourier transform of k̂(x̂), establishing the same exponential regularization penalty scheme in the Fourier components of k̂ that is familiar from the theory of Gaussian RBF kernels.
In summary, K behaves in (x̂1, x̂2) like a Gaussian kernel with variance 2(σ² + σ_h²), but in x̃ it just effects a one-dimensional feature mapping.

4 Anisotropic hyperkernels

With the hyperkernels so far we can only learn kernels that are a sum of rotationally invariant terms. Consequently, the learnt kernel will have a locally isotropic character. Yet, rescaling of the axes and anisotropic dilations are among the most common forms of variation in naturally occurring data that we would hope to accommodate by learning the kernel.

4.1 The Wishart hyperkernel

We define the Wishart hyperkernel as

K((x1, x′1), (x2, x′2)) = ∫_{Σ⪰0} ∫_X ⟨x1, z⟩_Σ ⟨z, x′1⟩_Σ ⟨x2, z⟩_Σ ⟨z, x′2⟩_Σ IW(Σ; C, r) dz dΣ ,   (10)

where ⟨x, x′⟩_Σ = (2π)^{−n/2} |Σ|^{−1/2} e^{−(x−x′)ᵀΣ⁻¹(x−x′)/2}, and IW(Σ; C, r) is the inverse Wishart distribution

IW(Σ; C, r) = ( |C|^{r/2} / ( Z_{r,n} |Σ|^{(n+r+1)/2} ) ) exp( −tr(Σ⁻¹C)/2 )

over positive definite matrices (denoted Σ ⪰ 0) [6]. Here r is an integer parameter, C is an n × n positive definite parameter matrix and Z_{r,n} = 2^{rn/2} π^{n(n−1)/4} Π_{i=1}^n Γ((r+1−i)/2) is a normalizing factor. The Wishart hyperkernel can be seen as the anisotropic analog of (7) in the limit σ_h² → 0, ⟨z, z′⟩_{σ_h²} → δ(z, z′). Hence, by Lemma 1, it is a valid hyperkernel. In analogy with (8),

K((x1, x′1), (x2, x′2)) = ∫_{Σ⪰0} ⟨x1, x′1⟩_{2Σ} ⟨x2, x′2⟩_{2Σ} ⟨x̄1, x̄2⟩_Σ IW(Σ; C, r) dΣ .   (11)

By using the identity vᵀAv = tr(A(vvᵀ)),

⟨x, x′⟩_Σ IW(Σ; C, r) = ( |C|^{r/2} / ( (2π)^{n/2} Z_{r,n} |Σ|^{(n+r+2)/2} ) ) exp( −tr(Σ⁻¹(C+S))/2 )
 = ( Z_{r+1,n} / ( (2π)^{n/2} Z_{r,n} ) ) ( |C|^{r/2} / |C+S|^{(r+1)/2} ) IW(Σ; C+S, r+1) ,

where S = (x − x′)(x − x′)ᵀ. Cascading this through each of the terms in the integrand of (11) and noting that the integral of a Wishart density is unity, we conclude that

K((x1, x′1), (x2, x′2)) ∝ |C|^{r/2} / |C + S_tot|^{(r+3)/2} ,   (12)

where S_tot = S1 + S2 + S*; S_i = ½(x_i − x′_i)(x_i − x′_i)ᵀ; and S* = (x̄1 − x̄2)(x̄1 − x̄2)ᵀ.
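Expression (12) is cheap to evaluate directly. The sketch below (an illustration, not the paper's code) compares two quadruples with identical pairwise lengths: one in which x1 − x′1, x2 − x′2, and x̄1 − x̄2 are all parallel, and one in which the second pair is rotated 90° about the same midpoint. The parallel configuration concentrates S_tot in a single direction, giving a smaller determinant |C + S_tot| and hence a larger hyperkernel value:

```python
import numpy as np

def wishart_hk(x1, x1p, x2, x2p, C, r):
    """Wishart hyperkernel up to its constant factor, eq. (12)."""
    S1 = 0.5 * np.outer(x1 - x1p, x1 - x1p)
    S2 = 0.5 * np.outer(x2 - x2p, x2 - x2p)
    m1, m2 = (x1 + x1p) / 2, (x2 + x2p) / 2
    Sstar = np.outer(m1 - m2, m1 - m2)
    Stot = S1 + S2 + Sstar
    return np.linalg.det(C) ** (r / 2) / np.linalg.det(C + Stot) ** ((r + 3) / 2)

C, r = np.eye(2), 3
x1, x1p = np.array([0.0, 0.0]), np.array([1.0, 0.0])
# second pair aligned with the first ...
K_par = wishart_hk(x1, x1p, np.array([2.0, 0.0]), np.array([3.0, 0.0]), C, r)
# ... versus rotated 90 degrees about the same midpoint (2.5, 0)
K_orth = wishart_hk(x1, x1p, np.array([2.5, -0.5]), np.array([2.5, 0.5]), C, r)
```

Here S_tot is diag(5, 0) in the parallel case (det(I + S_tot) = 6) and diag(4.5, 0.5) in the rotated case (det = 8.25), so K_par > K_orth as the anisotropy argument predicts.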
We can read off that for given ‖x1 − x′1‖, ‖x2 − x′2‖, and ‖x̄1 − x̄2‖, the hyperkernel will favor quadruples where x1 − x′1, x2 − x′2, and x̄1 − x̄2 are close to parallel to each other and to the largest eigenvector of C. It is not so easy to immediately see the dependence of K on the relative distances between x1, x′1, x2 and x′2. To better expose the qualitative behavior of the Wishart hyperkernel, we fix (x1, x′1), assume that C = cI for some c ∈ R, and use the identity |cI + vvᵀ| = c^{n−1}(c + ‖v‖²) to write

K_{x1,x′1}(x2, x′2) ∝ [ Q_c(2S1, 2S*) / (c + 4‖x̄1 − x̄2‖²)^{1/4} ]^{(r+3)/2} [ Q_c(S1 + S*, S2) / (c + ‖x2 − x′2‖²)^{1/4} ]^{r+3}

where Q_c(A, B) is the affinity

Q_c(A, B) = ( |cI + 2A|^{1/4} · |cI + 2B|^{1/4} ) / |cI + A + B|^{1/2} .

This latter expression is a natural positive definite similarity metric between positive definite matrices, as we can see from the fact that it is the overlap integral (Bhattacharyya kernel)

Q_c(A, B) = ∫ [⟨x, 0⟩_{(cI+2A)⁻¹}]^{1/2} [⟨x, 0⟩_{(cI+2B)⁻¹}]^{1/2} dx

between two zero-centered Gaussian distributions with inverse covariances cI + 2A and cI + 2B, respectively [3].

Figure 1: The first two panes show the separation of ’3’s and ’8’s in the training and testing sets respectively achieved by the Gaussian hyperkernel (the plots show the data plotted by its first two eigenvectors according to the learned kernel k). The right hand pane shows a similar KernelPCA plot but based on a fixed RBF kernel.

5 Experiments

We conducted preliminary experiments with the hyperkernels in relation learning between pairs of datapoints. The idea here is that the learned kernel k naturally induces a distance metric d(x, x′) = √( k(x, x) − 2k(x, x′) + k(x′, x′) ), and in this sense kernel learning is equivalent to learning d.
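Since d(x, x′) is just the Euclidean distance between feature-space images of x and x′, any valid kernel yields a pseudo-metric, triangle inequality included. A quick check of the metric axioms with a fixed RBF kernel standing in for a learned k (an illustrative sketch, not from the paper):

```python
import numpy as np

def induced_distance(K):
    """d(i, j) = sqrt(K_ii − 2 K_ij + K_jj), the feature-space distance."""
    diag = np.diag(K)
    sq = diag[:, None] - 2 * K + diag[None, :]
    return np.sqrt(np.maximum(sq, 0.0))   # clip tiny negatives from round-off

rng = np.random.default_rng(2)
X = rng.normal(size=(15, 2))
sqd = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sqd / 2.0)                    # RBF kernel, a stand-in for a learned k
D = induced_distance(K)

# worst triangle-inequality violation over all triples (≤ 0 up to round-off)
m = len(X)
viol = max(D[i, k] - (D[i, j] + D[j, k])
           for i in range(m) for j in range(m) for k in range(m))
```

This is exactly why the conic (α_ij ≥ 0) variant discussed below matters: only a true Mercer kernel guarantees that this construction is a valid metric.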
Given a labeled dataset, we can learn a kernel which effectively remaps the data in such a way that data points with the same label are close to each other, while those with different labels are far apart. For classification problems (y_i being the class label), a natural choice of quality functional similar to the hinge loss is

Q(X, Y, k) = (1/m²) Σ_{i,j=1}^m | 1 − y_ij k(x_i, x_j) |₊ ,

where |z|₊ = z if z ≥ 0 and |z|₊ = 0 for z < 0, while y_ij = 1 if y_i = y_j (and y_ij = −1 otherwise). The corresponding optimization problem learns k(x, x′) = Σ_{i=1}^m Σ_{j=1}^m α_ij K((x, x′), (x_i, x_j)) + b minimizing

(1/2) Σ_{i,j} Σ_{i′,j′} α_ij α_{i′j′} K((x_i, x_j), (x_{i′}, x_{j′})) + C Σ_{i,j} ξ_ij

subject to the classification constraints

y_ij ( Σ_{i′,j′} α_{i′j′} K((x_{i′}, x_{j′}), (x_i, x_j)) + b ) ≥ 1 − ξ_ij ,   ξ_ij ≥ 0 ,   α_ij ≥ 0

for all pairs of i, j ∈ {1, 2, . . . , m}. In testing we interpret k(x, x′) > 0 to mean that x and x′ are of the same class and k(x, x′) ≤ 0 to mean that they are of different classes. As an illustrative example we learned a kernel (and hence, a metric) between a subset of the NIST handwritten digits¹. The training data consisted of 20 ’3’s and 20 ’8’s randomly rotated by ±45 degrees to make the problem slightly harder. Figure 1 shows that a kernel learned by the above strategy with a Gaussian hyperkernel, with parameters set by cross validation, is extremely good at separating the two classes in training as well as testing. In comparison, in a similar plot for a fixed RBF kernel the ’3’s and ’8’s are totally intermixed. Interpreting this as an information retrieval problem, we can imagine inflating a ball around each data point in the test set and asking how many other data points in this ball are of the same class. The corresponding area under the curve (AUC) in the original space is just 0.5575, while in the hyperkernel space it is 0.7341.

¹Provided at http://yann.lecun.com/exdb/mnist/ courtesy of Yann LeCun and Corinna Cortes.
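The quality functional Q is straightforward to compute for any candidate kernel matrix. The sketch below (illustrative; it assumes y_ij ∈ {+1, −1} as described above) shows that Q vanishes for a kernel that already outputs the pairwise labels with margin one, and is positive otherwise:

```python
import numpy as np

def pairwise_labels(y):
    """y_ij = +1 if y_i == y_j, −1 otherwise."""
    y = np.asarray(y)
    return np.where(y[:, None] == y[None, :], 1.0, -1.0)

def quality(Kmat, y):
    """Hinge-style quality functional Q = (1/m²) Σ_{i,j} |1 − y_ij k(x_i, x_j)|₊."""
    Y = pairwise_labels(y)
    return np.maximum(1.0 - Y * Kmat, 0.0).mean()

y = [0, 0, 1, 1, 2]
Y = pairwise_labels(y)
Q_perfect = quality(Y, y)   # a kernel equal to the pairwise labels incurs zero loss
rng = np.random.default_rng(3)
Q_random = quality(rng.normal(size=Y.shape), y)
```

In the actual method, of course, k is constrained to the hyperkernel expansion (2) and Q enters the regularized objective (1) rather than being evaluated on an arbitrary matrix.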
Figure 2: Test area under the curve (AUC) for Olivetti face recognition under varying σ and σh (SVM baseline vs. linear and conic hyperkernels, one pane per σh ∈ {0, 1σ, 2σ, 4σ, 6σ, 10σ}).

We ran a similar experiment but with multiple classes on the Olivetti faces dataset, which consists of 92 × 112 pixel normalized gray-scale images of 30 individuals in 10 different poses. Here we also experimented with dropping the α_ij ≥ 0 constraints, which breaks the positive definiteness of k, but might still give a reasonable similarity measure. The first case we call “conic hyperkernels”, whereas the second are just “linear hyperkernels”. Both involve solving a quadratic program over 2m² + 1 variables. Finally, as a baseline, we trained an SVM over pairs of datapoints to predict y_ij, representing (x_i, x_j) with a concatenated feature vector [x_i, x_j] and using a Gaussian RBF between these concatenations. The results on the Olivetti dataset are summarized in Figure 2. We trained the system with m = 20 faces and considered all pairs of the training data-points (i.e. 400 constraints) to find a kernel that predicted the labeling matrix. When speed becomes an issue it often suffices to work with a subsample of the binary entries in the m × m label matrix and thus avoid having m² constraints.
Also, we only need to consider half the entries due to symmetry. Using the learned kernel, we then test on 100 unseen faces and predict all their pairwise kernel evaluations, in other words, 10⁴ predicted pair-wise labelings. Test error rates are averaged over 10 folds of the data. For both the baseline Gaussian RBF and the Gaussian hyperkernels we varied the σ parameter from 0.1 to 0.6. For the Gaussian hyperkernel we also varied σh from 0 to 10σ. We used a value of C = 10 for all experiments and for all algorithms. The value of C had very little effect on the testing accuracy. Using a conic hyperkernel combination did best in labeling new faces. The advantage over SVMs is dramatic. The support vector machine can only achieve an AUC of less than 0.75 while the Gaussian hyperkernel methods achieve an AUC of almost 0.9 with only T = 20 training examples. While the difference between the conic and linear hyperkernel methods is harder to see, across all settings of σ and σh, the conic combination outperformed the linear combination over 92% of the time. The conic hyperkernel combination is also the only method of the three that guarantees a true Mercer kernel as an output, which can then be converted into a valid metric. The average runtime for the three methods was comparable. The SVM took 2.08s ± 0.18s, the linear hyperkernel took 2.75s ± 0.10s and the conic hyperkernel took 7.63s ± 0.50s to train on m = 20 faces with m² constraints. We implemented quadratic programming using the MOSEK optimization package on a single CPU workstation.

6 Conclusions

The main barrier to hyperkernels becoming more popular is their high computational demands (out of the box algorithms run in O(m⁶) time as opposed to O(m³) in regular learning). In certain metric learning and on-line settings however this need not be forbidding, and is compensated for by the elegance and generality of the framework.
The Gaussian and Wishart hyperkernels presented in this paper are in a sense canonical, with intuitively appealing interpretations. In the case of the Gaussian hyperkernel we even have a natural regularization scheme. Preliminary experiments show that these new hyperkernels can capture the inherent structure of some input spaces. We hope that their introduction will give a boost to the whole hyperkernels field.

Acknowledgements

The authors wish to thank Zoubin Ghahramani, Alex Smola and Cheng Soon Ong for discussions related to this work. This work was supported in part by National Science Foundation grants IIS-0347499, CCR-0312690 and IIS-0093302.

References

[1] N. Cristianini, J. Shawe-Taylor, A. Elisseeff, and J. Kandola. On kernel-target alignment. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 367–373, Cambridge, MA, 2002. MIT Press. [2] G. S. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. J. Math. Anal. Applic., 33:82–95, 1971. [3] R. Kondor and T. Jebara. A kernel between sets of vectors. In Machine Learning: Tenth International Conference, ICML 2003, 2003. [4] R. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input spaces. In Machine Learning: Proceedings of the Nineteenth International Conference (ICML ’02), 2002. [5] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semi-definite programming. Journal of Machine Learning Research, 5:27–72, 2004. [6] T. P. Minka. Inferring a Gaussian distribution, 2001. Tutorial paper available at http://www.stat.cmu.edu/~minka/papers/learning.html. [7] C. S. Ong and A. J. Smola. Machine learning using hyperkernels. In Proceedings of the International Conference on Machine Learning, 2003. [8] Cheng Soon Ong, Alexander J. Smola, and Robert C. Williamson. Hyperkernels. In S. Thrun, S. Becker, and K.
Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 478–485. MIT Press, Cambridge, MA, 2003. [9] Cheng Soon Ong, Alexander J. Smola, and Robert C. Williamson. Learning the kernel with hyperkernels. Submitted to the Journal of Machine Learning Research, 2003. [10] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2002.
|
2006
|
143
|
2,970
|
A Complexity-Distortion Approach to Joint Pattern Alignment

Andrea Vedaldi Stefano Soatto
Department of Computer Science
University of California at Los Angeles
Los Angeles, CA 90035
{vedaldi,soatto}@cs.ucla.edu

Abstract

Image Congealing (IC) is a non-parametric method for the joint alignment of a collection of images affected by systematic and unwanted deformations. The method attempts to undo the deformations by minimizing a measure of complexity of the image ensemble, such as the averaged per-pixel entropy. This enables alignment without an explicit model of the aligned dataset as required by other methods (e.g. transformed component analysis). While IC is simple and general, it may introduce degenerate solutions when the transformations allow minimizing the complexity of the data by collapsing them to a constant. Such solutions need to be explicitly removed by regularization. In this paper we propose an alternative formulation which solves this regularization issue on a more principled ground. We make the simple observation that alignment should simplify the data while preserving the useful information carried by them. Therefore we trade off fidelity and complexity of the aligned ensemble rather than minimizing the complexity alone. This eliminates the need for an explicit regularization of the transformations, and has a number of other useful properties such as noise suppression. We show the modeling and computational benefits of the approach on some of the problems on which IC has been demonstrated.

1 Introduction

Joint pattern alignment attempts to remove from an ensemble of patterns the effect of nuisance transformations of a systematic nature. The aligned patterns then have a simpler structure and can be processed more easily. Joint pattern alignment is not the same problem as aligning a pattern to another; instead, all the patterns are projected to a common “reference” (usually a subspace) which is unknown and needs to be discovered in the process.
Joint pattern alignment is useful in many applications and has been addressed by several authors. Here we only review the methods that are most related to the present work. Transform Component Analysis [7] (TCA) explicitly models the aligned ensemble as a Gaussian linear subspace of patterns. In fact, TCA is a direct extension of Probabilistic Principal Component Analysis (PPCA) [10]: patterns are generated as in standard PPCA, and additional hidden layers model the nuisance deformations. Expectation-maximization is used to learn the model from the data, which results in their alignment. Unfortunately the method requires the space of transformations to be quantized, and it is not clear how well the approach could scale to complex scenarios. Image Congealing (IC) [9] takes a different perspective. The idea is that, as the nuisance deformations should increase the complexity of the data, one should be able to identify and undo them by contrasting this effect. Thus IC transforms the data to minimize an appropriate measure of the “complexity” of the ensemble. With respect to TCA, IC results in a lighter formulation which enables addressing more complex transformations and makes fewer assumptions on the aligned ensemble. An issue with the standard formulation of IC is that it does not require the aligned data to be a faithful representation of the original data. Thus simplifying the data might not only remove the nuisance factors, but also the useful information carried by the patterns. For example, if entropy is used to measure complexity, a typical degenerate solution is obtained by mapping all the data to a constant, which results in minimum (null) entropy. Such solutions are avoided by explicitly regularizing the transformations, in ways that are however rather arbitrary [9]. One should instead search for an optimal compromise between the complexity of the simplified data and the preservation of the useful information (Sect. 2).
This approach is not only more direct, but also conceptually more straightforward, as no ad hoc regularization needs to be introduced. We illustrate some of its relationships with rate-distortion theory (Sect. 2.1) and information bottleneck [2] (Sect. 2.2), and we contrast it to IC (Sect. 2.4). In Sect. 3 we specialize our model to the problem of image alignment as done in [9]. For this case, we show that the new model has the same computational complexity as IC (Sect. 3.1). We also show that a Gauss-Newton based algorithm is possible, which is useful to converge quickly during the final stage of the optimization (Sect. 3.2; in a similar context a descent based algorithm was introduced in [1]). In Sect. 4 we illustrate the practical behavior of the algorithm, showing how the complexity-distortion compromise affects the final solution. In particular, our results compare favorably with the ones of [9], with added simplicity and other benefits, such as noise suppression.

2 Problem formulation

We formulate joint pattern alignment as the problem of finding a deformed pattern ensemble which is simpler but faithful to the original data. This is similar to a lossy compression problem [5, 4, 3] and is in fact equivalent to it in some cases (Sect. 2.1). A pattern (or data) ensemble x ∈ X is a random variable with density p(x). Similarly, an aligned ensemble or alignment y ∈ X of the ensemble x is another variable y with conditional statistic p(y|x). We seek an alignment that is “simpler” than x but “faithful” to x. The complexity R of the alignment y is measured by an operator R = H(y) such as, for example, the entropy of the random variable y (but we will see other options). The cost of representing x by y is expressed by a distortion function d(x, y) ∈ R₊, and the faithfulness of the alignment y is quantified as the expected distortion D = E[d(x, y)]. Consider a class W of deformations w : X → X acting on the patterns X.
In order for the alignment y to factor out W, we consider a distortion function which is invariant to the action of W; in particular, given a base distortion d0(x, y), we consider the deformation-invariant distortion

d(x, y) = min_{w ∈ W} d0(x, w(y)).

Thus an aligned pattern y is faithful to a deformed pattern x if it is possible to map y to x by a nuisance deformation w. Finding the best alignment y boils down to optimizing p(y|x) for complexity and distortion. However, this requires trading off complexity and distortion, and there is no unique way of doing so. The distortion-complexity function D(R) gives the best distortion D that can be achieved by alignments of complexity R. All such distortion-optimal alignments are equally good in principle, and it is the application that poses an upper bound on the acceptable distortion. D(R) can be computed by optimizing the distortion D w.r.t. p(y|x) while keeping the complexity R constant. However, it is usually easier to optimize the Lagrangian

min_{p(y|x)} D + λR,   (1)

whose optimum is attained where the derivative of D(R) equals −λ. Then, by varying λ, one spans the graph of D(R) and finds all the optimal alignments for the given complexities.

2.1 Relation to rate-distortion and entropy-constrained vector quantization

If one chooses the mutual information I(x, y) as the complexity measure H(y) in eq. (1), then (1) becomes a rate-distortion problem and the function D(R) a rate-distortion function [5]. The formulation is valid both for discrete and continuous spaces X, but yields a mapping p(y|x) that is genuinely stochastic. Therefore the alignment y of a pattern x is in general not unique. This is because in rate-distortion theory y is an auxiliary variable used to derive a deterministic code for long sequences (x1, . . . , xn) of data, not for data x in isolation. In contrast, entropy-constrained vector quantization [4, 3] assumes that y is finite (i.e.
that it spans a finite subset of X) and that it is functionally determined by x (i.e. y = y(x)). It then measures the complexity of y as the (discrete) entropy H(y). This is analogous to a rate-distortion problem, except that one searches for a "single letter" optimal coding y of x rather than an optimal coding for long sequences (x1, . . . , xn). Unlike rate-distortion, however, the aligned ensemble y is discrete even if the ensemble x is continuous.

2.2 Relation to information bottleneck

Information Bottleneck (IB) [2] is a special rate-distortion problem in which one compresses a variable x while preserving the information carried by x about another variable z, representing the task of interest. In this sense IB is similar to the idea proposed here. By designing an appropriate distribution p(x, z) it may also be possible to obtain an alignment effect similar to the one we seek here. For example, if W is a group of transformations, one may define z = z(x) = {w(x) : w ∈ W}, for which z is indifferent exactly to the deformations w of x.

2.3 Alternative measures of complexity

Instead of the entropy H(y) or the mutual information I(x, y), we can use alternative measures of complexity that yield more convenient computations. An example is the averaged per-pixel entropy introduced by IC [9] and discussed in Sect. 3. Generalizing this idea, we assume that the aligned data y depend functionally on the patterns x (i.e. y = y(x)) and we express the complexity of y as the total entropy of lower dimensional projections φ1(y), . . . , φM(y), φi : X → R^k, of the ensemble. Distortion and entropies are estimated empirically and non-parametrically. Concretely, given an ensemble x1, . . . , xK ∈ X of patterns, we recover transformations w1, . . . , wK ∈ W and aligned patterns y1, . . . , yK ∈ X that minimize

(1/K) Σ_{i=1}^K d(xi, wi(yi)) − λ Σ_{j=1}^M (1/K) Σ_{i=1}^K log pj(φj(yi)),

where the densities pj(φj(y)) are estimated from the samples φj(y1), . . .
, φj(yK) by histogramming (discrete case) or by a Parzen estimator [6] with a Gaussian kernel gσ(y) of variance σ (continuous case¹), i.e.

pj(φj(y)) = (1/N) Σ_{i=1}^N gσ(φj(y) − φj(yi)).

2.4 Comparison to image congealing

In IC [9], given data x1, . . . , xK ∈ X, one looks for transformations v : X → X, x ↦ y, such that the density p(y) estimated from the samples y1 = v1(x1), . . . , yK = vK(xK) has minimum entropy. If the transformations make this possible, one can minimize the entropy by mapping all the patterns to a constant; to avoid this, one considers the regularized cost function

H(y) + α Σ_i R(vi),   (2)

where R(v) is a term penalizing unacceptable deformations. [¹The Parzen estimator implies that the differential entropy of the distributions pj is always lower bounded by the entropy of the kernel gσ. This prevents the differential entropy from taking arbitrarily large negative values.] Compared to IC, in our formulation:

▶ The distortion term E[d(x, y)] substitutes the arbitrary regularization R(v).
▶ The aligned patterns y are not obtained by deforming the patterns x; instead, y is obtained as a simplification of x within an acceptable level of distortion. This fact induces a noise-cancellation effect (Sect. 4).
▶ The transformations w can be rather general, even non-invertible. IC can use complex transformations too, but most likely these would need to be heavily regularized, as they would tend to annihilate the patterns.

3 Application to joint image alignment

We apply our model to the problem of removing a family of geometric distortions from images. This is the same application for which IC [9] was proposed in the first place. We are given a set I1(x), . . . , IK(x) of digital images (pattern ensemble) defined on a regular lattice x ∈ Λ ⊂ R² with range in [0, 1]. The images may be affected by parametric transformations wi(·) = w(·; qi) : R² → R², so that

Ii(x) = Ti(wi x) + ni(x),   x ∈ Λ,

for templates (aligned ensemble²) Ti(y), y ∈ Λ, and residuals ni(x).
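The generative model just described, Ii(x) = Ti(wi x) + ni(x) with bilinear interpolation and zero-padding outside the lattice, can be sketched as follows (a hypothetical minimal implementation, not the authors' code; `warp_bilinear`, the row/column coordinate convention, and the toy template are our own choices):

```python
import numpy as np

def warp_bilinear(template, L, l):
    """Evaluate T(w x) on the lattice, with w(x) = L x + l, using
    bilinear interpolation and zero-padding outside the lattice."""
    H, W = template.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    pts = np.stack([ys.ravel(), xs.ravel()]).astype(float)  # lattice coords
    wy, wx = (L @ pts) + np.asarray(l)[:, None]             # warped coords w(x)
    y0, x0 = np.floor(wy).astype(int), np.floor(wx).astype(int)
    ay, ax = wy - y0, wx - x0                               # interpolation weights
    out = np.zeros(H * W)
    for dy, dx, wgt in [(0, 0, (1 - ay) * (1 - ax)), (0, 1, (1 - ay) * ax),
                        (1, 0, ay * (1 - ax)), (1, 1, ay * ax)]:
        yy, xx = y0 + dy, x0 + dx
        inside = (yy >= 0) & (yy < H) & (xx >= 0) & (xx < W)  # zero-padding
        out[inside] += wgt[inside] * template[yy[inside], xx[inside]]
    return out.reshape(H, W)

T = np.zeros((8, 8)); T[3:5, 3:5] = 1.0       # toy template
I = warp_bilinear(T, np.eye(2), [1.0, 0.0])   # sample T at x + (1, 0)
```

Each output pixel is a fixed linear combination of template pixels, which is exactly the row vector A(x; w) of mixing coefficients introduced in the text.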
Here qi is the vector of parameters of the transformation wi (for example, wi might be a 2-D affine transformation y = Lx + l and qi the vector q = [L11 L21 L12 L22 l1 l2]). The templates Ti(y), y ∈ Λ, are digital images themselves. In order to define Ti(wx) when wx ∉ Λ, bilinear interpolation and zero-padding are used. Therefore the symbol Ti(wi x) really denotes the quantity

T(wi x) = A(x; wi) Ti,   x ∈ Λ,

where A(x; wi) is a row vector of mixing coefficients determined by wi and the interpolation method being used, and Ti is the vector obtained by stacking the pixels of the template Ti(y), y ∈ Λ. We will also use the notation wi ◦ Ti = A(wi) Ti, where the left-hand side is the stacking of the warped template T(wi x), x ∈ Λ, and A(wi) is the matrix whose rows are the vectors A(x; wi) for x ∈ Λ. The distortion is defined to be the squared l2 norm of the residual,

d(Ii, w ◦ Ti) = Σ_{x∈Λ} (Ii(x) − Ti(wi x))².

The complexity of the aligned ensemble T(y), y ∈ Λ, is computed as in Sect. 2.3 by projecting on the image pixels and averaging their entropies (this is equivalent to assuming that the pixels are statistically independent). For each pixel y ∈ Λ a density p(T(y) = t), t ∈ [0, 1], is estimated non-parametrically from the data {T1(y), . . . , TK(y)} (we use a Parzen window as explained in Sect. 2.3). The complexity of a pixel is thus

H(T(y)) = −(1/K) Σ_{i=1}^K log p(Ti(y)).

Finally, the overall cost function is obtained by summing over all pixels and averaging over all images:

L(w1, . . . , wK, T1, . . . , TK) = (1/K) Σ_{i=1}^K Σ_{x∈Λ} (Ii(x) − Ti(wi x))² − λ (1/K) Σ_{i=1}^K Σ_{y∈Λ} log p(Ti(y)).   (3)

3.1 Basic search

In this section we show how the optimization algorithm from [9] can be adapted to work with the new formulation. This algorithm is a simple coordinate-wise search over the dimensions of the search space: [²With respect to Sect. 2, the patterns xi are now the images Ii and the alignments y are the templates Ti.]

1: Estimate the probabilities p(T(y)), y ∈ Λ, from the templates {Ti(y) : i = 1, . . .
, K}.
2: For each pattern i = 1, . . . , K and for each component qji of the parameter vector qi, try a few values of qji. For each value re-compute the cost function (3) and keep the best.
3: Repeat, refining the sampling step of the parameters.

This algorithm is appropriate if the dimensionality of the parameter vector q is reasonably small. Here we consider affine transformations for the sake of illustration, so that q is six-dimensional. In (1.) and (2.), estimating the probabilities p(Ti(y)) and the cost function L(w1, . . . , wK, T1, . . . , TK) requires knowing Ti(y). As a first-order approximation (the final result will be refined by Gauss-Newton, as explained in the next section), we bypass this problem and simply set Ti = wi⁻¹ ◦ Ii, exploiting the fact that the affine transformations wi are invertible³. In the end, all we do is substitute the regularization term Σ_i R(vi) of [9] with the expected distortion

(1/K) Σ_{i=1}^K Σ_{x∈Λ} (Ii(x) − wi ◦ (wi⁻¹ ◦ Ii)(x))² = (1/K) Σ_{i=1}^K Σ_{x∈Λ} (Ii(x) − A(x; wi) A(wi⁻¹) Ii)².

Note that warping and un-warping the image Ii is a lossy operation even if wi is bijective, because the transformation, applied to digital images, introduces aliasing. Thus the new algorithm simply avoids those transformations wi that would introduce an excessive loss of fidelity.

3.2 Gauss-Newton search

With respect to IC, where only the transformations w1, . . . , wK are estimated, here we compute the templates T1, . . . , TK as well. While this may not be so important when only a coarse approximation to the solution is needed (for which the algorithm of Sect. 3.1 can be used), it must be taken into account to obtain refined results. This can be done (with a bit of numerical care) by Gauss-Newton (GN). Applying Gauss-Newton requires taking derivatives with respect to the pixel values Ti(y). We exploit the fact that the variables T(y) are continuous, as opposed to [9].
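The basic search of Sect. 3.1 is a generic coordinate-wise sampling scheme, which might be sketched as follows (our own simplified version; the helper name, the candidate grid, and the toy quadratic cost standing in for the objective (3) are assumptions):

```python
import numpy as np

def coordinate_search(cost, q0, step=0.5, n_values=5, n_rounds=3, shrink=0.5):
    """Coordinate-wise search in the spirit of Sect. 3.1: for each
    component of q, try a few values around the current one, keep the
    best, and repeat with a refined sampling step."""
    q = np.asarray(q0, dtype=float)
    for _ in range(n_rounds):
        for j in range(q.size):
            candidates = q[j] + step * np.linspace(-1.0, 1.0, n_values)
            trials = []
            for c in candidates:
                qc = q.copy(); qc[j] = c
                trials.append((cost(qc), c))
            q[j] = min(trials)[1]   # keep the best value for q_j
        step *= shrink              # refine the sampling step
    return q

# toy quadratic cost standing in for the full objective (3)
q_star = np.array([0.3, -0.7])
q = coordinate_search(lambda q: float(np.sum((q - q_star) ** 2)), np.zeros(2))
```

In practice the cost would be the congealing objective (3) evaluated after warping, and q would collect the six affine parameters of one pattern.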
We still process a single image at a time, iterating several times across the whole ensemble {I1(x), . . . , IK(x)}. For a given image Ii we update the warp parameters qi and the template Ti simultaneously. We exploit the fact that, as the number K of images is usually large, the density p(T(y)) does not change significantly when only one of the templates Ti is changed. Therefore p(T(y)) can be assumed constant in the computation of the gradient and the Hessian of the cost function (3). The gradient is given by

∂L/∂qiᵀ = Σ_{x∈Λ} 2 Δi(x) ∇Ti(wi x) ∂wi/∂qiᵀ(x),
∂L/∂Ti(y) = Σ_{x∈Λ} 2 Δi(x) (A(x; wi) δy) − Σ_{y∈Λ} ṗ(Ti(y))/p(Ti(y)),

where Δi(x) = Ti(wi x) − Ii(x) is the reconstruction residual, A(x; wi) is the linear map introduced in Sect. 3, and δy = δ(z − y) is the 2-D discrete delta function centered on y, encoded as a vector. The approximated Hessian of the cost function (3) can be obtained as follows. First, we use the Gauss-Newton approximation for the derivative w.r.t. the transformation parameters qi:

∂²L/∂qi∂qiᵀ ≈ Σ_{x∈Λ} 2 ∂wiᵀ/∂qi(x) ∇ᵀTi(wi x) ∇Ti(wi x) ∂wi/∂qiᵀ(x).

We then have

∂²L/∂Ti(y)² = Σ_{x∈Λ} 2 (A(x; wi) δy)² − Σ_{y∈Λ} [p̈(Ti(y)) p(Ti(y)) − ṗ(Ti(y))²]/p(Ti(y))²,
∂²L/∂Ti(y)∂Ti(z) = Σ_{x∈Λ} 2 (A(x; wi) δy)(A(x; wi) δz),
∂²L/∂Ti(y)∂qᵀ = Σ_{x∈Λ} 2 (A(x; wi) δy) ∇Ti(wi x) ∂wi/∂qiᵀ + Σ_{x∈Λ} 2 Δi(x) A(x; wi) [D1 δy  D2 δy] ∂wi/∂qiᵀ.

[³Our criterion implicitly avoids non-invertible affine transformations, as they yield highly distorted codes.]

Figure 1: Toy example. Top left. We distort the patterns by applying translations drawn uniformly from the 8-shaped region (the center corresponds to the null translation). Top. We show the gradient-based algorithm as it gradually aligns the patterns by reducing the complexity of the alignment y. Dark areas correspond to high values of the density of the alignment; we also superimpose the trajectory of one of the patterns.
Unfortunately the gradient-based algorithm, being a local technique, gets trapped in two local modes (the modes can, however, be fused in a post-processing stage). Bottom. The basic algorithm completely eliminates the effect of the nuisance transformations, doing a better job of avoiding local minima. Although for this simple problem the basic search is more effective, in more difficult scenarios the extra complexity of the Gauss-Newton search pays off (see Sect. 4).

Here D1 is the discrete linear operator used to compute the derivative of Ti(y) along its first dimension and D2 the analogous operator for the second dimension. The second term of the last equation gives a very small contribution and can be dropped. The equations are all straightforward and result in a linear system

δθᵀ ∂²L/∂θ∂θᵀ = −∂L/∂θᵀ,

where the vector θᵀ = [qᵀ T(y1) . . . T(yn)] has size on the order of the number of pixels of the template T(y), y ∈ Λ. While this system is large, it is also extremely sparse and can be solved rather efficiently by standard methods [8].

4 Experiments

The first experiment (Fig. 1) is a toy problem illustrating our method. We collect K patterns xi, i = 1, . . . , K, which are arrays of M 2-D points xi = (x1i, . . . , xMi). These points are generated by drawing M i.i.d. samples from a 2-D Gaussian distribution and adding a random translation wi ∈ R² to them. The distribution of the translations wi is generic (in the example wi is drawn uniformly from an 8-shaped region of the plane): this is not a problem, as we do not need to make any particular assumptions on w beyond that it is a translation. The distortion d(xi, yi) is simply the sum of squared Euclidean distances Σ_{j=1}^M ∥yji + wi − xji∥² between the patterns xi and the transformed codes wi(yi) = (y1i + wi, . . . , yMi + wi). The distribution p(yi) of the codes is assumed to factorize as p(yi) = Π_{j=1}^M p(yji), where the p(yji) are identical densities estimated by a Parzen window from all the available samples {yji, j = 1, . . .
, M, i = 1, . . . , K}.

In the second experiment (Fig. 2) we align hand-written digits extracted from the NIST Special Database 19. The results (Fig. 3) should be compared to those of [9]: they are of analogous quality, but they were achieved without regularizing the class of admissible transformations. Despite this, we did not observe any of the aligned patterns collapse. In Fig. 4 we show the effect of choosing different values of the parameter λ in the cost function (3). As λ is increased, the alignment complexity is reduced and the fidelity of the alignment is degraded. By an appropriate choice of λ, the alignment can be regarded as a "restoration" or "canonization" of the pattern which abstracts from the details of the specific instance.

[Figure 2 panels, for each algorithm: expected value per pixel, entropy per pixel (2-D map and 3-D plot), distortion-rate diagram, p(T(y)) along the middle scan-line, and distortion per pixel.]

Figure 2: Basic vs. GN image alignment algorithms. Left. We show the results of applying the basic image alignment algorithm of Sect. 3.1. The patterns are zeroes from the NIST Special Database 19.
We show, in writing order: the expected value E[T(y)]; the per-pixel entropy H(T(y)) (it can be negative as it is differential); a 3-D plot of the same function H(T(y)); the distortion-complexity diagram as the algorithm minimizes the function D + λR (in green we show some lines of constant cost); the probability p(T(y) = l) as l ∈ [0, 1] and y varies along the middle scan-line; and the per-pixel distortion D(x) = E[(I(x) − T(wx))²]. Right. We demonstrate the GN algorithm of Sect. 3.2. The algorithm achieves a significantly better solution in terms of the cost function (3). Moreover, GN converges in only two sweeps of the dataset, while the basic algorithm after 10 sweeps is still slowly moving. This is due to the fact that GN selects both the best search direction and the step size, resulting in a more efficient search strategy.

Figure 3: Aligned patterns. Left. A few patterns from NIST Special Database 19. Middle. Basic algorithm: results are very similar to [9], except that no regularization on the transformations is used. Right. GN algorithm: patterns achieve a better alignment due to the more efficient search strategy; they also appear much more "regular" due to the noise-cancellation effect discussed in Fig. 4. Bottom. More examples of patterns before and after GN alignment.

5 Conclusions

IC is a useful algorithm for joint pattern alignment, both robust and flexible. In this paper we showed that the original formulation can be improved by realizing that alignment should result in a simplified representation of the useful information carried by the patterns, rather than a simplification of the patterns themselves. This leads to a formulation that does not require inventing regularization terms to prevent degenerate solutions. We also showed that Gauss-Newton can be successfully applied to this problem in the case of image alignment, and that it is in some respects more effective than the original IC algorithm.
Figure 4: Distortion-complexity balance. We illustrate the effect of varying the parameter λ in (3). (a) Estimated distortion-complexity function D(R). The green (dashed) lines have slope equal to λ and should be tangent to D(R) (Sect. 2). (b) We show the alignment T(wi x) of eight patterns (rows) as λ is increased (columns). In order to reduce the entropy of the alignment, the algorithm "forgets" specific details of each glyph. (c) The same as (b), but aligned.

Acknowledgments

We would like to acknowledge the support of AFOSR FA9550-06-1-0138 and ONR N00014-03-1-0850.

References

[1] P. Ahammad, C. L. Harmon, A. Hammonds, S. S. Sastry, and G. M. Rubin. Joint nonparametric alignment for analyzing spatial gene expression patterns in Drosophila imaginal discs. In Proc. CVPR, 2005.
[2] K. Branson. The information bottleneck method. Lecture slides, 2003.
[3] J. Buhmann and H. Kühnel. Vector quantization with complexity costs. IEEE Trans. on Information Theory, 39, 1993.
[4] P. A. Chou, T. Lookabaugh, and R. M. Gray. Entropy-constrained vector quantization. IEEE Trans. on Acoustics, Speech, and Signal Processing, 37(1), 1989.
[5] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 2006.
[6] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley-Interscience, 2001.
[7] B. J. Frey and N. Jojic. Transformation-invariant clustering and dimensionality reduction using EM. PAMI, 2000.
[8] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, 1996.
[9] E. G. Learned-Miller. Data driven image models through continuous joint alignment. PAMI, 28(2), 2006.
[10] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3), 1999.
AdaBoost is Consistent

Peter L. Bartlett
Department of Statistics and Computer Science Division, University of California, Berkeley
bartlett@stat.berkeley.edu

Mikhail Traskin
Department of Statistics, University of California, Berkeley
mtraskin@stat.berkeley.edu

Abstract

The risk, or probability of error, of the classifier produced by the AdaBoost algorithm is investigated. In particular, we consider the stopping strategy to be used in AdaBoost to achieve universal consistency. We show that, provided AdaBoost is stopped after n^ν iterations, for sample size n and ν < 1, the sequence of risks of the classifiers it produces approaches the Bayes risk if the Bayes risk L∗ > 0.

1 Introduction

Boosting algorithms are an important recent development in classification. These algorithms belong to a group of voting methods, for example [1, 2, 3], that produce a classifier as a linear combination of base or "weak" classifiers. While empirical studies show that boosting is one of the best off-the-shelf classification algorithms (see [3]), theoretical results do not give a complete explanation of its effectiveness. Breiman [4] showed that, under some assumptions on the underlying distribution, "population boosting" converges to the Bayes risk as the number of iterations goes to infinity. Since the population version assumes infinite sample size, this does not imply a similar result for AdaBoost, especially given the results of Jiang [5] showing that there are examples where AdaBoost has prediction error that is asymptotically suboptimal at t = ∞ (t is the number of iterations). Several authors have shown that modified versions of AdaBoost are consistent. These modifications include restricting the l1-norm of the combined classifier [6, 7] and restricting the step size of the algorithm [8]. Jiang [9] analyzes the unmodified boosting algorithm and proves a process consistency property, under certain assumptions.
Process consistency means that there exists a sequence (tn) such that if AdaBoost with sample size n is stopped after tn iterations, its risk approaches the Bayes risk. However, Jiang also imposes strong conditions on the underlying distribution: the distribution of X (the predictor) has to be absolutely continuous with respect to Lebesgue measure, and the function FB(X) = (1/2) ln [P(Y = 1|X)/P(Y = −1|X)] has to be continuous on X. Moreover, Jiang's proof is not constructive and gives no hint as to when the algorithm should be stopped. Bickel, Ritov and Zakai [10] prove a consistency result for AdaBoost under the assumption that the probability distribution is such that the steps taken by the algorithm are not too large. We would like to obtain a simple stopping rule that guarantees consistency and does not require any modification to the algorithm. This paper provides a constructive answer to all of the issues mentioned:

1. We consider AdaBoost itself (not a modification).
2. We provide a simple stopping rule: the number of iterations t is a fixed function of the sample size n.
3. We assume only that the class of base classifiers has finite VC-dimension, and that the span of this class is sufficiently rich. Both assumptions are clearly necessary.

2 Setup and notation

Here we describe the AdaBoost procedure, formulated as a coordinate descent algorithm, and introduce definitions and notation. We consider a binary classification problem. We are given X, the measurable (feature) space, and Y = {−1, 1}, the set of (binary) labels. We are given a sample Sn = {(Xi, Yi)}_{i=1}^n of i.i.d. observations distributed as the random variable (X, Y) ∼ P, where P is an unknown distribution. Our goal is to construct a classifier gn : X → Y based on this sample. The quality of the classifier gn is measured by the misclassification probability L(gn) = P(gn(X) ≠ Y | Sn).
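As a quick numeric sanity check of these quantities (our own toy distribution; η, the sample size, and the candidate classifier are all our choices), one can estimate the misclassification probability L(g) = P(g(X) ≠ Y) and the Bayes risk L∗ = E[min{η(X), 1 − η(X)}] empirically:

```python
import numpy as np

rng = np.random.default_rng(1)

def eta(x):
    """Toy conditional probability P(Y = 1 | X = x) on X = [0, 1]."""
    return np.where(x > 0.5, 0.8, 0.2)

n = 20000
X = rng.uniform(size=n)
Y = np.where(rng.uniform(size=n) < eta(X), 1, -1)

g = lambda x: np.where(x > 0.5, 1, -1)                   # candidate classifier
L_hat = float(np.mean(g(X) != Y))                        # empirical P(g(X) != Y)
L_star = float(np.mean(np.minimum(eta(X), 1 - eta(X))))  # Bayes risk estimate
```

Here the candidate classifier happens to implement the Bayes rule for this η, so its empirical risk concentrates around L∗ = 0.2.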
Of course we want this probability to be as small as possible and close to the Bayes risk

L∗ = inf_g L(g) = E(min{η(X), 1 − η(X)}),

where the infimum is taken over all possible (measurable) classifiers and η(·) is the conditional probability η(x) = P(Y = 1|X = x). The infimum above is achieved by the Bayes classifier g∗(x) = g(2η(x) − 1), where

g(x) = 1 if x > 0;  −1 if x ≤ 0.

We are going to produce a classifier as a linear combination of base classifiers in H = {h | h : X → Y}. We shall assume that the class H has finite VC (Vapnik-Chervonenkis) dimension

dVC(H) = max{|S| : S ⊆ X, |H|_S| = 2^{|S|}}.

Define

Rn(f) = (1/n) Σ_{i=1}^n e^{−Yi f(Xi)}  and  R(f) = E e^{−Y f(X)}.

Then the boosting procedure can be described as follows.

1. Set f0 ≡ 0; choose the number of iterations t.
2. For k = 1, . . . , t set fk = fk−1 + αk−1 hk−1, where

Rn(fk) = inf_{h∈H, α∈R} Rn(fk−1 + αh).   (1)

We call αi the step size of the algorithm at step i.
3. Output g ◦ ft as the final classifier.

We shall also use the convex hull of H scaled by λ ≥ 0,

Fλ = { f : f = Σ_{i=1}^n λi hi, n ∈ N ∪ {0}, λi ≥ 0, Σ_{i=1}^n λi = λ, hi ∈ H },

as well as the set of k-combinations, k ∈ N, of functions in H,

Fk = { f : f = Σ_{i=1}^k λi hi, λi ∈ R, hi ∈ H }.

We shall also need the l∗-norm: for any f ∈ F,

∥f∥∗ = inf{ Σ |αi| : f = Σ αi hi, hi ∈ H }.

Define the squashing function πl(·) by

πl(x) = l if x > l;  x if x ∈ [−l, l];  −l if x < −l.

Then the set of truncated functions is πl ◦ F = { f̃ : f̃ = πl(f), f ∈ F }, and the set of classifiers based on a class F is g ◦ F = { f̃ : f̃ = g(f), f ∈ F }. Define the derivative of an arbitrary function Q(·) in the direction of h as

Q′(f; h) = ∂Q(f + λh)/∂λ |_{λ=0}.

The second derivative Q″(f; h) is defined similarly.

3 Consistency of the boosting procedure

We shall need the following assumption.

Assumption 1 Let the distribution P and the class H be such that

lim_{λ→∞} inf_{f∈Fλ} R(f) = R∗,

where R∗ = inf R(f) over all measurable functions.

For many classes H, the above assumption is satisfied for all possible distributions P.
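The coordinate-descent procedure above can be sketched with decision stumps on 1-D data as the base class H (our choice for illustration; helper names are ours). Because the stump class is closed under negation, picking the stump with the smallest weighted error and setting α = ½ ln((1 − err)/err) solves the inner minimization (1) exactly:

```python
import numpy as np

def stump_predictions(X):
    """Enumerate the base class H: decision stumps h(x) = s if x <= t
    else -s, for thresholds t among the data points and signs s."""
    for t in np.unique(X):
        for s in (1, -1):
            yield s * np.where(X <= t, 1, -1)

def adaboost(X, Y, t):
    """t rounds of coordinate descent on R_n(f) = mean(exp(-Y f(X)))."""
    F = np.zeros(len(X))                        # f_k evaluated on the sample
    for _ in range(t):
        w = np.exp(-Y * F)
        w = w / w.sum()                         # AdaBoost sample weights
        h = min(stump_predictions(X), key=lambda p: float(w @ (p != Y)))
        err = float(w @ (h != Y))
        err = min(max(err, 1e-12), 1 - 1e-12)   # guard the closed form
        alpha = 0.5 * np.log((1 - err) / err)   # exact minimizer of (1) in alpha
        F = F + alpha * h
    return np.sign(F)

X = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 0.9])
Y = np.array([1, 1, -1, -1, 1, 1])              # no single stump fits this
g_hat = adaboost(X, Y, t=5)
```

Running a few rounds on a sample that no single stump separates drives the training error of g ◦ ft to zero, illustrating why the stopping time matters for generalization.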
See [6, Lemma 1] for sufficient conditions for Assumption 1. As an example of such a class, we can take the class of indicators of all rectangles, or indicators of half-spaces defined by hyperplanes, or binary trees with the number of terminal nodes equal to d + 1 (we consider trees with terminal nodes formed by successive univariate splits), where d is the dimensionality of X (see [4]). We begin with a simple lemma (see [1, Theorem 8] or [11, Theorem 6.1]):

Lemma 1 For any t ∈ N, if dVC(H) ≥ 2, the following holds:

dP(Ft) ≤ 2(t + 1)(dVC(H) + 1) log2[2(t + 1)/ln 2],

where dP(Ft) is the pseudodimension of the class Ft.

The proof of AdaBoost's consistency is based on the following result, which builds on a result of Koltchinskii and Panchenko [12] and resembles [6, Lemma 2].

Lemma 2 For a continuous function ϕ, define the Lipschitz constant

Lϕ,λ = inf{L : L > 0, |ϕ(x) − ϕ(y)| ≤ L|x − y|, −λ ≤ x, y ≤ λ}

and the maximum absolute value of ϕ(·) on [−λ, λ],

Mϕ,λ = max_{x∈[−λ,λ]} |ϕ(x)|.

Then for the functions Rϕ(f) = Eϕ(Y f(X)) and Rϕ,n(f) = (1/n) Σ_{i=1}^n ϕ(Yi f(Xi)), with V = dVC(H), c = 24 ∫_0^1 √(ln(8e/ϵ²)) dϵ, and any n, λ > 0 and t > 0,

E sup_{f∈πλ◦Ft} |Rϕ(f) − Rϕ,n(f)| ≤ c λ Lϕ,λ √[(V + 1)(t + 1) log2[2(t + 1)/ln 2]/n]   (2)

and

E sup_{f∈Fλ} |Rϕ(f) − Rϕ,n(f)| ≤ 4 λ Lϕ,λ √[2V ln(4n + 2)/n].   (3)

Also, for any δ > 0, with probability at least 1 − δ,

sup_{f∈πλ◦Ft} |Rϕ(f) − Rϕ,n(f)| ≤ c λ Lϕ,λ √[(V + 1)(t + 1) log2[2(t + 1)/ln 2]/n] + Mϕ,λ √[ln(1/δ)/(2n)]   (4)

and

sup_{f∈Fλ} |Rϕ(f) − Rϕ,n(f)| ≤ 4 λ Lϕ,λ √[2V ln(4n + 2)/n] + Mϕ,λ √[ln(1/δ)/(2n)].   (5)

Proof. Equations (3) and (5) constitute [6, Lemma 2]. The proof of equations (2) and (4) is similar. We begin with symmetrization to get

E sup_{f∈πλ◦Ft} |Rϕ(f) − Rϕ,n(f)| ≤ 2 E sup_{f∈πλ◦Ft} |(1/n) Σ_{i=1}^n σi (ϕ(−Yi f(Xi)) − ϕ(0))|,

where the σi are i.i.d. with P(σi = 1) = P(σi = −1) = 1/2. Then we use the "contraction principle" (see [13, Theorem 4.12, pp.
112–113]) with the function ψ(x) = (ϕ(x) − ϕ(0))/Lϕ,λ to get

E sup_{f∈πλ◦Ft} |Rϕ(f) − Rϕ,n(f)| ≤ 4 Lϕ,λ E sup_{f∈πλ◦Ft} |(1/n) Σ_{i=1}^n −σi Yi f(Xi)| = 4 Lϕ,λ E sup_{f∈πλ◦Ft} |(1/n) Σ_{i=1}^n σi f(Xi)|.

Next we bound the supremum. Notice that the functions in πλ ◦ Ft are bounded, clipped to absolute value λ; therefore we can rescale πλ ◦ Ft by (2λ)^{−1} and get

E sup_{f∈πλ◦Ft} |(1/n) Σ_{i=1}^n σi f(Xi)| = 2λ E sup_{f∈(2λ)^{−1}◦πλ◦Ft} |(1/n) Σ_{i=1}^n σi f(Xi)|.

Next, we use Dudley's entropy integral [14] to bound the r.h.s. above:

E sup_{f∈(2λ)^{−1}◦πλ◦Ft} |(1/n) Σ_{i=1}^n σi f(Xi)| ≤ (12/√n) ∫_0^∞ √(ln N(ϵ, (2λ)^{−1} ◦ πλ ◦ Ft, L2(Pn))) dϵ.

Since for ϵ > 1 the covering number N is 1, the upper integration limit can be taken to be 1, and we can use Pollard's bound [15] for F ⊆ [0, 1]^X,

N(ϵ, F, L2(P)) ≤ 2 (4e/ϵ²)^{dP(F)},

where dP(F) is a pseudodimension, and obtain, for c̃ = 12 ∫_0^1 √(ln(8e/ϵ²)) dϵ,

E sup_{f∈(2λ)^{−1}◦πλ◦Ft} |(1/n) Σ_{i=1}^n σi f(Xi)| ≤ c̃ √[dP((2λ)^{−1} ◦ πλ ◦ Ft)/n];

note also that the constant c̃ does not depend on Ft or λ. Next, since (2λ)^{−1} ◦ πλ is a non-decreasing transform, we use the inequality dP((2λ)^{−1} ◦ πλ ◦ Ft) ≤ dP(Ft) (e.g. [11, Theorem 11.3]):

E sup_{f∈(2λ)^{−1}◦πλ◦Ft} |(1/n) Σ_{i=1}^n σi f(Xi)| ≤ c̃ √[dP(Ft)/n].

And then, since Lemma 1 gives an upper bound on the pseudodimension of the class Ft, we have

E sup_{f∈πλ◦Ft} |(1/n) Σ_{i=1}^n σi f(Xi)| ≤ c λ √[(V + 1)(t + 1) log2[2(t + 1)/ln 2]/n],

with the constant c above being independent of H, t and λ. To prove the second statement we use McDiarmid's bounded difference inequality [16, Theorem 9.2, p. 136], since for all i,

sup_{(xj,yj)_{j=1}^n, (x′i,y′i)} | sup_{f∈πλ◦Ft} |Rϕ(f) − Rϕ,n(f)| − sup_{f∈πλ◦Ft} |Rϕ(f) − R′ϕ,n(f)| | ≤ Mϕ,λ/n,

where R′ϕ,n(f) is obtained from Rϕ,n(f) by changing the pair (xi, yi) to (x′i, y′i). This completes the proof of the lemma.
⋄ Lemma 2, unlike [6, Lemma 2], allows us to choose the number of steps t, that describes the complexity of the linear combination of base functions in addition to the parameter λ, which governs the size of the deviations of the functions in F, and this is essential for the proof of the consistency. It is easy to see that for AdaBoost (i.e. ϕ(x) = e−x) we have to choose λ = κ ln n and t = nν with κ > 0, ν > 0 and 2κ + ν < 1. So far we dealt with the statistical properties of the function we are minimizing, now we turn to the algorithmic part. We need the following simple consequence of the proof of [10, Theorem 1] Theorem 1 Let function Q(f) be convex in f. Let Q∗= limλ→∞inff∈Fλ Q(f). Assume that ∀c1, c2, s.t. Q∗< c1 < c2 < ∞, 0 < inf{Q′′(f; h) : c1 < Q(f) < c2, h ∈H} ≤ sup{Q′′(f; h) : Q(f) < c2, h ∈H} < ∞. Then for any reference function ¯f and the sequence of functions fm, produced by the boosting algorithm, the following bound holds ∀m s.t. Q(fm) > Q( ¯f). Q(fm) ≤Q( ¯f) + s 8B3Q(f0)(Q(f0) −Q( ¯f)) β3 ln ℓ2 0 + c3(m + 1) ℓ2 0 −1 2 , (6) where ℓk =
¯f −fk
∗, c3 = 2Q(f0)/β, β = inf{Q′′(f; h) : Q( ¯f) < Q(f) < Q(f0), h ∈H}, B = sup{Q′′(f; h) : Q(f) < Q(f0), h ∈H}. Proof. The statement of the theorem is a version of the result implicit in the proof of [10, Theorem 1]. If for some m we have Q(fm) ≤Q( ¯f), then theorem is trivially true for all m′ ≥m. Therefore, we are going to consider only the case when Q(fm+1) > Q( ¯f). By convexity of Q(·) |Q′(fm; fm −¯f)| ≥Q(fm) −Q( ¯f) = ϵm. (7) Let fm −¯f = P ˜αi˜hi, where ˜αi and ˜hi correspond to the best representation (with the smallest l∗-norm). Then from (7) and linearity of the derivative we have ϵm ≤ X ˜αiQ′(fm; ˜hi) ≤sup h∈H |Q′(fm; h)| X |˜αi|, therefore sup h∈H Q′(fm; h) ≥ ϵm
fm −¯f
∗. (8) Next,
$$Q(f_m + \alpha h_m) = Q(f_m) + \alpha Q'(f_m; h_m) + \tfrac{1}{2}\alpha^2 Q''(\tilde f_m; h_m),$$
where $\tilde f_m = f_m + \tilde\alpha_m h_m$ for some $\tilde\alpha_m \in [0, \alpha_m]$, and since by assumption $\tilde f_m$ is on the path from $f_m$ to $f_{m+1}$, we have the bounds
$$Q(\bar f) < Q(f_{m+1}) \le Q(\tilde f_m) \le Q(f_m) \le Q(f_0).$$
Then, by the assumption of the theorem on $\beta$ (which depends on $Q(\bar f)$), we have
$$Q(f_{m+1}) \ge Q(f_m) + \inf_{\alpha\in\mathbb{R}}\bigl(\alpha Q'(f_m; h_m) + \tfrac{1}{2}\alpha^2\beta\bigr) = Q(f_m) - \frac{|Q'(f_m; h_m)|^2}{2\beta}. \quad (9)$$
On the other hand,
$$Q(f_m + \alpha_m h_m) = \inf_{h\in\mathcal{H},\,\alpha\in\mathbb{R}} Q(f_m + \alpha h) \le \inf_{h\in\mathcal{H},\,\alpha\in\mathbb{R}} \bigl(Q(f_m) + \alpha Q'(f_m; h) + \tfrac{1}{2}\alpha^2 B\bigr) = Q(f_m) - \frac{\sup_{h\in\mathcal{H}} |Q'(f_m; h)|^2}{2B}. \quad (10)$$
Therefore, combining (9) and (10), we get
$$|Q'(f_m; h_m)| \ge \sup_{h\in\mathcal{H}} |Q'(f_m; h)|\, \sqrt{\beta/B}. \quad (11)$$
Another Taylor expansion, this time around $f_{m+1}$, gives us
$$Q(f_m) = Q(f_{m+1}) + \tfrac{1}{2}\alpha_m^2\, Q''(\tilde{\tilde f}_m; h_m), \quad (12)$$
where $\tilde{\tilde f}_m$ is some (other) function on the path from $f_m$ to $f_{m+1}$. Therefore, if $|\alpha_m| < |Q'(f_m; h_m)|/B$, then $Q(f_m) - Q(f_{m+1}) < |Q'(f_m; h_m)|^2/(2B)$; but by (10), $Q(f_m) - Q(f_{m+1}) \ge \sup_{h\in\mathcal{H}} |Q'(f_m; h)|^2/(2B) \ge |Q'(f_m; h_m)|^2/(2B)$. We therefore conclude, combining (11) and (8), that
$$|\alpha_m| \ge \frac{|Q'(f_m; h_m)|}{B} \ge \frac{\sqrt{\beta}\,\sup_{h\in\mathcal{H}} |Q'(f_m; h)|}{B^{3/2}} \ge \frac{\epsilon_m \sqrt{\beta}}{\ell_m B^{3/2}}. \quad (13)$$
Using (12), we have
$$\sum_{i=0}^m \alpha_i^2 \le \frac{2}{\beta} \sum_{i=0}^m \bigl(Q(f_i) - Q(f_{i+1})\bigr) \le \frac{2}{\beta}\bigl(Q(f_0) - Q(\bar f)\bigr). \quad (14)$$
Recall that
$$\|f_m - \bar f\|_* \le \|f_{m-1} - \bar f\|_* + |\alpha_{m-1}| \le \|f_0 - \bar f\|_* + \sum_{i=0}^{m-1} |\alpha_i| \le \|f_0 - \bar f\|_* + \sqrt{m}\,\Bigl(\sum_{i=0}^{m-1} \alpha_i^2\Bigr)^{1/2};$$
therefore, combining with (14) and (13), and since the sequence $\epsilon_i$ is decreasing,
$$\frac{2}{\beta}\bigl(Q(f_0) - Q(\bar f)\bigr) \ge \sum_{i=0}^m \alpha_i^2 \ge \frac{\beta}{B^3} \sum_{i=0}^m \frac{\epsilon_i^2}{\ell_i^2} \ge \frac{\beta}{B^3}\,\epsilon_m^2 \sum_{i=0}^m \frac{1}{\bigl(\ell_0 + \sqrt{i}\,(\sum_{j=0}^{i-1}\alpha_j^2)^{1/2}\bigr)^2} \ge \frac{\beta}{B^3}\,\epsilon_m^2 \sum_{i=0}^m \frac{1}{\bigl(\ell_0 + \sqrt{i}\,(2Q(f_0)/\beta)^{1/2}\bigr)^2} \ge \frac{\beta}{2B^3}\,\epsilon_m^2 \sum_{i=0}^m \frac{1}{\ell_0^2 + (2Q(f_0)/\beta)\, i}.$$
Since
$$\sum_{i=0}^m \frac{1}{a + bi} \ge \int_0^{m+1} \frac{dx}{a + bx} = \frac{1}{b} \ln \frac{a + b(m+1)}{a},$$
then
$$\frac{2}{\beta}\bigl(Q(f_0) - Q(\bar f)\bigr) \ge \frac{\beta^2}{4B^3 Q(f_0)}\,\epsilon_m^2\, \ln \frac{\ell_0^2 + (2Q(f_0)/\beta)(m+1)}{\ell_0^2}.$$
Therefore
$$\epsilon_m \le \sqrt{\frac{8 B^3 Q(f_0)\bigl(Q(f_0) - Q(\bar f)\bigr)}{\beta^3}}\, \left(\ln \frac{\ell_0^2 + (2Q(f_0)/\beta)(m+1)}{\ell_0^2}\right)^{-1/2},$$
and this completes the proof. ⋄
The theorem above allows us to get an upper bound on the difference between the φ-risk of the function output by AdaBoost and the φ-risk of the appropriate reference function.
Theorem 2 Assume R* > 0. Let t_n = n^ν be the number of steps we run AdaBoost, and let λ_n = κ ln n, with ν > 0, κ > 0 and ν + 2κ < 1. Let f̄_n be a minimizer of the function R_n(·) within F_{λ_n}. Then for n large enough, with high probability,
$$R_n(f_{t_n}) \le R_n(\bar f_n) + \frac{8}{(R^*)^{3/2}} \left(\ln \frac{\lambda_n^2 + (4/R^*)\, t_n}{\lambda_n^2}\right)^{-1/2}.$$
Proof. This theorem follows directly from Theorem 1. Because in AdaBoost
$$R_n''(f; h) = \frac{1}{n} \sum_{i=1}^n \bigl(-Y_i h(X_i)\bigr)^2 e^{-Y_i f(X_i)} = \frac{1}{n} \sum_{i=1}^n e^{-Y_i f(X_i)} = R_n(f),$$
all the conditions in Theorem 1 are satisfied (with Q(f) replaced by R_n(f)), and in Equation (6) we have B = R_n(f_0) = 1, β ≥ R_n(f̄_n), and ‖f_0 − f̄_n‖_* ≤ λ_n. Since the theorem is trivially true for t such that R_n(f_t) ≤ R_n(f̄_n), we only have to notice that Lemma 2 guarantees that with probability at least 1 − δ,
$$|R(\bar f_n) - R_n(\bar f_n)| \le 4\lambda_n L_{\phi,\lambda_n} \sqrt{\frac{2V \ln(4n+2)}{n}} + M_{\phi,\lambda_n} \sqrt{\frac{\ln(1/\delta)}{2n}}.$$
Thus, for n such that the r.h.s. of the above expression is less than R*/2, we have β ≥ R_n(f̄_n) ≥ R*/2, and the result follows immediately from Equation (6) if we use the fact that R_n(f̄) > 0. ⋄
Then, having all the ingredients at hand, we can formulate the main result of the paper.
Theorem 3 Assume V = d_{VC}(H) < ∞, L* > 0, lim_{λ→∞} inf_{f ∈ F_λ} R(f) = R*, t_n → ∞, and t_n = O(n^ν) for ν < 1. Then AdaBoost stopped at step t_n returns a sequence of classifiers almost surely satisfying L(g(f_{t_n})) → L*.
Proof. For the exponential loss function, L* > 0 implies R* > 0. Let λ_n = κ ln n, κ > 0, 2κ + ν < 1. Also, let f̄ be a minimizer of R and f̄_n be a minimizer of R_n within F_{λ_n}. Then we have
$$\begin{aligned}
R(\pi_{\lambda_n}(f_{t_n})) &\le R_n(\pi_{\lambda_n}(f_{t_n})) + \epsilon_1 && \text{by Lemma 2} \quad (15)\\
&\le R_n(f_{t_n}) + \epsilon_1 + \phi(\lambda_n) && \text{since } \phi(\pi_{\lambda_n}(x)) \le \phi(x) + \phi(\lambda_n)\\
&\le R_n(\bar f_n) + \epsilon_1 + \phi(\lambda_n) + \epsilon_2 && \text{by Theorem 2} \quad (16)\\
&\le R(\bar f) + \epsilon_1 + \phi(\lambda_n) + \epsilon_2 + \epsilon_3 && \text{by Lemma 2.} \quad (17)
\end{aligned}$$
Inequalities (15) and (17) hold with probability at least 1 − δ_n, while inequality (16) is true for sufficiently large n when (17) holds. The ε's above are
$$\epsilon_1 = c\, n^\kappa \kappa \ln n\, \sqrt{\frac{(V+1)(n^\nu+1)\log_2[2(n^\nu+1)/\ln 2]}{n}} + n^\kappa \sqrt{\frac{\ln(1/\delta_n)}{2n}},$$
$$\epsilon_2 = \frac{8}{(R^*)^{3/2}} \left(\ln \frac{(\kappa \ln n)^2 + (4/R^*)\, n^\nu}{(\kappa \ln n)^2}\right)^{-1/2}, \qquad \epsilon_3 = 4 n^\kappa \kappa \ln n\, \sqrt{\frac{2V \ln(4n+2)}{n}} + n^\kappa \sqrt{\frac{\ln(1/\delta_n)}{2n}},$$
and φ(λ_n) = n^{−κ}. Therefore, by the choice of ν and κ and an appropriate choice of δ_n, for example δ_n = n^{−2}, we have ε_1 → 0, ε_2 → 0, ε_3 → 0 and φ(λ_n) → 0. Also, R(f̄) → R* by Assumption 1. Now we appeal to the Borel–Cantelli lemma and arrive at R(π_{λ_n}(f_{t_n})) → R* a.s. Finally, we can use [17, Theorem 3] to conclude that L(g(π_{λ_n}(f_{t_n}))) → L* a.s. But for λ_n > 0 we have g(π_{λ_n}(f_{t_n})) = g(f_{t_n}), therefore L(g(f_{t_n})) → L* a.s. Hence AdaBoost is consistent if stopped after n^ν steps.
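As a concrete illustration of the stopping rule in Theorem 3, here is a minimal AdaBoost sketch stopped after t_n = n^ν rounds, ν < 1. The decision-stump weak learners and the toy dataset are our own illustrative assumptions, not part of the paper:

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    # decision stump h(x) = sign if x[feat] > thresh else -sign
    return sign * np.where(X[:, feat] > thresh, 1.0, -1.0)

def adaboost(X, y, t_n):
    """AdaBoost (exponential loss), stopped early after t_n rounds."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                        # example weights
    candidates = [(j, t, s) for j in range(d)
                  for t in np.unique(X[:, j]) for s in (1.0, -1.0)]
    ensemble = []
    for _ in range(t_n):
        errs = [(w * (stump_predict(X, j, t, s) != y)).sum()
                for (j, t, s) in candidates]
        best = int(np.argmin(errs))
        err = max(errs[best], 1e-12)
        if err >= 0.5:                             # no weak learner beats chance
            break
        alpha = 0.5 * np.log((1.0 - err) / err)    # step size alpha_m
        h = stump_predict(X, *candidates[best])
        w *= np.exp(-alpha * y * h)                # exponential-loss reweighting
        w /= w.sum()
        ensemble.append((alpha, candidates[best]))
    return ensemble

def predict(ensemble, X):
    f = sum(a * stump_predict(X, j, t, s) for a, (j, t, s) in ensemble)
    return np.sign(f)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sign(X[:, 0])
y[y == 0] = 1.0                                    # toy labels: sign of feature 0
nu = 0.5                                           # t_n = n^nu with nu < 1, as required
t_n = int(len(X) ** nu)
model = adaboost(X, y, t_n)
acc = (predict(model, X) == y).mean()
```

Only the stopping rule t_n = n^ν is taken from the theorem; the consistency statement itself concerns n → ∞ and cannot, of course, be demonstrated on a fixed sample.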
⋄

4 Discussion

We showed that AdaBoost is consistent if stopped sufficiently early, after t_n iterations, for t_n = n^ν with ν < 1, given that the Bayes risk L* > 0. It is unclear whether this number can be increased. Results by Jiang [5] imply that for some X and function class H, the AdaBoost algorithm will achieve zero training error after t_n steps, where n²/t_n = o(1). We do not know what happens in between O(n^{1−ε}) and O(n² ln n); narrowing this gap is a subject of further research. We analyzed only AdaBoost, the boosting algorithm that uses the loss function φ(x) = e^{−x}. Since the proof of Theorem 2 relies on the properties of the exponential loss, we cannot draw a similar conclusion for other versions of boosting, e.g., logit boosting with φ(x) = ln(1 + e^{−x}): in this case the assumption on the second derivative holds with R_n″(f; h) ≥ R_n(f)/n, but the resulting inequality is trivial; the factor 1/n precludes us from finding any useful bound. Finding an analog of Theorem 2 that handles the logit loss is a subject of future work.

Acknowledgments

We gratefully acknowledge the support of NSF under award DMS-0434383.

References

[1] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[2] Leo Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.
[3] Leo Breiman. Arcing classifiers (with discussion). The Annals of Statistics, 26(3):801–849, 1998. (Was Department of Statistics, U.C. Berkeley Technical Report 460, 1996.)
[4] Leo Breiman. Some infinite theory for predictor ensembles. Technical Report 579, Department of Statistics, University of California, Berkeley, 2000.
[5] Wenxin Jiang. On weak base hypotheses and their implications for boosting regression and classification. The Annals of Statistics, 30:51–73, 2002.
[6] Gábor Lugosi and Nicolas Vayatis. On the Bayes-risk consistency of regularized boosting methods.
The Annals of Statistics, 32(1):30–55, 2004.
[7] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32(1):56–85, 2004.
[8] Tong Zhang and Bin Yu. Boosting with early stopping: convergence and consistency. The Annals of Statistics, 33:1538–1579, 2005.
[9] Wenxin Jiang. Process consistency for AdaBoost. The Annals of Statistics, 32(1):13–29, 2004.
[10] P. J. Bickel, Y. Ritov, and A. Zakai. Some theory for generalized boosting algorithms. Journal of Machine Learning Research, 7:705–732, May 2006.
[11] Martin Anthony and Peter Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[12] V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. The Annals of Statistics, 30:1–50, 2002.
[13] Michel Ledoux and Michel Talagrand. Probability in Banach Spaces. Springer-Verlag, New York, 1991.
[14] Richard M. Dudley. Uniform Central Limit Theorems. Cambridge University Press, Cambridge, MA, 1999.
[15] David Pollard. Empirical Processes: Theory and Applications. IMS, 1990.
[16] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
[17] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
Cross-Validation Optimization for Large Scale Hierarchical Classification Kernel Methods

Matthias W. Seeger
Max Planck Institute for Biological Cybernetics
P.O. Box 2169, 72012 Tübingen, Germany
seeger@tuebingen.mpg.de

Abstract

We propose a highly efficient framework for kernel multi-class models with a large and structured set of classes. Kernel parameters are learned automatically by maximizing the cross-validation log likelihood, and predictive probabilities are estimated. We demonstrate our approach on large scale text classification tasks with hierarchical class structure, achieving state-of-the-art results in an order of magnitude less time than previous work.

1 Introduction

In many real-world statistical problems, we would like to fit a model with a large number of dependent variables to a training sample with very many cases. For example, in multi-way classification problems with a structured label space, modern applications demand predictions on thousands of classes, and very large datasets become available. If n and C denote dataset size and number of classes respectively, nonparametric kernel methods like SVMs or Gaussian processes typically scale superlinearly in nC, if dependencies between the latent class functions are properly represented. Furthermore, most large scale kernel methods proposed so far refrain from solving the problem of learning hyperparameters (kernel or loss function parameters). The user has to run cross-validation schemes, which require frequent human interaction and are not suitable for learning more than a few hyperparameters. In this paper, we propose a general framework for learning in probabilistic kernel classification models. While the basic model is standard, a major feature of our approach is the high computational efficiency with which the primary fitting (for fixed hyperparameters) is done, allowing us to deal with hundreds of classes and thousands of datapoints within a few minutes.
The primary fitting scales linearly in C, and depends on n mainly via a fixed number of matrix-vector multiplications (MVM) with n × n kernel matrices. In many situations, these MVM primitives can be computed very efficiently, as will be demonstrated. Furthermore, we optimize hyperparameters automatically by minimizing the negative cross-validation log likelihood, making use of our primary fitting technology as inner loop in order to compute the CV criterion and its gradient. Our approach can be used to learn a large number of hyperparameters and does not need user interaction. Our framework is generally applicable to structured label spaces, which we demonstrate here for hierarchical classification of text documents. The hierarchy is represented through an ANOVA setup. While the C latent class functions are fully dependent a priori, the scaling of our method stays within a factor of two compared to unstructured classification. We test our framework on the same tasks treated in [1], achieving comparable results in at least an order of magnitude less time. Our method estimates predictive probabilities for each test point, which can allow better predictions w.r.t. loss functions different from zero-one. The primary fitting method is given in Section 2, the extension to hierarchical classification in Section 3. Hyperparameter learning is discussed in Section 4. Computational details are provided in Section 5. We present experimental results in Section 6. Our highly efficient implementation is publicly available, as project klr in the LHOTSE^1 toolbox for adaptive statistical models.

2 Penalized Multiple Logistic Regression

Our problem is to predict y ∈ {1, . . . , C} from x ∈ X, given some i.i.d. data D = {(x_i, y_i) | i = 1, . . . , n}. We use zero-one coding, i.e. y_i ∈ {0, 1}^C, 1^T y_i = 1.
We employ the multiple logistic regression model, consisting of C latent (unobserved) class functions u_c feeding into the multiple logistic (or softmax) likelihood P(y_{i,c} = 1 | x_i, u_i) = e^{u_c(x_i)} / Σ_{c′} e^{u_{c′}(x_i)}. We write u_c = f_c + b_c for intercept parameters b_c ∈ R and functions f_c living in a reproducing kernel Hilbert space (RKHS) with kernel K^{(c)}, and consider the penalized negative log likelihood Φ = −Σ_{i=1}^n log P(y_i | u_i) + (1/2) Σ_{c=1}^C ‖f_c‖_c² + (1/2)σ^{−2}‖b‖², which we minimize for primary fitting. ‖·‖_c is the RKHS norm for kernel K^{(c)}. Details on such setups can be found in [4]. Our notation for nC vectors² (and matrices) uses the ordering y = (y_{1,1}, y_{2,1}, . . . , y_{n,1}, y_{1,2}, . . . ). We set u = (u_c(x_i)) ∈ R^{nC}. ⊗ denotes the Kronecker product, 1 is the vector of all ones. Selection indexes I are applied to i only: y_I = (y_{i,c})_{i∈I,c} ∈ R^{|I|C}. Since the likelihood depends on the f_c only through f_c(x_i), every minimizer of Φ must be a kernel expansion: f_c = Σ_i α_{i,c} K^{(c)}(·, x_i) (representer theorem, see [4]). Plugging this in, the regularizer becomes (1/2)α^T Kα + (1/2)σ^{−2}‖b‖². Here K^{(c)} = (K^{(c)}(x_i, x_j))_{i,j} ∈ R^{n,n}, and K = diag(K^{(c)})_c is block-diagonal. We refer to this setup as the flat classification model. The b_c may be eliminated as b = σ²(I ⊗ 1^T)α. Thus, if K̃ = K + σ²(I ⊗ 1)(I ⊗ 1^T), then Φ becomes

Φ = Φ_lh + (1/2) α^T K̃ α,   Φ_lh = −y^T u + 1^T l,   l_i = log 1^T exp(u_i),   u = K̃α. (1)

Φ is strictly convex in α (because the likelihood is log-concave), so it has a unique minimum point α̂. The corresponding kernel expansions are û_c = Σ_i α̂_{i,c}(K^{(c)}(·, x_i) + σ²). Estimates of the conditional probability on test points x* are obtained by plugging û_c(x*) into the likelihood. We note that this setup can also be seen as a MAP approximation to a Bayesian model, where the f_c are given independent Gaussian process priors, e.g. [7]. It is also related to the multi-class SVM [2], where −log P(y_i | u_i) is replaced by the margin loss −u_{y_i}(x_i) + max_c {u_c(x_i) + 1 − δ_{c,y_i}}.
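A small numerical sketch of the criterion in Eq. (1), storing the nC vectors as (n, C) arrays and assuming one shared kernel matrix for all classes; the helper names are ours, not the paper's klr implementation:

```python
import numpy as np

def phi(alpha, K_list, Y, sigma2):
    """Penalized negative log likelihood Phi = Phi_lh + 0.5 alpha^T K~ alpha.

    alpha  : (n, C) coefficients
    K_list : per-class kernel matrices K^(c), each (n, n)
    Y      : (n, C) zero-one coded targets
    """
    n, C = alpha.shape
    ones = np.ones((n, n))
    # u_c = (K^(c) + sigma^2 * 1 1^T) alpha_c, i.e. u = K~ alpha, block by block
    U = np.stack([(K_list[c] + sigma2 * ones) @ alpha[:, c]
                  for c in range(C)], axis=1)
    l = np.logaddexp.reduce(U, axis=1)     # l_i = log 1^T exp(u_i), numerically stable
    phi_lh = -(Y * U).sum() + l.sum()
    penalty = 0.5 * (alpha * U).sum()      # 0.5 alpha^T K~ alpha, since u = K~ alpha
    return phi_lh + penalty

rng = np.random.default_rng(1)
n, C = 5, 3
Xd = rng.normal(size=(n, 2))
K = Xd @ Xd.T + np.eye(n)                  # some positive definite kernel matrix
Y = np.eye(C)[rng.integers(0, C, size=n)]  # one-hot labels
val = phi(np.zeros((n, C)), [K] * C, Y, sigma2=1.0)
# at alpha = 0 we have u = 0, so Phi = n * log(C)
```

At α = 0 the value reduces to n log C, which gives a quick sanity check of any implementation.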
The negative log multiple logistic likelihood has similar properties, but is smooth as a function of u, and the primary fitting of α does not require constrained convex optimization. We minimize Φ using the Newton-Raphson (NR) algorithm, the details are provided in Section 5. The complexity of our fitting algorithm is dominated by k1(k2 + 2) matrix-vector multiplications with K, where k1 is the number of NR iterations, k2 the number of linear conjugate gradient (LCG) steps for computing each Newton direction. Since NR is a second-order convergent method, k1 can be chosen small. k2 determines the quality of each Newton direction, for both fairly small values are sufficient (see Section 6.2). 3 Hierarchical Classification So far we dealt with flat classification, the classes being independent a priori, with block-diagonal kernel matrix K. However, if the label set has a known structure3, we can benefit from representing it in the model. Here we focus on hierarchical classification, the label set {1, . . . , C} being the leaf nodes of a tree. Classes with lower common ancestor should be more closely related. In this Section, we propose a model for this setup and show how it can be dealt with in our framework with minor modifications and minor extra cost. In flat classification, the latent class functions uc are modelled as a priori independent, in that the regularizer (which plays the role of a log prior) is a sum of individual terms for each uc, without any 1See www.kyb.tuebingen.mpg.de/bs/people/seeger/lhotse/. 2In Matlab, reshape(y,n,C) would give the matrix (yi,c) ∈Rn,C. 3Learning an unknown label set structure may be achieved by expectation maximization techniques, but this is subject to future work. interaction terms. Analysis of variance (ANOVA) models go beyond this independent design, they have previously been applied to text classification by [1]. Let {0, . . . 
, P} be the nodes of the tree, 0 being the root, and the numbers are assigned breadth first (1, 2, . . . are the root’s children). The tree is determined by P and np, p = 0, . . . , P, the number of children of node p. Let L be the set of leaf nodes, |L| = C. Assign a pair of latent functions up, ˘up to each node, except the root. The ˘up are assumed a priori independent, as in flat classification. up is the sum of ˘up′, p′ running over the nodes (including p) on the path from the root to p. The class functions to be fed into the likelihood are the uL(c) of the leafs. This setup represents similarities conditioned on the hierarchy. For example, if leafs L(c), L(c′) have the common parent p, then uL(c) = up + ˘uL(c), uL(c′) = up + ˘uL(c′), so the class functions share the effect up. Since regularization forces all independent effects ˘up′ to be smooth, the classes c, c′ are urged to behave similarly a priori. Let u = (up(xi))i,p, ˘u = (˘up(xi))i,p ∈RnP . The vectors are related as u = (Φ ⊗I)˘u, Φ ∈ {0, 1}P,P . Importantly, Φ has a simple structure which allows MVM with Φ or ΦT to be computed easily in O(P), without having to compute or store Φ explicitly. MVM with Φ is described in Algorithm 1, and MVM with ΦT works in a similar manner [8]. Under the hierarchical model, the class functions uL(c) are strongly dependent a priori. We may represent this prior coupling in our framework by simply plugging in the implied kernel matrix K: K = (ΦL,· ⊗I) ˘K(ΦT L,· ⊗I), (2) where the inner ˘K is block-diagonal. K is not sparse and certainly not block-diagonal, but the important point is that we are still able to do kernel MVMs efficiently: pre- and postmultiplying by Φ is cheap, and ˘K is block-diagonal just as in the flat case. We note that the step from flat to hierarchical classification requires minor modifications of existing code only. If code for representing a block-diagonal K is available, we can use it to represent the inner ˘K, just replacing C by P. 
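The path-summing MVM u = (Φ ⊗ I)ŭ just described (Algorithm 1 in the paper) amounts to one breadth-first pass over the tree; a pure-Python sketch, with the function name and argument layout our own:

```python
def tree_mvm(x, num_children):
    """y = Phi x: y_p = sum of x along the path from the root to node p.

    Nodes 0..P are numbered breadth first, node 0 being the root;
    x[p-1] holds the independent effect of node p (the root has none),
    and num_children[p] = n_p.
    """
    P = len(x)                   # number of non-root nodes
    y = [0.0] * (P + 1)          # y[0] = 0 for the root
    s = 0
    for p in range(P + 1):
        if num_children[p] > 0:  # children of p are nodes s+1 .. s+n_p
            for j in range(s + 1, s + num_children[p] + 1):
                y[j] = y[p] + x[j - 1]
            s += num_children[p]
    return y[1:]                 # drop the root entry

# root -> nodes 1, 2; node 1 -> leaf nodes 3, 4
out = tree_mvm([1.0, 2.0, 3.0, 4.0], [2, 2, 0, 0, 0])  # -> [1.0, 2.0, 4.0, 5.0]
```

MVM with Φ^T is the same pass run in reverse, accumulating child values into their parents; both cost O(P), and Φ is never formed explicitly.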
This simplicity carries through to the hyperparameter learning case (see Section 4). The cost of a kernel MVM is increased by a factor P/C < 2, which in most hierarchies in practice is close to 1. However, it would be wrong to claim that hierarchical classification in general comes as cheap as flat classification.

Algorithm 1: Matrix-vector multiplication y = Φx
  y ← (). y_0 := 0. s := 0.
  for p = 0, . . . , P do
    if n_p > 0 (p not a leaf node) then
      Let J(p) = {s + 1, . . . , s + n_p}.
      y ← (y^T, y_p 1^T + x_{J(p)}^T)^T.
      s ← s + n_p.
    end if
  end for

The subtle issue is that the primary fitting becomes more costly, precisely because there is more coupling between the variables. In the flat case, the Hessian of Φ is close to block-diagonal. The LCG algorithm to compute Newton directions converges quickly, because it nearly decomposes into C independent ones, and fewer NR steps are required (see Section 5). In the hierarchical case, both LCG and NR need more iterations to attain the same accuracy. In numerical mathematics, much work has been done to approximately decouple linear systems by preconditioning. In some of these strategies, knowledge about the structure of the system matrix (in our case: the hierarchy) can be used to drive preconditioning. An important point for future research is to find a good preconditioning strategy for the system of Eq. 5. However, in all our experiments so far the fitting of the hierarchical model took less than twice the time required for the flat model on the same task. Some further extensions, such as learning with incomplete label information, are discussed in [8].

4 Hyperparameter Learning

In any model of interest, there will be free hyperparameters h, for example parameters of the kernels K^{(c)}. These were assumed to be fixed in the primary fitting method introduced in Section 2. In this Section, we describe a scheme for learning h which makes use of the primary fitting algorithm as inner loop.
Note that such nested strategies are commonplace in Bayesian Statistics, where (marginal) inference is typically used as subroutine for parameter learning. Recall that primary fitting consists of minimizing Φ of Eq. 1 w.r.t. α. If we minimize Φ w.r.t. h as well, we run into the problem of overfitting. A common remedy is to minimize the negative crossvalidation log likelihood Ψ instead. Let {Ik} be a partition of {1, . . . , n}, with Jk = {1, . . . , n}\Ik, and let ΦJk = uT [Jk]((1/2)α[Jk] −yJk) + 1T l[Jk] be the primary criterion on the subset Jk of the data. Here, u[Jk] = ˜K Jkα[Jk]. The α[Jk] are independent variables, not part of a common α. The CV criterion is Ψ = X k ΨIk, ΨIk = −yT Iku[Ik] + 1T l[Ik], u[Ik] = ˜K Ik,Jkα[Jk], (3) where α[Jk] minimizes ΦJk. Since for each k, we fit and evaluate on disjoint parts of y, Ψ is an unbiased estimator of the test negative log likelihood, and minimizing Ψ should be robust to overfitting. In order to select h, we pick a fixed partition at random, then do gradient-based minimization of Ψ w.r.t. h. To this end, we keep the set {α[Jk]} of primary variables, and iterate between re-fitting those for each fold Ik, and computing Ψ and ∇hΨ. The latter can be determined analytically, requiring us to solve a linear system with the Hessian matrix I +V T [Jk] ˜K JkV [Jk] already encountered during primary fitting (see Section 5). This means that the same LCG code used to compute Newton directions there can be applied here in order to compute the gradient of Ψ. The details are given in Section 5. As for the complexity, suppose there are q folds. The update of the α[Jk] requires q primary fitting applications, but since they are initialized with the previous values α[Jk], they do converge very rapidly, especially during later outer iterations. Computing Ψ based on the α[Jk] comes basically for free. The gradient computation decomposes into two parts: accumulation, and kernel derivative MVMs. 
The accumulation part requires solving q systems of size ((q −1)/q)n C, thus q k3 kernel MVMs on the ˜K Jk if linear conjugate gradients (LCG) is used, k3 being the number of LCG steps. We also need two buffer matrices E, F of q n C elements each. Note that the accumulation step is independent of the number of hyperparameters. The kernel derivative MVM part consists of q derivative MVM calls for each independent component of h, see Section 5.1. As opposed to the accumulation part, this part consists of a simple large matrix operation and can be run very efficiently using specialized numerical linear algebra code. As shown in Section 5, the extension of hyperparameter learning to the hierarchical case of Section 3 is simply done by wrapping the accumulation part, the coding and additional memory effort being minimal. Given a method for computing Ψ and ∇hΨ, we plug these into a custom optimizer such as Quasi-Newton in order to learn h. 5 Computational Details In this Section, we provide details for the general plan laid out above. It is precisely these which characterize our framework and allow us to apply a standard model to domains beyond its usual applications, but of interest to Machine Learning. Recall Section 2. We minimize Φ by choosing search directions s, and doing line minimizations along α + λs, λ > 0. For the latter, we maintain the pair (α, u), u = ˜Kα. We have: ∇uΦ = π −y + α, π = exp(u −1 ⊗l), i.e. πi,c = P(yi,c = 1|ui). (4) Given (α, u), Φ and ∇uΦ can be computed in O(n C), without requiring MVMs. This suggests to perform the line search in u along the direction ˜s = ˜Ks, the corresponding α can be constructed from the final λ. Since kernel MVMs are significantly more expensive than these O(n C) operations, the line searches basically come for free! We choose search directions by Newton-Raphson (NR)4, since the Hessian of Φ is required anyway for hyperparameter learning. Let D = diag π, P = (1⊗I)(1T ⊗I), and W = D −DP D. 
We have ∇∇uΦlh = W , and g = ∇uΦlh = π −y from Eq. 4. The NR system is (I + W ˜K)α′ = W u −g, with the NR direction being s = α′ −α. If V = (I −DP )D1/2, then W = V V T , because (1T ⊗I)D = I. We see that α′ = V β (using (1T ⊗I)g = 0), and we can obtain it from the equivalent symmetric system I + V T ˜KV β = V T u −D−1/2(π −y), α′ = V β (5) 4Initial experiments with conjugate gradients in α gave very slow convergence, due to poor conditioning, but experiments with a different dual criterion are in preparation. (details are in [8]). Note that P x = (P c′ x(c′))c, so that MVM with V can be done in O(n C). The NR direction is obtained by solving this system approximately by the linear conjugate gradients (LCG) method, requiring a MVM with the system matrix in each iteration, thus a single MVM with K. Our implementation includes diagonal preconditioning and numerical stability safeguards [8]. The NR system need not be solved to high accuracy (see Section 6.2). Initially, β = D−1/2α, because then V β = α if only (1T ⊗I)α = 0, which is true if the initial α fulfils it. We now show how to compute the gradient ∇hΨ for the CV criterion Ψ (Eq. 3). Note that α[J] is determined by the stationary equation α[J] + g[J] = 0. Taking the derivative gives dα[J] = −W [J]((dKJ)α[J] + ˜K J(dα[J])). We obtain a system for dα[J] which is symmetrized as above: (I + V T [J] ˜K JV [J])β = −V T [J](dKJ)α[J], dα[J] = V [J]β. Also, dΨI = (π[I] −yI)T ((dKI,J)α[J] + ˜K I,J(dα[J])). With s = I·,I(π[I] −yI) −I·,JV [J](I + V T [J] ˜K JV [J])−1V T [J] ˜K J,I(π[I]−yI), we have that dΨI = (I·,Jα[J])T (dK)s. If we collect these vectors as columns of E, F ∈RnC,q, we have that dΨ = tr ET (dK)F . In the hierarchical setup, we use Eq. 2: ˜E = (ΦT L,· ⊗I)E ∈RnP,q, ˜F accordingly, then dΨ = tr ˜E T (d ˘K) ˜F . Here, we build E, F in the buffers allocated for ˜E, ˜F , then transform them later in place. 
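The factorization W = V V^T behind the symmetrized system of Eq. (5) can be checked per data point: after reordering, each point contributes the C × C softmax-Hessian block diag(π_i) − π_i π_i^T. A minimal numerical check:

```python
import numpy as np

pi = np.array([0.2, 0.5, 0.3])      # softmax probabilities for one point, sum to 1
D = np.diag(pi)
W = D - np.outer(pi, pi)            # this point's block of W = D - D P D
# this point's block of V = (I - D P) D^{1/2}; here D P reduces to pi 1^T
V = (np.eye(3) - np.outer(pi, np.ones(3))) @ np.sqrt(D)
# V V^T = D - pi pi^T - pi pi^T + (1^T pi) pi pi^T = D - pi pi^T = W,
# using 1^T D = pi^T, D 1 = pi and 1^T pi = 1
```

The identity is what lets the nonsymmetric Newton system (I + W K̃)α′ = W u − g be rewritten in the symmetric form of Eq. (5), which plain linear conjugate gradients can handle.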
We finally mention some of the computational “tricks”, without which we could not have dealt with the largest tasks in Section 6.2 (for section B, a single n C vector requires 88M of memory). For the linear kernel (see Section 5.1), the main primitive A 7→XXT A can be coded very efficiently using a standard sparse matrix format for X. If A is stored row-major (a1,1, a1,2, . . . ), the computation becomes faster by a factor of 4 to 6 compared to the standard column-major format5. For hyperparameter learning, we work on subsets Jk and need MVMs with ˜K Jk. “Covariance representation shuffling” permutes the representation s.t. ˜K Jk sits in the upper left part, and MVM can use flat rather than indexed code, which is many times faster. We also share memory blocks of size n C between LCG, gradient accumulation, line searches, keeping the overall memory requirements at r n C for a small constant r, and avoiding frequent reallocations. 5.1 Matrix-Vector Multiplication MVM with K is the bottleneck of our framework, and all efforts should be concentrated on this primitive. We can tap into much prior work in numerical mathematics. With many classes C, we may share kernels: K(c) = vcM (lc), vc > 0 variance parameters, M (l) independent correlation functions. Our generic implementation stores two symmetric matrices M (l) in a n × n buffer. The linear kernel K(c)(x, x′) = vcxT x′ is frequently used for text classification (see Section 6.2). If the data matrix X is sparse, kernel MVM can be done in much less than the generic O(C n2), typically in O(C n), requiring O(n) storage for X only, even if the dimension of x is way beyond n. If the K(c) are isotropic kernels (depending on ∥x−x′∥only) and the x are low-dimensional, MVM with K(c) can be approximated using specialized nearest neighbour data structures such as KD trees [12, 9]. Again, the MVM cost is typically O(C n) in this case. 
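The linear-kernel MVM primitive A ↦ X X^T A from Section 5.1 never forms the n × n kernel matrix; it is just two products evaluated right to left. A dense numpy sketch (in practice X would be kept in a sparse row-major format, making the cost O(nnz(X) · q)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, q = 300, 2000, 4
# sparse-ish bag-of-words style data matrix: roughly 1% nonzeros
X = rng.normal(size=(n, d)) * (rng.random(size=(n, d)) < 0.01)
A = rng.normal(size=(n, q))          # q right-hand sides

B = X @ (X.T @ A)                    # O(n d q); the n x n matrix X X^T is never built
```

Associativity guarantees the result equals (X X^T) A, while the memory footprint stays at O(nd + nq).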
For general kernels whose kernel matrices have a rapidly decaying eigenspectrum, one can approximate MVM by using low-rank matrices instead of the K(c) [10], whence MVM is O(C n d), d the rank. In Section 4 we also need MVM with the derivatives (∂/∂hj)K(c). Note that (∂/∂log vc)K(c) = K(c), reducing to kernel MVM. For isotropic kernels, K(c) = f(A), ai,j = ∥xi −xj∥, so (∂/∂hj)K(c) = gj(A). If KD trees are used to approximate A, they can be used equivalently (and with little additional cost) for computing derivative MVMs. 5The innermost vector operations work on contiguous chunks of memory, rather than strided ones, thus supporting cacheing or vector functions of the processor. 6 Experiments In this Section, we provide experimental results for our framework on data from remote sensing, and on a set of large text classification tasks with very many classes, the latter are hierarchical. 6.1 Flat Classification: Remote Sensing We use the satimage remote sensing task from the statlog repository.6 This task has been used in the extensive SVM multi-class study of [5], where it is among the datasets on which the different methods show the most variance. It has n = 4435 training, m = 2000 test cases, and C = 6 classes. We use the isotropic Gaussian (RBF) kernel K(c)(x, x′) = vc exp −wc 2d ∥x −x′∥2 , vc, wc > 0, x, x′ ∈Rd. (6) We compare the methods mc-sep (ours with separate kernels for each class; 12 hyperparameters), mc-tied (ours with a single shared kernel; 2 hyperparameters), 1rest (one-against-rest: C binary classifiers are trained separately to discriminate c from the rest, they are voted by log probability upon prediction; 12 hyperparameters). Note that 1rest is arguably the most efficient method which can be used for multi-class, because its binary classifiers can be fitted separately and in parallel. Even if run sequentially, 1rest requires less memory by a factor of C than a joint multi-class method. We use our 5-fold CV criterion Ψ for each method. 
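The isotropic Gaussian kernel of Eq. (6), vectorized over two sets of inputs; the helper function is our own illustration:

```python
import numpy as np

def rbf_kernel(X1, X2, v, w):
    """K^(c)(x, x') = v * exp(-(w / (2 d)) * ||x - x'||^2), as in Eq. (6)."""
    d = X1.shape[1]
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)  # pairwise ||x - x'||^2
    return v * np.exp(-(w / (2.0 * d)) * sq)

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 3))
K = rbf_kernel(X, X, v=2.0, w=1.0)   # 6 x 6 symmetric kernel matrix, diagonal = v
```

The variance parameter v_c scales the whole matrix, while w_c sets the inverse squared length scale; both are among the 12 hyperparameters learned per class in mc-sep.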
Results here are averaged over ten randomly drawn 5-partitions of the training set (the same partitions are used for the different methods). The test error (in percent) of mc-sep is 7.81 vs. 8.01 for 1rest. The result for mc-sep is state-of-the-art; for example, the best SVM technique tested in [5] attained 7.65, and SVM one-against-rest attained 8.30 in this study. Note that while 1rest also may choose 12 independent kernel parameters, it does not make good use of this possibility, as opposed to mc-sep. mc-tied has test error 8.37, suggesting that tying kernels leads to significant degradation. ROC curves for the different methods are given in [8], showing that mc-sep also profits from estimating the predictive probabilities in a better way.

6.2 Hierarchical Classification: Patent Text Classification

We use the WIPO-alpha collection^7 previously studied in [1], where patents (title and claim text) are to be classified w.r.t. the standard taxonomy IPC, a tree with 4 levels and 5229 nodes. Sections A, B, . . . , H form the first level. As in [1], we concentrate on the 8 subtasks rooted at the sections, ranging from D (n = 1140, C = 160, P = 187) to B (n = 9794, C = 1172, P = 1319). We use linear kernels (see Section 5.1) with variance parameters v_c. All experiments are averaged over three training/test splits, different methods using the same ones. Ψ is used with a different 5-partition per section and split, the same across all methods. Our method outputs a predictive p_j ∈ R^C for each test case x_j. The standard prediction y(x_j) = argmax_c p_{j,c} maximizes expected accuracy; classes are ranked as r_j(c) ≤ r_j(c′) iff p_{j,c} ≥ p_{j,c′}. The test scores are the same as in [1]: accuracy (acc) m^{−1} Σ_j I{y(x_j) = y_j}, precision (prec) m^{−1} Σ_j r_j(y_j)^{−1}, parent accuracy (pacc) m^{−1} Σ_j I{par(y(x_j)) = par(y_j)}, par(c) being the parent of L(c). Let Δ(c, c′) be half the length of the shortest path between leafs L(c), L(c′). The taxo-loss (taxo) is m^{−1} Σ_j Δ(y(x_j), y_j).
These scores are motivated in [1]. For taxo-loss and parent accuracy, we better choose y(xj) to minimize expected loss8, different from the standard prediction. We compare methods F1, F2, H1, H2 (F: flat; H: hierarchical). F1: all vc shared (1); H1: vc shared across each level of the tree (3). F2, H2: vc shared across each subtree rooted at root’s children (A: 15, B: 34, C: 17, D: 7, E: 7, F: 17, G: 12, H: 5). Recall that there are 3 accuracy parameters. For hyperparameter learning: k1 = 8, k2 = 4, k3 = 15 (F1, F2); k1 = 10, k2 = 4, k3 = 25 (H1, H2)9. 6Available at http://www.niaad.liacc.up.pt/old/statlog/. 7Raw data from www.wipo.int/ibis/datasets. Label hierarchy described at www.wipo.int/classifications/en. Thanks to L. Cai, T. Hofmann for providing us with the count data and dictionary. We did Porter stemming, stop word removal, and removal of empty categories. The attributes are bag-of-words over the dictionary of occuring words. All cases xi were scaled to unit norm. 8For parent accuracy, let p(j) be the node with maximal mass (under pj) of its children which are leafs, then y(xj) must be a child of p(j). 9Except for section C, where k1 = 14, k2 = 6, k3 = 35. 
          acc (%)                  prec (%)                 taxo
     F1   H1   F2   H2        F1   H1   F2   H2        F1   H1   F2   H2
A   40.6 41.9 40.5 41.9      51.6 53.4 51.4 53.4      1.27 1.19 1.29 1.19
B   32.0 32.9 31.7 32.7      41.8 43.8 41.6 43.7      1.52 1.44 1.55 1.44
C   33.7 34.7 34.1 34.5      45.2 46.6 45.4 46.4      1.34 1.26 1.35 1.27
D   40.0 40.6 39.7 40.8      52.4 54.1 52.2 54.3      1.19 1.11 1.18 1.11
E   33.0 34.2 32.8 34.1      45.1 47.1 45.0 47.1      1.39 1.31 1.38 1.31
F   31.4 32.4 31.4 32.5      42.8 44.9 42.8 45.0      1.43 1.34 1.43 1.34
G   40.1 40.7 40.2 40.7      51.2 52.5 51.3 52.5      1.32 1.26 1.32 1.26
H   39.3 39.6 39.4 39.7      52.4 53.3 52.5 53.4      1.17 1.15 1.17 1.14

          taxo[0-1]               pacc (%)                pacc[0-1] (%)
     F1   H1   F2   H2        F1   H1   F2   H2        F1   H1   F2   H2
A   1.28 1.19 1.29 1.18      58.9 61.6 58.2 61.5      57.2 61.3 56.9 61.4
B   1.54 1.44 1.56 1.44      53.6 56.4 52.7 56.6      51.9 55.9 51.4 55.9
C   1.33 1.26 1.32 1.26      58.9 62.6 58.5 62.0      58.6 61.8 58.9 61.6
D   1.20 1.12 1.22 1.12      64.6 67.0 64.4 67.1      63.5 67.1 62.6 67.0
E   1.43 1.33 1.44 1.34      56.0 59.1 56.2 59.2      54.0 58.2 53.5 57.9
F   1.43 1.34 1.44 1.34      56.8 59.7 56.8 59.8      54.9 58.7 54.6 58.9
G   1.32 1.26 1.32 1.26      58.0 59.7 57.6 59.6      56.8 59.2 56.6 58.9
H   1.19 1.16 1.19 1.15      61.6 62.5 61.8 62.5      59.9 61.6 60.0 61.8

Table 1: Results on tasks A-H. Methods F1, F2 flat, H1, H2 hierarchical. taxo[0-1], pacc[0-1] for argmax_c p_{j,c} rule, rather than minimize expected loss.

     Final NR (s)    CV Fold (s)          Final NR (s)    CV Fold (s)
      F1     H1      F1     H1             F1     H1      F1     H1
A    2030   3873    573    598       E    131.5  203.4   32.2   49.6
B    3751   8657    873   1720       F    1202   2871    426    568
C    4237   7422    719   1326       G    1342   2947    232    579
D    56.3   118.5   9.32   20.2      H    971.7  1052    146    230

Table 2: Running times for tasks A-H. Method F1 flat, H1 hierarchical. CV Fold: Re-optimization of α_[J], gradient accumulation for single fold.

For final fitting: k1 = 25, k2 = 12 (F1, F2); k1 = 30, k2 = 17 (H1, H2). The optimization is started from v_c = 5 for all methods. Results are given in Table 1. The hierarchical model outperforms the flat one consistently.
While the differences in accuracy and precision are hardly significant (as also found in [1]), they (partly) are in taxo-loss and parent accuracy. Also, minimizing expected loss is consistently better than using the standard rule for the latter, although the differences are very small. H1 and H2 do not perform differently: choosing many different vc in the linear kernel seems no advantage here (but see Section 6.1). The results are very similar to the ones of [1]. However, for our method, the recommendation in [1] to use vc = 1 leads to significantly worse results in all scores, the vc chosen by our methods are generally larger. In Table 2, we present running times10 for the final fitting and for a single fold during hyperparameter optimization (5 of them are required for Ψ, ∇hΨ). Cai and Hofmann [1] quote a final fitting time of 2200s on the D section, while we require 119s (more than 18 times faster). It is precisely this high efficiency of primary fitting which allows us to use it as inner loop for hyperparameter learning. 7 Discussion We presented a general framework for very efficient large scale kernel multi-way classification with structured label spaces and demonstrated its features on hierarchical text classification tasks with many classes. As shown for the hierarchical case, the framework is easily extended to novel struc10Processor time on 64bit 2.33GHz AMD machines. tural priors or covariance functions, and while not shown here, it is also easy to extend it to different likelihoods (as long as they are log-concave). We solve the kernel parameter learning problem by optimizing the CV log likelihood, whose gradient can be computed within the framework. Our method provides estimates of the predictive distribution at test points, which may result in better predictions for non-standard losses or ROC curves. Efficient and easily extendable code is publicly available (see Section 1). An extension to multi-label classification is planned. 
More advanced label set structures can be addressed, noting that Hessian-vector products can often be computed in about the same way as gradients. An application to label sequence learning is work in progress, which may even be combined with a hierarchical prior. Inferring a hierarchy from data is possible in principle, using expectation maximization techniques (note that the primary fitting can deal with target distributions yi), as can incorporating uncertain data. Empirical Bayesian methods or approximate CV scores for hyperparameter learning have been proposed in [11, 3, 6], but they are orders of magnitude more expensive than our proposal here, and do not apply to a massive number of classes. Many multi-class SVM techniques are available (see [2, 5] for references). Here, fitting is a constrained convex problem, and often fairly sparse solutions (many zeros in α) are found. However, if the degree of sparsity is not large, the first-order conditional gradient methods typically applied can be slow (see footnote 11 below). SVM methods typically do not come with efficient automatic kernel parameter learning schemes, and they do not provide estimates of predictive probabilities which are asymptotically correct.

Acknowledgments

Thanks to Olivier Chapelle for many useful discussions. Supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778.

References

[1] L. Cai and T. Hofmann. Hierarchical document categorization with support vector machines. In CIKM 13, pages 78-87, 2004.
[2] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. J. Mach. Learn. Res., 2:265-292, 2001.
[3] P. Craven and G. Wahba. Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation. Numerische Mathematik, 31:377-403, 1979.
[4] P. J. Green and B. Silverman. Nonparametric Regression and Generalized Linear Models.
Monographs on Statistics and Probability. Chapman & Hall, 1994.
[5] C.-W. Hsu and C.-J. Lin. A comparison of methods for multi-class support vector machines. IEEE Transactions on Neural Networks, 13:415-425, 2002.
[6] Y. Qi, T. Minka, R. Picard, and Z. Ghahramani. Predictive automatic relevance determination by expectation propagation. In Proceedings of ICML 21, 2004.
[7] M. Seeger. Gaussian processes for machine learning. International Journal of Neural Systems, 14(2):69-106, 2004.
[8] M. Seeger. Cross-validation optimization for structured Hessian kernel methods. Technical report, Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2006. See www.kyb.tuebingen.mpg.de/bs/people/seeger.
[9] Y. Shen, A. Ng, and M. Seeger. Fast Gaussian process regression using KD-trees. In Advances in NIPS 18, 2006.
[10] A. Smola and P. Bartlett. Sparse greedy Gaussian process regression. In Advances in NIPS 13, pages 619-625, 2001.
[11] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE PAMI, 20(12):1342-1351, 1998.
[12] C. Yang, R. Duraiswami, and L. Davis. Efficient kernel machines using the improved fast Gauss transform. In Advances in NIPS 17, pages 1561-1568, 2005.

Footnote 11: These methods solve a very large number of small problems iteratively, as opposed to ours, which does few expensive Newton steps. The latter kind, if feasible at all, often makes better use of hardware features such as caching and vector operations, and is therefore the preferred approach in numerical optimization.
Effects of Stress and Genotype on Meta-parameter Dynamics in Reinforcement Learning

Gediminas Lukšys 1,2 (gediminas.luksys@epfl.ch), Jérémie Knüsel 1 (jeremie.knuesel@epfl.ch), Denis Sheynikhovich 1 (denis.sheynikhovich@epfl.ch), Carmen Sandi 2 (carmen.sandi@epfl.ch), Wulfram Gerstner 1 (wulfram.gerstner@epfl.ch)
1 Laboratory of Computational Neuroscience, 2 Laboratory of Behavioral Genetics
École Polytechnique Fédérale de Lausanne, CH-1015, Switzerland

Abstract

Stress and genetic background regulate different aspects of behavioral learning through the action of stress hormones and neuromodulators. In reinforcement learning (RL) models, meta-parameters such as learning rate, future reward discount factor, and exploitation-exploration factor control learning dynamics and performance. They are hypothesized to be related to neuromodulatory levels in the brain. We found that many aspects of animal learning and performance can be described by simple RL models using dynamic control of the meta-parameters. To study the effects of stress and genotype, we carried out 5-hole-box light conditioning and Morris water maze experiments with C57BL/6 and DBA/2 mouse strains. The animals were exposed to different kinds of stress to evaluate their effects on immediate performance as well as on long-term memory. Then, we used RL models to simulate their behavior. For each experimental session, we estimated a set of model meta-parameters that produced the best fit between the model and the animal performance. The dynamics of several estimated meta-parameters were qualitatively similar for the two simulated experiments, with statistically significant differences between different genetic strains and stress conditions.

1 Introduction

Animals choose their actions based on reward expectation and motivational drives. Different aspects of learning are known to be influenced by acute stress [1, 2, 3] and genetic background [4, 5].
Stress effects on learning depend on the stress type (e.g. task-specific or unspecific) and intensity, as well as on the learning paradigm (e.g. spatial/episodic vs. procedural learning) [3]. It is known that stress can affect short- and long-term memory by modulating plasticity through stress hormones and neuromodulators [1, 2, 3, 6]. However, there is no integrative model that would accurately predict and explain differential effects of acute stress. Although stress factors can be described in quantitative terms, their effects on learning, memory, and performance are strongly influenced by how an animal perceives them. The subjective experience can be influenced by emotional memories as well as by behavioral genetic traits such as anxiety, impulsivity, and novelty reactivity [4, 5, 7]. In the present study, behavioral experiments conducted on two different genetic strains of mice and under different stress conditions were combined with a modeling approach. In our models, behavioral performance as a function of time was described in the framework of temporal difference reinforcement learning (TDRL). In TDRL models [8] a modeled animal, termed an agent, can occupy various states and undertake actions in order to acquire rewards. The expected values of cumulative future reward (Q-values) are learned by observing immediate rewards delivered under different state-action combinations. Their update is controlled by certain meta-parameters such as the learning rate, future reward discount factor, and memory decay/interference factor. The Q-values (together with the exploitation/exploration factor) determine which actions are more likely to be chosen when the animal is in a certain state, i.e. they represent the goal-oriented behavioral strategy learned by the agent. The activity of certain neuromodulators in the brain is thought to be associated with the role the meta-parameters play in the TDRL models.
Besides dopamine (DA), whose levels are known to be related to the TD reward prediction error [9], serotonin (5-HT), noradrenaline (NA), and acetylcholine (ACh) have been discussed in relation to TDRL meta-parameters [10]. Thus, knowledge of the characteristic meta-parameter dynamics can give an insight into the putative neuromodulatory activities in the brain. Dynamic parameter estimation approaches, recently applied to behavioral data in the context of TDRL [11], could be used for this purpose. In our study, we carried out 5-hole-box light conditioning and Morris water maze experiments with C57BL/6 and DBA/2 inbred mouse strains (referred to as C57 and DBA from now on), renowned for their differences in anxiety, impulsivity, and spatial learning [4, 5, 12]. We exposed subgroups of animals to different kinds of stress (such as motivational stress or task-specific uncertainty) in order to evaluate their effects on immediate performance, and also tested the animals' long-term memory after a break of 4-7 weeks. Then, we used TDRL models to describe the mouse behavior and established a number of performance measures relevant to task learning and memory (such as mean response times and latencies to platform) in order to compare the outcome of the model with the animal performance. Finally, for each experimental session we ran an optimization procedure to find the set of meta-parameters that best fit the experimental data as quantified by the performance measures. This approach made it possible to relate the effects of stress and genotype to differences in the meta-parameter values, allowing us to make specific inferences about learning dynamics (generalized over two different experimental paradigms) and their neurobiological correlates.

2 Reinforcement learning model of animal behavior

In the TDRL framework [8] animal behavior is modelled as a sequence of actions.
After an action is performed, the animal is in a new state where it can again choose from a set of possible actions. In certain states the animal is rewarded, and the goal of learning is to choose actions so as to maximize the expected future reward, or Q-value, formally defined as

Q(s_t, a_t) = E\left[ \sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \,\Big|\, s_t, a_t \right] ,   (1)

where (s_t, a_t) is the state-action pair, r_t is the reward received at time step t, and 0 < γ < 1 is the future reward discount factor, which controls to what extent future rewards are taken into account. As soon as state s_{t+1} is reached and a new action is selected, the estimate of the previous state's value Q(s_t, a_t) is updated based on the reward prediction error δ_t [8]:

\delta_t = r_{t+1} + \gamma Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) ,   (2)
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \delta_t ,   (3)

where α is the learning rate. Action selection at each state is controlled by the exploitation factor β, such that actions with high Q-values are chosen more often when β is high, whereas mostly random actions are chosen when β is close to zero. The meta-parameters α, β, and γ are the free parameters of the model.

3 5-hole-box experiment and modeling

Experimental subjects were male mice (24 of the C57 strain and 24 of the DBA strain), 2.5 months old at the beginning of the experiment, and food-deprived to 85-90% of their initial weight. During an experimental session, each animal was placed into the 5-hole-box (5HB) (Figure 1a). The animals had to learn to make a nose poke into any of the holes upon the onset of lights and not to make one in the absence of light. After the response to light, the animals received a reward in the form of a food pellet. Once a poke was initiated (see starting a poke in Figure 1b), the mouse had to stay in the hole for at least a short time (0.3-0.5 sec) in order to find the delivered reward (continuing a poke). A trial ended (lights turned off) as soon as the nose poke was finished.
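As a concrete illustration (not the authors' code), the TD update of Eqs. (2)-(3) and a β-controlled soft-max action choice can be sketched in a few lines of Python; the state/action encoding, reward, and parameter values below are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def td_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """Eqs. (2)-(3): delta = r + gamma*Q(s',a') - Q(s,a); Q += alpha*delta."""
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]
    Q[s, a] += alpha * delta
    return delta

def choose_action(Q, s, beta):
    """Soft-max action selection: high beta exploits, beta near 0 explores."""
    z = beta * Q[s]
    z = z - z.max()                     # numerical-stability shift
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)

# Toy run: 2 states x 2 actions, reward 1 for action 1 in state 0
Q = np.zeros((2, 2))
delta = td_update(Q, s=0, a=1, r=1.0, s_next=1, a_next=0)
a = choose_action(Q, s=0, beta=10.0)   # now biased toward action 1
```

With β = 10 the updated Q-value already biases the choice toward action 1; with β near zero the two actions would be chosen almost uniformly.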
If the mouse did not find the reward, the reward remained in the box and the animal could find it during the next poke in the same box. The inter-trial interval (ITI) between subsequent trials was 15 sec. However, a new trial could only start when there were no wrong (ITI) pokes during the last 3 sec before it, so as to penalize spontaneous poking. The total session time was 10 min. Hence, the number of trials depended on how fast animals responded to light and how often they made ITI pokes.

[Figure 1: a. Scheme of the 5HB experiment. Open circles are the holes where the food is delivered, filled circles are the lights. All 5 holes were treated as equivalent during the experiment. b. 5HB state-action chart. Rectangles are states, arrows are actions.]

After 2 days of habituation, during which the mice learned that food could be delivered in the holes, they underwent 8 consecutive days of training. During days 5-7 subsets of the animals were exposed to different stress conditions: motivational stress (MS, food deprivation to 85-87% of the initial weight vs. 88-90% in controls) and uncertainty in the reward delivery (US, in 50% of correct responses they received either none or 2 food pellets). Mice of each strain were divided into 4 stress groups: controls, MS, US, and MS+US. After a break of 26 days the long-term memory of the mice was tested by retraining them for another 8 days. During days 5-8 of the retraining, we again evaluated the impact of stress factors by exposing half of the mice to extrinsic stress (ES, 30 min on an elevated platform right before the 5HB experiment).
To model the mouse behavior we used a discrete-state TDRL model with 6 states, [ITI, trial] × [staying outside, starting a poke, continuing a poke], and 2 actions, move (in or out) and stay (see Figure 1b). Actions were chosen according to the soft-max method [8]:

p(a|s) = \exp(\beta Q(s, a)) \Big/ \sum_k \exp(\beta Q(s, a_k)) ,   (4)

where k runs over all actions and β is the exploitation factor. Initial Q-values were equal to zero. Since the time spent outside the holes was comparatively long and included multiple (task-irrelevant) actions, the state/action pair staying outside/stay was given much more weight in the above formula. The time step (0.43 sec) was constant throughout the experiment and was chosen to fit the animal performance in the beginning of the experiment. Finally, to account for memory decay, after each day all Q(s, a) values were updated as follows:

Q(s, a) \leftarrow Q(s, a) \cdot (1 - \lambda) + \langle Q(s, a) \rangle_{s,a} \cdot \lambda ,   (5)

where λ is a memory decay/interference factor, and ⟨Q(s, a)⟩_{s,a} is the average over Q-values for all states and all actions at the end of the day. All performance measures (PMs) used in the 5HB paradigm (number of trials, number of ITI pokes, mean response time, mean poke length, TimePref (footnote 1) and LengthPref (footnote 2)) were evaluated over the entire session (10 min, 1400 time steps), during which different states (footnote 3) could be visited multiple times. As opposed to an online "SARSA"-type update of Q-values, we work with state occupancy probabilities p(s_t) and update Q-values with the following reward prediction error:

\delta_t = E[r_t] - Q(a_t, s_t) + \gamma \sum_{a_{t+1}, s_{t+1}} Q(a_{t+1}, s_{t+1}) \cdot p(a_{t+1}, s_{t+1} | a_t, s_t) .

Footnote 1: TimePref = (average time between adjacent ITI pokes) / (average response time).
Footnote 2: LengthPref = (average response length) / (average ITI poke length).
Footnote 3: Including the pseudo-states corresponding to time steps within the 15 sec ITI.
(6)

4 Morris water maze experiment and modeling

The same mice as in the 5HB (4.5 months old at the beginning of the experiment) were tested in a variant of the Morris water maze (WM) task [13]. Starting from one of 4 starting positions in a circular pool filled with an opaque liquid, they had to learn the location of a hidden escape platform using stable extra-maze cues (Fig. 2a). Animals were initially trained for 4 days with 4 sessions a day (to avoid confusion with the 5HB, we consider each WM session as consisting of only one trial). Trial length was limited to 60 s, and the inter-session interval was 25 min. Half of the mice had to swim in cold water of 19°C (motivational stress, MS), while the rest were learning at 26°C (control). After a 7-week break, 3-day-long memory testing was done at 22-23°C for all animals. Finally, after another 2 weeks, the mice performed the task for 5 more days: half of them did a version with uncertainty stress (US), where the platform location varied randomly between the old position and the rotationally opposite one; the other half did the same task as before. Behavior was quantified using the following 4 PMs: time to reach the goal (escape latency), and time spent in the target platform quadrant, the opposite platform quadrant, and the wall region (Fig. 2a).
[Figure 2: WM experiment and model. a. Experimental setup. 1 - target platform quadrant, 2 - opposite platform quadrant, 3 - wall region. Small filled circles mark the 4 starting positions, the large filled circle marks the target platform, the open circle marks the opposite platform (used only in the US condition); pool diameter 1.4 m. b. Activities of place cells (PC) encode the position of the animal in the WM; activities of action cells (AC) encode the direction of the next movement.]

A TDRL paradigm (1)-(3) in continuous state and action spaces has been used to model the mouse behavior in the WM [14, 15]. The position of the animal is represented as a population activity of N_pc = 211 'place cells' (PC) whose preferred locations are distributed uniformly over the area of a modelled circular arena (Fig. 2b). The activity of place cell j is modelled by a Gaussian centered at the preferred location \vec{p}_j of the cell:

r_j^{pc} = \exp\left( - \| \vec{p} - \vec{p}_j \|^2 / 2 \sigma_{pc}^2 \right) ,   (7)

where \vec{p} is the current position of the modelled animal and σ_pc = 0.25 defines the width of the spatial receptive field relative to the pool radius. Place cells project to a population of N_ac = 36 'action cells' (AC) via feed-forward all-to-all connections with modifiable weights. Each action cell is associated with an angle φ_i, all φ_i being distributed uniformly in [0, 2π]. Thus, an activity profile on the level of place cells (i.e. state s_t) causes a different activity profile on the level of the action cells depending on the value of the weight vector. The activity of action cell i is considered as the value of the action (defined as a movement in direction φ_i; see footnote 4):

Q(s_t, a_t) = r_i^{ac} = \sum_j w_{ij} r_j^{pc} .   (8)

Footnote 4: A constant step length was chosen to fit the average speed of the animals during the experiment.
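The place-cell/action-cell pipeline of Eqs. (7)-(8), followed by the Hebbian-like weight update that the text describes next, can be sketched as follows. N_pc, N_ac and σ_pc follow the text, but the weight initialization, the query position, and the TD error are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N_pc, N_ac, sigma_pc = 211, 36, 0.25

# Preferred locations distributed uniformly over the unit-radius pool
rad = np.sqrt(rng.random(N_pc))
ang = 2 * np.pi * rng.random(N_pc)
pref = np.stack([rad * np.cos(ang), rad * np.sin(ang)], axis=1)

W = 0.01 * rng.random((N_ac, N_pc))  # PC -> AC weights (placeholder init)

def place_cell_activity(p):
    """Eq. (7): Gaussian tuning around each cell's preferred location."""
    d2 = ((pref - p) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma_pc ** 2))

def action_values(p):
    """Eq. (8): action-cell activity = Q-value of moving in direction phi_i."""
    return W @ place_cell_activity(p)

# Hebbian-like weight update (outer product of AC and PC activities,
# scaled by a toy TD error)
p = np.array([0.2, -0.1])
alpha, delta = 0.005, 1.0
W += alpha * delta * np.outer(action_values(p), place_cell_activity(p))
```

The outer product makes every PC-AC weight grow in proportion to the joint activity of its pre- and post-synaptic cells, which is what makes the update "Hebbian-like" despite being driven by the TD error.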
Action a* is defined as movement in the direction of the center of mass φ* of the AC population (footnote 5). The Q-value corresponding to an action with continuous angle φ is calculated by linear interpolation between the activities of the two closest action cells. During learning, the PC→AC connection weights are updated at each time step so as to decrease the reward prediction error δ_t (3):

\Delta w_{ij} = \alpha \, \delta \, r_i^{ac} r_j^{pc} .   (9)

The Hebbian-like form of the update rule (9) is due to the fact that we use distributed representations for states and actions, i.e. there is no single state/action pair responsible for the last movement. To simulate one experimental session it is necessary to (i) initialize the weight matrix {w_{ij}}, (ii) choose meta-parameter values and a starting position \vec{p}_0, (iii) compute (7)-(8) and perform the corresponding movements until \| \vec{p} - \vec{p}_{pl} \| < R_pl, at which point reward r = 15 is delivered (R_pl is the platform radius). Wall hits result in a small negative reward (r_wall = -3). For each session and each set of meta-parameters, 48 different sets of random initial weights w_{ij} (corresponding to individual mice) were used to run the model, with 50 simulations started from each set. Final values of the PMs were averaged over all repetitions for each subgroup of mice. To account for the loss of memory, after each day all weights were updated as follows:

w_{ij}^{new} = w_{ij}^{old} \cdot (1 - \lambda) + w_{ij}^{initial} \cdot \lambda ,   (10)

where λ is the memory decay factor, w_{ij}^{old} is the weight value at the end of the day, and w_{ij}^{initial} is the initial weight value before any learning took place.

5 Goodness-of-fit function and optimization procedure

To compare the model with the experiment we used the following goodness-of-fit function [16]:

\chi^2 = \sum_{k=1}^{N_{PM}} \left( PM_k^{exp} - PM_k^{mod}(\alpha, \beta, \gamma, \lambda) \right)^2 / (\sigma_k^{exp})^2 ,   (11)

where PM_k^{exp} and PM_k^{mod} are the PMs calculated for the animals and the model, respectively, and N_PM is the number of PMs.
PM_k^{mod}(α, β, γ, λ) are calculated after simulating one session with fixed values of the meta-parameters. PM_k^{exp} were calculated either for each animal (5HB) or for each subgroup (WM). Using stochastic gradient ascent, we minimized (11) with respect to α, β, γ for each session separately, by systematically varying the meta-parameters in the following ranges: for the WM, α ∈ [10^-5, 5·10^-2] and β, γ ∈ [0.01, 0.99]; for the 5HB, α, γ ∈ [0.03, 0.99] and β ∈ [0.3, 9.9]. The decay factor λ ∈ [0.01, 0.99] was estimated only for the first session after the break; otherwise constant values of λ = 0.03 (5HB) and λ = 0.2 (WM) were used. Several control procedures were performed to ensure that the meta-parameter optimization was statistically efficient and self-consistent. To evaluate how well the model fits the experimental data we used a χ²-test with ν = N_PM - 3 degrees of freedom (since most of the time we had only 3 free meta-parameters). The P(χ², ν) value, defined as the probability that a realization of a chi-square-distributed random variable would exceed χ² by chance, was calculated for each session separately. Generally, values of P(χ², ν) > 0.01 correspond to a fairly good model [16]. To check the reliability of the estimated meta-parameters we used the same optimization procedure with PM_k^{exp} artificially generated by the model itself. In a self-consistent model such a procedure is expected to find meta-parameter values similar to those with which the PMs were generated. Finally, to see how well the model generalizes to previously unseen data, we used half of the available experimental data for optimization and tested the estimated parameters on the other half. Then we evaluated χ² and P(χ², ν) values for the testing as well as the training data.

6 Results

The meta-parameter estimation procedure was performed for the models of both experiments using stochastic gradient ascent in the χ² goodness-of-fit. For the 5HB, meta-parameters were estimated for
Footnote 5: i.e. φ* = arctan( \sum_i r_i^{ac} \sin(2\pi i / N_{ac}) \big/ \sum_i r_i^{ac} \cos(2\pi i / N_{ac}) ).

[Figure 3: a. Example of PM evolution with learning in the WM (platform quadrant time, top) and in the 5HB (mean response time, bottom), for model and experimental data. b. Self-consistency check: true (open circles) and estimated (filled circles) meta-parameter values (learning rate, exploitation factor, discount rate) for the 24 random sets in the 5HB.]

each animal and each experimental day. Further (sub)group values were calculated by averaging the individual estimations. For the WM, meta-parameters were estimated for each subgroup and each experimental session. Learning dynamics in both experiments are illustrated in Figure 3a for 2 representative PMs, where average performances for all mice and the corresponding models (with estimated meta-parameters) are shown. The results of both meta-parameter estimation procedures indicated a reasonably good fit between the model and animal performance. Evaluating the testing data, the condition P(χ², ν) > 0.01 was satisfied for 92.5% of the estimated 5HB parameter sets, and for 98.4% in the WM. The mean χ² values for the testing data were ⟨χ²⟩ = 1.59 in the WM (P(χ², 1) = 0.21) and ⟨χ²⟩ = 5.27 in the 5HB (P(χ², 3) = 0.15). There was slight over-fitting only in the WM estimation. To evaluate the quality of the estimated optima and the sensitivities to different meta-parameters, we calculated the eigenvalues of the Hessian of 1/χ² around each of the estimated points. 98.4% of all eigenvalues were negative, and most of the corresponding eigenvectors were aligned with the directions of α, β, and γ, indicating that there were no significant correlations in parameter estimation.
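For concreteness, the χ² fit of Eq. (11) used throughout this section is just a weighted sum of squared deviations between experimental and model performance measures. A minimal sketch (the PM and σ values below are made up):

```python
import numpy as np

def chi2_fit(pm_exp, pm_mod, sigma_exp):
    """Eq. (11): sum over PMs of squared deviations, weighted by
    the experimental standard deviations."""
    return float(np.sum(((pm_exp - pm_mod) / sigma_exp) ** 2))

# Toy performance measures for one session (placeholder values)
pm_exp = np.array([10.0, 4.0, 2.5])
pm_mod = np.array([9.0, 5.0, 2.5])
sigma = np.array([1.0, 2.0, 0.5])

chi2 = chi2_fit(pm_exp, pm_mod, sigma)  # 1.0 + 0.25 + 0.0 = 1.25
```

In the paper this quantity is recomputed for each candidate meta-parameter set (α, β, γ, λ) by re-simulating a session and is then minimized by the gradient search described above.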
Furthermore, the absolute eigenvalues were highest in the directions of β and γ; thus the error surface is steep along these meta-parameters. To test the reliability of the estimated meta-parameters, the self-consistency check was performed using a number of random meta-parameter sets. The mean absolute errors (distances between true and estimated parameter values) were quite small for the exploitation factors (β), approximately 6% of the total range, but higher for the reward discount factors (γ) and the learning rates (α), 10-29% of the total range (Figure 3b). This indicates that estimated β values should be considered more reliable than those of α and γ.

6.1 Meta-parameter dynamics

During the course of learning, exploitation factors (β) (Figure 4a,b) showed a progressive increase (regression p << 0.001 for both the 5HB and the WM), reaching a peak at the end of each learning block. They were consistently higher for the C57 mice than for the DBA mice (2-way ANOVA with replications, p << 0.001 for both experiments), indicating that the DBA mice were exploring the environment more actively and/or were not able to focus their attention well on the specific task. Finally, C57 mouse groups exposed to motivational stress in the WM and to extrinsic stress in the 5HB had elevated exploitation factors (ANOVA p < 0.01 for both experiments); however, there was no such effect for the DBA mice. The estimated learning rates (α) did not show any obvious changes or trends with learning for either the 5HB or the WM.
There were no differences between the 2 genetic strains (nor among the stress conditions), with one exception: for the first several days of training, C57 learning rates were significantly higher (ANOVA p < 0.01 in both experiments), indicating that C57 mice could learn a novel task more quickly.

[Figure 4: a,b. Estimated exploitation factors β for the 5HB (a, break between days 8 & 9) and the WM (b, breaks between days 4 & 5 and between days 7 & 8). c,d. Estimated future reward discount factors for the variable-platform trials in the WM (c) and for the uncertainty trials in the 5HB (d). e. Estimated memory decay/interference factors for the first day after the break in the 5HB.]

Under uncertainty (in reward delivery for the 5HB, and in the target platform location for the WM) future reward discount factors (γ) were significantly elevated (ANOVA p < 0.02, Figure 4c,d). In the 5HB, memory decay factors (λ), estimated for the first day after the break, were significantly higher (p < 0.01, unpaired t-test) for animals previously exposed to uncertainty (Figure 4e). This suggests that uncertainty makes animals consider rewards further into the future, and that it seems to impair memory consolidation.

7 Discussion

In this paper we showed that various behavioral outcomes (caused by genetic traits and/or stress factors) could be predicted by our TDRL models for 2 different tasks.
This provides hypotheses concerning the neuromodulatory mechanisms, which we plan to test using pharmacological manipulations (typically, injections of agonists or antagonists of the relevant neurotransmitter systems). The results for the exploitation factors suggest that with learning (and decreasing reward prediction errors) the acquired knowledge is used more for choosing actions. This might also be related to decreased subjective stress and higher stressor controllability. The difference between the C57 and DBA strains shows two things. Firstly, the anxious DBA mice cannot exploit their knowledge as well as the C57 mice can. Secondly, in response to motivational or extrinsic stress, C57 mice are the only ones that increase their exploitation. This may be related to an inverse-U-shaped effect of noradrenergic influences on focused attention and performance accuracy [17]. Animals with low anxiety (C57) might be on the left side of the curve, and additional stress might lead them to optimal performance, while those with high anxiety might already be on the right side, leading to possibly impaired performance. Our results may also suggest that the widely proclaimed deficiency of DBA mice in spatial learning (as compared to C57) [4, 12] might be primarily due to differential attentional capabilities. The increased future reward discount factors under uncertainty indicate a reasonable adaptive response: animals should not concentrate their learning on immediate events when task-reward relations become ambiguous. Uncertainty in behaviorally relevant outcomes under stress causes a decrease in subjective stressor controllability, which is known to be related to elevated serotonin levels [18]. Higher memory decay/interference factors for the animals previously exposed to uncertainty could be due to partially impaired memory consolidation and/or to stronger competition between different strategies and perceptions of the uncertain task.
Although estimated meta-parameter values can easily be compared between certain experimental conditions, it is difficult to study in this way the interactions between different genetic and environmental factors, or to extrapolate beyond the limits of the available conditions. One could overcome this disadvantage by developing a black-box parameter model that would help us to evaluate in a flexible way the contributions of specific factors (motivation, uncertainty, genotype) to meta-parameter dynamics, as well as their relationship with the dynamics of TD errors (δ_t) during the process of learning.

Acknowledgments

This work was partially supported by a grant from the Swiss National Science Foundation to C.S. (3100A0-108102).

References

[1] J. J. Kim and D. M. Diamond. The stressed hippocampus, synaptic plasticity and lost memories. Nat Rev Neurosci., 3(6):453-62, Jun 2002.
[2] C. Sandi, M. Loscertales, and C. Guaza. Experience-dependent facilitating effect of corticosterone on spatial memory formation in the water maze. Eur J Neurosci., 9(4):637-42, Apr 1997.
[3] M. Joels, Z. Pu, O. Wiegert, M. S. Oitzl, and H. J. Krugers. Learning under stress: how does it work? Trends Cogn Sci., 10(4):152-8, Apr 2006.
[4] J. M. Wehner, R. A. Radcliffe, and B. J. Bowers. Quantitative genetics and mouse behavior. Annu Rev Neurosci., 24:845-67, 2001.
[5] A. Holmes, C. C. Wrenn, A. P. Harris, K. E. Thayer, and J. N. Crawley. Behavioral profiles of inbred strains on novel olfactory, spatial and emotional tests for reference memory in mice. Genes Brain Behav., 1(1):55-69, Jan 2002.
[6] J. L. McGaugh. The amygdala modulates the consolidation of memories of emotionally arousing experiences. Annu Rev Neurosci., 27:1-28, 2004.
[7] M. J. Kreek, D. A. Nielsen, E. R. Butelman, and K. S. LaForge. Genetic influences on impulsivity, risk taking, stress responsivity and vulnerability to drug abuse and addiction. Nat Neurosci., 8:1450-7, 2005.
[8] R. Sutton and A. G. Barto.
Reinforcement Learning: An Introduction. MIT Press, 1998.
[9] W. Schultz, P. Dayan, and P. R. Montague. A neural substrate of prediction and reward. Science, 275(5306):1593-9, Mar 1997.
[10] K. Doya. Metalearning and neuromodulation. Neural Netw, 15(4-6):495-506, Jun-Jul 2002.
[11] K. Samejima, K. Doya, Y. Ueda, and M. Kimura. Estimating internal variables and parameters of a learning agent by a particle filter. In Advances in Neural Information Processing Systems 16, 2004.
[12] C. Rossi-Arnaud and M. Ammassari-Teule. What do comparative studies of inbred mice add to current investigations on the neural basis of spatial behaviors? Exp Brain Res., 123(1-2):36-44, Nov 1998.
[13] R. G. M. Morris. Spatial localization does not require the presence of local cues. Learning and Motivation, 12:239-260, 1981.
[14] D. J. Foster, R. G. M. Morris, and P. Dayan. A model of hippocampally dependent navigation, using the temporal difference learning rule. Hippocampus, 10(1):1-16, 2000.
[15] T. Strösslin, D. Sheynikhovich, R. Chavarriaga, and W. Gerstner. Modelling robust self-localisation and navigation using hippocampal place cells. Neural Networks, 18(9):1125-1140, 2005.
[16] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, 1992.
[17] G. Aston-Jones, J. Rajkowski, and J. Cohen. Locus coeruleus and regulation of behavioral flexibility and attention. Prog Brain Res., 126:165-82, 2000.
[18] J. Amat, M. V. Baratta, E. Paul, S. T. Bland, L. R. Watkins, and S. F. Maier. Medial prefrontal cortex determines how stressor controllability affects behavior and dorsal raphe nucleus. Nat Neurosci., 8(3):365-71, Mar 2005.
2006
A Novel Gaussian Sum Smoother for Approximate Inference in Switching Linear Dynamical Systems
David Barber and Bertrand Mesot, IDIAP Research Institute, Martigny 1920, Switzerland. david.barber/bertrand.mesot@idiap.ch

Abstract
We introduce a method for approximate smoothed inference in a class of switching linear dynamical systems, based on a novel form of Gaussian Sum smoother. This class includes the switching Kalman Filter and the more general case of switch transitions dependent on the continuous latent state. The method improves on the standard Kim smoothing approach by dispensing with one of the key approximations, thus making fuller use of the available future information. Whilst the only central assumption required is projection to a mixture of Gaussians, we show that an additional conditional independence assumption results in a simpler but stable and accurate alternative. Unlike the alternative unstable Expectation Propagation procedure, our method consists only of a single forward and backward pass and is reminiscent of the standard smoothing ‘correction’ recursions in the simpler linear dynamical system. The algorithm performs well on both toy experiments and in a large scale application to noise robust speech recognition.

1 Switching Linear Dynamical System
The Linear Dynamical System (LDS) [1] is a key temporal model in which a latent linear process generates the observed series. For complex time-series which are not well described globally by a single LDS, we may break the time-series into segments, each modeled by a potentially different LDS. This is the basis for the Switching LDS (SLDS) [2, 3, 4, 5] where, for each time t, a switch variable st ∈ {1, . . . , S} describes which of the LDSs is to be used. The observation (or ‘visible’) vt ∈ R^V is linearly related to the hidden state ht ∈ R^H with additive noise ηv by

vt = B(st)ht + ηv(st) ≡ p(vt|ht, st) = N(B(st)ht, Σv(st)) (1)

where N(µ, Σ) denotes a Gaussian distribution with mean µ and covariance Σ.
The transition dynamics of the continuous hidden state ht is linear,

ht = A(st)ht−1 + ηh(st) ≡ p(ht|ht−1, st) = N(A(st)ht−1, Σh(st)) (2)

The switch st may depend on both the previous st−1 and ht−1. This is an augmented SLDS (aSLDS), and defines the model

p(v1:T, h1:T, s1:T) = ∏_{t=1}^{T} p(vt|ht, st) p(ht|ht−1, st) p(st|ht−1, st−1)

The standard SLDS [4] considers only switch transitions p(st|st−1). At time t = 1, p(s1|h0, s0) simply denotes the prior p(s1), and p(h1|h0, s1) denotes p(h1|s1). The aim of this article is to address how to perform inference in the aSLDS. In particular we desire the filtered estimate p(ht, st|v1:t) and the smoothed estimate p(ht, st|v1:T), for any 1 ≤ t ≤ T. Both filtered and smoothed inference in the SLDS is intractable, scaling exponentially with time [4].

Figure 1: The independence structure of the aSLDS (switch variables s1:4, continuous hidden states h1:4, observations v1:4). Square nodes denote discrete variables, round nodes continuous variables. In the SLDS, links from h to s are not normally considered.

2 Expectation Correction
Our approach to approximate p(ht, st|v1:T) mirrors the Rauch-Tung-Striebel ‘correction’ smoother for the simpler LDS [1]. The method consists of a single forward pass to recursively find the filtered posterior p(ht, st|v1:t), followed by a single backward pass to correct this into a smoothed posterior p(ht, st|v1:T). The forward pass we use is equivalent to standard Assumed Density Filtering (ADF) [6]. The main contribution of this paper is a novel form of backward pass, based only on collapsing the smoothed posterior to a mixture of Gaussians. Together with the ADF forward pass, we call the method Expectation Correction, since it corrects the moments found from the forward pass. A more detailed description of the method, including pseudocode, is given in [7].

2.1 Forward Pass (Filtering)
Readers familiar with ADF may wish to continue directly to Section 2.2.
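To make the generative model of Eqs. (1)-(2) concrete, the following is a minimal NumPy sketch that samples a trajectory from a standard SLDS (Markov switch transitions p(st|st−1) only). The function name, the uniform prior on s1, and the use of Σh(s1) as the initial state covariance are our assumptions for illustration, not specifications from the paper.

```python
import numpy as np

def sample_slds(A, B, Sigma_h, Sigma_v, P, T, rng=None):
    """Draw one trajectory (s, h, v) from a standard SLDS.
    A[s]: HxH transition, B[s]: VxH emission, Sigma_h[s]/Sigma_v[s]:
    noise covariances, P[s_next, s_prev]: switch transition matrix
    with columns summing to one."""
    rng = np.random.default_rng(rng)
    S, H, V = len(A), A[0].shape[0], B[0].shape[0]
    s = np.zeros(T, dtype=int)
    h = np.zeros((T, H))
    v = np.zeros((T, V))
    s[0] = rng.integers(S)  # uniform prior p(s1) (our assumption)
    h[0] = rng.multivariate_normal(np.zeros(H), Sigma_h[s[0]])
    v[0] = rng.multivariate_normal(B[s[0]] @ h[0], Sigma_v[s[0]])
    for t in range(1, T):
        s[t] = rng.choice(S, p=P[:, s[t - 1]])  # p(s_t | s_{t-1})
        h[t] = rng.multivariate_normal(A[s[t]] @ h[t - 1], Sigma_h[s[t]])
        v[t] = rng.multivariate_normal(B[s[t]] @ h[t], Sigma_v[s[t]])
    return s, h, v
```

An aSLDS sampler would differ only in drawing s[t] from a user-supplied p(st|ht−1, st−1) instead of the fixed matrix P.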
Our aim is to form a recursion for p(st, ht|v1:t), based on a Gaussian mixture approximation of p(ht|st, v1:t). Without loss of generality, we may decompose the filtered posterior as

p(ht, st|v1:t) = p(ht|st, v1:t) p(st|v1:t) (3)

The exact representation of p(ht|st, v1:t) is a mixture with O(S^t) components. We therefore approximate this with a smaller I-component mixture

p(ht|st, v1:t) ≈ ∑_{it=1}^{I} p(ht|it, st, v1:t) p(it|st, v1:t)

where p(ht|it, st, v1:t) is a Gaussian parameterized with mean f(it, st) and covariance F(it, st). To find a recursion for these parameters, consider

p(ht+1|st+1, v1:t+1) = ∑_{st,it} p(ht+1|st, it, st+1, v1:t+1) p(st, it|st+1, v1:t+1) (4)

Evaluating p(ht+1|st, it, st+1, v1:t+1): We find p(ht+1|st, it, st+1, v1:t+1) by first computing the joint distribution p(ht+1, vt+1|st, it, st+1, v1:t), which is a Gaussian with covariance and mean elements

Σhh = A(st+1)F(it, st)A^T(st+1) + Σh(st+1), Σvv = B(st+1)ΣhhB^T(st+1) + Σv(st+1), Σvh = B(st+1)Σhh, µv = B(st+1)A(st+1)f(it, st), µh = A(st+1)f(it, st) (5)

and then conditioning on vt+1 (see Footnote 1). For the case S = 1, this forms the usual Kalman Filter recursions [1].

Evaluating p(st, it|st+1, v1:t+1): The mixture weight in (4) can be found from the decomposition

p(st, it|st+1, v1:t+1) ∝ p(vt+1|it, st, st+1, v1:t) p(st+1|it, st, v1:t) p(it|st, v1:t) p(st|v1:t) (6)

Footnote 1: p(x|y) is a Gaussian with mean µx + Σxy Σyy^{-1} (y − µy) and covariance Σxx − Σxy Σyy^{-1} Σyx.

The first factor in (6), p(vt+1|it, st, st+1, v1:t), is a Gaussian with mean µv and covariance Σvv, as given in (5). The last two factors, p(it|st, v1:t) and p(st|v1:t), are given from the previous iteration. Finally, p(st+1|it, st, v1:t) is found from

p(st+1|it, st, v1:t) = ⟨p(st+1|ht, st)⟩_{p(ht|it,st,v1:t)} (7)

where ⟨·⟩p denotes expectation with respect to p. In the SLDS, (7) is replaced by the Markov transition p(st+1|st). In the aSLDS, however, (7) will generally need to be computed numerically.

Closing the recursion: We are now in a position to calculate (4).
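The prediction-and-conditioning step above (the joint moments of Eq. (5) followed by conditioning on vt+1 via the formulas of Footnote 1) can be sketched for a single mixture component as follows. The function name and the returned log-likelihood bookkeeping are our additions; the paper itself only specifies the moments.

```python
import numpy as np

def forward_component(f, F, A, B, Sigma_h, Sigma_v, v):
    """One prediction/update for a single mixture component (mean f,
    covariance F): form the joint Gaussian over (h_{t+1}, v_{t+1})
    as in Eq. (5), then condition on the observed v_{t+1}. Also
    returns the log-likelihood of v under the predictive Gaussian,
    which supplies the first factor of Eq. (6)."""
    mu_h = A @ f
    S_hh = A @ F @ A.T + Sigma_h
    mu_v = B @ mu_h
    S_vv = B @ S_hh @ B.T + Sigma_v
    S_vh = B @ S_hh
    # Gaussian conditioning (Footnote 1): gain K = S_hv S_vv^{-1}.
    K = np.linalg.solve(S_vv, S_vh).T
    f_new = mu_h + K @ (v - mu_v)
    F_new = S_hh - K @ S_vh
    resid = v - mu_v
    _, logdet = np.linalg.slogdet(2 * np.pi * S_vv)
    loglik = -0.5 * (logdet + resid @ np.linalg.solve(S_vv, resid))
    return f_new, F_new, loglik
```

With S = 1 and I = 1 this is exactly one step of the standard Kalman filter, as the text notes.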
For each setting of the variable st+1, we have a mixture of I × S Gaussians which we numerically collapse back to I Gaussians to form

p(ht+1|st+1, v1:t+1) ≈ ∑_{it+1=1}^{I} p(ht+1|it+1, st+1, v1:t+1) p(it+1|st+1, v1:t+1)

Any method of choice may be supplied to collapse a mixture to a smaller mixture; our code simply repeatedly merges low-weight components. In this way the new mixture coefficients p(it+1|st+1, v1:t+1), it+1 ∈ {1, . . . , I}, are defined, completing the description of how to form a recursion for p(ht+1|st+1, v1:t+1) in (3). A recursion for the switch variable is given by

p(st+1|v1:t+1) ∝ ∑_{st,it} p(vt+1|st+1, it, st, v1:t) p(st+1|it, st, v1:t) p(it|st, v1:t) p(st|v1:t)

where all terms have been computed during the recursion for p(ht+1|st+1, v1:t+1). The likelihood p(v1:T) may be found by recursing p(v1:t+1) = p(vt+1|v1:t) p(v1:t), where

p(vt+1|v1:t) = ∑_{it,st,st+1} p(vt+1|it, st, st+1, v1:t) p(st+1|it, st, v1:t) p(it|st, v1:t) p(st|v1:t)

2.2 Backward Pass (Smoothing)
The main contribution of this paper is to find a suitable way to ‘correct’ the filtered posterior p(st, ht|v1:t) obtained from the forward pass into a smoothed posterior p(st, ht|v1:T). We derive this for the case of a single Gaussian representation. The extension to the mixture case is straightforward and presented in [7]. We approximate the smoothed posterior p(ht|st, v1:T) by a Gaussian with mean g(st) and covariance G(st), and our aim is to find a recursion for these parameters.
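The collapse operation used to close the recursion can, in its simplest form, be implemented by moment matching a mixture down to a single Gaussian. The paper's own code repeatedly merges low-weight components instead; the single-Gaussian variant below (our illustrative choice, sufficient for the I = 1 case) keeps the mixture's first two moments exact.

```python
import numpy as np

def collapse_to_gaussian(weights, means, covs):
    """Moment-match a Gaussian mixture to a single Gaussian: the
    collapsed mean is the weighted mean of the component means, and
    the collapsed covariance is the weighted component covariance
    plus the spread of the component means around the collapsed mean."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    means = np.asarray(means, dtype=float)
    mu = w @ means
    d = means - mu
    cov = sum(wi * (Ci + np.outer(di, di))
              for wi, Ci, di in zip(w, covs, d))
    return mu, cov
```

Collapsing to I > 1 components would first cluster or merge components and then apply the same moment matching within each group.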
A useful starting point for a recursion is:

p(ht, st|v1:T) = ∑_{st+1} p(st+1|v1:T) p(ht|st, st+1, v1:T) p(st|st+1, v1:T)

The term p(ht|st, st+1, v1:T) may be computed as

p(ht|st, st+1, v1:T) = ∫ dht+1 p(ht|ht+1, st, st+1, v1:t) p(ht+1|st, st+1, v1:T) (8)

The recursion therefore requires p(ht+1|st, st+1, v1:T), which we can write as

p(ht+1|st, st+1, v1:T) ∝ p(ht+1|st+1, v1:T) p(st|st+1, ht+1, v1:t) (9)

The difficulty here is that the functional form of p(st|st+1, ht+1, v1:t) is not squared exponential in ht+1, so that p(ht+1|st, st+1, v1:T) will not be Gaussian (see Footnote 2). One possibility would be to approximate the non-Gaussian p(ht+1|st, st+1, v1:T) by a Gaussian (or mixture thereof) by minimizing the Kullback-Leibler divergence between the two, or performing moment matching in the case of a single Gaussian. A simpler alternative (which forms ‘standard’ EC) is to make the assumption p(ht+1|st, st+1, v1:T) ≈ p(ht+1|st+1, v1:T), where p(ht+1|st+1, v1:T) is already known from the previous backward recursion. Under this assumption, the recursion becomes

p(ht, st|v1:T) ≈ ∑_{st+1} p(st+1|v1:T) p(st|st+1, v1:T) ⟨p(ht|ht+1, st, st+1, v1:t)⟩_{p(ht+1|st+1,v1:T)} (10)

Footnote 2: In the exact calculation, p(ht+1|st, st+1, v1:T) is a mixture of Gaussians, see [7]. However, since in (9) the two terms p(ht+1|st+1, v1:T) will only be approximately computed during the recursion, our approximation to p(ht+1|st, st+1, v1:T) will not be a mixture of Gaussians.

Evaluating ⟨p(ht|ht+1, st, st+1, v1:t)⟩_{p(ht+1|st+1,v1:T)}: This average is a Gaussian in ht, whose statistics we will now compute. First we find p(ht|ht+1, st, st+1, v1:t), which may be obtained from the joint distribution

p(ht, ht+1|st, st+1, v1:t) = p(ht+1|ht, st+1) p(ht|st, v1:t) (11)

which itself can be found from a forward dynamics from the filtered estimate p(ht|st, v1:t).
The statistics for the marginal p(ht|st, st+1, v1:t) are simply those of p(ht|st, v1:t), since st+1 carries no extra information about ht. The remaining statistics are the mean of ht+1, the covariance of ht+1 and the cross-variance between ht and ht+1, which are given by

⟨ht+1⟩ = A(st+1)ft(st), Σt+1,t+1 = A(st+1)Ft(st)A^T(st+1) + Σh(st+1), Σt+1,t = A(st+1)Ft(st)

Given the statistics of (11), we may now condition on ht+1 to find p(ht|ht+1, st, st+1, v1:t). Doing so effectively constitutes a reversal of the dynamics,

ht = ←A(st, st+1) ht+1 + ←η(st, st+1)

where ←A(st, st+1) and ←η(st, st+1) ∼ N(←m(st, st+1), ←Σ(st, st+1)) are easily found using conditioning. Averaging the above reversed dynamics over p(ht+1|st+1, v1:T), we find that ⟨p(ht|ht+1, st, st+1, v1:t)⟩_{p(ht+1|st+1,v1:T)} is a Gaussian with statistics

µt = ←A(st, st+1) g(st+1) + ←m(st, st+1), Σt,t = ←A(st, st+1) G(st+1) ←A^T(st, st+1) + ←Σ(st, st+1)

These equations directly mirror the standard RTS backward pass [1].

Evaluating p(st|st+1, v1:T): The main departure of EC from previous methods is in treating the term

p(st|st+1, v1:T) = ⟨p(st|ht+1, st+1, v1:t)⟩_{p(ht+1|st+1,v1:T)} (12)

The term p(st|ht+1, st+1, v1:t) is given by

p(st|ht+1, st+1, v1:t) = p(ht+1|st+1, st, v1:t) p(st, st+1|v1:t) / ∑_{s′t} p(ht+1|st+1, s′t, v1:t) p(s′t, st+1|v1:t) (13)

Here p(st, st+1|v1:t) = p(st+1|st, v1:t) p(st|v1:t), where p(st+1|st, v1:t) occurs in the forward pass, (7). In (13), p(ht+1|st+1, st, v1:t) is found by marginalizing (11). Computing the average of (13) with respect to p(ht+1|st+1, v1:T) may be achieved by any numerical integration method desired. A simple approximation is to evaluate the integrand at the mean value of the averaging distribution p(ht+1|st+1, v1:T). More sophisticated methods (see [7]) such as sampling from the Gaussian p(ht+1|st+1, v1:T) have the advantage that covariance information is used (see Footnote 3).
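The reversed-dynamics correction above can be sketched, for a fixed switch setting, as one standard RTS-style backward step. Function and variable names below are ours; the reversal quantities ←A, ←m, ←Σ follow directly from conditioning the joint of Eq. (11) on ht+1.

```python
import numpy as np

def rts_backward_step(f, F, g_next, G_next, A, Sigma_h):
    """One 'reversed dynamics' correction step for a fixed switch
    setting: build h_t = <-A h_{t+1} + <-eta from the joint of
    (h_t, h_{t+1}) given v_{1:t}, then average over the smoothed
    p(h_{t+1}|v_{1:T}) with mean g_next and covariance G_next."""
    P = A @ F @ A.T + Sigma_h             # cov of h_{t+1} given v_{1:t}
    A_rev = np.linalg.solve(P, A @ F).T   # <-A = F A^T P^{-1}
    m_rev = f - A_rev @ (A @ f)           # <-m
    S_rev = F - A_rev @ P @ A_rev.T       # <-Sigma
    g = A_rev @ g_next + m_rev            # smoothed mean mu_t
    G = A_rev @ G_next @ A_rev.T + S_rev  # smoothed covariance Sigma_{t,t}
    return g, G
```

A sanity check: if the smoothed statistics for t+1 equal the one-step prediction from the filtered estimate (i.e. the future adds no information), the step returns the filtered (f, F) unchanged.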
Closing the Recursion: We have now computed both the continuous and discrete factors in (8), which we wish to use to write the smoothed estimate in the form p(ht, st|v1:T) = p(st|v1:T) p(ht|st, v1:T). The distribution p(ht|st, v1:T) is readily obtained from the joint (8) by conditioning on st to form the mixture

p(ht|st, v1:T) = ∑_{st+1} p(st+1|st, v1:T) p(ht|st, st+1, v1:T)

which may then be collapsed to a single Gaussian (the mixture case is discussed in [7]). The smoothed posterior p(st|v1:T) is given by

p(st|v1:T) = ∑_{st+1} p(st+1|v1:T) ⟨p(st|ht+1, st+1, v1:t)⟩_{p(ht+1|st+1,v1:T)} (14)

Footnote 3: This is a form of exact sampling since drawing samples from a Gaussian is easy. This should not be confused with meaning that this use of sampling renders EC a sequential Monte-Carlo scheme.

2.3 Relation to other methods
The EC Backward pass is closely related to Kim’s method [8]. In both EC and Kim’s method, the approximation p(ht+1|st, st+1, v1:T) ≈ p(ht+1|st+1, v1:T) is used to form a numerically simple backward pass. The other ‘approximation’ in EC is to numerically compute the average in (14). In Kim’s method, however, an update for the discrete variables is formed by replacing the required term in (14) by

⟨p(st|ht+1, st+1, v1:t)⟩_{p(ht+1|st+1,v1:T)} ≈ p(st|st+1, v1:t) (15)

Since p(st|st+1, v1:t) ∝ p(st+1|st) p(st|v1:t) / p(st+1|v1:t), this can be computed simply from the filtered results alone. The fundamental difference therefore between EC and Kim’s method is that the approximation (15) is not required by EC. The EC backward pass therefore makes fuller use of the future information, resulting in a recursion which intimately couples the continuous and discrete variables. The resulting effect on the quality of the approximation can be profound, as we will see in the experiments.
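The average in Eq. (12) can be approximated either at the smoothed mean or by exact sampling from the smoothed Gaussian, as the text describes. Below is a sketch of both options; the function names and the calling convention (per-switch means, covariances and joint priors passed as lists) are our assumptions.

```python
import numpy as np

def gauss_pdf(h, m, C):
    """Multivariate Gaussian density, used to evaluate Eq. (13)."""
    d = h - m
    _, logdet = np.linalg.slogdet(2 * np.pi * C)
    return np.exp(-0.5 * (logdet + d @ np.linalg.solve(C, d)))

def switch_correction(mu, P, prior, g_next, G_next, n_samples=0, rng=None):
    """Approximate <p(s_t|h_{t+1}, s_{t+1}, v_{1:t})> of Eq. (12):
    evaluate the normalized weights of Eq. (13) either at the
    smoothed mean g_next (the 'mean approximation', n_samples == 0)
    or averaged over samples from N(g_next, G_next).
    mu[s], P[s]: mean/cov of p(h_{t+1}|s_{t+1}, s_t = s, v_{1:t});
    prior[s]: p(s_t = s, s_{t+1}|v_{1:t})."""
    rng = np.random.default_rng(rng)
    if n_samples == 0:
        hs = [np.asarray(g_next)]
    else:
        hs = rng.multivariate_normal(g_next, G_next, size=n_samples)
    out = np.zeros(len(prior))
    for h in hs:
        lik = np.array([gauss_pdf(h, m, C) * p
                        for m, C, p in zip(mu, P, prior)])
        out += lik / lik.sum()  # Eq. (13) evaluated at this h
    return out / len(hs)
```

Kim's method would bypass this computation entirely and use Eq. (15) instead.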
The Expectation Propagation (EP) algorithm makes the central assumption of collapsing the posteriors to a Gaussian family [5]; the collapse is defined by a consistency criterion on overlapping marginals. In our experiments, we take the approach in [9] of collapsing to a single Gaussian. Ensuring consistency requires frequent translations between moment and canonical parameterizations, which is the origin of potentially severe numerical instability [10]. In contrast, EC works largely with moment parameterizations of Gaussians, for which relatively few numerical difficulties arise. Unlike EP, EC is not based on a consistency criterion, and a subtle issue arises about possible inconsistencies in the Forward and Backward approximations for EC. For example, under the conditional independence assumption in the Backward Pass, p(hT|sT−1, sT, v1:T) ≈ p(hT|sT, v1:T), which is in contradiction to (5), which states that the approximation to p(hT|sT−1, sT, v1:T) will depend on sT−1. Such potential inconsistencies arise because of the approximations made, and should not be considered as separate approximations in themselves. Rather than using a global (consistency) objective, EC attempts to faithfully approximate the exact Forward and Backward propagation routines. For this reason, as in the exact computation, only a single Forward and Backward pass are required in EC. In [11] a related dynamics reversal is proposed. However, the singularities resulting from incorrectly treating p(vt+1:T|ht, st) as a density are heuristically finessed. In [12] a variational method approximates the joint distribution p(h1:T, s1:T|v1:T) rather than the marginal inference p(ht, st|v1:T). This is a disadvantage when compared to other methods that directly approximate the marginal. Sequential Monte Carlo methods (Particle Filters) [13] are essentially mixtures of delta-function approximations.
Whilst potentially powerful, these typically suffer in high-dimensional hidden spaces, unless techniques such as Rao-Blackwellization are performed. ADF is generally preferential to Particle Filtering since in ADF the approximation is a mixture of non-trivial distributions, and is therefore more able to represent the posterior.

3 Demonstration
Testing EC in a problem with a reasonably long temporal sequence, T, is important since numerical instabilities may not be apparent in time-series of just a few points. To do this, we sequentially generate hidden and visible states from a given model, here with H = 3, S = 2, V = 1 – see Figure 2 for full details of the experimental setup. Then, given only the parameters of the model and the visible observations (but not any of the hidden states h1:T, s1:T), the task is to infer p(ht|st, v1:T) and p(st|v1:T). Since the exact computation is exponential in T, a simple alternative is to assume that the original sample states s1:T are the ‘correct’ inferences, and compare how our most probable posterior smoothed estimates arg max_{st} p(st|v1:T) compare with the assumed correct sample st. We chose conditions that, from the viewpoint of classical signal processing, are difficult, with changes in the switches occurring at a much higher rate than the typical frequencies in the signal vt. For EC we use the mean approximation for the numerical integration of (12). We included the Particle Filter merely for a point of comparison with ADF, since they are not designed to approximate

Figure 2: The number of errors in estimating p(st|v1:T) for a binary switch (S = 2) over a time series of length T = 100, for the methods PF, RBPF, EP, ADFS, KimS, ECS, ADFM, KimM and ECM. Hence 50 errors correspond to random guessing. Plotted are histograms of the errors over 1000 experiments. The x-axes are cut off at 20 errors to improve visualization of the results.
(PF) Particle Filter. (RBPF) Rao-Blackwellized PF. (EP) Expectation Propagation. (ADFS) Assumed Density Filtering using a Single Gaussian. (KimS) Kim’s smoother using the results from ADFS. (ECS) Expectation Correction using a Single Gaussian (I = J = 1). (ADFM) ADF using a mixture of I = 4 Gaussians. (KimM) Kim’s smoother using the results from ADFM. (ECM) Expectation Correction using a mixture with I = J = 4 components. Model parameters: S = 2, V = 1 (scalar observations), T = 100, with zero output bias; A(s) = 0.9999 · orth(randn(H, H)), B(s) = randn(V, H), H = 3, Σh(s) = I_H, Σv(s) = 0.1 I_V, p(st+1|st) ∝ 1_{S×S} + I_S. At time t = 1, the priors are p1 = uniform, with h1 drawn from N(10 · randn(H, 1), I_H).

the smoothed estimate, for which 1000 particles were used, with Kitagawa resampling. For the Rao-Blackwellized Particle Filter [13], 500 particles were used, with Kitagawa resampling. We found that EP (see Footnote 4) was numerically unstable and often struggled to converge. To encourage convergence, we used the damping method in [9], performing 20 iterations with a damping factor of 0.5. Nevertheless, the disappointing performance of EP is most likely due to conflicts resulting from numerical instabilities introduced by the frequent conversions between moment and canonical representations. The best filtered results are given using ADF, since this is better able to represent the variance in the filtered posterior than the sampling methods. Unlike Kim’s method, EC makes good use of the future information to clean up the filtered results considerably. One should bear in mind that both EC and Kim’s method use the same ADF filtered results. This demonstrates that EC may dramatically improve on Kim’s method, so that the small amount of extra work in making a numerical approximation of p(st|st+1, v1:T), (12), may bring significant benefits. We found similar conclusions for experiments with an aSLDS [7].
4 Application to Noise Robust ASR
Here we briefly present an application of the SLDS to robust Automatic Speech Recognition (ASR), for which the intractable inference is performed by EC, and which serves to demonstrate how EC scales well to a large-scale application. Fuller details are given in [14]. The standard approach to noise robust ASR is to provide a set of noise-robust features to a standard Hidden Markov Model (HMM) classifier, which is based on modeling the acoustic feature vector. For example, the method of Unsupervised Spectral Subtraction (USS) [15] provides state-of-the-art performance in this respect. Incorporating noise models directly into such feature-based HMM systems is difficult, mainly because the explicit influence of the noise on the features is poorly understood. An alternative is to model the raw speech signal directly, such as the SAR-HMM model [16] for which, under clean conditions, isolated spoken digit recognition performs well. However, the SAR-HMM performs poorly under noisy conditions, since no explicit noise processes are taken into account by the model. The approach we take here is to extend the SAR-HMM to include an explicit noise process, so that the observed signal vt is modeled as a noise corrupted version of a clean hidden signal vh_t:

vt = vh_t + η̃t with η̃t ∼ N(0, σ̃²)

Footnote 4: Generalized EP [5], which groups variables together, improves on the results, but is still far inferior to the EC results presented here – Onno Zoeter, personal communication.

Table 1: Comparison of the recognition accuracy of three models when the test utterances are corrupted by various levels of Gaussian noise.

Noise Variance | SNR (dB) | HMM | SAR-HMM | AR-SLDS
0 | 26.5 | 100.0% | 97.0% | 96.8%
10^−7 | 26.3 | 100.0% | 79.8% | 96.8%
10^−6 | 25.1 | 90.9% | 56.7% | 96.4%
10^−5 | 19.7 | 86.4% | 22.2% | 94.8%
10^−4 | 10.6 | 59.1% | 9.7% | 84.0%
10^−3 | 0.7 | 9.1% | 9.1% | 61.2%

The dynamics of the clean signal is modeled by a switching AR process

vh_t = ∑_{r=1}^{R} cr(st) vh_{t−r} + ηh_t(st), ηh_t(st) ∼ N(0, σ²(st))

where st ∈ {1, .
. . . , S} denotes which of a set of AR coefficients cr(st) are to be used at time t, and ηh_t(st) is the so-called innovation noise. When σ²(st) ≡ 0, this model reproduces the SAR-HMM of [16], a specially constrained HMM. Hence inference and learning for the SAR-HMM are tractable and straightforward. For the case σ²(st) > 0 the model can be recast as an SLDS. To do this we define ht as a vector which contains the R most recent clean hidden samples,

ht = [vh_t . . . vh_{t−R+1}]^T (16)

and we set A(st) to be an R × R matrix where the first row contains the AR coefficients −cr(st) and the rest is a shifted-down identity matrix. For example, for a third order (R = 3) AR process,

A(st) = [ −c1(st) −c2(st) −c3(st) ; 1 0 0 ; 0 1 0 ] (17)

The hidden covariance matrix Σh(s) has all elements zero, except the top-left most, which is set to the innovation variance. To extract the first component of ht we use the (switch independent) 1 × R projection matrix B = [1 0 . . . 0]. The (switch independent) visible scalar noise variance is given by Σv ≡ σ²_v. A well-known issue with raw speech signal models is that the energy of a signal may vary from one speaker to another or because of a change in recording conditions. For this reason the innovation Σh is adjusted by maximizing the likelihood of an observed sequence with respect to the innovation covariance, a process called Gain Adaptation [16].

4.1 Training & Evaluation
Following [16], we trained a separate SAR-HMM for each of the eleven digits (0–9 and ‘oh’) from the TI-DIGITS database [17]. The training set for each digit was composed of 110 single digit utterances down-sampled to 8 kHz, each one pronounced by a male speaker. Each SAR-HMM was composed of ten states with a left-right transition matrix. Each state was associated with a 10th-order AR process and the model was constrained to stay an integer multiple of K = 140 time steps (0.0175 seconds) in the same state.
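The AR-to-SLDS embedding of Eqs. (16)-(17) amounts to building a companion-form transition matrix per switch state. A sketch (the helper name is ours; the negated first row follows the paper's stated convention):

```python
import numpy as np

def ar_to_slds_matrices(coeffs, innovation_var):
    """Build the SLDS matrices for one switch state of the AR-SLDS:
    A has the (negated) AR coefficients in its first row and a
    shifted-down identity below (Eq. (17)); Sigma_h is zero except
    for the innovation variance in its top-left entry; B projects
    out the newest sample of the stacked state of Eq. (16)."""
    R = len(coeffs)
    A = np.zeros((R, R))
    A[0, :] = -np.asarray(coeffs)
    A[1:, :-1] = np.eye(R - 1)   # shift previous samples down
    Sigma_h = np.zeros((R, R))
    Sigma_h[0, 0] = innovation_var
    B = np.zeros((1, R))
    B[0, 0] = 1.0
    return A, Sigma_h, B
```

With these matrices, the generic EC smoother of Section 2 applies to the AR-SLDS without modification.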
We refer the reader to [16] for a detailed explanation of the training procedure used with the SAR-HMM. An AR-SLDS was built for each of the eleven digits by copying the parameters of the corresponding trained SAR-HMM, i.e., the AR coefficients cr(s) are copied into the first row of the hidden transition matrix A(s) and the same discrete transition distribution p(st | st−1) is used. The models were then evaluated on a test set composed of 112 corrupted utterances of each of the eleven digits, each pronounced by different male speakers than those used in the training set. The recognition accuracy obtained by the models on the corrupted test sets is presented in Table 1. As expected, the performance of the SAR-HMM rapidly decreases with noise. The feature-based HMM with USS has high accuracy only for high SNR levels. In contrast, the AR-SLDS achieves a recognition accuracy of 61.2% at a SNR close to 0 dB, while the performance of the two other methods is equivalent to random guessing (9.1%). Whilst other inference methods may also perform well in this case, we found that EC performs admirably, without numerical instabilities, even for time-series with several thousand time-steps. 5 Discussion We presented a method for approximate smoothed inference in an augmented class of switching linear dynamical systems. Our approximation is based on the idea that due to the forgetting which commonly occurs in Markovian models, a finite number of mixture components may provide a reasonable approximation. Clearly, in systems with very long correlation times our method may require too many mixture components to produce a satisfactory result, although we are unaware of other techniques that would be able to cope well in that case. The main benefit of EC over Kim smoothing is that future information is more accurately dealt with. 
Whilst EC is not as general as EP, EC carefully exploits the properties of singly-connected distributions, such as the aSLDS, to provide a numerically stable procedure. We hope that the ideas presented here may therefore help facilitate the practical application of dynamic hybrid networks. Acknowledgements This work is supported by the EU Project FP6-0027787. This paper only reflects the authors’ views and funding agencies are not liable for any use that may be made of the information contained herein. References [1] Y. Bar-Shalom and Xiao-Rong Li. Estimation and Tracking : Principles, Techniques and Software. Artech House, Norwood, MA, 1998. [2] V. Pavlovic, J. M. Rehg, and J. MacCormick. Learning switching linear models of human motion. In Advances in Neural Information Processing systems (NIPS 13), pages 981–987, 2001. [3] A. T. Cemgil, B. Kappen, and D. Barber. A Generative Model for Music Transcription. IEEE Transactions on Audio, Speech and Language Processing, 14(2):679 – 694, 2006. [4] U. N. Lerner. Hybrid Bayesian Networks for Reasoning about Complex Systems. PhD thesis, Stanford University, 2002. [5] O. Zoeter. Monitoring non-linear and switching dynamical systems. PhD thesis, Radboud University Nijmegen, 2005. [6] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT Media Lab, 2001. [7] D. Barber. Expectation Correction for Smoothed Inference in Switching Linear Dynamical Systems. Journal of Machine Learning Research, 7:2515–2540, 2006. [8] C-J. Kim. Dynamic linear models with Markov-switching. Journal of Econometrics, 60:1–22, 1994. [9] T. Heskes and O. Zoeter. Expectation Propagation for approximate inference in dynamic Bayesian networks. In A. Darwiche and N. Friedman, editors, Uncertainty in Art. Intelligence, pages 216–223, 2002. [10] S. Lauritzen and F. Jensen. Stable local computation with conditional Gaussian distributions. Statistics and Computing, 11:191–203, 2001. [11] G. Kitagawa. 
The Two-Filter Formula for Smoothing and an implementation of the Gaussian-sum smoother. Annals of the Institute of Statistical Mathematics, 46(4):605–623, 1994. [12] Z. Ghahramani and G. E. Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):963–996, 1998. [13] A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001. [14] B. Mesot and D. Barber. Switching Linear Dynamical Systems for Noise Robust Speech Recognition. IDIAP-RR 08, 2006. [15] G. Lathoud, M. Magimai-Doss, B. Mesot, and H. Bourlard. Unsupervised spectral subtraction for noiserobust ASR. In Proceedings of ASRU 2005, pages 189–194, November 2005. [16] Y. Ephraim and W. J. J. Roberts. Revisiting autoregressive hidden Markov modeling of speech signals. IEEE Signal Processing Letters, 12(2):166–169, February 2005. [17] R.G. Leonard. A database for speaker independent digit recognition. In Proceedings of ICASSP84, volume 3, 1984.
2006
Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons Stefan Klampfl, Robert Legenstein, Wolfgang Maass Institute for Theoretical Computer Science Graz University of Technology A-8010 Graz, Austria {klampfl,legi,maass}@igi.tugraz.at Abstract The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles. 1 Introduction The Information Bottleneck (IB) approach and independent component analysis (ICA) have both attracted substantial interest as general principles for unsupervised learning [1, 2]. A hope has been, that they might also help us to understand strategies for unsupervised learning in biological systems. However it has turned out to be quite difficult to establish links between known learning algorithms that have been derived from these general principles, and learning rules that could possibly be implemented by synaptic plasticity of a spiking neuron. 
Fortunately, in a simpler context a direct link between an abstract information theoretic optimization goal and a rule for synaptic plasticity has recently been established [3]. The resulting rule for the change of synaptic weights in [3] maximizes the mutual information between pre- and postsynaptic spike trains, under the constraint that the postsynaptic firing rate stays close to some target firing rate. We show in this article that this approach can be extended to situations where simultaneously the mutual information between the postsynaptic spike train of the neuron and other signals (such as for example the spike trains of other neurons) has to be minimized (Figure 1). This opens the door to the exploration of learning rules for information bottleneck analysis and independent component extraction with spiking neurons that would be optimal from a theoretical perspective. We review in section 2 the neuron model and learning rule from [3]. We show in section 3 how this learning rule can be extended so that it not only maximizes mutual information with some given spike trains and keeps the output firing rate within a desired range, but simultaneously minimizes mutual information with other spike trains, or other time-varying signals. Applications to infor

Figure 1: Different learning situations analyzed in this article. (A) In an information bottleneck task the learning neuron (neuron 1) wants to maximize the mutual information between its output Y^K_1 and the activity of one or several target neurons Y^K_2, Y^K_3, . . . (which can be functions of the inputs X^K and/or other external signals), while at the same time keeping the mutual information between the inputs X^K and the output Y^K_1 as low as possible (and its firing rate within a desired range). Thus the neuron should learn to extract from its high-dimensional input those aspects that are related to these target signals. This setup is discussed in sections 3 and 4.
(B) Two neurons receiving the same inputs X^K from a common set of presynaptic neurons both learn to maximize information transmission, and simultaneously to keep their outputs Y^K_1 and Y^K_2 statistically independent. Such extraction of independent components from the input is described in section 5.

mation bottleneck tasks are discussed in section 4. In section 5 we show that a modification of this learning rule allows a spiking neuron to extract information from its input spike trains that is independent from the component extracted by another neuron.

2 Neuron model and a basic learning rule
We use the model from [3], which is a stochastically spiking neuron model with refractoriness, where the probability of firing in each time step depends on the current membrane potential and the time since the last output spike. It is convenient to formulate the model in discrete time with step size ∆t. The total membrane potential of a neuron i in time step tk = k∆t is given by

ui(tk) = ur + ∑_{j=1}^{N} ∑_{n=1}^{k} wij ϵ(tk − tn) x^n_j (1)

where ur = −70 mV is the resting potential and wij is the weight of synapse j (j = 1, . . . , N). An input spike train at synapse j up to the k-th time step is described by a sequence X^k_j = (x^1_j, x^2_j, . . . , x^k_j) of zeros (no spike) and ones (spike); each presynaptic spike at time tn (x^n_j = 1) evokes a postsynaptic potential (PSP) with exponentially decaying time course ϵ(t − tn) with time constant τm = 10 ms. The probability ρ^k_i of firing of neuron i in each time step tk is given by

ρ^k_i = 1 − exp[−g(ui(tk)) Ri(tk) ∆t] ≈ g(ui(tk)) Ri(tk) ∆t (2)

where g(u) = r0 log{1 + exp[(u − u0)/∆u]} is a smooth increasing function of the membrane potential u (u0 = −65 mV, ∆u = 2 mV, r0 = 11 Hz). The approximation is valid for sufficiently small ∆t (ρ^k_i ≪ 1).
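Equations (1)-(2) can be simulated directly. The following sketch is our illustrative implementation (the function name, the running PSP trace, and the "no recent spike" initialization are our assumptions), using the constants given in the text:

```python
import numpy as np

def simulate_neuron(X, w, dt=1e-3, rng=None):
    """Simulate the stochastic spiking neuron of Eqs. (1)-(2):
    exponentially decaying PSPs (tau_m), soft-threshold escape rate
    g(u), and multiplicative refractoriness R(t). X is a (K, N)
    binary array of presynaptic spikes, w the synaptic weights."""
    rng = np.random.default_rng(rng)
    K, N = X.shape
    u_r, u0, du, r0 = -70e-3, -65e-3, 2e-3, 11.0    # volts and Hz
    tau_m, tau_abs, tau_refr = 10e-3, 3e-3, 10e-3   # seconds
    psp = np.zeros(N)
    last_spike = -1.0            # assume no spike in the recent past
    y = np.zeros(K, dtype=int)
    for k in range(K):
        psp = psp * np.exp(-dt / tau_m) + X[k]      # decaying PSP traces
        u = u_r + w @ psp                           # Eq. (1)
        s = k * dt - last_spike - tau_abs
        R = s * s / (tau_refr**2 + s * s) if s > 0 else 0.0  # refractoriness
        g = r0 * np.log1p(np.exp((u - u0) / du))    # escape rate g(u)
        rho = 1.0 - np.exp(-g * R * dt)             # Eq. (2)
        if rng.random() < rho:
            y[k] = 1
            last_spike = k * dt
    return y
```

Note that the running variable `psp` at synapse j equals the inner sum ∑_n ϵ(tk − tn) x^n_j of Eq. (1), which is also the quantity needed later in the correlation term of Eq. (5).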
The refractory variable

  $R_i(t) = \frac{(t - \hat{t}_i - \tau_{abs})^2}{\tau_{refr}^2 + (t - \hat{t}_i - \tau_{abs})^2}\, \Theta(t - \hat{t}_i - \tau_{abs})$

assumes values in $[0, 1]$ and depends on the last firing time $\hat{t}_i$ of neuron $i$ (absolute refractory period $\tau_{abs} = 3$ ms, relative refractory time $\tau_{refr} = 10$ ms). The Heaviside step function $\Theta$ takes a value of 1 for non-negative arguments and 0 otherwise. This model from [3] is a special case of the spike-response model, and with a refractory variable $R(t)$ that depends only on the time since the last postsynaptic event it has renewal properties [4]. The output of neuron $i$ at the $k$-th time step is denoted by a variable $y_i^k$ that assumes the value 1 if a postsynaptic spike occurred and 0 otherwise. A specific spike train up to the $k$-th time step is written as $Y_i^k = (y_i^1, y_i^2, \ldots, y_i^k)$. The information transmission between an ensemble of input spike trains $\mathbf{X}^K$ and the output spike train $\mathbf{Y}_i^K$ can be quantified by the mutual information¹ [5]

  $I(\mathbf{X}^K; \mathbf{Y}_i^K) = \sum_{X^K, Y_i^K} P(X^K, Y_i^K) \log \frac{P(Y_i^K \mid X^K)}{P(Y_i^K)}$.   (3)

The idea in [3] was to maximize the quantity $I(\mathbf{X}^K; \mathbf{Y}_i^K) - \gamma D_{KL}(P(Y_i^K) \,\|\, \tilde{P}(Y_i^K))$, where $D_{KL}(P(Y_i^K) \,\|\, \tilde{P}(Y_i^K)) = \sum_{Y_i^K} P(Y_i^K) \log(P(Y_i^K)/\tilde{P}(Y_i^K))$ denotes the Kullback-Leibler divergence [5], imposing the additional constraint that the firing statistics $P(Y_i)$ of the neuron should stay as close as possible to a target distribution $\tilde{P}(Y_i)$. This distribution was chosen to be that of a constant target firing rate $\tilde{g}$, accounting for homeostatic processes. An online learning rule performing gradient ascent on this quantity was derived for the weights $w_{ij}$ of neuron $i$, with $\Delta w_{ij}^k$ denoting the weight change during the $k$-th time step:

  $\frac{\Delta w_{ij}^k}{\Delta t} = \alpha\, C_{ij}^k\, B_i^k(\gamma)$,   (4)

which consists of the "correlation term" $C_{ij}^k$ and the "postsynaptic term" $B_i^k$ [3].
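The mutual information (3) and the Kullback-Leibler divergence in the objective can be checked on small discrete distributions; a minimal sketch (helper names are ours, not code from [3]):

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) = sum_{x,y} p(x,y) log[p(y|x)/p(y)], written in the
    equivalent symmetric form p(x,y) * log[p(x,y) / (p(x) p(y))]."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log(p_xy[mask] / (px * py)[mask])).sum())

def kl_divergence(p, q):
    """D_KL(p || q) = sum_y p(y) log(p(y)/q(y))."""
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / q[mask])).sum())

# Independent joint -> zero information; perfectly correlated -> log 2.
p_indep = np.outer([0.5, 0.5], [0.3, 0.7])
p_corr = np.array([[0.5, 0.0], [0.0, 0.5]])
```

These two limits (zero for an independent joint, $\log 2$ for two perfectly correlated binary variables) are the extremes that the spike-based objective trades off.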
The term $C_{ij}^k$ measures coincidences between postsynaptic spikes at neuron $i$ and PSPs generated by presynaptic action potentials arriving at synapse $j$,

  $C_{1j}^k = C_{1j}^{k-1} \left(1 - \frac{\Delta t}{\tau_C}\right) + \sum_{n=1}^{k} \epsilon(t_k - t_n)\, x_j^n\, \frac{g'(u_1(t_k))}{g(u_1(t_k))} \left[ y_1^k - \rho_1^k \right]$,   (5)

in an exponential time window with time constant $\tau_C = 1$ s and $g'(u_i(t_k))$ denoting the derivative of $g$ with respect to $u$. The term

  $B_1^k(\gamma) = \frac{y_1^k}{\Delta t} \log\left[ \frac{g(u_1(t_k))}{\bar{g}_1(t_k)} \left( \frac{\tilde{g}}{\bar{g}_1(t_k)} \right)^{\gamma} \right] - (1 - y_1^k)\, R_1(t_k) \left[ g(u_1(t_k)) - (1 + \gamma)\bar{g}_1(t_k) + \gamma \tilde{g} \right]$   (6)

compares the current firing rate $g(u_i(t_k))$ with its average firing rate² $\bar{g}_i(t_k)$, and simultaneously the running average $\bar{g}_i(t_k)$ with the constant target rate $\tilde{g}$. The argument indicates that this term also depends on the optimization parameter $\gamma$.

3 Learning rule for multi-neuron interactions

We extend the learning rule presented in the previous section to a more complex scenario, where the mutual information between the output spike train $Y_1^K$ of the learning neuron (neuron 1) and some target spike trains $Y_l^K$ ($l > 1$) has to be maximized, while simultaneously minimizing the mutual information between the inputs $X^K$ and the output $Y_1^K$. Obviously, this is the generic information bottleneck (IB) scenario applied to spiking neurons (see Figure 1A). A learning rule for extracting independent components with spiking neurons (see section 5) can be derived in a similar manner. For simplicity, we consider the case of an IB optimization for only one target spike train $Y_2^K$, and derive an update rule for the synaptic weights $w_{1j}$ of neuron 1. The quantity to maximize is therefore

  $L = -I(\mathbf{X}^K; \mathbf{Y}_1^K) + \beta I(\mathbf{Y}_1^K; \mathbf{Y}_2^K) - \gamma D_{KL}(P(Y_1^K) \,\|\, \tilde{P}(Y_1^K))$,   (7)

where $\beta$ and $\gamma$ are optimization constants. To maximize this objective function, we derive the weight change $\Delta w_{1j}^k$ during the $k$-th time step by gradient ascent on (7), assuming that the weights $w_{1j}$ can change between some bounds $0 \le w_{1j} \le w_{max}$ (we assume $w_{max} = 1$ throughout this paper).
¹We use boldface letters ($\mathbf{X}^k$) to distinguish random variables from specific realizations ($X^k$).
²The rate $\bar{g}_i(t_k) = \langle g(u_i(t_k)) \rangle_{\mathbf{X}^k \mid Y_i^{k-1}}$ denotes an expectation of the firing rate over the input distribution given the postsynaptic history and is implemented as a running average with an exponential time window (with a time constant of 10 ms).

Note that all three terms of (7) implicitly depend on $w_{1j}$ because the output distribution $P(Y_1^K)$ changes if we modify the weights $w_{1j}$. Since the first and the last term of (7) have already been considered (up to the sign) in [3], we will concentrate here on the middle term $L_{12} := \beta I(\mathbf{Y}_1^K; \mathbf{Y}_2^K)$ and denote the contribution of the gradient of $L_{12}$ to the total weight change $\Delta w_{1j}^k$ in the $k$-th time step by $\Delta \tilde{w}_{1j}^k$. In order to get an expression for the weight change in a specific time step $t_k$ we write the probabilities $P(Y_i^K)$ and $P(Y_1^K, Y_2^K)$ occurring in (7) as products over individual time bins, i.e., $P(Y_i^K) = \prod_{k=1}^{K} P(y_i^k \mid Y_i^{k-1})$ and $P(Y_1^K, Y_2^K) = \prod_{k=1}^{K} P(y_1^k, y_2^k \mid Y_1^{k-1}, Y_2^{k-1})$, according to the chain rule of information theory [5]. Consequently, we rewrite $L_{12}$ as a sum over the contributions of the individual time bins, $L_{12} = \sum_{k=1}^{K} \Delta L_{12}^k$, with

  $\Delta L_{12}^k = \left\langle \beta \log \frac{P(y_1^k, y_2^k \mid Y_1^{k-1}, Y_2^{k-1})}{P(y_1^k \mid Y_1^{k-1})\, P(y_2^k \mid Y_2^{k-1})} \right\rangle_{\mathbf{X}^k, \mathbf{Y}_1^k, \mathbf{Y}_2^k}$.   (8)

The weight change $\Delta \tilde{w}_{1j}^k$ is then proportional to the gradient of this expression with respect to the weights $w_{1j}$, i.e., $\Delta \tilde{w}_{1j}^k = \alpha\, (\partial \Delta L_{12}^k / \partial w_{1j})$, with some learning rate $\alpha > 0$. The evaluation of the gradient yields $\Delta \tilde{w}_{1j}^k = \alpha \langle C_{1j}^k\, \beta F_{12}^k \rangle_{\mathbf{X}^k, \mathbf{Y}_1^k, \mathbf{Y}_2^k}$ with a correlation term $C_{1j}^k$ as in (5) and a term

  $F_{12}^k = y_1^k y_2^k \log \frac{\bar{g}_{12}(t_k)}{\bar{g}_1(t_k)\, \bar{g}_2(t_k)} - y_1^k (1 - y_2^k)\, R_2(t_k)\, \Delta t \left[ \frac{\bar{g}_{12}(t_k)}{\bar{g}_1(t_k)} - \bar{g}_2(t_k) \right] - (1 - y_1^k)\, y_2^k\, R_1(t_k)\, \Delta t \left[ \frac{\bar{g}_{12}(t_k)}{\bar{g}_2(t_k)} - \bar{g}_1(t_k) \right] + (1 - y_1^k)(1 - y_2^k)\, R_1(t_k) R_2(t_k)\, (\Delta t)^2 \left[ \bar{g}_{12}(t_k) - \bar{g}_1(t_k)\, \bar{g}_2(t_k) \right]$.
  (9)

Here, $\bar{g}_i(t_k) = \langle g(u_i(t_k)) \rangle_{\mathbf{X}^k \mid Y_i^{k-1}}$ denotes the average firing rate of neuron $i$ and $\bar{g}_{12}(t_k) = \langle g(u_1(t_k))\, g(u_2(t_k)) \rangle_{\mathbf{X}^k \mid Y_1^{k-1}, Y_2^{k-1}}$ denotes the average product of firing rates of both neurons. Both quantities are implemented online as running exponential averages with a time constant of 10 s. Under the assumption of a small learning rate $\alpha$ we can approximate the expectation $\langle \cdot \rangle_{\mathbf{X}^k, \mathbf{Y}_1^k, \mathbf{Y}_2^k}$ by averaging over a single long trial. Considering now all three terms in (7) we finally arrive at an online rule for maximizing (7),

  $\frac{\Delta w_{1j}^k}{\Delta t} = -\alpha\, C_{1j}^k \left[ B_1^k(-\gamma) - \beta \Delta t\, B_{12}^k \right]$,   (10)

which consists of a term $C_{1j}^k$ sensitive to correlations between the output of the neuron and its presynaptic input at synapse $j$ ("correlation term") and terms $B_1^k$ and $B_{12}^k$ that characterize the postsynaptic state of the neuron ("postsynaptic terms"). Note that the argument of $B_1^k$ is different from (4) because some of the terms of the objective function (7) have a different sign. In order to compensate the effect of a small $\Delta t$, the constant $\beta$ has to be large enough for the term $B_{12}^k$ to have an influence on the weight change. The factors $C_{1j}^k$ and $B_1^k$ were described in the previous section. In addition, our learning rule contains an extra term $B_{12}^k = F_{12}^k / (\Delta t)^2$ that is sensitive to the statistical dependence between the output spike train of the neuron and the target. It is given by

  $B_{12}^k = \frac{y_1^k y_2^k}{(\Delta t)^2} \log \frac{\bar{g}_{12}(t_k)}{\bar{g}_1(t_k)\, \bar{g}_2(t_k)} - \frac{y_1^k}{\Delta t} (1 - y_2^k)\, R_2(t_k) \left[ \frac{\bar{g}_{12}(t_k)}{\bar{g}_1(t_k)} - \bar{g}_2(t_k) \right] - \frac{y_2^k}{\Delta t} (1 - y_1^k)\, R_1(t_k) \left[ \frac{\bar{g}_{12}(t_k)}{\bar{g}_2(t_k)} - \bar{g}_1(t_k) \right] + (1 - y_1^k)(1 - y_2^k)\, R_1(t_k) R_2(t_k) \left[ \bar{g}_{12}(t_k) - \bar{g}_1(t_k)\, \bar{g}_2(t_k) \right]$.   (11)

This term basically compares the average product of firing rates $\bar{g}_{12}$ (which corresponds to the joint probability of spiking) with the product of average firing rates $\bar{g}_1 \bar{g}_2$ (representing the probability of independent spiking). In this way, it measures the momentary mutual information between the output of the neuron and the target spike train.
For a simplified neuron model without refractoriness ($R(t) = 1$), the update rule (4) resembles the BCM rule [6], as shown in [3]. With the objective function (7) to maximize, we expect an "anti-Hebbian BCM" rule with another term accounting for statistical dependencies between $Y_1^K$ and $Y_2^K$. Since there is no refractoriness, the postsynaptic rate $\nu_1(t_k)$ is given directly by the current value of $g(u(t_k))$, and the update rule (10) reduces to the rate model³

  $\frac{\Delta w_{1j}^k}{\Delta t} = -\alpha\, \nu_j^{pre,k}\, f(\nu_1^k) \left\{ \log\left[ \frac{\nu_1^k}{\bar{\nu}_1^k} \left( \frac{\bar{\nu}_1^k}{\tilde{g}} \right)^{\gamma} \right] - \beta \Delta t \left( \nu_2^k \log\left[ \frac{\bar{\nu}_{12}^k}{\bar{\nu}_1^k \bar{\nu}_2^k} \right] - \bar{\nu}_2^k \left[ \frac{\bar{\nu}_{12}^k}{\bar{\nu}_1^k \bar{\nu}_2^k} - 1 \right] \right) \right\}$,   (12)

where the presynaptic rate at synapse $j$ at time $t_k$ is denoted by $\nu_j^{pre,k} = a \sum_{n=1}^{k} \epsilon(t_k - t_n)\, x_j^n$ with $a$ in units (Vs)⁻¹. The values $\bar{\nu}_1^k$, $\bar{\nu}_2^k$, and $\bar{\nu}_{12}^k$ are running averages of the output rate $\nu_1^k$, the rate of the target signal $\nu_2^k$, and of the product of these values, $\nu_1^k \nu_2^k$, respectively. The function $f(\nu_1^k) = g'(g^{-1}(\nu_1^k))/a$ is proportional to the derivative of $g$ with respect to $u$, evaluated at the current membrane potential. The first term in the curly brackets accounts for the homeostatic process (similar to the BCM rule, see [3]), whereas the second term reinforces dependencies between $Y_1^K$ and $Y_2^K$. Note that this term is zero if the rates of the two neurons are independent. It is interesting to note that if we rewrite the simplified rate-based learning rule (12) in the following way,

  $\frac{\Delta w_{1j}^k}{\Delta t} = -\alpha\, \nu_j^{pre,k}\, \Phi(\nu_1^k, \nu_2^k)$,   (13)

we can view it as an extension of the classical Bienenstock-Cooper-Munro (BCM) rule [6] with a two-dimensional synaptic modification function $\Phi(\nu_1^k, \nu_2^k)$. Here, values of $\Phi > 0$ produce LTD whereas values of $\Phi < 0$ produce LTP. These regimes are separated by a sliding threshold; however, in contrast to the original BCM rule, this threshold does not only depend on the running average of the postsynaptic rate $\bar{\nu}_1^k$, but also on the current values of $\nu_2^k$ and $\bar{\nu}_2^k$.
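The dependence-reinforcing term in the curly brackets of the rate rule vanishes exactly when the output and target rates are independent, i.e. when $\bar{\nu}_{12}^k = \bar{\nu}_1^k \bar{\nu}_2^k$. A tiny check (the helper is hypothetical, for illustration only):

```python
import math

def dependency_term(nu2, nu12_bar, nu1_bar, nu2_bar):
    """Second term in the curly brackets of the rate rule (12):
    nu2 * log(r) - nu2_bar * (r - 1), with r = nu12_bar / (nu1_bar * nu2_bar)."""
    r = nu12_bar / (nu1_bar * nu2_bar)
    return nu2 * math.log(r) - nu2_bar * (r - 1.0)

# Independent rates: the running average of the product equals the
# product of the running averages, so the term is exactly zero.
independent = dependency_term(nu2=20.0, nu12_bar=20.0 * 30.0,
                              nu1_bar=20.0, nu2_bar=30.0)
# Correlated rates (average product 20% above independence): nonzero drive.
correlated = dependency_term(nu2=20.0, nu12_bar=1.2 * 20.0 * 30.0,
                             nu1_bar=20.0, nu2_bar=30.0)
```

This mirrors the statement in the text that the second term contributes nothing when the rates of the two neurons are independent.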
4 Application to Information Bottleneck Optimization

We use a setup as in Figure 1A where we want to maximize the information which the output $Y_1^K$ of a learning neuron conveys about two target signals $Y_2^K$ and $Y_3^K$. If the target signals are statistically independent from each other we can optimize the mutual information to each target signal separately. This leads to an update rule

  $\frac{\Delta w_{1j}^k}{\Delta t} = -\alpha\, C_{1j}^k \left[ B_1^k(-\gamma) - \beta \Delta t \left( B_{12}^k + B_{13}^k \right) \right]$,   (14)

where $B_{12}^k$ and $B_{13}^k$ are the postsynaptic terms (11) sensitive to the statistical dependence between the output and target signals 1 and 2, respectively. We choose $\tilde{g} = 30$ Hz for the target firing rate, and we use discrete time with $\Delta t = 1$ ms. In this experiment we demonstrate that it is possible to consider two very different kinds of target signals: one target spike train has a similar rate modulation as one part of the input, while the other target spike train has a high spike-spike correlation with another part of the input. The learning neuron receives input at 100 synapses, which are divided into 4 groups of 25 inputs each. The first two input groups consist of rate-modulated Poisson spike trains⁴ (Figure 2A). Spike trains from the remaining groups 3 and 4 are correlated with a coefficient of 0.5 within each group; however, spike trains from different groups are uncorrelated. Correlated spike trains are generated by the procedure described in [7]. The first target signal is chosen to have the same rate modulation as the inputs from group 1, except that Gaussian random noise is superimposed with a standard deviation of 2 Hz. The second target spike train is correlated with inputs from group 3 (with a coefficient of 0.5), but uncorrelated to inputs from group 4. Furthermore, both target signals are silent during random intervals: at each time step, each target signal is independently set to 0 with a certain probability ($10^{-5}$) and remains silent for a duration chosen from a Gaussian distribution with mean 5 s and SD 1 s (minimum duration is 1 s). Hence this experiment tests whether learning works even if the target signals are not available all of the time. Figure 2 shows that strong weights evolve for the first and third group of synapses, whereas the efficacies for the remaining inputs are depressed.

³In the absence of refractoriness we use an alternative gain function $g_{alt}(u) = [1/g_{max} + 1/g(u)]^{-1}$ in order to pose an upper limit of $g_{max} = 100$ Hz on the postsynaptic firing rate.

Figure 2: Performance of the spike-based learning rule (10) for the IB task. A Modulation of input rates to input groups 1 and 2. B Evolution of weights during 60 minutes of learning (bright: strong synapses, $w_{ij} \approx 1$; dark: depressed synapses, $w_{ij} \approx 0$). Weights are initialized randomly between 0.10 and 0.12, $\alpha = 10^{-4}$, $\beta = 2 \cdot 10^3$, $\gamma = 50$. C Output rate and rate of target signal 1 during 5 seconds after learning. D Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin (dashed line, right scale) as a function of time. Averages are calculated over segments of 1 minute. E Evolution of the average mutual information per time bin between output and both target spike trains as a function of time. F Trace of the correlation between output rate and rate of target signal 1 (solid line) and the spike-spike correlation (dashed line) between the output and target spike train 2 during learning. Correlation coefficients are calculated every 10 seconds.
Both groups with growing weights are correlated with one of the target signals; therefore the mutual information between output and target spike trains increases. Since spike-spike correlations convey more information than rate modulations, synaptic efficacies develop more strongly to group 3 (the group with spike-spike correlations). This results in an initial decrease in correlation with the rate-modulated target to the benefit of higher correlation with the second target. However, after about 30 minutes, when the weights become stable, the correlations as well as the mutual information quantities stay roughly constant. An application of the simplified rule (12) to the same task is shown in Figure 3, where it can be seen that strong weights close to $w_{max}$ are developed for the rate-modulated input. To some extent, weights grow also for the inputs with spike-spike correlations in order to reach the constant target firing rate $\tilde{g}$. In contrast to the spike-based rule, the simplified rule is not able to detect spike-spike correlations between output and target spike trains.

⁴The rate of the first 25 inputs is modulated by a Gaussian white-noise signal with mean 20 Hz that has been low-pass filtered with a cut-off frequency of 5 Hz. Synapses 26 to 50 receive a rate that has a constant value of 2 Hz, except that a burst is initiated at each time step with a probability of 0.0005. Thus there is a burst on average every 2 s. The duration of a burst is chosen from a Gaussian distribution with mean 0.5 s and SD 0.2 s; the minimum duration is chosen to be 0.1 s. During a burst the rate is set to 50 Hz. In the simulations we use discrete time with $\Delta t = 1$ ms.

Figure 3: Performance of the simplified update rule (12) for the IB task. A Evolution of weights during 30 minutes of learning (bright: strong synapses, $w_{ij} \approx 1$; dark: depressed synapses, $w_{ij} \approx 0$). Weights are initialized randomly between 0.10 and 0.12, $\alpha = 10^{-3}$, $\beta = 10^4$, $\gamma = 10$. B Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin (dashed line, right scale) as a function of time. Averages are calculated over segments of 1 minute. C Trace of the correlation between output rate and target rate during learning. Correlation coefficients are calculated every 10 seconds.

5 Extracting Independent Components

With a slight modification in the objective function (7), the learning rule allows us to extract statistically independent components from an ensemble of input spike trains. We consider two neurons receiving the same input at their synapses (see Figure 1B). For both neurons $i = 1, 2$ we maximize information transmission under the constraint that their outputs stay as statistically independent from each other as possible. That is, we maximize

  $\tilde{L}_i = I(\mathbf{X}^K; \mathbf{Y}_i^K) - \beta I(\mathbf{Y}_1^K; \mathbf{Y}_2^K) - \gamma D_{KL}(P(Y_i^K) \,\|\, \tilde{P}(Y_i^K))$.   (15)

Since the same terms (up to the sign) are optimized in (7) and (15), we can derive a gradient ascent rule for the weights $w_{ij}$ of neuron $i$ analogously to section 3:

  $\frac{\Delta w_{ij}^k}{\Delta t} = \alpha\, C_{ij}^k \left[ B_i^k(\gamma) - \beta \Delta t\, B_{12}^k \right]$.   (16)

Figure 4 shows the results of an experiment where two neurons receive the same Poisson input with a rate of 20 Hz at their 100 synapses. The input is divided into two groups of 40 spike trains each, such that synapses 1 to 40 and 41 to 80 receive correlated input with a correlation coefficient of 0.5 within each group; however, any spike trains belonging to different input groups are uncorrelated. The remaining 20 synapses receive uncorrelated Poisson input.
For neuron 1, weights close to the maximal efficacy $w_{max} = 1$ are developed for one of the groups of synapses that receives correlated input (group 2 in this case), whereas those for the other correlated group (group 1) as well as those for the uncorrelated group (group 3) stay low. Neuron 2 develops strong weights to the other correlated group of synapses (group 1), whereas the efficacies of the second correlated group (group 2) remain depressed, thereby trying to produce a statistically independent output. For both neurons the mutual information is maximized and the target output distribution of a constant firing rate of 30 Hz is approached well. After an initial increase in the mutual information and in the correlation between the outputs, when the weights of both neurons start to grow simultaneously, the amounts of information and correlation drop as both neurons develop strong efficacies to different parts of the input.

6 Discussion

Information Bottleneck (IB) analysis and Independent Component Analysis (ICA) have been proposed as general principles for unsupervised learning in lower cortical areas; however, learning rules that can implement these principles with spiking neurons have been missing. In this article we have derived, from information-theoretic principles, learning rules which enable a stochastically spiking neuron to solve these tasks.
These learning rules are optimal from the perspective of information theory, but they are not local in the sense that they use only information that is available at a single synapse without an auxiliary network of interneurons or other biological processes. Rather, they tell us what type of information would have to be ideally provided by such an auxiliary network, and how the synapse should change its efficacy in order to approximate a theoretically optimal learning rule.

Figure 4: Extracting independent components. A,B Evolution of weights during 30 minutes of learning for both postsynaptic neurons (red: strong synapses, $w_{ij} \approx 1$; blue: depressed synapses, $w_{ij} \approx 0$). Weights are initialized randomly between 0.10 and 0.12, $\alpha = 10^{-3}$, $\beta = 100$, $\gamma = 10$. C Evolution of the average mutual information per time bin between both output spike trains as a function of time. D,E Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin for both neurons (dashed line, right scale) as a function of time. Averages are calculated over segments of 1 minute. F Trace of the correlation between both output spike trains during learning. Correlation coefficients are calculated every 10 seconds.

Acknowledgments

We would like to thank Wulfram Gerstner and Jean-Pascal Pfister for helpful discussions.
This paper was written under partial support by the Austrian Science Fund FWF, # S9102-N13 and # P17229-N04, and was also supported by PASCAL, project # IST-2002-506778, and FACETS, project # 15879, of the European Union.

References

[1] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368–377, 1999.
[2] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley, New York, 2001.
[3] T. Toyoizumi, J.-P. Pfister, K. Aihara, and W. Gerstner. Generalized Bienenstock-Cooper-Munro rule for spiking neurons that maximizes information transmission. Proc. Natl. Acad. Sci. USA, 102:5239–5244, 2005.
[4] W. Gerstner and W. M. Kistler. Spiking Neuron Models. Cambridge University Press, Cambridge, 2002.
[5] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New York, 1991.
[6] E. L. Bienenstock, L. N. Cooper, and P. W. Munro. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci., 2(1):32–48, 1982.
[7] R. Gütig, R. Aharonov, S. Rotter, and H. Sompolinsky. Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. J. Neurosci., 23:3697–3714, 2003.
Bayesian Ensemble Learning

Hugh A. Chipman
Department of Mathematics and Statistics, Acadia University, Wolfville, NS, Canada

Edward I. George
Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104-6302

Robert E. McCulloch
Graduate School of Business, University of Chicago, Chicago, IL 60637

Abstract

We develop a Bayesian "sum-of-trees" model, named BART, where each tree is constrained by a prior to be a weak learner. Fitting and inference are accomplished via an iterative backfitting MCMC algorithm. This model is motivated by ensemble methods in general, and boosting algorithms in particular. Like boosting, each weak learner (i.e., each weak tree) contributes a small amount to the overall model. However, our procedure is defined by a statistical model: a prior and a likelihood, while boosting is defined by an algorithm. This model-based approach enables a full and accurate assessment of uncertainty in model predictions, while remaining highly competitive in terms of predictive accuracy.

1 Introduction

We consider the fundamental problem of making inference about an unknown function $f$ that predicts an output $Y$ using a $p$-dimensional vector of inputs $x$ when

  $Y = f(x) + \epsilon$,   $\epsilon \sim N(0, \sigma^2)$.

To do this, we consider modelling, or at least approximating, $f(x) = E(Y \mid x)$, the mean of $Y$ given $x$, by a sum of $m$ regression trees:

  $f(x) \approx g_1(x) + g_2(x) + \ldots + g_m(x)$,

where each $g_i$ denotes a binary regression tree. The sum-of-trees model is fundamentally an additive model with multivariate components. It is vastly more flexible than a single tree model, which does not easily incorporate additive effects. Because multivariate components can easily account for high-order interaction effects, a sum-of-trees model is also much more flexible than typical additive models that use low-dimensional smoothers as components. Our approach is fully model-based and Bayesian.
We specify a prior, and then obtain a sequence of draws from the posterior using Markov chain Monte Carlo (MCMC). The prior plays two essential roles. First, with $m$ chosen large, it restrains the fit of each individual $g_i$ so that the overall fit is made up of many small contributions in the spirit of boosting (Freund & Schapire (1997), Friedman (2001)). Each $g_i$ is a "weak learner". Second, it "regularizes" the model by restraining the overall fit to achieve a good bias-variance tradeoff. The prior specification is kept simple, and a default choice is shown to have good out-of-sample predictive performance. Inferential uncertainty is naturally quantified in the usual Bayesian way: variation in the MCMC draws of $f = \sum g_i$ (evaluated at a set of $x$ of interest) and $\sigma$ indicates our beliefs about plausible values given the data. Note that the depth of each tree is not fixed, so that we infer the level of interaction. Our point estimate of $f$ is the average of the draws. Thus, our procedure captures ensemble learning (in which many trees are combined) both in the fundamental sum-of-trees specification and in the model-averaging used to obtain the estimate.

2 The Model

The model consists of two parts: a sum-of-trees model, which we have named BART (Bayesian Additive Regression Trees), and a regularization prior.

2.1 A Sum-of-Trees Model

To elaborate the form of a sum-of-trees model, we begin by establishing notation for a single tree model. Let $T$ denote a binary tree consisting of a set of interior node decision rules and a set of terminal nodes, and let $M = \{\mu_1, \mu_2, \ldots, \mu_B\}$ denote a set of parameter values associated with each of the $B$ terminal nodes of $T$. Prediction for a particular value of input vector $x$ is accomplished as follows: if $x$ is associated with terminal node $b$ of $T$ by the sequence of decision rules from top to bottom, it is then assigned the $\mu_b$ value associated with this terminal node.
We use $g(x; T, M)$ to denote the function corresponding to $(T, M)$ which assigns a $\mu_b \in M$ to $x$. Using this notation, and letting $g_i(x) = g(x; T_i, M_i)$, our sum-of-trees model can more explicitly be expressed as

  $Y = g(x; T_1, M_1) + g(x; T_2, M_2) + \cdots + g(x; T_m, M_m) + \epsilon$,   (1)
  $\epsilon \sim N(0, \sigma^2)$.   (2)

Unlike the single tree model, when $m > 1$ the terminal node parameter $\mu_b$ given by $g(x; T_i, M_i)$ is merely part of the conditional mean of $Y$ given $x$. Such terminal node parameters will represent interaction effects when their assignment depends on more than one component of $x$ (i.e., more than one variable). Because (1) may be based on trees of varying sizes, the sum-of-trees model can incorporate both direct effects and interaction effects of varying orders. In the special case where every terminal node assignment depends on just a single component of $x$, the sum-of-trees model reduces to a simple additive function. With a large number of trees, a sum-of-trees model gains increased representation flexibility, which, when coupled with our regularization prior, gives excellent out-of-sample predictive performance. Indeed, in the examples in Section 4, we set $m$ as large as 200. Note that with $m$ large there are hundreds of parameters, of which only $\sigma$ is identified. This is not a problem for our Bayesian analysis. Indeed, this lack of identification is the reason our MCMC mixes well. Even when $m$ is much larger than needed to capture $f$ (effectively, we have an "overcomplete basis") the procedure still works well.

2.2 A Regularization Prior

The complexity of the prior specification is vastly simplified by letting the $T_i$ be i.i.d., the $\mu_{i,b}$ (node $b$ of tree $i$) be i.i.d. given the set of $T$, and $\sigma$ be independent of all $T$ and $\mu$. Given these independence assumptions we need only choose priors for a single tree $T$, a single $\mu$, and $\sigma$. Motivated by our desire to make each $g(x; T_i, M_i)$ a small contribution to the overall fit, we put prior weight on small trees and small $\mu_{i,b}$.
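The prediction mechanism of the sum-of-trees model — route $x$ through each tree's decision rules to a terminal $\mu_b$ and add the results — can be sketched as follows (the dict-based tree layout is our own illustration, not the paper's implementation):

```python
def g(x, node):
    """g(x; T, M): follow the decision rules to a terminal node, return its mu."""
    while "mu" not in node:  # interior node carries a decision rule
        node = node["left"] if x[node["var"]] <= node["cut"] else node["right"]
    return node["mu"]

def sum_of_trees(x, trees):
    """f(x) ~ g(x; T1, M1) + ... + g(x; Tm, Mm), as in equation (1)."""
    return sum(g(x, t) for t in trees)

# Two tiny trees: a stump splitting on x[0], and a single-node (constant) tree.
t1 = {"var": 0, "cut": 0.5, "left": {"mu": -0.1}, "right": {"mu": 0.2}}
t2 = {"mu": 0.05}
```

The stump's contribution depends on $x$ while the single-node tree contributes a constant, illustrating how trees of varying sizes mix direct and interaction effects.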
For the tree prior, we use the same specification as in Chipman, George & McCulloch (1998). In this prior, the probability that a node is nonterminal is $\alpha(1 + d)^{-\beta}$, where $d$ is the depth of the node. In all examples we use the same prior corresponding to the choice $\alpha = .95$ and $\beta = 2$. With this choice, trees with 1, 2, 3, 4, and $\geq 5$ terminal nodes receive prior probability of 0.05, 0.55, 0.28, 0.09, and 0.03, respectively. Note that even with this prior, trees with many terminal nodes can be grown if the data demands it. At any nonterminal node, the prior on the associated decision rule puts equal probability on each available variable and then equal probability on each available rule given the variable. For the prior on a $\mu$, we start by simply shifting and rescaling $Y$ so that we believe the prior probability that $E(Y \mid x) \in (-.5, .5)$ is very high. We let $\mu \sim N(0, \sigma_\mu^2)$. Given the $T_i$ and an $x$, $E(Y \mid x)$ is the sum of $m$ independent $\mu$'s. The standard deviation of the sum is $\sqrt{m}\, \sigma_\mu$. We choose $\sigma_\mu$ so that .5 is within $k$ standard deviations of zero: $k \sqrt{m}\, \sigma_\mu = .5$. For example, if $k = 2$ there is a 95% (conditional) prior probability that the mean of $Y$ is in $(-.5, .5)$. $k = 2$ is our default choice, and in practice we typically rescale the response $y$ so that its observed values range from $-.5$ to $.5$. Note that this prior increases the shrinkage of $\mu_{i,b}$ (toward zero) as $m$ increases. For the prior on $\sigma$ we start from the usual inverted-chi-squared prior: $\sigma^2 \sim \nu\lambda/\chi^2_\nu$. To choose the hyperparameters $\nu$ and $\lambda$, we begin by obtaining a "rough overestimate" $\hat{\sigma}$ of $\sigma$. We then pick a degrees-of-freedom value $\nu$ between 3 and 10. Finally, we pick a value of $q$ such as 0.75, 0.90 or 0.99, and set $\lambda$ so that the $q$th quantile of the prior on $\sigma$ is located at $\hat{\sigma}$, that is, $P(\sigma < \hat{\sigma}) = q$.

Figure 1: Three priors on $\sigma$ when $\hat{\sigma} = 2$ (conservative: df = 10, quantile = .75; default: df = 3, quantile = .9; aggressive: df = 3, quantile = .99).
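Two quick checks on the regularization prior described above — the depth penalty $\alpha(1+d)^{-\beta}$ with the defaults $\alpha = .95$, $\beta = 2$, and the $\mu$ scale $\sigma_\mu = .5/(k\sqrt{m})$ (a minimal sketch; the function names are ours):

```python
import math

alpha, beta = 0.95, 2.0  # defaults from the text

def p_nonterminal(d):
    """Prior probability that a node at depth d is nonterminal: alpha*(1+d)^-beta."""
    return alpha * (1.0 + d) ** (-beta)

def sigma_mu(k, m):
    """mu prior scale chosen so that k * sqrt(m) * sigma_mu = 0.5."""
    return 0.5 / (k * math.sqrt(m))
```

Splitting quickly becomes improbable with depth (0.95 at the root, about 0.24 at depth 1, about 0.11 at depth 2), and $\sigma_\mu$ shrinks as $m$ grows, matching the observation that the prior shrinks each $\mu_{i,b}$ more strongly for larger ensembles.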
Figure 1 illustrates priors corresponding to three $(\nu, q)$ settings when the rough overestimate is $\hat{\sigma} = 2$. We refer to these three settings, $(\nu, q) = (10, 0.75), (3, 0.90), (3, 0.99)$, as conservative, default, and aggressive, respectively. For automatic use, we recommend the default setting $(\nu, q) = (3, 0.90)$, which tends to avoid extremes. Simple data-driven choices of $\hat{\sigma}$ we have used in practice are the estimate from a linear regression or the sample standard deviation of $Y$. Note that this prior choice can be influential. Strong prior beliefs that $\sigma$ is very small could lead to over-fitting.

3 A Backfitting MCMC Algorithm

Given the observed data $y$, our Bayesian setup induces a posterior distribution $p((T_1, M_1), \ldots, (T_m, M_m), \sigma \mid y)$ on all the unknowns that determine a sum-of-trees model. Although the sheer size of this parameter space precludes exhaustive calculation, the following backfitting MCMC algorithm can be used to sample from this posterior. At a general level, our algorithm is a Gibbs sampler. For notational convenience, let $T_{(i)}$ be the set of all trees in the sum except $T_i$, and similarly define $M_{(i)}$. The Gibbs sampler here entails $m$ successive draws of $(T_i, M_i)$ conditionally on $(T_{(i)}, M_{(i)}, \sigma)$:

  $(T_1, M_1) \mid T_{(1)}, M_{(1)}, \sigma, y$
  $(T_2, M_2) \mid T_{(2)}, M_{(2)}, \sigma, y$   (3)
  $\vdots$
  $(T_m, M_m) \mid T_{(m)}, M_{(m)}, \sigma, y$,

followed by a draw of $\sigma$ from the full conditional:

  $\sigma \mid T_1, \ldots, T_m, M_1, \ldots, M_m, y$.   (4)

Hastie & Tibshirani (2000) considered a similar application of the Gibbs sampler for posterior sampling for additive and generalized additive models with $\sigma$ fixed, and showed how it was a stochastic generalization of the backfitting algorithm for such models. For this reason, we refer to our algorithm as backfitting MCMC. In contrast with the stagewise nature of most boosting algorithms (Freund & Schapire (1997), Friedman (2001), Meek, Thiesson & Heckerman (2002)), the backfitting MCMC algorithm repeatedly resamples the parameters of each learner in the ensemble.
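Setting $\lambda$ from $(\nu, q, \hat{\sigma})$ as described above reduces to inverting a chi-squared tail: since $\sigma^2 = \nu\lambda/X$ with $X \sim \chi^2_\nu$, we have $P(\sigma < \hat{\sigma}) = P(X > \nu\lambda/\hat{\sigma}^2)$, so $\nu\lambda/\hat{\sigma}^2$ must equal the $(1-q)$ quantile of $\chi^2_\nu$. A sketch using SciPy (our own helper, not the authors' code):

```python
from scipy.stats import chi2

def choose_lambda(nu, q, sigma_hat):
    """Pick lambda so that P(sigma < sigma_hat) = q when
    sigma^2 ~ nu * lambda / chisq_nu (the inverted-chi-squared prior)."""
    return sigma_hat**2 * chi2.ppf(1.0 - q, df=nu) / nu

# Default setting (nu, q) = (3, 0.90) with rough overestimate sigma_hat = 2.
lam = choose_lambda(3, 0.90, 2.0)
```

By construction the resulting prior puts probability $q$ on $\sigma < \hat{\sigma}$, which can be verified by evaluating the chi-squared survival function at $\nu\lambda/\hat{\sigma}^2$.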
The idea is that given $(T_{(i)}, M_{(i)})$ and $\sigma$ we may subtract the fit from $(T_{(i)}, M_{(i)})$ from both sides of (1), leaving us with a single tree model with known error variance. This draw may be made following the approach of Chipman et al. (1998) or the refinement of Wu, Tjelmeland & West (2007). These methods draw $(T_i, M_i) \mid T_{(i)}, M_{(i)}, \sigma, y$ as $T_i \mid T_{(i)}, M_{(i)}, \sigma, y$ followed by $M_i \mid T_i, T_{(i)}, M_{(i)}, \sigma, y$. The first draw is done by the Metropolis-Hastings algorithm after integrating out $M_i$, and the second is a set of normal draws. The draw of $\sigma$ is easily accomplished by subtracting all the fit from both sides of (1), so that the $\epsilon$ are considered to be observed. The draw is then a standard inverted-chi-squared. The Metropolis-Hastings draw of $T_i \mid T_{(i)}, M_{(i)}, \sigma, y$ is complex and lies at the heart of our method. The algorithm of Chipman et al. (1998) proposes a new tree based on the current tree using one of four moves. The moves and their associated proposal probabilities are: growing a terminal node (0.25), pruning a pair of terminal nodes (0.25), changing a nonterminal rule (0.40), and swapping a rule between parent and child (0.10). Although the grow and prune moves change the implicit dimensionality of the proposed tree in terms of the number of terminal nodes, by integrating out $M_i$ from the posterior, we avoid the complexities associated with reversible jumps between continuous spaces of varying dimensions (Green 1995). We initialize the chain with $m$ single-node trees, and then iterations are repeated until satisfactory convergence is obtained. At each iteration, each tree may increase or decrease the number of terminal nodes by one, or change one or two decision rules. Each $\mu$ will change (or cease to exist or be born), and $\sigma$ will change. It is not uncommon for a tree to grow large and then subsequently collapse back down to a single node as the algorithm iterates.
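The key step of the backfitting sampler — subtracting the fit of all other trees from $y$ so that drawing $(T_i, M_i)$ becomes a single-tree problem with known error variance — amounts to a residual computation; a toy sketch (the array-of-fits representation is our own, not the paper's implementation):

```python
import numpy as np

def residual_target(y, fits, i):
    """Target for the conditional draw (Ti, Mi) | T(i), M(i), sigma, y:
    y minus the summed fit of every tree except tree i."""
    return y - (fits.sum(axis=0) - fits[i])

# Three constant "trees" whose fits happen to sum to the data exactly.
y = np.array([3.0, 3.0, 3.0])
fits = np.array([[1.0, 1.0, 1.0],   # tree 0
                 [0.5, 0.5, 0.5],   # tree 1
                 [1.5, 1.5, 1.5]])  # tree 2
r0 = residual_target(y, fits, 0)   # what tree 0 alone must now explain
```

Because each conditional draw sees only this residual, "fit" can migrate freely between trees across iterations, which is exactly the reallocation behavior discussed below.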
The sum-of-trees model, with its abundance of unidentified parameters, allows for “fit” to be freely reallocated from one tree to another. Because each move makes only small incremental changes to the fit, we can imagine the algorithm as analogous to sculpting a complex figure by adding and subtracting small dabs of clay. Compared to the single tree model MCMC approach of Chipman et al. (1998), our backfitting MCMC algorithm mixes dramatically better. When only single tree models are considered, the MCMC algorithm tends to quickly gravitate toward a single large tree and then gets stuck in a local neighborhood of that tree. In sharp contrast, we have found that restarts of the backfitting MCMC algorithm give remarkably similar results even in difficult problems. Consequently, we run one long chain rather than multiple starts. In some ways backfitting MCMC is a stochastic alternative to boosting algorithms for fitting linear combinations of trees. It is distinguished by the ability to sample from a posterior distribution. At each iteration, we get a new draw

f* = g(x; T1, M1) + g(x; T2, M2) + . . . + g(x; Tm, Mm)   (5)

corresponding to the current draws of the Tj and Mj. These draws are a (dependent) sample from the posterior distribution on the “true” f. Rather than pick the “best” f* from these draws, the set of multiple draws can be used to further enhance inference. We estimate f by the posterior mean of f, which is approximated by averaging the f* over the draws. Further, we can gauge our uncertainty about the actual underlying f by the variation across the draws. For example, we can use the 5% and 95% quantiles of f*(x) to obtain 90% posterior intervals for f(x).

4 Examples

In this section we illustrate the potential of our Bayesian ensemble procedure BART in a large experiment using 42 datasets.
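The posterior summaries described at the end of Section 3 (the posterior mean of f and pointwise 90% intervals) can be read directly off the saved draws f*. A minimal numpy sketch, with array and function names of our own choosing:

```python
import numpy as np

def posterior_summary(f_draws, lo=5.0, hi=95.0):
    """Summarize MCMC draws of f.

    f_draws: array of shape (n_draws, n_points); row j holds f*_j evaluated
    at the points of interest. Returns the posterior-mean estimate of f and
    pointwise (lo, hi) percentile interval endpoints.
    """
    f_draws = np.asarray(f_draws)
    f_hat = f_draws.mean(axis=0)                 # posterior-mean estimate of f
    lower = np.percentile(f_draws, lo, axis=0)   # e.g. 5% quantile
    upper = np.percentile(f_draws, hi, axis=0)   # e.g. 95% quantile
    return f_hat, lower, upper
```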
The data are a subset of 52 sets considered by Kim, Loh, Shih & Chaudhuri (2007). Ten datasets were excluded either because Random Forests was unable to use over 32 categorical predictors, or because a single train/test split was used in the original paper. All datasets correspond to regression problems with between 3 and 28 numeric predictors and 0 to 6 categorical predictors. Categorical predictors were converted into 0/1 indicator variables corresponding to each level. Sample sizes vary from 96 to 6806 observations. As competitors we considered linear regression with L1 regularization (the Lasso) (Efron, Hastie, Johnstone & Tibshirani 2004) and the following black-box models: Friedman’s (2001) gradient boosting, random forests (Breiman 2001), and neural networks with one layer of hidden units. Implementation details are given in Chipman, George & McCulloch (2006). Tree models were not considered, since they tend to sacrifice predictive performance for interpretability. We considered two versions of our Bayesian ensemble procedure BART. In BART-cv, the prior hyperparameters (ν, q, k, m) were treated as operational parameters to be tuned via cross-validation. In BART-default, we set (ν, q, k, m) = (3, 0.90, 2, 200).

Table 1: Operational parameters for the various competing models.

Method             Parameter                                    Values considered
Lasso              shrinkage (in range 0-1)                     0.1, 0.2, ..., 1.0
Gradient Boosting  # of trees                                   50, 100, 200
                   shrinkage (multiplier of each tree added)    0.01, 0.05, 0.10, 0.25
                   max depth permitted for each tree            1, 2, 3, 4
Neural Nets        # hidden units                               see text
                   weight decay                                 .0001, .001, .01, .1, 1, 2, 3
Random Forests     # of trees                                   500
                   % variables sampled to grow each node        10, 25, 50, 100
BART-cv            sigma prior: (ν, q) combinations             (3, 0.90), (3, 0.99), (10, 0.75)
                   # trees                                      50, 200
                   µ prior: k value for σµ                      2, 3, 5
For both BART-cv and BART-default, all specifications of the quantile q were made relative to the least squares linear regression estimate ˆσ, and the number of burn-in steps and MCMC iterations used were determined by inspection of a single long run. Typically 200 burn-in steps and 1000 iterations were used. With the exception of BART-default (which has no tuning parameters), all free parameters in learners were chosen via 5-fold cross-validation within the training set. The parameters considered and potential levels are given in Table 1. The levels used were chosen with a sufficiently wide range that the optimal value was not at an extreme of the candidate values in most problems. Neural networks are the only model whose operational parameters need additional explanation. In that case, the number of hidden units was chosen in terms of the implied number of weights, rather than the number of units. This design choice was made because of the widely varying number of predictors across problems, which directly impacts the number of weights. A number of hidden units was chosen so that there was a total of roughly u weights, with u = 50, 100, 200, 500 or 800. In all cases, the number of hidden units was further constrained to fall between 3 and 30. For example, with 20 predictors we used 3, 8 and 21 as candidate values for the number of hidden units. The models were compared with 20 replications of the following experiment. For each replication, we randomly chose 5/6 of the data as a training set and the remaining 1/6 was used for testing. As mentioned above, 5-fold cross-validation was used within each training set. In each of the 42 datasets, the response was minimally preprocessed, applying a log or square root transformation if this made the histogram of observed responses more bell-shaped. In about half the cases, a log transform was used to reduce a right tail. In one case (Fishery) a square root transform was most appropriate.
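The hidden-unit selection described above can be approximated as follows. The exact weight-count formula is not given in the text, so this sketch assumes a one-hidden-layer network with p inputs and a single output has roughly h(p + 2) + 1 weights; both the helper name and that formula are our own guesses.

```python
def candidate_hidden_units(p, targets=(50, 100, 200, 500, 800)):
    """Hidden-unit counts implying roughly u total weights for each target u.

    Assumes about h*(p + 2) + 1 weights for h hidden units and p predictors
    (an assumption; the paper does not state its formula). Counts are
    clamped to [3, 30] and de-duplicated, as described in the text.
    """
    counts = []
    for u in targets:
        h = max(3, min(30, round(u / (p + 2))))
        if h not in counts:
            counts.append(h)
    return counts
```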
Finally, in order to enable performance comparisons across all datasets, after possible nonlinear transformation, the resultant response was scaled to have sample mean 0 and standard deviation 1 prior to any train/test splitting. A total of 42 × 20 = 840 experiments were carried out. Results across these experiments are summarized in Table 2, which gives mean RMSE values, and Figure 2, which summarizes relative performance using boxplots.

Table 2: Average test set RMSE values for each learner, combined across 20 train/test replicates of 42 datasets. The only statistically significant difference is Lasso versus the other methods.

Method  BART-cv  Boosting  BART-default  Random Forest  Neural Net  Lasso
RMSE    0.5042   0.5089    0.5093        0.5097         0.5160      0.5896

Figure 2: Test set RMSE performance relative to best (a ratio of 1 means minimum RMSE test error). Results are across 20 replicates in each of 42 datasets. Boxes indicate the middle 50% of runs. Each learner has the following percentage of ratios larger than 2.0, which are not plotted: Neural net: 5%, BART-cv: 6%, BART-default and Boosting: 7%, Random Forests: 10%, Lasso: 21%.

In Figure 2, the relative performances are calculated as follows: in each of the 840 experiments, the learner with smallest RMSE was identified.
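This relative-performance calculation amounts to dividing each learner's RMSE by the per-experiment minimum; a small numpy sketch:

```python
import numpy as np

def relative_rmse(rmse):
    """Per-experiment RMSE ratios relative to the best learner.

    rmse: array of shape (n_experiments, n_learners). A ratio of 1 means the
    learner achieved the smallest RMSE in that experiment.
    """
    rmse = np.asarray(rmse, dtype=float)
    best = rmse.min(axis=1, keepdims=True)   # best learner per experiment
    return rmse / best
```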
The relative ratio for each learner is the raw RMSE divided by the smallest RMSE. Thus a relative RMSE of 1 means that the learner had the best performance in a particular experiment. The central box gives the middle 50% of the data, with the median indicated by a vertical line. The “whiskers” of the plot extend to 1.5 times the box width, or the range of values, whichever comes first. Extremes outside the whiskers are given by individual points. As noted in the caption, relative RMSE ratios larger than 2.0 are not plotted. BART has the best performance, although all methods except the Lasso are not significantly different. The strong performance of our “default” ensemble is especially noteworthy, since it requires no selection of operational parameters. That is, cross-validation is not necessary. This results in a huge computational savings, since under cross-validation, the number of times a learner must be trained is equal to the number of settings times the number of folds. This can easily be 50 (e.g. 5 folds by 10 settings), and in this experiment it was 90! BART-default is in some sense the “clear winner” in this experiment. Although average predictive performance was indistinguishable from the other models, it does not require cross-validation. Moreover, the use of cross-validation makes it impossible to interpret the MCMC output as valid uncertainty bounds. Not only is the default version of BART faster, but it also provides valid statistical inference, a benefit not available to any of the other learners considered. To further stress the benefit of uncertainty intervals, we report some more detailed results in the analysis of one of the 42 datasets, the Boston Housing data. We applied BART to all 506 observations of the Boston Housing data using the default setting (ν, q, k, m) = (3, 0.90, 2, 200) and the linear regression estimate ˆσ to anchor q. 
At each of the 506 predictor values x, we used the 5% and 95% quantiles of the MCMC draws to obtain 90% posterior intervals for f(x). An appealing feature of these posterior intervals is that they widen when there is less information about f(x). To roughly illustrate this, we calculated Cook’s distance diagnostic Dx for each x (Cook 1977) based on a linear least squares regression of y on x. Larger Dx indicate more uncertainty about predicting y with a linear regression at x. To see how the width of the 90% posterior intervals corresponded to Dx, we plotted them together in Figure 3(a). Although the linear model may not be strictly appropriate, the plot is suggestive: all points with large Dx values have wider uncertainty bounds. Uncertainty bounds can also be used in graphical summaries such as a partial dependence plot (Friedman 2001), which shows the effect of one (or more) predictor on the response, marginalizing out the effect of the other predictors. Since BART provides posterior draws for f(x), calculation of a posterior distribution for the partial dependence function is straightforward. Computational details are provided in Chipman et al. (2006). For the Boston Housing data, Figure 3(b) shows the partial dependence plot for crime, with 90% posterior intervals. The vast majority of data values occur for crime < 5, causing the intervals to widen as crime increases and the data become more sparse.

Figure 3: Plots from a single run of the Bayesian Ensemble model on the full Boston dataset. (a) Comparison of uncertainty bound widths with Cook’s distance measure. (b) Partial dependence plot for the effect of crime on the response (log median property value), with 90% uncertainty bounds.

5 Discussion

Our approach is a fully Bayesian approach to learning with ensembles of tree models.
Because of the nature of the underlying tree model, we are able to specify simple, effective priors and fully exploit the benefits of Bayesian methodology. Our prior provides the regularization needed to obtain good predictive performance. In particular, our default prior, which is minimally dependent on the data, performs well compared to other methods which rely on cross-validation to pick model parameters. We obtain inference in the natural Bayesian way from the variation in the posterior draws. While predictive performance is always our first goal, many researchers want to interpret the results. In this case, gauging the inferential uncertainty is essential. No other competitive methods do this in a convenient way. Chipman et al. (2006) and Abreveya & McCulloch (2006) provide further evidence of the predictive performance of our approach. In addition, Abreveya & McCulloch (2006) illustrate the ability of our method to uncover interesting interaction effects in a real example. Chipman et al. (2006) and Hill & McCulloch (2006) illustrate the inferential capabilities. Posterior intervals are shown to have good frequentist coverage. Chipman et al. (2006) also illustrates the method’s ability to obtain inference in the very difficult “big p, small n” problem, where there are few observations and many potential predictors. A common concern with Bayesian approaches is sensitivity to prior parameters. Chipman et al. (2006) found that results were robust to a reasonably wide range of prior parameters, including ν, q, σµ, as well as the number of trees, m. m needs to be large enough to provide enough complexity to capture f, but making m “too large” does not appreciably degrade accuracy (although it does make the method slower to run). Chipman et al. (2006) provide guidelines for choosing m. In practice, the stability of the MCMC makes the method easy to use. Typically, it burns in rapidly.
If the method is run twice with different seeds, the same results are obtained both for fit and inference. Code is publicly available in the R package BayesTree.

Acknowledgments

The authors would like to thank three anonymous referees, whose comments improved an earlier draft, and Wei-Yin Loh, who generously provided the datasets used in the experiment. This research was supported by the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chairs program, the Acadia Centre for Mathematical Modelling and Computation, the University of Chicago Graduate School of Business, NSF grant DMS 0605102 and by NIH/NIAID award AI056983.

References

Abreveya, J. & McCulloch, R. (2006), Reversal of fortune: a statistical analysis of penalty calls in the national hockey league, Technical report, Purdue University.
Breiman, L. (2001), ‘Random forests’, Machine Learning 45, 5–32.
Chipman, H. A., George, E. I. & McCulloch, R. E. (1998), ‘Bayesian CART model search (C/R: p948-960)’, Journal of the American Statistical Association 93, 935–948.
Chipman, H. A., George, E. I. & McCulloch, R. E. (2006), BART: Bayesian additive regression trees, Technical report, University of Chicago.
Cook, R. D. (1977), ‘Detection of influential observations in linear regression’, Technometrics 19(1), 15–18.
Efron, B., Hastie, T., Johnstone, I. & Tibshirani, R. (2004), ‘Least angle regression’, Annals of Statistics 32, 407–499.
Freund, Y. & Schapire, R. E. (1997), ‘A decision-theoretic generalization of on-line learning and an application to boosting’, Journal of Computer and System Sciences 55, 119–139.
Friedman, J. H. (2001), ‘Greedy function approximation: A gradient boosting machine’, The Annals of Statistics 29, 1189–1232.
Green, P. J. (1995), ‘Reversible jump MCMC computation and Bayesian model determination’, Biometrika 82, 711–732.
Hastie, T. & Tibshirani, R. (2000), ‘Bayesian backfitting (with comments and a rejoinder by the authors)’, Statistical Science 15(3), 196–223.
Hill, J. L. & McCulloch, R. E. (2006), Bayesian nonparametric modeling for causal inference, Technical report, Columbia University.
Kim, H., Loh, W.-Y., Shih, Y.-S. & Chaudhuri, P. (2007), ‘Visualizable and interpretable regression models with good prediction power’, IEEE Transactions: Special Issue on Data Mining and Web Mining. In press.
Meek, C., Thiesson, B. & Heckerman, D. (2002), Staged mixture modelling and boosting, Technical Report MS-TR-2002-45, Microsoft Research.
Wu, Y., Tjelmeland, H. & West, M. (2007), ‘Bayesian CART: Prior specification and posterior simulation’, Journal of Computational and Graphical Statistics. In press.
A Theory of Retinal Population Coding

Eizaburo Doi, Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, edoi@cnbc.cmu.edu
Michael S. Lewicki, Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, lewicki@cnbc.cmu.edu

Abstract

Efficient coding models predict that the optimal code for natural images is a population of oriented Gabor receptive fields. These results match response properties of neurons in primary visual cortex, but not those in the retina. Does the retina use an optimal code, and if so, what is it optimized for? Previous theories of retinal coding have assumed that the goal is to encode the maximal amount of information about the sensory signal. However, the image sampled by retinal photoreceptors is degraded both by the optics of the eye and by the photoreceptor noise. Therefore, de-blurring and de-noising of the retinal signal should be important aspects of retinal coding. Furthermore, the ideal retinal code should be robust to neural noise and make optimal use of all available neurons. Here we present a theoretical framework to derive codes that simultaneously satisfy all of these desiderata. When optimized for natural images, the model yields filters that show strong similarities to retinal ganglion cell (RGC) receptive fields. Importantly, the characteristics of receptive fields vary with retinal eccentricities where the optical blur and the number of RGCs are significantly different. The proposed model provides a unified account of retinal coding, and more generally, it may be viewed as an extension of the Wiener filter with an arbitrary number of noisy units.

1 Introduction

What are the computational goals of the retina? The retina has numerous specialized classes of retinal ganglion cells (RGCs) that are likely to subserve a variety of different tasks [1].
An important class directly subserving visual perception is the midget RGCs (mRGCs), which constitute 70% of RGCs with an even greater proportion at the fovea [1]. The problem that mRGCs face should be to maximally preserve signal information in spite of the limited representational capacity, which is imposed both by neural noise and the population size. This problem was recently addressed (although not specifically as a model of mRGCs) in [2], which derived the theoretically optimal linear coding method for a noisy neural population. This model is not appropriate, however, for the mRGCs, because it does not take into account the noise in the retinal image (Fig. 1). Before being projected on the retina, the visual stimulus is distorted by the optics of the eye in a manner that depends on eccentricity [3]. This retinal image is then sampled by cone photoreceptors whose sampling density also varies with eccentricity [1]. Finally, the sampled image is noisier in the dimmer illumination condition [4]. We conjecture that the computational goal of mRGCs is to represent the maximum amount of information about the underlying, non-degraded image signal subject to limited coding precision and neural population size. Here we propose a theoretical model that achieves this goal. This may be viewed as a generalization of both Wiener filtering [5] and robust coding [2]. One significant characteristic of the proposed model is that it can make optimal use of an arbitrary number of neurons in order to preserve the maximum amount of signal information. This allows the model to predict theoretically optimal representations at any retinal eccentricity, in contrast to the earlier studies [4, 6, 7, 8].

Figure 1: Simulation of retinal images at different retinal eccentricities. (a) Undistorted image signal.
(b) The convolution kernel at the fovea [3] superimposed on the photoreceptor array indicated by triangles under the x-axis [1]. (c) The same as in (b) but at 40 degrees of retinal eccentricity.

2 The model

First let us define the problem (Fig. 2). We assume that data sampled by photoreceptors (referred to as the observation) x ∈ R^N are blurred versions of the underlying image signal s ∈ R^N with additive white noise ν ~ N(0, σ_ν^2 I_N),

x = Hs + ν   (1)

where H ∈ R^{N×N} implements the optical blur. To encode the image, we assume that the observation is linearly transformed into an M-dimensional representation. To model limited neural precision, it is assumed that the representation is subject to additive channel noise, δ ~ N(0, σ_δ^2 I_M). The noisy neural representation is therefore expressed as

r = W(Hs + ν) + δ   (2)

where each row of W ∈ R^{M×N} corresponds to a receptive field. To evaluate the amount of signal information preserved in the representation, we consider a linear reconstruction ŝ = Ar where A ∈ R^{N×M}. The residual is given by

ϵ = (I_N − AWH)s − AWν − Aδ,   (3)

where I_N is the N-dimensional identity matrix, and the mean squared error (MSE) is

E = tr[Σ_s] − 2 tr[AWHΣ_s] + tr[AW(HΣ_sH^T + σ_ν^2 I_N)W^T A^T] + σ_δ^2 tr[AA^T]   (4)

with E = tr⟨ϵϵ^T⟩ by definition, ⟨·⟩ the average over samples, and Σ_s the covariance matrix of the image signal s. The problem is to find W and A that minimize E. To model limited neural capacity, the representation r must have limited SNR. This constraint is equivalent to fixing the variance of the filter outputs, ⟨(w_j^T x)^2⟩ = σ_u^2, where w_j is the j-th row of W (here we assume all neurons have the same capacity). It is expressed in matrix form as

diag[WΣ_xW^T] = σ_u^2 1_M   (5)

where Σ_x = HΣ_sH^T + σ_ν^2 I_N is the covariance of the observation. It can further be simplified to

diag[VV^T] = 1_M,   (6)
W = σ_u V S_x^{-1} E^T,   (7)

Figure 2: The model diagram.
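As a numerical sanity check (ours, not from the paper), the closed-form MSE in eqn. (4) can be compared against a Monte Carlo estimate of tr⟨ϵϵ^T⟩ for small random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, n = 4, 3, 200_000
sig_nu, sig_delta = 0.3, 0.2
H = rng.normal(size=(N, N))                 # optical blur (arbitrary here)
W = rng.normal(size=(M, N))                 # encoder (arbitrary)
A = rng.normal(size=(N, M)) * 0.1           # decoder (arbitrary)
L = rng.normal(size=(N, N))
Sigma_s = L @ L.T                           # signal covariance

# Closed-form MSE, eqn (4).
E_closed = (np.trace(Sigma_s) - 2 * np.trace(A @ W @ H @ Sigma_s)
            + np.trace(A @ W @ (H @ Sigma_s @ H.T + sig_nu**2 * np.eye(N))
                       @ W.T @ A.T)
            + sig_delta**2 * np.trace(A @ A.T))

# Monte Carlo estimate of tr<eps eps^T> = E ||s - A r||^2.
s = rng.multivariate_normal(np.zeros(N), Sigma_s, size=n)
nu = sig_nu * rng.normal(size=(n, N))
delta = sig_delta * rng.normal(size=(n, M))
r = (s @ H.T + nu) @ W.T + delta            # r = W(Hs + nu) + delta
eps = s - r @ A.T                           # residual per sample
E_mc = (eps**2).sum(axis=1).mean()
```

The two quantities agree up to Monte Carlo error, confirming that eqn. (4) is the exact expectation of the squared residual in eqn. (3).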
If there is no degradation of the image (H = I_N and σ_ν^2 = 0), the model reduces to the original robust coding model [2]. If channel noise is zero as well (σ_δ^2 = 0), it boils down to conventional block coding such as PCA, ICA, or wavelet transforms. In eqn. (7), S_x = diag(√(α_1λ_1 + σ_ν^2), ..., √(α_Nλ_N + σ_ν^2)) (the square roots of Σ_x's eigenvalues), √α_k and λ_k are respectively the eigenvalues of H and Σ_s, and the columns of E are their common eigenvectors. (The eigenvectors of Σ_s and H are both Fourier basis functions because we assume that s are natural images [9] and H is a circulant matrix [10].) Note that √α_k defines the modulation transfer function of the optical blur H, i.e., the attenuation of the amplitude of the signal along the k-th eigenvector. Now, the problem is to find V and A that minimize E. The optimal A should satisfy ∂E/∂A = O, which yields

A = Σ_sH^T W^T [W(HΣ_sH^T + σ_ν^2 I_N)W^T + σ_δ^2 I_M]^{-1}   (8)
  = (γ^2/σ_u) E S_s P [I_N + γ^2 V^T V]^{-1} V^T   (9)

where γ^2 = σ_u^2/σ_δ^2 (the neural SNR), S_s = diag(√λ_1, ..., √λ_N), P = diag(√φ_1, ..., √φ_N), and φ_k = α_kλ_k/(α_kλ_k + σ_ν^2) (the power ratio between the attenuated signal and that signal plus sensory noise; as we will see below, φ_k characterizes the generalized solutions of robust coding, and if there is neither sensory noise nor optical blur, φ_k becomes 1, which reduces the solutions of the current model to those in the original robust coding model [2]). This implies that the optimal A is determined once the optimal V is found. With eqns. (7) and (9), E becomes

E = Σ_{k=1}^N λ_k(1 − φ_k) + tr[S_s^2 P^2 (I_N + γ^2 V^T V)^{-1}].   (10)

Finally, the problem is reduced to finding V that minimizes eqn. (10).

Solutions for 2-D data

In this section we present the explicit characterization of the optimal solutions for two-dimensional data. It entails under-complete, complete, and over-complete representations, and provides precise insights into the numerical solutions for the high-dimensional image data (Section 3). This is a generalization of the analysis in [2] with the addition of optical blur and additive sensory noise. From eqn. (6) we can parameterize V by its rows,

V = [cos θ_1, sin θ_1; ...; cos θ_M, sin θ_M],   (11)

where θ_j ∈ [0, 2π), j = 1, ..., M, which yields

E = Σ_{k=1}^2 λ_k(1 − φ_k) + [(ψ_1 + ψ_2)((M/2)γ^2 + 1) − (γ^2/2)(ψ_1 − ψ_2) Re(Z)] / [((M/2)γ^2 + 1)^2 − (1/4)γ^4 |Z|^2],   (12)

with ψ_k ≡ φ_kλ_k and Z ≡ Σ_j (cos 2θ_j + i sin 2θ_j). In the following we analyze the cases when ψ_1 = ψ_2 and when ψ_1 ≠ ψ_2. Without loss of generality we consider ψ_1 > ψ_2 for the latter case. (In the previous analysis of robust coding [2], these cases depend only on the ratio between λ_1 and λ_2, i.e., the isotropy of the data. In the current, general model, they also depend on the isotropy of the optical blur (α_1 and α_2) and the variance of the sensory noise (σ_ν^2), and no simple meaning is attached to the individual cases.)

1) If ψ_1 = ψ_2 (≡ ψ): E in eqn. (10) becomes

E = Σ_{k=1}^2 λ_k(1 − φ_k) + 2ψ((M/2)γ^2 + 1) / [((M/2)γ^2 + 1)^2 − (1/4)γ^4 |Z|^2].   (13)

Therefore, E is minimized when |Z|^2 is minimized.

1-a) If M = 1 (single neuron case): By definition |Z|^2 = 1, implying that E is constant for any θ_1,

E = λ_1(1 − φ_1) + ψ_1/(γ^2 + 1) + λ_2 = λ_2(1 − φ_2) + ψ_2/(γ^2 + 1) + λ_1,   (14)
W = σ_u (cos θ_1, sin θ_1) diag(1/√(α_1λ_1 + σ_ν^2), 1/√(α_2λ_2 + σ_ν^2)) E^T.   (15)

Because there is only one neuron, only one direction in the two-dimensional space can be reconstructed, and eqn. (15) implies that any direction can be equally good. The first equality in eqn. (14) can be interpreted as the case when W represents the direction along the first eigenvector; consequently, the whole data variance along the second eigenvector, λ_2, is left in the error E.

1-b) If M ≥ 2 (multiple neuron case): There always exists Z that satisfies |Z| = 0 if M ≥ 2, with which E is minimized [2]. Accordingly,

E = Σ_{k=1}^2 [λ_k(1 − φ_k) + ψ_k/((M/2)γ^2 + 1)],   (16)
W = σ_u V diag(1/√(α_1λ_1 + σ_ν^2), 1/√(α_2λ_2 + σ_ν^2)) E^T,   (17)

where V is arbitrary as long as it satisfies |Z| = 0. Note that W takes the same form as for M = 1 except that there is more than one neuron.
Also, eqn. (16) shares its second term with eqn. (14) except that the SNR of the representation, γ^2, is multiplied by M/2. This implies that having n times as many neurons is equivalent to increasing the representation SNR by a factor of n (this relation generally holds in the multiple neuron cases below).

2) If ψ_1 > ψ_2: Eqn. (12) is minimized when Z = Re(Z) ≥ 0 for a fixed value of |Z|^2. Therefore, the problem is reduced to seeking a real value Z = y ∈ [0, M] that minimizes

E = Σ_{k=1}^2 λ_k(1 − φ_k) + [(ψ_1 + ψ_2)((M/2)γ^2 + 1) − (γ^2/2)(ψ_1 − ψ_2) y] / [((M/2)γ^2 + 1)^2 − (1/4)γ^4 y^2].   (18)

2-a) If M = 1 (single neuron case): Z = Re(Z) holds iff θ_1 = 0. Accordingly,

E = λ_1(1 − φ_1) + ψ_1/(γ^2 + 1) + λ_2,   (19)
W = (σ_u/√(α_1λ_1 + σ_ν^2)) e_1^T.   (20)

These take the same form as in the case ψ_1 = ψ_2 and M = 1 (eqns. 14-15) except that the direction of the representation is specified along the first eigenvector e_1, indicating that all the representational resources (namely, one neuron) are devoted to the largest data variance direction.

2-b) If M ≥ 2 (multiple neuron case): From eqn. (18), the necessary condition for the minimum, dE/dy = 0, yields

[((√ψ_1 − √ψ_2)/(√ψ_1 + √ψ_2))(M + 2/γ^2) − y] [((√ψ_1 + √ψ_2)/(√ψ_1 − √ψ_2))(M + 2/γ^2) − y] = 0.   (21)

The existence of a root y in the domain [0, M] depends on how γ^2 compares to the following quantity, which is a generalized form of the critical point of neural precision [2]:

γ_c^2 = (1/M)[√(ψ_1/ψ_2) − 1].   (22)

2-b-i) If γ^2 < γ_c^2: dE/dy = 0 does not have a root within the domain. Since dE/dy is always negative, E is minimized when y = M. Accordingly,

E = λ_1(1 − φ_1) + ψ_1/(Mγ^2 + 1) + λ_2,   (23)
W = (σ_u/√(α_1λ_1 + σ_ν^2)) 1_M e_1^T.   (24)

These solutions are the same as for M = 1 (eqns. 19-20) except that the neural SNR γ^2 is multiplied by M to yield a smaller MSE.

2-b-ii) If γ^2 ≥ γ_c^2: Eqn. (21) has a root within [0, M],

y = ((√ψ_1 − √ψ_2)/(√ψ_1 + √ψ_2))(2/γ^2 + M),   (25)

with y = M if γ^2 = γ_c^2.
The optimal solutions are

E = Σ_{k=1}^2 λ_k(1 − φ_k) + (1/((M/2)γ^2 + 1)) (√ψ_1 + √ψ_2)^2 / 2,   (26)
W = σ_u V diag(1/√(α_1λ_1 + σ_ν^2), 1/√(α_2λ_2 + σ_ν^2)) E^T,   (27)

where V is arbitrary up to satisfying eqn. (25). In Fig. 3 we illustrate some examples of explicit solutions for 2-D data with two neurons. The general strategy of the proposed model is to represent the principal axis of the signal s more accurately as the signal is more degraded (by optical blur and/or sensory noise). Specifically, the two neurons come to represent the identical dimension when the degradation is sufficiently large.

Figure 3: Sensory noise changes the optimal linear filter. The gray (outside) and blue (inside) contours show the variance of the target and reconstructed signal, respectively, and the red (thick) bars the optimal linear filters when there are two neurons. The SNR of the observation is varied from 20 to −10 dB (column-wise). The bottom row is the case where the power of the signal's minor component is attenuated as in the optical blur (i.e., low-pass filtering): (α_1, α_2) = (1, 0.1); the top row is without the blur: (α_1, α_2) = (1, 1). The neural SNR is fixed at 10 dB.

3 Optimal receptive field populations

We applied the proposed model to a natural images data set [11] to obtain the theoretically optimal population coding for mRGCs. The optimal solutions were derived under the following biological constraints on the observation, or the photoreceptor response, x (Fig. 2). To model the retinal images at different retinal eccentricities, we used modulation transfer functions of the human eye [3] and cone photoreceptor densities of the human retina [1] (Fig. 1). The retinal image is further corrupted by additive Gaussian noise to model the photon transduction noise, by which the SNR of the observation becomes smaller under dimmer illumination levels [4]. This yields the observation at different retinal eccentricities.
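The observation model just described (blur, then additive transduction noise at a given SNR) can be sketched as below. This is a hypothetical helper: the noise variance is set from the blurred-signal power, which is one of several reasonable conventions and not necessarily the one used in the paper.

```python
import numpy as np

def observe(s, H, snr_db, rng):
    """Form the observation x = H s + nu at a target sensory SNR in dB.

    The noise variance sigma_nu^2 is chosen so that the power of the blurred
    signal H s exceeds the noise power by snr_db decibels (an assumption;
    the paper does not specify how its SNR is defined).
    """
    blurred = H @ s
    sigma_nu = np.sqrt(np.mean(blurred**2) / 10 ** (snr_db / 10.0))
    return blurred + sigma_nu * rng.normal(size=blurred.shape)
```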
In the following, we present the optimal solutions for the fovea (where the most accurate visual information is represented while the receptive field characteristics are difficult to measure experimentally) and those at 40 degrees retinal eccentricity (where we can compare the model to recent physiological measurements in the primate retina [12]). The information capacity of neural representations is limited by both the number of neurons and the precision of neural codes. The ratio of cone photoreceptors to mRGCs in the human retina is 1 : 2 at the fovea and 23 : 2 at 40 degrees [13]. We did not model neural rectification (separate on and off channels) and thus assumed the effective cell ratios as 1 : 1 and 23 : 1, respectively. We also fixed the neural SNR at 10 dB, equivalent to assuming ∼1.7 bits coding precision as in real neurons [14]. The optimal W can be derived by gradient descent on E, and A can be derived from W using eqn. (8). As explained in Section 2, the solution must satisfy the variance constraint (eqn. 6). We formulate this as a constrained optimization problem [15]. The update rule for W is given by

∆W ∝ −A^T(AWH − I_N)Σ_sH^T − σ_ν^2 A^T AW − κ diag(ln[diag(WΣ_xW^T)/σ_u^2] / diag(WΣ_xW^T)) WΣ_x,   (28)

where κ is a positive constant that controls the strength of the variance constraint. Our initial results indicated that the optimal solutions are not unique and that these solutions are equivalent in terms of MSE. We therefore imposed an additional neural resource constraint that penalizes the spatial extent of a receptive field: the constraint for the k-th neuron is defined by Σ_j |W_kj|(ρ d_kj^2 + 1), where d_kj is the spatial distance between the j-th weight and the center of mass of all weights, and ρ is a positive constant defining the strength of the spatial constraint. This assumption is consistent with the spatially restricted computation in the retina. If ρ = 0, it imposes sparse weights [16], though not necessarily spatially localized ones.
In our simulations we fixed ρ = 0.5. For the fovea, we examined 15×15 pixel image patches sampled from a large set of natural images, where each pixel corresponds to a cone photoreceptor. Since the cell ratio is assumed to be 1 : 1, there were 225 model neurons in the population. As shown in Fig. 4, the optimal filters show concentric center-surround organization that is well fit with a difference-of-Gaussian function (which is one major characteristic of mRGCs). The precise organization of the model receptive field changes according to the SNR of the observation: as the SNR decreases, the surround inhibition gradually disappears and the center becomes larger, which serves to remove sensory noise by averaging. As a population, this yields a significant overlap among adjacent receptive fields. In terms of spatial frequency, this change corresponds to a shift from band-pass to low-pass filtering, which is consistent with psychophysical measurements of the human and the macaque [17].

Figure 4: The model receptive fields at the fovea under different SNRs of the observation. (a) A cross-section of the two-dimensional receptive field. (b) Six examples of receptive fields. (c) The tiling of a population of receptive fields in the visual field. The ellipses show the contour of receptive fields at half the maximum. One pair of adjacent filters is highlighted for clarity. The scale bar indicates an interval of three photoreceptors. (d) Spatial-frequency profiles (modulation transfer functions) of the receptive fields at different SNRs.

For 40 degrees retinal eccentricity, we examined a 35×35 photoreceptor array projected to 53 model neurons (so that the cell ratio is 23 : 1). The general trend of the results is the same as in the fovea except that the receptive fields are much larger. This allows the fewer neurons in the population to completely tile the visual field.
Furthermore, the change of the receptive field with the sensory noise level is not as pronounced as that predicted for the fovea, suggesting that the SNR is a less significant factor when the number of neurons is severely limited. We also note that the elliptical shape of the receptive fields matches experimental observations [12].

Figure 5: The theoretically derived receptive fields at 40 degrees retinal eccentricity. Conventions as in Fig. 4.

Finally, we demonstrate the performance of de-blurring, de-noising, and information preservation by these receptive fields (Fig. 6). The original image is well recovered in spite of both the noisy representation (10% of the code's variation is noise because of the 10 dB precision) and the noisy, degraded observation. Note that the 40 degrees eccentricity is subject to an additional, significant dimensionality reduction, which is why the reconstruction error (e.g., 34.8% at 20 dB) can be greater than the distortion in the observation (30.5%).

Figure 6: Reconstruction example. For both the fovea and 40 degrees retinal eccentricity, two sensory noise conditions are shown (20 and −10 dB). The percentages indicate the average distortion in the observation and the reconstruction error, respectively, over 60,000 samples. The blocking effect is caused by implementing the optical blur on each image patch with a matrix H instead of convolving the whole image.

4 Discussion

The proposed model is a generalization of the robust coding model [2] and allows a complete characterization of the optimal representation as a function of both image degradation (optical blur and additive sensory noise) and limited neural capacity (neural precision and population size).
If there is no sensory noise (σν² = 0) and no optical blur (H = I_N), then φk = 1 for all k, which reduces all the optimal solutions above to those reported in [2]. The proposed model may also be viewed as a generalization of the Wiener filter: if there is no channel noise (σδ² = 0) and the cell ratio is 1:1, then, assuming A ≡ I_N without loss of generality, the problem is reformulated as finding W ∈ R^{N×N} that provides the best estimate of the original signal, ŝ = W(Hs + ν), in terms of MSE. The optimal solution is given by the Wiener filter:

$$W = \Sigma_s H^T [H\Sigma_s H^T + \sigma_\nu^2 I_N]^{-1} = E\,\mathrm{diag}\!\left(\frac{\sqrt{\alpha_1\lambda_1}}{\alpha_1\lambda_1 + \sigma_\nu^2},\; \cdots,\; \frac{\sqrt{\alpha_N\lambda_N}}{\alpha_N\lambda_N + \sigma_\nu^2}\right) E^T, \qquad (29)$$

$$E = \mathrm{tr}[\Sigma_s] - \mathrm{tr}[WH\Sigma_s] = \sum_{k=1}^{N} \lambda_k (1 - \phi_k), \qquad (30)$$

(note that the diagonal matrix in eqn. 29 corresponds to the Wiener filter formula in the frequency domain [5]). This also implies that the Wiener filter is optimal only in a limiting case of our setting. Here, we have treated the model primarily as a theory of retinal coding, but its generality allows it to be applied to a wide range of problems in signal processing. We should also note several limitations. The model assumes Gaussian signal structure; modeling non-Gaussian signal distributions might account for coding efficiency constraints on the retinal population. The model is linear, but the framework allows for the incorporation of non-linear encoding and decoding methods, at the expense of analytic tractability. There have been earlier approaches to theoretically characterizing the retinal code [4, 6, 7, 8]. Our approach differs from these in several respects. First, it is not restricted to the so-called complete representation (M = N) and can predict properties of mRGCs at any retinal eccentricity. Second, we do not assume a single, translation-invariant filter and can derive the optimal receptive fields for an entire neural population. Third, we accurately model optical blur, retinal sampling, cell ratio, and neural precision.
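The Wiener-filter limiting case lends itself to a quick numerical check. The sketch below is a toy diagonal setting of our own construction (the spectrum and the uniform-attenuation "blur" are illustrative, not the paper's images): it computes W from the closed form ΣsH^T[HΣsH^T + σν²I]^{-1}, and compares its Monte-Carlo MSE both against naive inversion of H and against the closed-form error of eqn. 30, assuming φk = αkλk/(αkλk + σν²) as suggested by the surrounding text.

```python
import numpy as np

# Monte-Carlo check of the Wiener-filter case (eqns. 29-30) in a toy
# diagonal setting (assumed, illustrative).
rng = np.random.default_rng(1)
N, T = 32, 20000
lam = 1.0 / (np.arange(1, N + 1) ** 2)   # signal variances lambda_k
Sigma_s = np.diag(lam)
g = 0.8
H = g * np.eye(N)                        # "blur": uniform attenuation
sigma_nu2 = 0.05                         # sensory noise variance

# Closed-form Wiener filter (matrix part of eqn. 29)
W = Sigma_s @ H.T @ np.linalg.inv(H @ Sigma_s @ H.T + sigma_nu2 * np.eye(N))

# Monte-Carlo MSE of the Wiener estimate vs. naive inversion of H
S = np.sqrt(lam)[:, None] * rng.standard_normal((N, T))
X = H @ S + np.sqrt(sigma_nu2) * rng.standard_normal((N, T))
mse_wiener = np.mean((W @ X - S) ** 2)
mse_naive = np.mean((np.linalg.inv(H) @ X - S) ** 2)

# Eqn. 30 per component, with alpha_k = g^2 and the assumed form
# phi_k = alpha_k lambda_k / (alpha_k lambda_k + sigma_nu2)
phi = g**2 * lam / (g**2 * lam + sigma_nu2)
mse_theory = np.sum(lam * (1 - phi)) / N   # mean error per component
```

In this diagonal case the Wiener gain shrinks the noise-dominated components rather than amplifying them as inv(H) does, which is why its Monte-Carlo MSE lands below the naive estimate and on the eqn. 30 prediction.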
Finally, as in [4, 8], we assumed that the objective of retinal coding is to form the neural code that yields the minimum MSE under linear decoding, while others have assumed the objective is to maximally preserve information about the signal [6, 7]. It is not known a priori which objective is appropriate for retinal coding; as suggested earlier [8], this issue could be resolved by comparing the different theoretical predictions to physiological data.

References

[1] R. W. Rodieck. The First Steps in Seeing. Sinauer, MA, 1998.
[2] E. Doi, D. C. Balcan, and M. S. Lewicki. A theoretical analysis of robust coding over noisy overcomplete channels. In Advances in Neural Information Processing Systems, volume 18. MIT Press, 2006.
[3] R. Navarro, P. Artal, and D. R. Williams. Modulation transfer of the human eye as a function of retinal eccentricity. Journal of the Optical Society of America A, 10:201–212, 1993.
[4] M. V. Srinivasan, S. B. Laughlin, and A. Dubs. Predictive coding: a fresh view of inhibition in the retina. Proc. R. Soc. Lond. B, 216:427–459, 1982.
[5] R. C. Gonzalez and R. E. Woods. Digital Image Processing. Prentice Hall, 2nd edition, 2002.
[6] J. J. Atick and A. N. Redlich. Towards a theory of early visual processing. Neural Computation, 2:308–320, 1990.
[7] J. H. van Hateren. Theoretical predictions of spatiotemporal receptive fields of fly LMCs, and experimental validation. J. Comp. Physiol. A, 171:157–170, 1992.
[8] D. L. Ruderman. Designing receptive fields for highest fidelity. Network, 5:147–155, 1994.
[9] D. J. Field. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A, 4:2379–2394, 1987.
[10] R. M. Gray. Toeplitz and circulant matrices: A review. Foundations and Trends in Communications and Information Theory, 2:155–239, 2006.
[11] E. Doi, T. Inui, T.-W. Lee, T. Wachtler, and T. J. Sejnowski.
Spatiochromatic receptive field properties derived from information-theoretic analyses of cone mosaic responses to natural scenes. Neural Computation, 15:397–417, 2003.
[12] E. S. Frechette, A. Sher, M. I. Grivich, D. Petrusca, A. M. Litke, and E. J. Chichilnisky. Fidelity of the ensemble code for visual motion in primate retina. Journal of Neurophysiology, 94:119–135, 2005.
[13] C. A. Curcio and K. A. Allen. Topography of ganglion cells in human retina. Journal of Comparative Neurology, 300:5–25, 1990.
[14] A. Borst and F. E. Theunissen. Information theory and neural coding. Nature Neuroscience, 2:947–957, 1999.
[15] E. Doi and M. S. Lewicki. Sparse coding of natural images using an overcomplete set of limited capacity units. In Advances in Neural Information Processing Systems, volume 17. MIT Press, 2005.
[16] B. T. Vincent and R. J. Baddeley. Synaptic energy efficiency in retinal processing. Vision Research, 43:1283–1290, 2003.
[17] R. L. De Valois, H. Morgan, and D. M. Snodderly. Psychophysical studies of monkey vision - III. Spatial luminance contrast sensitivity tests of macaque and human observers. Vision Research, 14:75–81, 1974.
An Approach to Bounded Rationality

Eli Ben-Sasson, Department of Computer Science, Technion — Israel Institute of Technology
Adam Tauman Kalai, College of Computing, Georgia Tech
Ehud Kalai, MEDS Department, Kellogg Graduate School of Management, Northwestern University

Abstract

A central question in game theory and artificial intelligence is how a rational agent should behave in a complex environment, given that it cannot perform unbounded computations. We study strategic aspects of this question by formulating a simple model of a game with additional costs (computational or otherwise) for each strategy. First we connect this to zero-sum games, proving a counter-intuitive generalization of the classic min-max theorem to zero-sum games with the addition of strategy costs. We then show that potential games with strategy costs remain potential games. Both zero-sum and potential games with strategy costs maintain a very appealing property: simple learning dynamics converge to equilibrium.

1 The Approach and Basic Model

How should an intelligent agent play a complicated game like chess, given that it does not have unlimited time to think? This question reflects one fundamental aspect of "bounded rationality," a term coined by Herbert Simon [1]. However, bounded rationality has proven to be a slippery concept to formalize (prior work has focused largely on finite automata playing simple repeated games such as prisoner's dilemma, e.g., [2, 3, 4, 5]). This paper focuses on the strategic aspects of decision-making in complex multi-agent environments, i.e., on how a player should choose among strategies of varying complexity, given that its opponents are making similar decisions. Our model applies to general strategic games and allows for a variety of complexities that arise in real-world applications.
For this reason, it is applicable to one-shot games, extensive games, and repeated games, and it generalizes existing models such as repeated games played by finite automata. To see that bounded rationality can drastically affect the outcome of a game, consider the following factoring game. Player 1 chooses an n-bit number and sends it to Player 2, who attempts to find its prime factorization. If Player 2 is correct, he is paid 1 by Player 1; otherwise he pays 1 to Player 1. Ignoring complexity costs, the game is a trivial win for Player 2. However, for large n, the game is essentially a win for Player 1, who can easily output a large random number that Player 2 cannot factor (under appropriate complexity assumptions). In general, the outcome of a game (even a zero-sum game like chess) with bounded rationality is not so clear. To concretely model such games, we consider a set of available strategies along with strategy costs. Consider an example of two players preparing to play a computerized chess game for a $100K prize. Suppose the players simultaneously choose between two available options: a $10K program A or an advanced program B, which costs $50K. We refer to the row chooser as white and to the column chooser as black, with the corresponding advantages reflected by the win probabilities of white described in Table 1a. For example, when both players use program A, white wins 55% of the time and black wins 45% of the time (we ignore draws). The players naturally want to choose strategies to maximize their expected net payoffs, i.e., their expected payoff minus their cost. Each cell in Table 1b contains a pair of payoffs in units of thousands of dollars; the first is white's net expected payoff and the second is black's.

a)
          A      B
   A     55%    13%
   B     93%    51%

b)
             A (−10)    B (−50)
   A (−10)   45, 35      3, 37
   B (−50)   43, −3      1, −1

Figure 1: a) Table of first-player winning probabilities based on program choices.
b) Table of expected net earnings in thousands of dollars. The unique equilibrium is (A,B), which strongly favors the second player.

A surprising property is evident in the above game. Everything about the game seems to favor white. Yet due to the (symmetric) costs, at the unique Nash equilibrium (A,B) of Table 1b, black wins 87% of the time and nets $34K more than white. In fact, it is a dominant strategy for white to play A and for black to play B. To see this, note that playing B increases white's probability of winning by 38%, independent of what black chooses. Since the pot is $100K, this is worth $38K in expectation, but B costs $40K more than A. On the other hand, black enjoys a 42% increase in probability of winning due to B, independent of what white does, and hence is willing to pay the extra $40K. Before formulating the general model, we comment on some important aspects of the chess example. First, traditional game theory states that chess can be solved in "only" two rounds of elimination of dominated strategies [10], and the outcome with optimal play should always be the same: either a win for white or a win for black. This theoretical prediction fails in practice: in top play, the outcome is very nondeterministic, with white winning roughly twice as often as black. The game is too large and complex to be solved by brute force. Second, we were able to analyze the above chess program selection example exactly because we formulated it as a game with a small number of available strategies per player. Another formulation that would fit into our model would be to include all strategies of chess, with some reasonable computational costs; however, it is beyond our means to analyze such a large game. Third, in the example above we used monetary software cost to illustrate a type of strategy cost.
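The arithmetic of the example is easy to check in a few lines. With a $100K pot, a win percentage equals the same number of $K in expected value, so the net payoffs of Table 1b follow directly from Table 1a:

```python
# Recomputing Table 1b from Table 1a and checking the dominance claims.
# Net payoff (in $K) = win percentage - program cost.
win_pct = {("A", "A"): 55, ("A", "B"): 13,   # P(white wins), percent
           ("B", "A"): 93, ("B", "B"): 51}
cost = {"A": 10, "B": 50}

net = {(w, b): (p - cost[w], (100 - p) - cost[b])
       for (w, b), p in win_pct.items()}
assert net == {("A", "A"): (45, 35), ("A", "B"): (3, 37),
               ("B", "A"): (43, -3), ("B", "B"): (1, -1)}   # Table 1b

# A dominates B for white; B dominates A for black
assert all(net[("A", b)][0] > net[("B", b)][0] for b in "AB")
assert all(net[(w, "B")][1] > net[(w, "A")][1] for w in "AB")
print(net[("A", "B")])   # the unique equilibrium (A,B): (3, 37)
```

The two dominance assertions are exactly the "38% vs. $40K" and "42% vs. $40K" comparisons made in the text, expressed cell by cell.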
But the same analysis could accommodate many other types of costs that can be measured numerically and subtracted from the payoffs, such as the time or effort involved in the development or execution of a strategy, and other resource costs. Additional examples in this paper include the number of states in a finite automaton, the number of gates in a circuit, and the number of turns on a commuter's route. Our analysis is limited, however, to cost functions that depend only on the strategy of the player and not on the strategy chosen by its opponent. For example, if our players above were renting computers A or B and paying for the time of actual usage, then the cost of using A would depend on the choice of computer made by the opponent; such costs fall outside our model. Generalizing the example above, we consider a normal-form game with the addition of strategy costs, a player-dependent cost for playing each available strategy. Our main results regard two important classes of games: constant-sum and potential games. Potential games with strategy costs remain potential games. While two-person constant-sum games are no longer constant-sum once costs are added, we give a basic structural description of optimal play in these games. Lastly, we show that known learning dynamics converge in both classes of games.

2 Definition of strategy costs

We first define an N-person normal-form game G = (N, S, p) consisting of finite sets of (available) pure strategies S = (S1, ..., SN) for the N players, and a payoff function p : S1 × ... × SN → R^N. Players simultaneously choose strategies si ∈ Si, after which player i is rewarded with pi(s1, ..., sN). A randomized or mixed strategy σi for player i is a probability distribution over its pure strategies Si, i.e., σi ∈ Δi = { x ∈ R^{|Si|} : Σj xj = 1, xj ≥ 0 }. We extend p to Δ1 × ... × ΔN in the natural way, i.e., pi(σ1, ..., σN) = E[pi(s1, ..., sN)], where each si is drawn from σi independently. Denote s−i = (s1, s2, ..., s_{i−1}, s_{i+1}, ..., sN), and similarly σ−i.
A best response by player i to σ−i is a σi ∈ Δi such that pi(σi, σ−i) = max_{σ'i ∈ Δi} pi(σ'i, σ−i). A (mixed-strategy) Nash equilibrium of G is a vector of strategies (σ1, ..., σN) ∈ Δ1 × ... × ΔN such that each σi is a best response to σ−i. We now define G^{−c}, the game G with strategy costs c = (c1, ..., cN), where ci : Si → R. It is simply an N-person normal-form game G^{−c} = (N, S, p^{−c}) with the same sets of pure strategies as G, but with a new payoff function p^{−c} : S1 × ... × SN → R^N, where

p^{−c}_i(s1, ..., sN) = pi(s1, ..., sN) − ci(si),  for i = 1, ..., N.

We similarly extend ci to Δi in the natural way.

3 Two-person constant-sum games with strategy costs

Recall that a game is constant-sum (k-sum for short) if at every combination of individual strategies, the players' payoffs sum to some constant k. Two-person k-sum games have some important properties, not shared by general-sum games, which allow more effective game-theoretic analysis. In particular, every k-sum game has a unique value v ∈ R. A mixed strategy for player 1 is called optimal if it guarantees payoff ≥ v against any strategy of player 2. A mixed strategy for player 2 is optimal if it guarantees ≥ k − v against any strategy of player 1. The term optimal is used because optimal strategies guarantee as much as possible (v + k − v = k), and playing anything that is not optimal can result in a lesser payoff if the opponent responds appropriately. (This fact is easily illustrated in the game rock-paper-scissors: randomizing uniformly among the strategies guarantees each player 50% of the pot, while playing anything other than uniformly random enables the opponent to win strictly more often.) The existence of optimal strategies for both players follows from the min-max theorem. An easy corollary is that the Nash equilibria of a k-sum game are exchangeable: they are simply the cross-product of the sets of optimal mixed strategies for both players.
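The rock-paper-scissors aside can be confirmed directly. In the zero-sum convention (win = 1, loss = −1, tie = 0), the uniform mixture guarantees the value 0 against every column, while any non-uniform mixture (below, a hypothetical bias toward rock) guarantees strictly less and can therefore be exploited by a best-responding opponent:

```python
# Rock-paper-scissors: uniform play is optimal; deviations can be punished.
R = [[0, -1, 1],    # row player's payoff vs. (rock, paper, scissors)
     [1, 0, -1],
     [-1, 1, 0]]

def guarantee(sigma):
    """Worst-case expected payoff of mixed strategy sigma for the row player."""
    return min(sum(sigma[i] * R[i][j] for i in range(3)) for j in range(3))

uniform = [1/3, 1/3, 1/3]
biased = [0.5, 0.25, 0.25]
print(guarantee(uniform))        # 0.0: the uniform mixture achieves the value
print(guarantee(biased))         # -0.25: a best-responding opponent wins more
```

The opponent's exploiting response to the rock-heavy mixture is, as expected, to play paper (column 1), where the biased row player's expected payoff is −0.25.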
Lastly, it is well known that equilibria in two-person k-sum games can be learned in repeated play by simple dynamics that are guaranteed to converge [17]. With the addition of strategy costs, a k-sum game is no longer k-sum, and hence it is not clear, at first, what optimal strategies there are, if any. (Many general-sum games do not have optimal strategies.) We show the following generalization of the above properties for zero-sum games with strategy costs.

Theorem 1. Let G be a finite two-person k-sum game and let G^{−c} be the game with strategy costs c = (c1, c2).
1. There is a value v ∈ R for G^{−c} and nonempty sets OPT1 and OPT2 of optimal mixed strategies for the two players. OPT1 is the set of strategies that guarantee player 1 payoff ≥ v − c2(σ2) against any strategy σ2 chosen by player 2. Similarly, OPT2 is the set of strategies that guarantee player 2 payoff ≥ k − v − c1(σ1) against any σ1.
2. The Nash equilibria of G^{−c} are exchangeable: the set of Nash equilibria is OPT1 × OPT2.
3. The set of net payoffs possible at equilibrium is an axis-parallel rectangle in R².

For zero-sum games, the term optimal strategy was natural: the players could guarantee v and k − v, respectively, and this is all there was to share. Moreover, it is easy to see that only pairs of optimal strategies can have the Nash equilibrium property, being best responses to each other. In the case of zero-sum games with strategy costs, the optimal structure is somewhat counterintuitive. First, it is strange that the amount guaranteed by either player depends on the cost of the other player's action, when in reality each player pays the cost of its own action. Second, it is not even clear why we call these optimal strategies. To get a feel for this latter issue, notice that the sum of the net payoffs to the two players is always k − c1(σ1) − c2(σ2), which is exactly the total of what optimal strategies guarantee, v − c2(σ2) + k − v − c1(σ1).
Hence, if both players play what we call optimal strategies, then neither player can improve and they are at a Nash equilibrium. On the other hand, suppose player 1 selects a strategy σ1 that does not guarantee him payoff at least v − c2(σ2). Then there is some response σ2 by player 2 for which player 1's payoff is < v − c2(σ2), and hence player 2's payoff is > k − v − c1(σ1). Thus player 2's best response to σ1 must give player 2 payoff > k − v − c1(σ1) and leave player 1 with < v − c2(σ2). The proof of the theorem (the above reasoning only derives part 2 from part 1) is based on the following simple observation. Consider the k-sum game H = (N, S, q) with the payoffs:

q1(s1, s2) = p1(s1, s2) − c1(s1) + c2(s2) = p^{−c}_1(s1, s2) + c2(s2)
q2(s1, s2) = p2(s1, s2) − c2(s2) + c1(s1) = p^{−c}_2(s1, s2) + c1(s1)

That is to say, each player pays its strategy cost to the other. It is easy to verify that, for all σ1, σ'1 ∈ Δ1 and σ2 ∈ Δ2,

q1(σ1, σ2) − q1(σ'1, σ2) = p^{−c}_1(σ1, σ2) − p^{−c}_1(σ'1, σ2).  (1)

This means that the relative advantage of switching strategies is the same in G^{−c} and in H. In particular, σ1 is a best response to σ2 in G^{−c} if and only if it is in H; a similar equality holds for player 2's payoffs. Note that these conditions imply that the games G^{−c} and H are strategically equivalent in the sense defined by Moulin and Vial [16].

Proof of Theorem 1. Let v be the value of the game H. Any strategy σ1 that guarantees player 1 payoff ≥ v in H guarantees player 1 ≥ v − c2(σ2) in G^{−c}; this follows from the definition of H. Similarly, any strategy σ2 that guarantees player 2 payoff ≥ k − v in H guarantees ≥ k − v − c1(σ1) in G^{−c}. Thus the sets OPT1 and OPT2 are nonempty. Since v − c2(σ2) + k − v − c1(σ1) = k − c1(σ1) − c2(σ2) is the sum of the payoffs in G^{−c}, nothing greater can be guaranteed by either player. Since the best responses in G^{−c} and H are the same, the Nash equilibria of the two games are the same.
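The side-payment construction and identity (1) are easy to verify numerically. The following sketch, on a randomly generated k-sum game with random costs (an illustration of ours, not an example from the paper), checks both that H is again k-sum and that switching strategies changes a player's payoff by the same amount in G^{−c} and in H:

```python
import numpy as np

# Verify the proof's construction: H is k-sum, and identity (1) holds.
rng = np.random.default_rng(2)
m, n, k = 4, 5, 10
P1 = rng.uniform(0, k, size=(m, n))    # player 1's payoffs in G
P2 = k - P1                            # G is k-sum
c1 = rng.uniform(0, 3, size=m)         # strategy costs for player 1
c2 = rng.uniform(0, 3, size=n)         # strategy costs for player 2

Gc1 = P1 - c1[:, None]                 # player 1's net payoffs in G^{-c}
Q1 = P1 - c1[:, None] + c2[None, :]    # H: each player pays its cost to the other
Q2 = P2 - c2[None, :] + c1[:, None]
assert np.allclose(Q1 + Q2, k)         # the side payments cancel: H is k-sum

# Identity (1) for pure strategies (it extends to mixtures by linearity)
for s1 in range(m):
    for s1p in range(m):
        for s2 in range(n):
            assert np.isclose(Q1[s1, s2] - Q1[s1p, s2],
                              Gc1[s1, s2] - Gc1[s1p, s2])
print("H is k-sum and identity (1) holds")
```

The check makes the key cancellation visible: player 1's own cost appears identically in both q1 and p^{−c}_1, while the transferred cost c2(s2) does not depend on player 1's choice, so it drops out of every payoff difference.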
Since H is a k-sum game, its Nash equilibria are exchangeable, and thus we have part 2. (This holds for any game that is strategically equivalent to a k-sum game.) Finally, the optimal mixed strategies OPT1, OPT2 of any k-sum game are convex sets. The achievable costs of the mixed strategies in OPTi form, by the definition of the cost of a mixed strategy, a convex subset of R, i.e., an interval. By parts 1 and 2, the set of achievable net payoffs at equilibria of G^{−c} is therefore the cross-product of intervals. To illustrate Theorem 1 graphically, Figure 2 gives a 4×4 example with strategy costs of 1, 2, 3, and 4, respectively. It illustrates a situation with multiple optimal strategies. Notice that player 1 is completely indifferent between its optimal choices A and B, and player 2 is completely indifferent between C and D. Thus the only question is how kind they would like to be to their opponent. The (A,C) equilibrium is perhaps most natural, as it yields the highest payoffs for both parties. Note that the proof of the above theorem actually shows that zero-sum games with costs share additional appealing properties of zero-sum games. For example, computing optimal strategies is a polynomial-time computation in an n×n game, as it amounts to computing the equilibria of H. We next show that they also have appealing learning properties, though they do not share all properties of zero-sum games.1

3.1 Learning in repeated two-person k-sum games with strategy costs

Another desirable property of k-sum games is that, in repeated play, natural learning dynamics converge to the set of Nash equilibria. Before we state the analogous conditions for k-sum games with costs, we briefly give a few definitions. A repeated game is one in which players choose a sequence of strategy vectors s^1, s^2, ..., where each s^t = (s^t_1, ..., s^t_N) is a strategy vector of some fixed stage game G = (N, S, p).
Under perfect monitoring, when selecting an action in any period the players know all previously selected actions. As we shall discuss, it is possible to learn to play without perfect monitoring as well.

1 One property that is violated by the chess example is the "advantage of an advantage" property. Say player 1 has the advantage over player 2 in a square game if p1(s1, s2) ≥ p2(s2, s1) for all strategies s1, s2. At equilibrium of a k-sum game, a player with the advantage must have a payoff at least as large as its opponent's. This is no longer the case after incorporating strategy costs, as seen in the chess example, where player 1 has the advantage (even including strategy costs), yet his equilibrium payoff is smaller than player 2's.

a)
        A          B          C          D
   A    6, 4       5, 5       3, 7       2, 8
   B    7, 3       6, 4       4, 6       3, 7
   C    7.5, 2.5   6.5, 3.5   4.5, 5.5   3.5, 6.5
   D    8.5, 1.5   7, 3       5.5, 4.5   4.5, 5.5

b)
             A (−1)      B (−2)      C (−3)      D (−4)
   A (−1)    5, 3        4, 3        2, 4        1, 4
   B (−2)    5, 2        4, 2        2, 3        1, 3
   C (−3)    4.5, 1.5    3.5, 1.5    1.5, 2.5    0.5, 2.5
   D (−4)    4.5, 0.5    3, 1        1.5, 1.5    0.5, 1.5

Figure 2: a) Payoffs in the 10-sum game G. b) Expected net earnings in G^{−c}. OPT1 is any mixture of A and B, and OPT2 is any mixture of C and D; each player's choice of equilibrium strategy affects only the opponent's net payoff. c) A graphical display of the payoff pairs; the shaded region shows the rectangular set of payoffs achievable at mixed-strategy Nash equilibria.

Perhaps the most intuitive dynamics are best-response: at each stage, each player selects a best response to the opponent's previous-stage play. Unfortunately, these naive dynamics fail to converge to equilibrium even in very simple examples.
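Exchangeability (part 2 of Theorem 1) can be checked directly on the Figure 2 example: every pairing of an optimal strategy for player 1 with an optimal strategy for player 2 should be an equilibrium. A minimal sketch over the pure strategies of the net-payoff table (Figure 2b):

```python
# Pure Nash equilibria of the Figure 2b net-payoff game: the claim is that
# they form the cross-product {A, B} x {C, D}.
names = ["A", "B", "C", "D"]
net = [[(5, 3), (4, 3), (2, 4), (1, 4)],           # row A
       [(5, 2), (4, 2), (2, 3), (1, 3)],           # row B
       [(4.5, 1.5), (3.5, 1.5), (1.5, 2.5), (0.5, 2.5)],  # row C
       [(4.5, 0.5), (3, 1), (1.5, 1.5), (0.5, 1.5)]]      # row D

def is_pure_nash(i, j):
    best1 = max(net[r][j][0] for r in range(4))    # player 1's best vs. col j
    best2 = max(net[i][c][1] for c in range(4))    # player 2's best vs. row i
    return net[i][j][0] == best1 and net[i][j][1] == best2

eqs = [(names[i], names[j]) for i in range(4) for j in range(4)
       if is_pure_nash(i, j)]
print(eqs)   # [('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D')]
```

Within each player's optimal set the payoffs confirm the indifference noted above: switching between A and B (or between C and D) moves only the opponent's net payoff.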
The fictitious play dynamics prescribe, at stage t, selecting any strategy that is a best response to the empirical distribution of the opponent's play during the first t − 1 stages. It has been shown that fictitious play converges to equilibrium (of the stage game G) in k-sum games [17]. However, fictitious play requires perfect monitoring. One can learn to play a two-person k-sum game with no knowledge of the payoff table or of the other player's actions: using experimentation, the only observations required by each player are its own payoffs in each period (in addition to the number of available actions). So-called bandit algorithms [7] must manage the exploration-exploitation tradeoff, and the proof of their convergence follows from the fact that they are no-regret algorithms. (No-regret algorithms date back to Hannan in the 1950s [12], but his algorithm required perfect monitoring.) The regret of player i at stage T is defined to be

regret of i at T = (1/T) max_{si∈Si} Σ_{t=1}^{T} [ pi(si, s^t_{−i}) − pi(s^t_i, s^t_{−i}) ],

that is, how much better in hindsight player i could have done over the first T stages had it used one fixed strategy the whole time (and had the opponents not changed their strategies). Note that regret can be positive or negative. A no-regret algorithm is one in which each player's asymptotic regret converges to (−∞, 0], i.e., is guaranteed to approach 0 or less. It is well known that the no-regret condition in two-person k-sum games implies convergence to equilibrium (see, e.g., [13]); in particular, the pair of mixed strategies given by the empirical distributions of play over time approaches the set of Nash equilibria of the stage game. Inverse-polynomial rates of convergence (polynomial also in the size of the game) can be given for such algorithms. Hence no-regret algorithms provide arguably reasonable ways to play a k-sum game of moderate size. Note that in general-sum games, no such dynamics are known.
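As an illustration, the following sketch runs a multiplicative-weights (Hedge) learner for both players, in the full-information setting, on the net-payoff game of Figure 1b. The parameters (η, the horizon T, and the payoff rescaling) are arbitrary choices for the demo, not values from the paper; the point is that both players' regrets, as defined above, stay small and play concentrates on the equilibrium (A, B).

```python
import math
import random

# Hedge (multiplicative weights) self-play on the chess game of Figure 1b.
# Strategy indices: 0 = program A, 1 = program B.
net1 = [[45, 3], [43, 1]]      # white's net payoffs ($K)
net2 = [[35, 37], [-3, -1]]    # black's net payoffs
T, eta = 5000, 0.05
w1, w2 = [1.0, 1.0], [1.0, 1.0]
hist = []
random.seed(0)
for _ in range(T):
    s1 = 0 if random.random() < w1[0] / sum(w1) else 1
    s2 = 0 if random.random() < w2[0] / sum(w2) else 1
    hist.append((s1, s2))
    for a in (0, 1):           # exponential update on rescaled payoffs
        w1[a] *= math.exp(eta * net1[a][s2] / 50)
        w2[a] *= math.exp(eta * net2[s1][a] / 50)
    w1 = [x / sum(w1) for x in w1]   # renormalize to avoid overflow
    w2 = [x / sum(w2) for x in w2]

def regret(net, player):       # the regret definition above, for 2x2 games
    realized = sum(net[a][b] for a, b in hist)
    if player == 1:
        best = max(sum(net[s][b] for _, b in hist) for s in (0, 1))
    else:
        best = max(sum(net[a][s] for a, _ in hist) for s in (0, 1))
    return (best - realized) / T

print(regret(net1, 1), regret(net2, 2))   # both small and shrinking with T
```

This version observes the counterfactual payoff of both actions each round; the bandit algorithms mentioned in the text replace that with an estimate built from the single realized payoff.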
Fortunately, the same algorithms that work for learning in k-sum games also work for learning in such games with strategy costs.

Theorem 2. Fictitious play converges to the set of Nash equilibria of the stage game in a two-person k-sum game with strategy costs, as do no-regret learning dynamics.

Proof. The proof again follows from equation (1) regarding the game H. Fictitious play dynamics are defined only in terms of best-response play. Since G^{−c} and H share the same best responses, fictitious play dynamics are identical for the two games. Since they share the same equilibria and fictitious play converges to equilibria in H, it must converge in G^{−c} as well. For no-regret algorithms, equation (1) again implies that for any play sequence, the regret of each player i with respect to the game G^{−c} is the same as its regret with respect to the game H. Hence, no regret in G^{−c} implies no regret in H. Since no-regret algorithms converge to the set of equilibria in k-sum games, they converge to the set of equilibria in H and therefore in G^{−c} as well.

4 Potential games with strategy costs

Let us begin with an example of a potential game, called a routing game [18]. There is a fixed directed graph with n nodes and m edges. Commuters i = 1, 2, ..., N each decide on a route πi to take from their home si to their work ti, where si and ti are nodes in the graph. For each edge uv, let n_uv be the number of commuters whose path πi contains edge uv. Let f_uv : Z → R be a nonnegative, monotonically increasing congestion function. Player i's payoff is −Σ_{uv∈πi} f_uv(n_uv), i.e., the negative sum of the congestions on the edges in its path. An N-person normal-form game G is said to be a potential game [15] if there is some potential function Φ : S1 × ... × SN → R such that changing a single player's action changes its payoff by the change in the potential function. That is, there exists a single function Φ such that for all players i and all pure strategy vectors s, s′ ∈ S1 × ...
× SN that differ only in the i-th coordinate,

pi(s) − pi(s′) = Φ(s) − Φ(s′).  (2)

Potential games have appealing learning properties: simple better-reply dynamics converge to pure-strategy Nash equilibria, as do the more sophisticated fictitious-play dynamics described earlier [15]. In our example, this means that if players change their individual paths so as to selfishly reduce the sum of congestions on their path, this will eventually lead to an equilibrium where no one can improve. (This is easy to see because Φ keeps increasing.) The absence of similar learning properties for general games presents a frustrating hole in learning and game theory. It is clear that the theoretically clean commuting example above misses some realistic considerations. One issue regarding complexity is that most commuters would not be willing to take a very complicated route just to save a small amount of time. To model this, we consider potential games with strategy costs; in our example, this would be a cost associated with every path. For example, suppose the graph represents streets in a given city. We consider a natural strategy complexity cost associated with a route π, say λ(#turns(π))², where λ ∈ R is a parameter and #turns(π) is the number of times a commuter has to turn on the route. (To be more precise, say each edge in the graph is annotated with a street name, and a turn is a pair of consecutive edges with different street names.) Hence, a best response for player i would minimize, over paths π from si to ti,

(total congestion of π) + λ(#turns(π))².

While adding strategy costs to potential games allows much more flexibility in model design, one might worry that appealing properties of potential games, such as having pure-strategy equilibria and easy learning dynamics, no longer hold. This is not the case; we show that strategy costs fit easily into the potential game framework.

Theorem 3.
For any potential game G and any cost functions c, G^{−c} is also a potential game.

Proof. Let Φ be a potential function for G. It is straightforward to verify that G^{−c} admits the potential function

Φ′(s1, ..., sN) = Φ(s1, ..., sN) − c1(s1) − ... − cN(sN).

5 Additional remarks

Part of the reason that the notion of bounded rationality is so difficult to formalize is that understanding enormous games like chess is a daunting proposition. That is why we have narrowed it down to choosing among a small number of available programs. A game theorist might begin by examining the complete payoff table of Figure 1a, which is prohibitively large: instead of considering only the choices of programs A and B, each player considers all possible chess strategies. In that sense, our payoff table in Figure 1a would be viewed as a reduction of the "real" normal-form game. A computer scientist, on the other hand, may consider it reasonable to begin with the existing strategies that one has access to. Regardless of how one views the process, it is clear that for practical purposes players in real life do simplify and analyze "smaller" sets of strategies. Even if the players consider the option of engineering new chess-playing software, this can be viewed as a third strategy in the game, with its own cost and expected payoffs. Again, even with a small number of available strategies, like the two programs above, it may still be difficult to assess the expected payoffs that result when (possibly randomized) strategies play against each other. An additional assumption made throughout the paper is that the players share the same assessments of these expected payoffs. Like other common-knowledge assumptions made in game theory, it would be desirable to weaken this assumption. In the special families of games studied in this paper, and perhaps in additional cases, learning algorithms may be employed to reach equilibrium without knowledge of payoffs.
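Theorem 3 and the convergence of better-reply dynamics can be illustrated together on a miniature routing instance. The sketch below is a toy example of ours, not from the paper: four commuters choose between two parallel routes with congestion f(n) = n, and a flat per-route strategy cost stands in for the turn penalty. Better-reply dynamics reach a pure equilibrium, and the shifted potential Φ′ = Φ − Σi ci(si), built from the standard (Rosenthal) congestion-game potential, strictly increases at every step.

```python
import random

# Toy congestion game with strategy costs: 4 players, 2 parallel routes,
# congestion f(n) = n, and a per-route cost (assumed values).
N = 4
cost = [0.0, 0.6]                  # strategy cost of route 0 / route 1

def congestion(s):                 # number of players on each route
    return [s.count(0), s.count(1)]

def payoff(i, s):                  # net payoff in G^{-c}
    return -congestion(s)[s[i]] - cost[s[i]]

def phi_prime(s):                  # Rosenthal potential minus total costs
    n = congestion(s)
    phi = -sum(sum(range(1, k + 1)) for k in n)  # -sum_e sum_{j<=n_e} f_e(j)
    return phi - sum(cost[r] for r in s)

random.seed(3)
s, steps = [0, 0, 0, 0], 0
while True:
    movers = [i for i in range(N)
              if payoff(i, s[:i] + [1 - s[i]] + s[i+1:]) > payoff(i, s)]
    if not movers:
        break                      # pure Nash equilibrium reached
    i = random.choice(movers)
    before = phi_prime(s)
    s[i] = 1 - s[i]
    assert phi_prime(s) > before   # Phi' strictly increases (Theorem 3)
    steps += 1
print(congestion(s), steps)        # [2, 2] after 2 improving moves
```

Starting from everyone on route 0, two improving moves suffice: the dynamics stop at the 2-2 split, where neither the congestion saving nor the route cost makes any further switch profitable.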
5.1 Finite automata playing repeated games There has been a large body of interesting work on repeated games played by finite automata (see [14] for a survey). Much of this work is on achieving cooperation in the classic prisoner’s dilemma game (e.g., [2, 3, 4, 5]). Many of these models can be incorporated into the general model outlined in this paper. For example, to view the Abreu and Rubinstein model [6] as such, consider the normal form of an infinitely repeated game with discounting, but restricted to strategies that can be described by finite automata (the payoffs in every cell of the payoff table are the discounted sums of the infinite streams of payoffs obtained in the repeated game). Let the cost of a strategy be an increasing function of the number of states it employs. For Neyman’s model [3], consider the normal form of a finitely repeated game with a known number of repetitions. You may consider strategies in this normal form to be only ones with a bounded number of states, as required by Neyman, and assign zero cost to all strategies. Alternatively, you may allow all strategies but assign zero cost to those that employ a number of states below Neyman’s bounds, and an infinite cost to strategies that employ a number of states exceeding Neyman’s bounds. The structure of equilibria proven in Theorem 1 applies to all the above models when dealing with repeated k-sum games, as in [2]. 6 Future work There are very interesting questions to answer about bounded rationality in truly large games that we did not touch upon. For example, consider the factoring game from the introduction. A pure strategy for Player 1 would be outputting a single n-bit number. A pure strategy for Player 2 would be any factoring program, described by a circuit that takes as input an n-bit number and attempts to output a representation of its prime factorization. The complexity of such a strategy would be an increasing function of the number of gates in the circuit.
It would be interesting to make connections between asymptotic algorithm complexity and games. Another direction regards an elegant line of work on learning to play correlated equilibria by repeated play [11]. It would be natural to consider how strategy costs affect correlated equilibria. Finally, it would also be interesting to see how strategy costs affect the so-called “price of anarchy” [19] in congestion games. Acknowledgments This work was funded in part by a U.S. NSF grant SES-0527656, a Landau Fellowship supported by the Taub and Shalom Foundations, a European Community International Reintegration Grant, an Alon Fellowship, ISF grant 679/06, and BSF grant 2004092. Part of this work was done while the first and second authors were at the Toyota Technological Institute at Chicago. References [1] H. Simon. The sciences of the artificial. MIT Press, Cambridge, MA, 1969. [2] E. Ben-Porath. Repeated games with finite automata. Journal of Economic Theory, 59:17–32, 1993. [3] A. Neyman. Bounded complexity justifies cooperation in the finitely repeated prisoner’s dilemma. Economic Letters, 19:227–229, 1985. [4] A. Rubinstein. Finite automata play the repeated prisoner’s dilemma. Journal of Economic Theory, 39:83–96, 1986. [5] C. Papadimitriou, M. Yannakakis. On complexity as bounded rationality. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, pp. 726–733, 1994. [6] D. Abreu and A. Rubinstein. The structure of Nash equilibrium in repeated games with finite automata. Econometrica, 56:1259–1281, 1988. [7] P. Auer, N. Cesa-Bianchi, Y. Freund, R. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002. [8] X. Chen, X. Deng, and S. Teng. Computing Nash equilibria: Approximation and smoothed complexity. Electronic Colloquium on Computational Complexity Report TR06-023, 2006. [9] K. Daskalakis, P. Goldberg, C. Papadimitriou. The complexity of computing a Nash equilibrium.
Electronic Colloquium on Computational Complexity Report TR05-115, 2005. [10] C. Ewerhart. Chess-like games are dominance solvable in at most two steps. Games and Economic Behavior, 33:41–47, 2000. [11] D. Foster and R. Vohra. Regret in the on-line decision problem. Games and Economic Behavior, 21:40–55, 1997. [12] J. Hannan. Approximation to Bayes risk in repeated play. In M. Dresher, A. Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, volume 3, pp. 97–139. Princeton University Press, 1957. [13] S. Hart and A. Mas-Colell. A general class of adaptive strategies. Journal of Economic Theory, 98(1):26–54, 2001. [14] E. Kalai. Bounded rationality and strategic complexity in repeated games. In T. Ichiishi, A. Neyman, and Y. Tauman, editors, Game Theory and Applications, pp. 131–157. Academic Press, San Diego, 1990. [15] D. Monderer, L. Shapley. Potential games. Games and Economic Behavior, 14:124–143, 1996. [16] H. Moulin and P. Vial. Strategically zero sum games: The class of games whose completely mixed equilibria cannot be improved upon. International Journal of Game Theory, 7:201–221, 1978. [17] J. Robinson. An iterative method of solving a game. Ann. Math., 54:296–301, 1951. [18] R. Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2:65–67, 1973. [19] E. Koutsoupias and C. Papadimitriou. Worst-case equilibria. In Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science, pp. 404–413, 1999.
Fundamental Limitations of Spectral Clustering Boaz Nadler∗, Meirav Galun Department of Applied Mathematics and Computer Science Weizmann Institute of Science, Rehovot, Israel 76100 boaz.nadler,meirav.galun@weizmann.ac.il Abstract Spectral clustering methods are common graph-based approaches to clustering of data. Spectral clustering algorithms typically start from local information encoded in a weighted graph on the data and cluster according to the global eigenvectors of the corresponding (normalized) similarity matrix. One contribution of this paper is to present fundamental limitations of this general local-to-global approach. We show that based only on local information, the normalized cut functional is not a suitable measure for the quality of clustering. Further, even with a suitable similarity measure, we show that the first few eigenvectors of such adjacency matrices cannot successfully cluster datasets that contain structures at different scales of size and density. Based on these findings, a second contribution of this paper is a novel diffusion-based measure to evaluate the coherence of individual clusters. Our measure can be used in conjunction with any bottom-up graph-based clustering method; it is scale-free and can determine coherent clusters at all scales. We present both synthetic examples and real image segmentation problems where various spectral clustering algorithms fail. In contrast, using this coherence measure finds the expected clusters at all scales. Keywords: Clustering, kernels, learning theory. 1 Introduction Spectral clustering methods are common graph-based approaches to (unsupervised) clustering of data. Given a dataset of n points {x_i}_{i=1}^n ⊂ R^p, these methods first construct a weighted graph G = (V, W), where the n points are the set of nodes V and the weighted edges W_{i,j} are computed by some local symmetric and non-negative similarity measure.
A common choice is a Gaussian kernel with width σ, where ‖·‖ denotes the standard Euclidean metric in R^p:
W_{i,j} = exp(−‖x_i − x_j‖² / 2σ²)   (1)
In this framework, clustering is translated into a graph partitioning problem. Two main spectral approaches for graph partitioning have been suggested. The first is to construct a normalized cut (conductance) functional to measure the quality of a partition of the graph nodes V into k clusters [1, 2]. Specifically, for a 2-cluster partition V = S ∪ (V \ S), minimizing the following functional is suggested in [1]:
φ(S) = (Σ_{i∈S, j∈V\S} W_{i,j}) · (1/a(S) + 1/a(V \ S))   (2)
where a(S) = Σ_{i∈S, j∈V} W_{i,j}. While extensions of this functional to more than two clusters are possible, both works suggest a recursive top-down approach where additional clusters are found by minimizing the same clustering functional on each of the two subgraphs. In [3] the authors also propose to augment this top-down approach by a bottom-up aggregation of the sub-clusters. As shown in [1], minimization of (2) is equivalent to max_y (yᵀWy)/(yᵀDy), where D is a diagonal n × n matrix with D_{i,i} = Σ_j W_{i,j}, and y is a vector of length n that satisfies the constraints yᵀD1 = 0 and y_i ∈ {1, −b} with b some constant in (0, 1). Since this maximization problem is NP-hard, both works relax it by allowing the vector y to take on real values. This approximation leads to clustering according to the eigenvector with second largest eigenvalue of the normalized graph Laplacian, Wy = λDy. We note that there are also graph partitioning algorithms based on a non-normalized functional, leading to clustering according to the second eigenvector of the standard graph Laplacian matrix D − W, also known as the Fiedler vector [4].
∗Corresponding author. www.wisdom.weizmann.ac.il/∼nadler
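The relaxed computation just described, taking the second eigenvector of Wy = λDy, can be sketched in a few lines of numpy by working with the symmetric conjugate matrix D^{-1/2}WD^{-1/2}, which has the same eigenvalues. The two-blob dataset and σ below are illustrative assumptions, not the strip-and-ball example of Section 2.

```python
import numpy as np

rng = np.random.default_rng(0)
# two well-separated Gaussian blobs (illustrative data)
X = np.vstack([rng.normal(0.0, 0.3, size=(30, 2)),
               rng.normal(4.0, 0.3, size=(30, 2))])

sigma = 1.0
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * sigma ** 2))       # Gaussian similarity, eq. (1)
D = W.sum(axis=1)

# Solve W y = lambda D y through the symmetric matrix D^{-1/2} W D^{-1/2}
S = W / np.sqrt(np.outer(D, D))
vals, vecs = np.linalg.eigh(S)           # eigenvalues in ascending order
y2 = vecs[:, -2] / np.sqrt(D)            # relaxed second eigenvector of (W, D)

labels = (y2 > 0).astype(int)            # sign of y2 gives the bipartition
```

For two well-separated blobs the second eigenvector is close to piecewise constant with opposite signs on the two groups, so thresholding it at zero recovers the partition.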
A second class of spectral clustering algorithms does not recursively employ a single eigenvector, but rather proposes to map the original data into the first k eigenvectors of the normalized adjacency matrix (or a matrix similar to it) and then apply a standard clustering algorithm such as k-means on these new coordinates; see for example [5]-[11] and references therein. In recent years, much theoretical work has been done to justify this approach. Belkin and Niyogi [8] showed that for data uniformly sampled from a manifold, these eigenvectors approximate the eigenfunctions of the Laplace-Beltrami operator, which give an optimal low dimensional embedding under a certain criterion. Optimality of these eigenvectors, including rotations, was derived in [9] for multiclass spectral clustering. Probabilistic interpretations, based on the fact that these eigenvectors correspond to a random walk on the graph, were also given by several authors [11]-[15]. Limitations of spectral clustering in the presence of background noise and multiscale data were noted in [10, 16], with suggestions to replace the uniform σ² in eq. (1) with a location-dependent scale σ(x_i)σ(x_j). The aim of this paper is to present fundamental limitations of spectral clustering methods, and to propose a novel diffusion-based coherence measure to evaluate the internal consistency of individual clusters. First, in Section 2 we show that based on the isotropic local similarity measure (1), the NP-hard normalized cut criterion may not be a suitable global functional for data clustering. We construct a simple example with only two clusters, where we prove that the minimum of this functional does not correspond to the natural expected partitioning of the data into its two clusters. Further, in Section 3 we show that spectral clustering suffers from additional limitations, even with a suitable similarity measure.
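The k-eigenvector pipeline of this second class (embed each point as Ψ(x_i) in the top-k eigenvector coordinates, then run k-means) might look as follows. The three-blob data, kernel width, and the deterministic one-seed-per-block Lloyd initialization are assumptions made to keep the sketch reproducible.

```python
import numpy as np

rng = np.random.default_rng(2)
# three well-separated blobs of 20 points each (illustrative data)
X = np.vstack([rng.normal(c, 0.2, size=(20, 2))
               for c in ((0.0, 0.0), (3.0, 0.0), (0.0, 3.0))])

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.5 ** 2))
D = W.sum(axis=1)
S = W / np.sqrt(np.outer(D, D))          # symmetric conjugate of D^{-1}W
vals, U = np.linalg.eigh(S)

k = 3
# rows of Psi are the embedded points (v_1(x_i), ..., v_k(x_i))
Psi = U[:, -k:] / np.sqrt(D)[:, None]

# plain Lloyd (k-means) iterations on the embedded coordinates,
# seeded with one point per block to keep the sketch deterministic
centers = Psi[[0, 20, 40]]
for _ in range(20):
    labels = ((Psi[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
    centers = np.array([Psi[labels == c].mean(axis=0) for c in range(k)])
```

On well-separated clusters the top-k eigenvectors are nearly piecewise constant, so each cluster collapses to almost a single point in the embedding and k-means becomes trivial; Sections 3–4 show exactly when this picture breaks down.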
Our theoretical analysis is based on the probabilistic interpretation of spectral clustering as a random walk on the graph and on the intimate connection between the corresponding eigenvalues and eigenvectors and the characteristic relaxation times and processes of this random walk. We show that, similar to Fourier analysis, spectral clustering methods are global in nature. Therefore, even with a location dependent σ(x) as in [10], these methods typically fail to simultaneously identify clusters at different scales. Based on this analysis, we present in Section 4 simple examples where spectral clustering fails. We conclude with Section 5, where we propose a novel diffusion based coherence measure. This quantity measures the coherence of a set of points as all belonging to a single cluster, by comparing the relaxation times on the set and on its suggested partition. Its main use is as a decision tool whether to divide a set of points into two subsets or leave it intact as a single coherent cluster. As such, it can be used in conjunction with either top-down or bottom-up clustering approaches and may overcome some of their limitations. We show how use of this measure correctly clusters the examples of Section 4, where spectral clustering fails. 2 Unsuitability of normalized cut functional with local information As reported in the literature, clustering by approximate minimization of the functional (2) performs well in many cases. However, a theoretical question still remains: under what circumstances is this functional indeed a good measure for the quality of clustering? Recall that the basic goal of clustering is to group together highly similar points while setting apart dissimilar ones. Yet this similarity measure is typically based only on local information as in (1). Therefore, the question can be rephrased: is local information sufficient for global clustering? While this local to global concept is indeed appealing, we show that it does not work in general.
We construct a simple example where local information is insufficient for correct clustering according to the functional (2). Consider data sampled from a mixture of two densities in two dimensions
p(x) = p(x_1, x_2) = (1/2) [p_{L,ε}(x_1, x_2) + p_G(x_1, x_2)]   (3)
Figure 1: A dataset with two clusters and result of normalized cut algorithm [2] (panels: (a) original data, (b) normalized cut with σ = 0.05). Other spectral clustering algorithms give similar results.
where p_{L,ε} denotes uniform density in a rectangular region Ω = {(x_1, x_2) | 0 < x_1 < L, −ε < x_2 < 0} of length L and width ε, and p_G denotes a Gaussian density centered at (μ_1, μ_2) with diagonal covariance matrix ρ²I. In fig. 1(a) a plot of n = 1400 points from this density is shown with L = 8, ε = 0.05 ≪ L, (μ_1, μ_2) = (2, 0.2) and ρ = 0.1. Clearly, the two clusters are the Gaussian ball and the rectangular strip Ω. However, as shown in fig. 1(b), clustering based on the second eigenvector of the normalized graph Laplacian with weights W_{i,j} given by (1) partitions the points somewhere along the long strip instead of between the strip and the Gaussian ball. We now show that this result is not due to the approximation of the NP-hard problem but rather a feature of the original functional (2). Intuitively, the failure of the normalized cut criterion is clear. Since the overlap between the Gaussian ball and the rectangular strip is larger than the width of the strip, a cut that separates them has a higher penalty than a cut somewhere along the thin strip. To show this mathematically, we consider the penalty of the cut due to the numerator in (2) in the limit of a large number of points n → ∞. In this population setting, as n → ∞ each point has an infinite number of neighbors, so we can consider the limit σ → 0.
Upon normalizing the similarity measure (1) by 1/2πσ², the numerator is given by
Cut(Ω_1) = lim_{n→∞} (1/|V|²) Σ_{x_i∈Ω_1} Σ_{x_j∈Ω_2} W_{i,j} = (1/2πσ²) ∫_{Ω_1} ∫_{Ω_2} p(x) p(y) e^{−‖x−y‖²/2σ²} dx dy   (4)
where Ω_1, Ω_2 ⊂ R² are the regions of the two clusters. For ε ≪ L, a vertical cut of the strip at location x = x_1 far away from the ball (|x_1 − x_0| ≫ ρ) gives
Cut(x > x_1) ≃ lim_{σ→0} ∫_0^∞ ∫_{−∞}^0 (1/L²) (1/2πσ²) e^{−(x−x′)²/2σ²} dx dx′ = 1/(2πL²)   (5)
A similar calculation shows that for a horizontal cut at y = 0,
Cut(y > 0) ≃ (1/L) · e^{−μ_2²/2ρ²} / (√(8π) ρ)   (6)
Finally, note that for a vertical cut far from the rectangle boundary ∂Ω, the denominators of the two cuts in eq. (2) have the same order of magnitude. Therefore, if L ≫ ρ and μ_2/ρ = O(1), the horizontal cut between the ball and the strip has larger normalized penalty than a vertical cut of the strip. This analysis explains the numerical results in fig. 1(b). Other spectral clustering algorithms that use two eigenvectors, including those that take a local scale into account, also fail to separate the ball from the strip and yield similar results to fig. 1(b). A possible solution to this problem is to introduce multiscale anisotropic features that capture the geometry and dimensionality of the data in the similarity metric. In the context of image and texture segmentation, the need for multiscale features is well known [17, 18, 19]. Our example highlights its importance in general data clustering. 3 Additional Limitations of Spectral Clustering Methods An additional problem with recursive bi-partitioning is the need for a saliency criterion when required to return k > 2 clusters. Consider, for example, a dataset which contains k = 3 clusters. After the first cut, the recursive algorithm should decide which subgraph to further partition and which to leave intact. A common approach that avoids this decision problem is to directly find three clusters by using the first three eigenvectors of Wv = λDv.
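Returning briefly to the example of Section 2: plugging the parameters used in fig. 1 (L = 8, ρ = 0.1, μ_2 = 0.2) into the closed forms (5) and (6) confirms numerically that the ball-to-strip cut is more expensive than a vertical cut of the strip. This is only a sanity check of the formulas as printed.

```python
import math

# parameters of the strip-and-ball example (fig. 1)
L, rho, mu2 = 8.0, 0.1, 0.2

cut_vertical = 1.0 / (2.0 * math.pi * L ** 2)                    # eq. (5)
cut_horizontal = (1.0 / L) * math.exp(-mu2 ** 2 / (2.0 * rho ** 2)) \
                 / (math.sqrt(8.0 * math.pi) * rho)              # eq. (6)

ratio = cut_horizontal / cut_vertical    # ~13.6: the thin strip is cut first
```

Since the horizontal (ball-separating) cut costs roughly an order of magnitude more, the normalized cut minimizer prefers to slice the strip, exactly as observed in fig. 1(b).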
Specifically, denote by {λ_j, v_j} the set of eigenvectors of Wv = λDv with eigenvalues sorted in decreasing order, and denote by v_j(x_i) the i-th entry (corresponding to the point x_i) in the j-th eigenvector v_j. Many algorithms propose to map each point x_i ∈ R^p into Ψ(x_i) = (v_1(x_i), . . . , v_k(x_i)) ∈ R^k, and apply simple clustering algorithms to the points Ψ(x_i) [8, 9, 12]. Some works [6, 10] use the eigenvectors ṽ_j of D^{−1/2}WD^{−1/2} instead, related to the ones above via ṽ_j = D^{1/2}v_j. We now show that spectral clustering that uses the first k eigenvectors for finding k clusters also suffers from fundamental limitations. Our starting point is the observation that v_j are also eigenvectors of the Markov matrix M = D^{−1}W [13, 12]. Assuming the graph is connected, the largest eigenvalue is λ_1 = 1 with |λ_j| < 1 for j > 1. Therefore, regardless of the initial condition, the random walk converges to the unique equilibrium distribution π_s, given by π_s(i) = D_{i,i} / Σ_j D_{j,j}. Moreover, as shown in [13], the Euclidean distance between points mapped to these eigenvectors is equal to a so-called 'diffusion distance' between points on the graph,
Σ_j λ_j^{2t} (v_j(x) − v_j(y))² = ‖p(z, t | x) − p(z, t | y)‖²_{L²(1/π_s)}   (7)
where p(z, t | x) is the probability distribution of a random walk at time t given that it started at x, π_s is the equilibrium distribution, and ‖·‖_{L²(w)} is the weighted L² norm with weight w(z). Therefore, the eigenvalues and eigenvectors {λ_j, v_j} for j > 1 capture the characteristic relaxation times and processes of the random walk on the graph towards equilibrium. Since most methods use the first few eigenvector coordinates for clustering, it is instructive to study the properties of these relaxation times and of the corresponding eigenvectors. We perform this analysis under the following statistical model: we assume that the points {x_i} are random samples from a smooth density p(x) in a smooth domain Ω ⊂ R^p.
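Both facts quoted above, the equilibrium distribution π_s(i) = D_ii/Σ_j D_jj and the diffusion-distance identity (7), are easy to verify numerically on a small random graph. The point set below is an illustrative assumption; note that (7) holds when each v_j is scaled to unit norm in L²(π_s), which is the normalization used here.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(12, 2))                 # small illustrative point set
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)
D = W.sum(axis=1)

M = W / D[:, None]                           # Markov matrix M = D^{-1} W
pi = D / D.sum()                             # claimed equilibrium distribution

# eigen-decomposition via the symmetric conjugate S = D^{-1/2} W D^{-1/2}
S = W / np.sqrt(np.outer(D, D))
lam, U = np.linalg.eigh(S)
V = U / np.sqrt(pi)[:, None]                 # right eigenvectors of M, unit L2(pi) norm

# evaluate both sides of the diffusion-distance identity (7)
t, x, y = 3, 0, 5
Mt = np.linalg.matrix_power(M, t)            # row i of Mt is p(. , t | x_i)
lhs = np.sum(lam ** (2 * t) * (V[x] - V[y]) ** 2)
rhs = np.sum((Mt[x] - Mt[y]) ** 2 / pi)
```

The j = 1 term contributes nothing to the left-hand side since v_1 is constant, matching the remark that only j > 1 carries relaxation information.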
We write the density in Boltzmann form, p(x) = e^{−U(x)/2}, and refer to U(x) as the potential. As described in [13], in the limit n → ∞, σ → 0, the random walk with transition matrix M on the graph of points sampled from this density converges to a stochastic differential equation (SDE)
ẋ(t) = −∇U(x) + √2 ẇ(t)   (8)
where w(t) is standard white noise (Brownian motion), and the right eigenvectors of the matrix M converge to the eigenfunctions of the following Fokker-Planck operator
Lψ(x) ≡ Δψ − ∇ψ · ∇U = −μψ(x)   (9)
defined for x ∈ Ω with reflecting boundary conditions on ∂Ω. This operator is non-positive and its eigenvalues are μ_1 = 0 < μ_2 ≤ μ_3 ≤ . . .. The eigenvalues −μ_j of L and the eigenvalues λ_j of M are related by μ_j = lim_{n→∞, σ→0} (1 − λ_j)/σ. Therefore the top eigenvalues of M correspond to the smallest eigenvalues of L. Eq. (7) shows that these eigenfunctions and eigenvalues capture the leading characteristic relaxation processes and time scales of the SDE (8). These have been studied extensively in the literature [20], and can give insight into the success and limitations of spectral clustering [13]. For example, if Ω = R^p and the density p(x) consists of k highly separated Gaussian clusters of roughly equal size, then there are exactly k eigenvalues very close or equal to zero, and their corresponding eigenfunctions are approximately piecewise constant in each of these clusters. Therefore, in this setting spectral clustering with k eigenvectors works very well. To understand the limitations of spectral clustering, we now explicitly analyze situations with clusters at different scales of size and density. For example, consider a density with three isotropic Gaussian clusters: one large cloud (cluster #1) and two smaller clouds (clusters 2 and 3). These correspond to one wide well and two narrow wells in the potential U(x). A representative 2-D dataset drawn from such a density is shown in fig. 2 (top left).
The SDE (8) with this potential has a few characteristic time scales which determine the structure of its leading eigenfunctions. The slowest one is the mean passage time between cluster 1 and clusters 2 or 3, approximately given by [20]
τ_{1,2} = (2π / √|U″_min U″_max|) e^{U(x_max) − U(x_min)}   (10)
where x_min is the bottom of the deepest well, x_max is the saddle point of U(x), and U″_min, U″_max are the second derivatives at these points. Eq. (10), also known as the Arrhenius or Kramers formula of chemical reaction theory, shows that the mean first passage time is exponential in the barrier height [20]. The corresponding eigenfunction ψ_2 is approximately piecewise constant inside the large well and inside the two smaller wells, with a sharp transition near the saddle point x_max. This eigenfunction easily separates cluster 1 from clusters 2 and 3 (see top center panel in fig. 2). A second characteristic time is τ_{2,3}, the mean first passage time between clusters 2 and 3, also given by a formula similar to (10). If the potential barrier between these two wells is much smaller than between wells 1 and 2, then τ_{2,3} ≪ τ_{1,2}. A third characteristic time is the equilibration time inside cluster 1. To compute it we consider a diffusion process only inside cluster 1, e.g. with an isotropic parabolic potential of the form U(x) = U(x_1) + U″_1 ‖x − x_1‖²/2, where x_1 is the bottom of the well. In 1-D the eigenvalues and eigenfunctions are given by μ_k = (k − 1) U″_1, with ψ_k(x) a polynomial of degree k − 1. The corresponding intra-well relaxation times are given by τ^R_k = 1/μ_{k+1} (k ≥ 1). The key point in our analysis is that if the equilibration time inside the wide well is slower than the mean first passage time between the two smaller wells, τ^R_1 > τ_{2,3}, then the third eigenfunction of L captures the relaxation process inside the large well and is approximately constant inside the two smaller wells. This eigenfunction cannot separate between clusters 2 and 3.
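A quick numeric illustration of the Kramers formula (10): with the curvatures held fixed, the mean first passage time grows as e^{ΔU} in the barrier height ΔU. The two barrier heights below are hypothetical values chosen for the illustration.

```python
import math

def kramers_time(barrier, u_pp_min, u_pp_max):
    """Mean first passage time, eq. (10):
    2*pi / sqrt(|U''_min * U''_max|) * exp(U(x_max) - U(x_min))."""
    return 2.0 * math.pi / math.sqrt(abs(u_pp_min * u_pp_max)) * math.exp(barrier)

# hypothetical wells with identical curvatures but barriers of height 2 and 6
tau_shallow = kramers_time(2.0, 1.0, 1.0)
tau_deep = kramers_time(6.0, 1.0, 1.0)
# escape over the higher barrier is slower by exactly exp(6 - 2)
```

This exponential sensitivity is what makes the inter-well passage times (τ_{1,2}, τ_{2,3}) and the polynomial intra-well times τ^R_k live on entirely different scales, which is the crux of the multiscale failure mode described next.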
Moreover, if τ^R_2 = τ^R_1/2 is still larger than τ_{2,3}, then even the next leading eigenfunction captures the equilibration process inside the wide well; see the plots of ψ_3, ψ_4 in fig. 2 (rows 1, 2). Therefore, even this next eigenfunction is not useful for separating the two small clusters. In the example of fig. 2, only ψ_5 separates these two clusters. This analysis shows that when confronted with clusters of different scales, corresponding to a multiscale landscape potential, standard spectral clustering which uses the first k eigenvectors to find k clusters will fail. We present explicit examples in Section 4 below. The fact that spectral clustering with a single scale σ may fail to correctly cluster multiscale data was already noted in [10, 16]. To overcome this failure, [10] proposed replacing the uniform σ² in eq. (1) with σ(x_i)σ(x_j), where σ(x) is proportional to the local density at x. Our analysis can also provide a probabilistic interpretation of their method. In a nutshell, the effect of this scaling is to speed up the diffusion process at regions of low density, thus changing some of its characteristic times. If the larger cluster has low density, as in the examples in their paper, this approach is successful as it decreases τ^R_1. However, if the large cluster has a high density (comparable to the density of the small clusters), this approach is not able to overcome the limitations of spectral clustering, see fig. 3. Moreover, this approach may also fail in the case of uniform density clusters defined solely by geometry (see fig. 4). 4 Examples We illustrate the theoretical analysis of Section 3 with three examples, all in 2-D. In the first two examples, the n points {x_i} ⊂ R² are random samples from the following mixture of three Gaussians
α_1 N(x_1, σ_1² I) + α_2 N(x_2, σ_2² I) + α_3 N(x_3, σ_3² I)   (11)
with centers x_i, isotropic standard deviations σ_i and weights α_i (Σ_i α_i = 1).
Specifically, we consider one large cluster with σ_1 = 2 centered at x_1 = (−6, 0), and two smaller clusters with σ_2 = σ_3 = 0.5 centered at x_2 = (0, 0) and x_3 = (2, 0). We present the results of both the NJW algorithm [6] and the ZP algorithm [10] for two different weight vectors. Example I: Weights (α_1, α_2, α_3) = (1/3, 1/3, 1/3). In the top left panel of fig. 2, n = 1000 random points from this density clearly show the difference in scales between the large cluster and the smaller ones. The first few eigenvectors of M with a uniform σ = 1 are shown in the first two rows of the figure. The second eigenvector ψ_2 is indeed approximately piecewise constant and easily separates the larger cluster from the smaller ones. However, ψ_3 and ψ_4 are constant on the smaller clusters, capturing the relaxation process in the larger cluster (ψ_3 captures relaxation along the y-direction, hence it is not a function of the x-coordinate). In this example, only ψ_5 can separate the two small clusters. Therefore, as predicted theoretically, the NJW algorithm [6] fails to produce reasonable clusterings for all values of σ. In this example, the density of the large cluster is low, and therefore, as expected and shown in the last row of fig. 2, the ZP algorithm clusters correctly. Example II: Weights (α_1, α_2, α_3) = (0.8, 0.1, 0.1). In this case the density of the large cluster is high, and comparable to that of the small clusters. Indeed, as seen in fig. 3 and predicted theoretically,
Figure 2: A three cluster dataset corresponding to example I (top left), clustering results of NJW and ZP algorithms [6, 10] (center and bottom left, respectively), and various eigenvectors of M vs.
the x coordinate (blue dots in 2nd and 3rd columns). The red dotted line is the potential U(x, 0).
Figure 3: Dataset corresponding to example II and result of ZP algorithm (panels: original data, ZP results with kNN = 7, and ZP eigenvectors ψ_2, ψ_3 vs. x).
the ZP algorithm fails to correctly cluster this data for all values of the parameter kNN in their algorithm. Needless to say, the NJW algorithm also fails to correctly cluster this example. Example III: Consider data {x_i} uniformly sampled from a domain Ω ⊂ R², which consists of three clusters: one a large rectangular container and two smaller disks, all connected by long and narrow tubes (see fig. 4 (left)). In this example the container is so large that the relaxation time inside it is slower than the characteristic time to diffuse between the small disks, hence the NJW algorithm fails to cluster correctly. Since the density is uniform, the ZP algorithm fails as well, fig. 4 (right). Note that spectral clustering with the eigenvectors of the standard graph Laplacian has similar limitations, since the Euclidean distance between these eigenvectors is equal to the mean commute time on the graph [11]. Therefore, these methods may also fail when confronted with multiscale data. 5 Clustering with a Relaxation Time Coherence Measure The analysis and examples of Sections 3 and 4 may suggest the use of more than k eigenvectors in spectral clustering. However, clustering with k-means using 5 eigenvectors on the examples of Section 4 produced unsatisfactory results (not shown). Moreover, since the eigenvectors of the matrix M are orthonormal under a specific weight function, they become increasingly oscillatory. Therefore, it is quite difficult to use them to detect a small cluster, much in analogy to Fourier analysis, where it is difficult to detect a localized bump in a function from its Fourier coefficients.
Figure 4: Three clusters defined solely by geometry, and result of ZP clustering (Example III) (panels: original data, ZP with kNN = 7).
Figure 5: Normalized cut and coherence measure segmentation on a synthetic image (panels: (a) original image, (b) coherence measure, (c) Ncut with 4 clusters).
Based on our analysis, we propose a different approach to graph-based clustering. Given the importance of relaxation times on the graph as an indication of clusters, we propose a novel and principled measure for the coherence of a set of points as belonging to a single cluster. Our coherence measure can be used in conjunction with any clustering algorithm. Specifically, let G = (V, W) be a weighted graph of points and let V = S ∪ (V \ S) be a possible partition (computed by some clustering algorithm). Our aim is to construct a meaningful measure to decide whether to accept or reject this partition. To this end, let λ_2 denote the second largest eigenvalue of the Markov matrix M corresponding to the full graph G. We define τ_V = 1/(1 − λ_2) as the characteristic relaxation time of this graph. Similarly, τ_1 and τ_2 denote the characteristic relaxation times of the two subgraphs corresponding to the partitions S and V \ S. If V is a single coherent cluster, then we expect τ_V = O(τ_1 + τ_2). If, however, V consists of two weakly connected clusters defined by S and V \ S, then τ_1 and τ_2 measure the characteristic relaxation times inside these two clusters while τ_V measures the overall relaxation time. If the two sub-clusters are of comparable size, then τ_V ≫ (τ_1 + τ_2). If, however, one of them is much smaller than the other, then we expect max(τ_1, τ_2)/min(τ_1, τ_2) ≫ 1. Thus, we define a set V as coherent if both τ_V < c_1(τ_1 + τ_2) and max(τ_1, τ_2)/min(τ_1, τ_2) < c_2. In this case, V is not partitioned further. Otherwise, the subgraphs S and V \ S need to be further partitioned and similarly checked for their coherence.
While a theoretical analysis is beyond the scope of this paper, reasonable numbers that worked in practice are c_1 = 1.8 and c_2 = 10. We note that other works have also considered relaxation times for clustering, with different approaches [21, 22]. We now present use of this coherence measure with normalized cut clustering on the third example of Section 4. The first partition of normalized cut on this data with σ = 1 separates between the large container and the two smaller disks. The relaxation times of the full graph and the two subgraphs are (τ_V, τ_1, τ_2) = (1350, 294, 360). These numbers indicate that the full dataset is not coherent, and indeed should be partitioned. Next, we try to partition the large container. Normalized cut partitions the container roughly into two parts with (τ_V, τ_1, τ_2) = (294, 130, 135), which according to our coherence measure means that the big container is a single structure that should not be split. Finally, normalized cut on the two small disks correctly separates them, giving (τ_V, τ_1, τ_2) = (360, 18, 28), which indicates that indeed the two disks should be split. Further analysis of each of the single disks by our measure shows that each is a coherent cluster. Thus, combination of our coherence measure with normalized cut not only clusters correctly, but also automatically finds the correct number of clusters, regardless of cluster scale. Similar results are obtained for the other examples in this paper. Finally, our analysis also applies to image segmentation. In fig. 5(a) a synthetic image is shown. The segmentation results of normalized cuts [24] and of the coherence measure combined with [23] appear in panels (b) and (c). Results on a real image are shown in fig. 6. Each segment is represented by a different color.
Figure 6: Normalized cut and coherence measure segmentation on a real image (panels: original image, coherence measure, Ncut 6 clusters, Ncut 20 clusters).
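The relaxation-time computation behind the (τ_V, τ_1, τ_2) triples above can be sketched directly: τ = 1/(1 − λ_2) for the Markov matrix of the full graph and of each part of a candidate split. The two-blob dataset, σ, and the use of c_1 = 1.8 as the comparison threshold are assumptions for illustration; for a genuinely two-cluster set, τ_V should greatly exceed c_1(τ_1 + τ_2), signalling that the split should be accepted.

```python
import numpy as np

def relaxation_time(X, sigma=1.0):
    """tau = 1/(1 - lambda_2) for the Markov matrix of a Gaussian-kernel graph."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    D = W.sum(axis=1)
    # eigenvalues of D^{-1}W via its symmetric conjugate D^{-1/2} W D^{-1/2}
    lam = np.linalg.eigvalsh(W / np.sqrt(np.outer(D, D)))
    return 1.0 / (1.0 - lam[-2])

rng = np.random.default_rng(3)
blob_a = rng.normal((0.0, 0.0), 0.3, size=(25, 2))    # illustrative data
blob_b = rng.normal((6.0, 0.0), 0.3, size=(25, 2))
X = np.vstack([blob_a, blob_b])

tau_V = relaxation_time(X)                # full graph: very slow relaxation
tau_1 = relaxation_time(blob_a)           # each part alone relaxes quickly
tau_2 = relaxation_time(blob_b)
```

Here the slow mixing across the weak inter-blob connection drives τ_V orders of magnitude above τ_1 + τ_2, so the proposed split would be accepted, while a split through the middle of a single blob would leave τ_V comparable to τ_1 + τ_2.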
With a small number of clusters normalized cut cannot find the small coherent segments in the image, whereas with a large number of clusters, large objects are segmented. Implementing our coherence measure with [23] finds salient clusters at different scales. Acknowledgments: The research of BN was supported by the Israel Science Foundation (grant 432/06), by the Hana and Julius Rosen fund and by the William Z. and Eda Bess Novick Young Scientist fund. References [1] J. Shi and J. Malik. Normalized cuts and image segmentation, PAMI, Vol. 22, 2000. [2] R. Kannan, S. Vempala, A. Vetta, On clusterings: good, bad and spectral, J. ACM, 51(3):497-515, 2004. [3] D. Cheng, R. Kannan, S. Vempala, G. Wang, A divide and merge methodology for clustering, ACM SIGMOD/PODS, 2005. [4] F.R.K. Chung, Spectral Graph Theory, Regional Conference Series in Mathematics Vol. 92, 1997. [5] Y. Weiss, Segmentation using eigenvectors: a unifying view, ICCV 1999. [6] A.Y. Ng, M.I. Jordan, Y. Weiss, On Spectral Clustering: Analysis and an algorithm, NIPS Vol. 14, 2002. [7] N. Cristianini, J. Shawe-Taylor, J. Kandola, Spectral kernel methods for clustering, NIPS, Vol. 14, 2002. [8] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering, NIPS Vol. 14, 2002. [9] S. Yu and J. Shi. Multiclass spectral clustering. ICCV 2003. [10] L. Zelnik-Manor, P. Perona, Self-Tuning spectral clustering, NIPS, 2004. [11] M. Saerens, F. Fouss, L. Yen and P. Dupont, The principal component analysis of a graph and its relationships to spectral clustering. ECML 2004. [12] M. Meila, J. Shi. A random walks view of spectral segmentation, AI and Statistics, 2001. [13] B. Nadler, S. Lafon, R.R. Coifman, I.G. Kevrekidis, Diffusion maps spectral clustering and eigenfunctions of Fokker-Planck operators, NIPS, 2005. [14] S. Lafon, A.B. 
Lee, Diffusion maps and coarse graining: a unified framework for dimensionality reduction, graph partitioning, and data set parameterization, PAMI, 28(9):1393-1403, 2006. [15] D. Harel and Y. Koren, On Clustering Using Random Walks, FST TCS, 2001. [16] I. Fischer, J. Poland, Amplifying the block matrix structure for spectral clustering, Proceedings of the 14th Annual Machine Learning Conference of Belgium and the Netherlands, pp. 21-28, 2005. [17] J. Malik, S. Belongie, T. Leung, J. Shi, Contour and texture analysis for image segmentation, Int. J. Comp. Vis. 43(1):7-27, 2001. [18] E. Sharon, A. Brandt, R. Basri, Segmentation and Boundary Detection Using Multiscale Intensity Measurements, CVPR, 2001. [19] M. Galun, E. Sharon, R. Basri and A. Brandt, Texture Segmentation by Multiscale Aggregation of Filter Responses and Shape Elements, ICCV, 2003. [20] C.W. Gardiner, Handbook of stochastic methods, third edition, Springer NY, 2004. [21] N. Tishby, N. Slonim, Data clustering by Markovian relaxation and the information bottleneck method, NIPS, 2000. [22] C. Chennubhotla, A.J. Jepson, Half-lives of eigenflows for spectral clustering, NIPS, 2002. [23] E. Sharon, A. Brandt, R. Basri, Fast multiscale image segmentation, ICCV, 2000. [24] T. Cour, F. Benezit, J. Shi. Spectral Segmentation with Multiscale Graph Decomposition. CVPR, 2005.
| 2006 | 152 | 2,980 |
Generalized Maximum Margin Clustering and Unsupervised Kernel Learning Hamed Valizadegan Computer Science and Engineering Michigan State University East Lansing, MI 48824 valizade@msu.edu Rong Jin Computer Science and Engineering Michigan State University East Lansing, MI 48824 rongjin@cse.msu.edu Abstract Maximum margin clustering was proposed recently and has shown promising performance in recent studies [1, 2]. It extends the theory of support vector machines to unsupervised learning. Despite its good performance, there are three major problems with maximum margin clustering that limit its practicality for real-world applications. First, it is computationally expensive and difficult to scale to large-scale datasets because the number of parameters in maximum margin clustering is quadratic in the number of examples. Second, it requires data preprocessing to ensure that any clustering boundary will pass through the origin, which makes it unsuitable for clustering unbalanced datasets. Third, it is sensitive to the choice of kernel functions, and requires an external procedure to determine the appropriate values for the parameters of kernel functions. In this paper, we propose a “generalized maximum margin clustering” framework that addresses the above three problems simultaneously. The new framework generalizes the maximum margin clustering algorithm by allowing any clustering boundaries, including those not passing through the origin. It significantly improves the computational efficiency by reducing the number of parameters. Furthermore, the new framework is able to automatically determine the appropriate kernel matrix without any labeled data. Finally, we show a formal connection between maximum margin clustering and spectral clustering. We demonstrate the efficiency of the generalized maximum margin clustering algorithm using both synthetic datasets and real datasets from the UCI repository. 
1 Introduction Data clustering, the unsupervised classification of samples into groups, has been an important research area in machine learning for several decades. A large number of algorithms have been developed for data clustering, including the k-means algorithm [3], mixture models [4], and spectral clustering [5, 6, 7, 8, 9]. More recently, maximum margin clustering [1, 2] was proposed for data clustering and has shown promising performance. The key idea of maximum margin clustering is to extend the theory of support vector machines to unsupervised learning. However, despite its success, the following three major problems with maximum margin clustering have prevented it from being applied to real-world applications: • High computational cost. The number of parameters in maximum margin clustering is quadratic in the number of examples. Thus, it is difficult to scale to large-scale datasets. Figure 1 shows the computational time (in seconds) of the maximum margin clustering algorithm with respect to different numbers of examples. Figure 1: The scalability of the original maximum margin clustering algorithm versus the generalized maximum margin clustering algorithm (computational time in seconds versus number of samples). Figure 2: Clustering error of spectral clustering using the RBF kernel with different kernel widths: (a) data distribution, (b) clustering error versus kernel width. The horizontal axis of Figure 2(b) represents the percentage of the distance range (i.e., the difference between the maximum and the minimum distance) that is used for the kernel width. We 
clearly see that the computational time increases dramatically when we apply the maximum margin clustering algorithm to even modest numbers of examples. • Requiring clustering boundaries to pass through the origin. One important assumption made by the maximum margin clustering in [1] is that the clustering boundaries will pass through the origin. To this end, maximum margin clustering requires centralizing the data points around the origin before clustering the data. It is important to note that centralizing data points at the origin does not guarantee that clustering boundaries go through the origin, particularly when cluster sizes are unbalanced, with one cluster significantly larger than the other. • Sensitive to the choice of kernel functions. Figure 2(b) shows the clustering error of maximum margin clustering for the synthesized data of two overlapped Gaussian clusters (Figure 2(a)) using the RBF kernel with different kernel widths. We see that the performance of maximum margin clustering depends critically on the choice of kernel width. The same problem is also observed in spectral clustering [10]. Although a number of studies [8, 9, 10, 6] are devoted to automatically identifying appropriate kernel matrices in clustering, they are either heuristic approaches or require additional labeled data. In this paper, we propose a “generalized maximum margin clustering” framework that resolves the above three problems simultaneously. In particular, the proposed framework reformulates the problem of maximum margin clustering to include the bias term in the classification boundary, and therefore removes the assumption that clustering boundaries have to pass through the origin. Furthermore, the new formulation reduces the number of parameters to be linear in the number of examples, and therefore significantly reduces the computational cost. 
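The kernel-width convention discussed above (widths expressed as a fraction of the range of pairwise distances) is simple to reproduce. The sketch below is illustrative, not the authors' code, and the set of `fractions` is a hypothetical choice:

```python
import numpy as np

def rbf_kernels(X, fractions=(0.1, 0.2, 0.5, 1.0)):
    """RBF kernels K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)), with sigma
    set to a fraction of the range of pairwise distances between samples."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    off = D[~np.eye(len(X), dtype=bool)]          # off-diagonal distances
    d_range = off.max() - off.min()               # distance range
    return {f: np.exp(-(D ** 2) / (2.0 * (f * d_range) ** 2)) for f in fractions}
```

Sweeping the fraction and inspecting the clustering error reproduces the sensitivity illustrated in Figure 2(b).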
Finally, it is equipped with the capability of unsupervised kernel learning, and is therefore able to determine the appropriate kernel matrix and clustering memberships simultaneously. More interestingly, we will show that spectral clustering, such as the normalized cut algorithm, can be viewed as a special case of the generalized maximum margin clustering. The remainder of the paper is organized as follows: Section 2 reviews the work of maximum margin clustering and kernel learning. Section 3 presents the framework of generalized maximum margin clustering. Our empirical studies are presented in Section 4. Section 5 concludes this work. 2 Related Work The key idea of maximum margin clustering is to extend the theory of support vector machines to unsupervised learning. Given the training examples D = (x1, x2, . . . , xn) and their class labels y = (y1, y2, . . . , yn) ∈ {−1, +1}^n, the dual problem of the support vector machine can be written as:
$$\max_{\alpha \in \mathbb{R}^n} \ \alpha^\top e - \frac{1}{2}\alpha^\top \operatorname{diag}(y)\, K \operatorname{diag}(y)\, \alpha \qquad \text{s.t.} \quad 0 \le \alpha \le C, \ \alpha^\top y = 0 \qquad (1)$$
where K ∈ R^{n×n} is the kernel matrix and diag(y) stands for the diagonal matrix that uses the vector y as its diagonal elements. To apply the above formulation to unsupervised learning, the maximum margin clustering approach relaxes the class labels y to continuous variables, and searches for both y and α that maximize the classification margin. This leads to the following optimization problem:
$$\min_{y,\lambda,\nu,\delta} \ t \qquad \text{s.t.} \quad \begin{pmatrix} (yy^\top) \circ K & e + \nu - \delta + \lambda y \\ (e + \nu - \delta + \lambda y)^\top & t - 2C\delta^\top e \end{pmatrix} \succeq 0, \quad \nu \ge 0, \ \delta \ge 0$$
where ◦ stands for the element-wise product between two matrices. To convert the above problem into a convex programming problem, the authors of [1] make two important relaxations. The first one relaxes yy^⊤ into a positive semi-definite (PSD) matrix M ⪰ 0 whose diagonal elements are set to be 1. 
The second relaxation sets λ = 0, which is equivalent to assuming that there is no bias term b in the expression of the classification boundaries, or in other words, that classification boundaries have to pass through the origin of the data. These two assumptions simplify the above optimization problem as follows:
$$\min_{M,\nu,\delta} \ t \qquad \text{s.t.} \quad \begin{pmatrix} M \circ K & e + \nu - \delta \\ (e + \nu - \delta)^\top & t - 2C\delta^\top e \end{pmatrix} \succeq 0, \quad \nu \ge 0, \ \delta \ge 0, \ M \succeq 0 \qquad (2)$$
Finally, a few additional constraints on M are added to the above optimization problem to prevent skewed clustering sizes [1]. As a consequence of these two relaxations, the number of parameters is increased from n to n^2, which significantly increases the computational cost. Furthermore, by setting λ = 0, the maximum margin clustering algorithm requires clustering boundaries to pass through the origin of the data, which is unsuitable for clustering data with unbalanced clusters. Another important problem with the above maximum margin clustering is the difficulty in determining the appropriate kernel similarity matrix K. Although many kernel-based clustering algorithms set the kernel parameters manually, there are several studies devoted to the automatic selection of kernel functions, in particular the kernel width for the RBF kernel, i.e., σ in $\exp\left(-\frac{\|x_i - x_j\|_2^2}{2\sigma^2}\right)$. Shi et al. [8] recommended choosing the kernel width as 10% to 20% of the range of the distance between samples. However, in our experiment, we found that this is not always a good choice, and in many situations it produces poor results. Ng et al. [9] chose the kernel width which provides the least distorted clusters by running the same clustering algorithm several times for each kernel width. Although this approach seems to generate good results, it requires running separate experiments for each kernel width, and therefore could be computationally intensive. Zelnik-Manor et al. in [10] proposed a self-tuning spectral clustering algorithm that computes a different local kernel width for each data point xi. 
In particular, the local kernel width for each xi is computed as the distance of xi to its kth nearest neighbor. Although the empirical study seems to show the effectiveness of this approach, it is unclear how to find the optimal k in computing the local kernel width. As we will see in the experiment section, the clustering accuracy depends heavily on the choice of k. Finally, we will briefly overview the existing work on kernel learning. Most previous work focuses on supervised kernel learning. The representative approaches in this category include kernel alignment [11, 12], semi-definite programming [13], and spectral graph partitioning [6]. Unlike these approaches, the proposed framework is designed for unsupervised kernel learning. 3 Generalized Maximum Margin Clustering and Unsupervised Kernel Learning We will first present the proposed clustering algorithm for the hard margin, followed by the extension to the soft margin and unsupervised kernel learning. 3.1 Hard Margin In the case of hard margin, the dual problem of SVM is almost identical to the problem in Eqn. (1) except that the parameter α does not have the upper bound C. Following [13], we further convert the problem in (1) into its dual form:
$$\min_{\nu, y, \lambda} \ \frac{1}{2}(e + \nu + \lambda y)^\top \operatorname{diag}(y)\, K^{-1} \operatorname{diag}(y)\, (e + \nu + \lambda y) \qquad \text{s.t.} \quad \nu \ge 0, \ y \in \{+1, -1\}^n \qquad (3)$$
where e is a vector with all its elements being one. Unlike the treatment in [13], which rewrites the above problem as a semi-definite programming problem, we introduce a variable z defined as follows: z = diag(y)(e + ν). Given that ν ≥ 0, the above expression for z is essentially equivalent to the constraint |z_i| ≥ 1, or z_i^2 ≥ 1, for i = 1, 2, . . . , n. Then, the optimization problem in (3) is rewritten as follows:
$$\min_{z, \lambda} \ \frac{1}{2}(z + \lambda e)^\top K^{-1} (z + \lambda e) \qquad \text{s.t.} \quad z_i^2 \ge 1, \ i = 1, 2, \ldots, n \qquad (4)$$
Note that the above problem may not have unique solutions for z and λ due to the translation invariance of the objective function. 
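This degeneracy is easy to exhibit numerically. The sketch below also shows how the penalty term $C_e(z^\top e)^2$ introduced next removes it; a random positive definite matrix stands in for $K^{-1}$, and the constants are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
K_inv = A @ A.T + n * np.eye(n)      # positive definite stand-in for K^{-1}
e = np.ones(n)
z = rng.standard_normal(n)
lam, eps, Ce = 0.7, 1.3, 1e4

def J0(z, lam):                      # objective of (4)
    v = z + lam * e
    return 0.5 * v @ K_inv @ v

def J(z, lam):                       # objective with the penalty term added
    return J0(z, lam) + Ce * (z @ e) ** 2

# shifting z by eps*e and lam by -eps leaves (4) unchanged ...
assert np.isclose(J0(z, lam), J0(z + eps * e, lam - eps))
# ... while the penalized objective distinguishes the two solutions
assert not np.isclose(J(z, lam), J(z + eps * e, lam - eps))
```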
More specifically, given an optimal solution z and λ, we may be able to construct another solution z′ and λ′ such that z′ = z + ϵe, λ′ = λ − ϵ. Evidently, both solutions result in the same value for the objective function in (4). Furthermore, with an appropriately chosen ϵ, the new solution z′ and λ′ will be able to satisfy the constraint z_i^2 ≥ 1. Thus, (z′, λ′) is another optimal solution of (4). This is in fact related to the problem in SVM where the bias term b may not be unique [14]. To remove the translation invariance from the objective function, we introduce an additional term $C_e (z^\top e)^2$ into the objective function, i.e.,
$$\min_{z, \lambda} \ \frac{1}{2}(z + \lambda e)^\top K^{-1} (z + \lambda e) + C_e (z^\top e)^2 \qquad \text{s.t.} \quad z_i^2 \ge 1, \ i = 1, 2, \ldots, n \qquad (5)$$
where the constant C_e weights the importance of the penalty term against the original objective. It is set to 10,000 in our experiments. For the simplicity of our expression, we further define w = (z; λ) and P = (I_n, e). Then, the problem in (5) becomes
$$\min_{w \in \mathbb{R}^{n+1}} \ w^\top P^\top K^{-1} P w + C_e (e_0^\top w)^2 \qquad \text{s.t.} \quad w_i^2 \ge 1, \ i = 1, 2, \ldots, n \qquad (6)$$
where e_0 is a vector with all its elements being 1 except its last element, which is zero. We then construct the Lagrangian as follows:
$$L(w, \gamma) = w^\top P^\top K^{-1} P w + C_e (e_0^\top w)^2 - \sum_{i=1}^n \gamma_i (w^\top I^i_{n+1} w - 1) = w^\top \Big( P^\top K^{-1} P + C_e e_0 e_0^\top - \sum_{i=1}^n \gamma_i I^i_{n+1} \Big) w + \sum_{i=1}^n \gamma_i$$
where I^i_{n+1} is an (n + 1) × (n + 1) matrix with all elements being zero except the ith diagonal element, which is 1. Hence, the dual problem of (6) is
$$\max_{\gamma \in \mathbb{R}^n} \ \sum_{i=1}^n \gamma_i \qquad \text{s.t.} \quad P^\top K^{-1} P + C_e e_0 e_0^\top - \sum_{i=1}^n \gamma_i I^i_{n+1} \succeq 0, \quad \gamma_i \ge 0, \ i = 1, 2, \ldots, n \qquad (7)$$
Finally, the solution w can be computed using the KKT condition, i.e.,
$$\Big( P^\top K^{-1} P + C_e e_0 e_0^\top - \sum_{i=1}^n \gamma_i I^i_{n+1} \Big) w = 0_{n+1}$$
In other words, the solution w is proportional to the eigenvector of the matrix $P^\top K^{-1} P + C_e e_0 e_0^\top - \sum_{i=1}^n \gamma_i I^i_{n+1}$ for the zero eigenvalue. Since w_i = (1 + ν_i) y_i, i = 1, 2, . . . , n, and ν_i ≥ 0, the class labels {y_i} can be inferred directly from the sign of {w_i}. 
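The sign-based label recovery just described can be sketched as follows: given a dual solution γ for (7), w is taken as the eigenvector of the smallest eigenvalue of the constraint matrix (its null vector at optimality), and the labels are the signs of its first n entries. This is an illustrative sketch only; the γ supplied in practice would come from an SDP solver, not the hypothetical input used here:

```python
import numpy as np

def labels_from_dual(K_inv, gamma, Ce=1e4):
    """Recover w from B w = 0 with
    B = P^T K^{-1} P + Ce*e0*e0^T - sum_i gamma_i I^i_{n+1};
    numerically, take the eigenvector of B's smallest eigenvalue.
    Since w_i = (1 + nu_i) y_i, labels are the signs of w_1..w_n."""
    n = K_inv.shape[0]
    P = np.hstack([np.eye(n), np.ones((n, 1))])   # P = (I_n, e), n x (n+1)
    e0 = np.append(np.ones(n), 0.0)               # e0 = (1, ..., 1, 0)
    B = P.T @ K_inv @ P + Ce * np.outer(e0, e0) - np.diag(np.append(gamma, 0.0))
    vals, vecs = np.linalg.eigh((B + B.T) / 2.0)  # symmetrize for stability
    w = vecs[:, np.argmin(vals)]
    return np.sign(w[:n]), w
```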
Remark I It is important to realize that the problem in (5) is non-convex due to the non-convex constraint w_i^2 ≥ 1. Thus, the optimal solution found by the dual problem in (7) is not necessarily the optimal solution for the primal problem in (5). Our hope is that although the solution found by the dual problem is not optimal for the primal problem, it is still a good solution for the primal problem in (5). This is similar to the SDP relaxation made by the maximum margin clustering algorithm in (2), which relaxes a non-convex programming problem into a convex one. However, unlike the relaxation made in (2), which increases the number of variables from n to n^2, the new formulation of maximum margin clustering does not increase the number of parameters (i.e., γ), and is therefore computationally more efficient. This is shown in Figure 1, in which the computational time of generalized maximum margin clustering increases much more slowly than that of the maximum margin algorithm. Remark II To avoid the high computational cost of estimating K^{-1}, we replace K^{-1} with the normalized graph Laplacian L(K) [15], which is defined as $L(K) = I - D^{-1/2} K D^{-1/2}$, where D is a diagonal matrix whose diagonal elements are computed as $D_{i,i} = \sum_{j=1}^n K_{i,j}$, i = 1, 2, . . . , n. This is equivalent to defining a kernel matrix $\tilde{K} = L(K)^\dagger$, where † stands for the pseudo-inverse operator. More interestingly, we have the following theorem showing the relationship between generalized maximum margin clustering and the normalized cut. Theorem 1. The normalized cut algorithm is a special case of the generalized maximum margin clustering in (7) if the following conditions hold: (1) K^{-1} is set to be the normalized Laplacian $\bar{L}(K)$, (2) all the γs are enforced to be the same, i.e., γ_i = γ_0, i = 1, 2, . . . , n, and (3) C_e ≥ 1. Proof sketch: Given conditions 1 to 3 in the theorem, the objective function in (7) becomes: $\max_{\gamma \ge 0} \gamma$ s.t. $\bar{L}(K) \succeq \gamma I_n$, and the solution for this problem is the largest eigenvector of $\bar{L}(K)$. 
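The substitution of Remark II is a one-liner; below is a minimal sketch of the normalized graph Laplacian under its standard definition:

```python
import numpy as np

def normalized_laplacian(K):
    """L(K) = I - D^{-1/2} K D^{-1/2}, with D_ii = sum_j K_ij.
    Used in place of K^{-1} to avoid an explicit matrix inverse."""
    d = K.sum(axis=1)
    Dinv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(K)) - Dinv_sqrt @ K @ Dinv_sqrt
```

For a symmetric non-negative kernel, this matrix is positive semi-definite with $D^{1/2}e$ in its null space, which is what makes it a usable surrogate for $K^{-1}$ in (7).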
3.2 Soft Margin We extend the formulation in (7) to the case of soft margin by considering the following problem:
$$\min_{\nu, y, \lambda, \delta} \ \frac{1}{2}(e + \nu - \delta + \lambda y)^\top \operatorname{diag}(y)\, K^{-1} \operatorname{diag}(y)\, (e + \nu - \delta + \lambda y) + C_\delta \sum_{i=1}^n \delta_i^2 \qquad \text{s.t.} \quad \nu \ge 0, \ \delta \ge 0, \ y \in \{+1, -1\}^n \qquad (8)$$
where C_δ weights the importance of the clustering errors against the clustering margin. Similar to the previous derivation, we introduce the slack variable z and simplify the above problem as follows:
$$\min_{z, \delta, \lambda} \ \frac{1}{2}(z + \lambda e)^\top K^{-1} (z + \lambda e) + C_e (z^\top e)^2 + C_\delta \sum_{i=1}^n \delta_i^2 \qquad \text{s.t.} \quad (z_i + \delta_i)^2 \ge 1, \ \delta_i \ge 0, \ i = 1, 2, \ldots, n \qquad (9)$$
By approximating $(z_i + \delta_i)^2$ as $z_i^2 + \delta_i^2$, we obtain the dual form of the above problem:
$$\max_{\gamma \in \mathbb{R}^n} \ \sum_{i=1}^n \gamma_i \qquad \text{s.t.} \quad P^\top K^{-1} P + C_e e_0 e_0^\top - \sum_{i=1}^n \gamma_i I^i_{n+1} \succeq 0, \quad 0 \le \gamma_i \le C_\delta, \ i = 1, 2, \ldots, n \qquad (10)$$
The main difference between the above formulation and the formulation in (7) is the introduction of the upper bound C_δ for γ in the case of soft margin. In the experiments, we set the parameter C_δ to 100,000, a very large value. 3.3 Unsupervised Kernel Learning As already pointed out, the performance of many clustering algorithms depends on the right choice of the kernel similarity matrix. To address this problem, we extend the formulation in (10) by including a kernel learning mechanism. In particular, we assume that a set of m kernel similarity matrices K_1, K_2, . . . , K_m is available. Our goal is to identify the linear combination of kernel matrices, i.e., $K = \sum_{i=1}^m \beta_i K_i$, that leads to the optimal clustering accuracy. More specifically, we need to solve the following optimization problem:
$$\max_{\gamma, \beta} \ \sum_{i=1}^n \gamma_i \qquad \text{s.t.} \quad P^\top \Big( \sum_{i=1}^m \beta_i K_i \Big)^{-1} P + C_e e_0 e_0^\top - \sum_{i=1}^n \gamma_i I^i_{n+1} \succeq 0, \quad 0 \le \gamma_i \le C_\delta, \ i = 1, \ldots, n, \quad \sum_{i=1}^m \beta_i = 1, \ \beta_i \ge 0, \ i = 1, \ldots, m \qquad (11)$$
Unfortunately, it is difficult to solve the above problem due to the complexity introduced by $(\sum_{i=1}^m \beta_i K_i)^{-1}$. Hence, we consider an alternative problem to the above one. We first introduce a set of normalized graph Laplacians $\bar{L}_1, \bar{L}_2, \ldots, \bar{L}_m$. 
Each Laplacian $\bar{L}_i$ is constructed from the kernel similarity matrix K_i. We then define the inverse of the combined matrix as $K^{-1} = \sum_{i=1}^m \beta_i \bar{L}_i$. Then, we have the following optimization problem:
$$\max_{\gamma, \beta} \ \sum_{i=1}^n \gamma_i \qquad \text{s.t.} \quad \sum_{i=1}^m \beta_i P^\top \bar{L}_i P + C_e e_0 e_0^\top - \sum_{i=1}^n \gamma_i I^i_{n+1} \succeq 0, \quad 0 \le \gamma_i \le C_\delta, \ i = 1, \ldots, n, \quad \sum_{i=1}^m \beta_i = 1, \ \beta_i \ge 0, \ i = 1, \ldots, m \qquad (12)$$
Figure 3: Data distribution of the three synthesized datasets: (a) overlapped Gaussians, (b) two circles, (c) two connected circles. 
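The combined operator $K^{-1} = \sum_i \beta_i \bar{L}_i$ used in (12) is straightforward to assemble; the sketch below is illustrative, with the kernels and simplex weights as hypothetical inputs:

```python
import numpy as np

def combined_laplacian(kernels, beta):
    """K^{-1} = sum_i beta_i * Lbar_i, where Lbar_i is the normalized
    Laplacian of kernel K_i and beta lies on the probability simplex."""
    beta = np.asarray(beta, dtype=float)
    assert np.all(beta >= 0) and np.isclose(beta.sum(), 1.0)
    L = np.zeros_like(kernels[0], dtype=float)
    for b, K in zip(beta, kernels):
        d = K.sum(axis=1)
        Dm = np.diag(1.0 / np.sqrt(d))
        L += b * (np.eye(len(K)) - Dm @ K @ Dm)   # Lbar_i = I - D^{-1/2} K D^{-1/2}
    return L
```

Because each $\bar{L}_i$ is positive semi-definite and the combination is convex, the result stays positive semi-definite, which keeps the constraint in (12) linear in $\beta$.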
It is clear that generalized maximum margin clustering algorithm achieve similar or better performance than both maximum margin clustering and normlized cut for most datasets when they are given the optimal kernel matrices. Note that the results of maximum margin clustering are reported for a subset of samples(including 80 instances) in UCI datasets due to the out of memory problem. Table 1: Clustering error (%) of normalized cut (NC), maximum margin clustering (MMC), generalized maximum margin clustering (GMMC) and self-tuning spectral clustering (ST). Dataset Optimal Kernel Width Unsupervised Kernel Learning NC MMC GMMC GMMC ST (Best k) ST(Worst k) Two Circles 2 0 0 0 0 50 Two Jointed Circles 7 6.25 0 0 1 45 Two Gaussian 1.25 2.5 1.25 3.75 5 7.5 Vote 25 15 9.6 11.90 11 40 Digits 3-8 35 10 5.6 5.6 5 50 Digits 1-7 45 31.25 2.2 3 0 47 Digits 2-7 34 1.25 .5 5.6 1.5 50 Digits 8-9 48 3.75 16 12 9 48 Ionosphere 25 21.25 23.5 27.3 26.5 48 Breast 36.5 38.75 36.1 37 37.5 41.5 In the second experiment, we evaluate the effectiveness of unsupervised kernel learning. Ten kernel matrices are created by using the RBF kernel with the kernel width varied from 10% to 100% of the range of distance between any two examples. We compare the proposed unsupervised kernel learning to the self-tuning spectral clustering algorithm in [10]. One of the problem with the self-tuning spectral clustering algorithm is that its clustering error usually depends on the parameter k, i.e., the number of nearest neighbor used for computing the kernel width. To provide a full picture of the self-tuning spectral clustering, we vary k from 1 and 15 , and calculate both best and worst performance using different k. The last three columns of Table 1 summarizes the clustering errors of generalized maximum margin clustering and self-tuning spectral clustering with both best and worst k. 
First, observe the large gap between the best and worst performance of self-tuning spectral clustering with different choices of k, which implies that this algorithm is sensitive to the parameter k. Second, for most datasets, generalized maximum margin clustering achieves performance similar to self-tuning spectral clustering with the best k. Furthermore, for a number of datasets, the unsupervised kernel learning method achieves performance close to that obtained using the optimal kernel width. Both results indicate that the proposed algorithm for unsupervised kernel learning is effective in identifying appropriate kernels. 5 Conclusion In this paper, we proposed a framework for generalized maximum margin clustering. Compared to the existing algorithm for maximum margin clustering, the new framework has three advantages: 1) it reduces the number of parameters from n^2 to n, and therefore has a significantly lower computational cost, 2) it allows for clustering boundaries that do not pass through the origin, and 3) it can automatically identify the appropriate kernel similarity matrix through unsupervised kernel learning. Our empirical study with three synthetic datasets and four UCI datasets shows the promising performance of our proposed algorithm. References [1] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. In Advances in Neural Information Processing Systems (NIPS) 17, 2004. [2] L. Xu and D. Schuurmans. Unsupervised and semi-supervised multi-class support vector machines. In Proceedings of the 20th National Conference on Artificial Intelligence (AAAI-05), 2005. [3] J. Hartigan and M. Wong. A k-means clustering algorithm. Appl. Statist., 28:100–108, 1979. [4] R. A. Redner and H. F. Walker. Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, 26:195–239, 1984. [5] C. Ding, X. He, H. Zha, M. Gu, and H. Simon. A min-max cut algorithm for graph partitioning and data clustering. In Proc. IEEE Int’l Conf. Data Mining, 2001. 
[6] F. R. Bach and M. I. Jordan. Learning spectral clustering. In Advances in Neural Information Processing Systems 16, 2004. [7] R. Jin, C. Ding, and F. Kang. A probabilistic approach for optimizing spectral clustering. In Advances in Neural Information Processing Systems 18, 2006. [8] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000. [9] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems 14, 2001. [10] L. Zelnik-Manor and P. Perona. Self-tuning spectral clustering. In Advances in Neural Information Processing Systems 17, pages 1601–1608, 2005. [11] N. Cristianini, J. Shawe-Taylor, A. Elisseeff, and J. S. Kandola. On kernel-target alignment. In NIPS, pages 367–373, 2001. [12] X. Zhu, J. Kandola, Z. Ghahramani, and J. Lafferty. Nonparametric transforms of graph kernels for semi-supervised learning. In Advances in Neural Information Processing Systems 17, pages 1641–1648, 2005. [13] G. R. G. Lanckriet, N. Cristianini, P. L. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004. [14] C. J. C. Burges and D. J. Crisp. Uniqueness theorems for kernel methods. Neurocomputing, 55(1-2):187–220, 2003. [15] F. R. K. Chung. Spectral Graph Theory. Amer. Math. Society, 1997.
| 2006 | 153 | 2,981 |
Clustering Under Prior Knowledge with Application to Image Segmentation Mário A. T. Figueiredo Instituto de Telecomunicações Instituto Superior Técnico Technical University of Lisbon Portugal mario.figueiredo@lx.it.pt Dong Seon Cheng, Vittorio Murino Vision, Image Processing, and Sound Laboratory Dipartimento di Informatica University of Verona Italy cheng@sci.univr.it, vittorio.murino@univr.it Abstract This paper proposes a new approach to model-based clustering under prior knowledge. The proposed formulation can be interpreted from two different angles: as penalized logistic regression, where the class labels are only indirectly observed (via the probability density of each class); or as finite mixture learning under a grouping prior. To estimate the parameters of the proposed model, we derive a (generalized) EM algorithm with a closed-form E-step, in contrast with other recent approaches to semi-supervised probabilistic clustering which require Gibbs sampling or suboptimal shortcuts. We show that our approach is ideally suited for image segmentation: it avoids the combinatorial nature of Markov random field priors, and opens the door to more sophisticated spatial priors (e.g., wavelet-based) in a simple and computationally efficient way. Finally, we extend our formulation to work in unsupervised, semi-supervised, or discriminative modes. 1 Introduction Most approaches to semi-supervised learning (SSL) see the problem from one of two (dual) perspectives: supervised classification with additional unlabelled data (see [20] for a recent survey); or clustering with prior information or constraints (e.g., [4, 10, 11, 15, 17]). The second perspective, usually termed semi-supervised clustering (SSC), is usually adopted when labels are totally absent, but there are (usually pair-wise) relations that one wishes to enforce or encourage. Most SSC techniques work by incorporating the constraints (or prior) into classical algorithms such as K-means or EM for mixtures. 
The semi-supervision may be hard (i.e., grouping constraints [15, 17]), or have the form of a prior under which probabilistic clustering is performed [4, 11]. The latter is clearly the most natural formulation for cases where one wishes to encourage, not enforce, certain relations; an obvious example is image segmentation, seen as clustering under a spatial prior, where neighboring sites should be encouraged, but not constrained, to belong to the same cluster/segment. However, the previous EM-type algorithms for this class of methods have a major drawback: the presence of the prior makes the E-step non-trivial, forcing the use of expensive Gibbs sampling [11] or suboptimal methods such as the iterated conditional modes algorithm [4]. In this paper, we introduce a new approach to mixture-based SSC, leading to a simple, fully deterministic, generalized EM (GEM) algorithm. The keystone is the formulation of SSC as a penalized logistic regression problem, where the labels are only indirectly observed. The linearity of the resulting complete log-likelihood, w.r.t. the missing group labels, underlies the simplicity of the resulting GEM algorithm. When applied to image segmentation, our method allows using spatial priors which are typical of image estimation problems (e.g., restoration/denoising), such as Gaussian fields or wavelet-based priors. Under these priors, the M-step of our GEM algorithm reduces to a simple image denoising procedure, for which there are several extremely efficient algorithms. 2 Formulation We start from the standard formulation of finite mixture models: X = {x1, ..., xn} is an observed data set, where each $x_i \in \mathbb{R}^d$ was generated (independently) according to one of a set of K probability (density or mass) functions $\{p(\cdot \mid \phi^{(1)}), \ldots, p(\cdot \mid \phi^{(K)})\}$. In image segmentation, each xi is a pixel value (gray scale, d = 1; color, d = 3) or a vector of local (e.g., texture) features. 
Associated with X, there is a hidden label set Y = {y1, ..., yn}, where $y_i = [y_i^{(1)}, \ldots, y_i^{(K)}]^\top \in \{0,1\}^K$, with $y_i^{(k)} = 1$ if and only if xi was generated by source k (the so-called “1-of-K” binary encoding). Thus,
$$p(X \mid Y, \phi) = \prod_{k=1}^K \ \prod_{i:\, y_i^{(k)} = 1} p(x_i \mid \phi^{(k)}) = \prod_{i=1}^n \prod_{k=1}^K \left[ p(x_i \mid \phi^{(k)}) \right]^{y_i^{(k)}}, \qquad (1)$$
where $\phi = (\phi^{(1)}, \ldots, \phi^{(K)})$ is the set of parameters of the generative models of the classes. In standard mixture models, all the yi are assumed to be independent and identically distributed samples following a multinomial distribution with probabilities $\{\eta^{(1)}, \ldots, \eta^{(K)}\}$, i.e., $P(Y) = \prod_i \prod_k (\eta^{(k)})^{y_i^{(k)}}$. This is the part of standard mixture models that has to be modified in order to insert grouping constraints [15] or a grouping prior p(Y) [4, 11]. However, this prior destroys the simplicity of the standard E-step for finite mixtures, which is critically based on the independence assumption. We follow a different route to avoid that roadblock. Let the hidden labels Y = {y1, ..., yn} depend on a new set of variables Z = {z1, ..., zn}, where each $z_i = [z_i^{(1)}, \ldots, z_i^{(K)}]^\top \in \mathbb{R}^K$, following a multinomial logistic model [5]:
$$P(Y \mid Z) = \prod_{i=1}^n \prod_{k=1}^K P[y_i^{(k)} = 1 \mid z_i]^{\,y_i^{(k)}}, \quad \text{where} \quad P[y_i^{(k)} = 1 \mid z_i] = \frac{e^{z_i^{(k)}}}{\sum_{l=1}^K e^{z_i^{(l)}}}. \qquad (2)$$
Due to the normalization, we can set (w.l.o.g.) $z_i^{(K)} = 0$, for i = 1, ..., n [5]. We are thus left with n(K − 1) real variables, i.e., $Z = \{z^{(1)}, \ldots, z^{(K-1)}\}$, where $z^{(k)} = [z_1^{(k)}, \ldots, z_n^{(k)}]^\top$; of course, Z can be seen as an n × (K − 1) matrix, where $z^{(k)}$ is the k-th column and $z_i$ is the i-th row. With this formulation, certain grouping preferences may be expressed by a prior p(Z). 
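The multinomial logistic link in (2), with the convention $z_i^{(K)} = 0$, is just a row-wise softmax over Z with a zero column appended; a minimal illustrative sketch:

```python
import numpy as np

def class_probs(Z):
    """P[y_i^(k) = 1 | z_i] from eq. (2): softmax over each row of the
    n x (K-1) matrix Z after appending the fixed column z^(K) = 0."""
    Zfull = np.hstack([Z, np.zeros((Z.shape[0], 1))])
    Zfull = Zfull - Zfull.max(axis=1, keepdims=True)   # numerical stability
    E = np.exp(Zfull)
    return E / E.sum(axis=1, keepdims=True)
```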
For example, preferred pair-wise relations can be easily embodied in a Gaussian prior p(Z) ∝ K−1 Y k=1 exp −1 4 n X i=1 n X j=1 Ai,j(z(k) i −z(k) j )2 = K−1 Y k=1 exp −1 2(z(k))T ∆z(k) , (3) where A is a matrix (with a null diagonal) encoding pair-wise preferences (Ai,j > 0 expresses preference, with strength proportional to Ai,j, for having points i and j in the same cluster) and ∆ is the well-known graph-Laplacian matrix [20], ∆ = diag{ Pn j=1 A1,j, ..., Pn j=1 An,j } −A. (4) For image segmentation, each z(k) is an image with real-valued elements and a natural choice for A is to have Ai,j = λ, if i and j are neighbors, and zero otherwise. Assuming periodic boundary conditions for the neighborhood system, ∆ is a block-circulant matrix with circulant blocks [2]. However, as shown below, other more sophisticated priors (such as wavelet-based priors) can also be used at no additional computational cost [1]. 3 Model Estimation 3.1 Marginal Maximum A Posteriori and the GEM Algorithm Based on the formulation presented in the previous section, SSC is performed by estimating Z and φ, seeing Y as missing data. The marginal maximum a posteriori estimate is obtained by marginalizing out the hidden labels (over all the possible label configurations), b Z, bφ = arg max Z,φ X Y p(X, Y, Z|φ) = arg max Z,φ X Y p(X|Y, φ) P(Y|Z) p(Z), (5) where we’re assuming a flat prior for φ. One of the key advantages of this approach is that (5) is a continuous (not combinatorial) optimization problem. This is in contrast to Markov random field approaches to image segmentation, which lead to hard combinatorial problems, since they perform optimization directly with respect to the (discrete) label variables Y. Finally, notice that once in possession of an estimate b Z, one may compute P(Y| b Z) which gives the probability that each data point belongs to each class. By finding arg maxk P[y(k) i = 1|zi], for every i, one may obtain a hard clustering/segmentation.
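The graph-Laplacian prior of Eqs. (3)-(4) is simple to construct explicitly. The sketch below (illustrative names; non-periodic boundaries for simplicity, whereas the FFT solution of Section 3.5 additionally assumes periodic ones) also makes the identity (z(k))T ∆ z(k) = ½ Σi,j Ai,j (zi − zj)² easy to check numerically:

```python
import numpy as np

def grid_laplacian(h, w, lam=1.0):
    """Graph Laplacian (Eq. 4) for a first-order (4-neighbor) grid of
    h x w pixels, the natural choice of A for image segmentation
    suggested in the text: A[i, j] = lam for neighboring pixels."""
    n = h * w
    A = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c
            for dr, dc in ((1, 0), (0, 1)):   # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    j = rr * w + cc
                    A[i, j] = A[j, i] = lam
    return np.diag(A.sum(axis=1)) - A         # Delta = diag(degrees) - A
```

Because the quadratic form z.T @ L @ z is a nonnegative sum of pairwise penalties, the prior (3) is a proper (degenerate) Gaussian that rewards smooth label fields.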
We handle (5) with a generalized EM (GEM) algorithm [13], i.e., by applying the following iterative procedure (until some convergence criterion is satisfied): E-step: Compute the conditional expectation of the complete log-posterior, given the current estimates ( b Z, bφ) and the observations X: Q(Z, φ| b Z, bφ) = EY [ log p(Y, Z, φ|X) | b Z, bφ, X ]. (6) M-step: Update the estimate: ( b Z, bφ) ←( b Znew, bφnew), with new values such that Q( b Znew, bφnew| b Z, bφ) ≥Q( b Z, bφ| b Z, bφ). (7) It is well known that, under mild conditions, GEM algorithms converge to a local maximum of the marginal log-posterior [18]. 3.2 E-step The complete log-posterior is log p(Y, Z, φ|X) .= log p(X|Y, φ) + log P(Y|Z) + log p(Z) .= n X i=1 K X k=1 y(k) i log p(xi|φ(k)) + n X i=1 [ K X k=1 y(k) i z(k) i −log K X k=1 ez(k) i ] + log p(Z) (8) where .= stands for “equal up to an additive constant”. The key observation is that this function is linear w.r.t. the hidden variables y(k) i . Consequently, the E-step reduces to computing their conditional expectations, which are then plugged into (8). As in standard mixtures, each missing y(k) i is binary, thus its expectation (denoted by(k) i ) equals its posterior probability of being equal to one, easily obtained via Bayes law: by(k) i ≡E[y(k) i | b Z, bφ, X] = P[y(k) i = 1|bzi, bφ, xi] = p(xi|bφ (k)) P[y(k) i = 1|bzi] PK j=1 p(xi|bφ (j)) P[y(j) i = 1|bzi] . (9) Notice that this is the same as the E-step for a standard finite mixture, where the probabilities P[y(k) i = 1|bzi] (given by (2)) play the role of the probabilities of the classes/components. Finally, the Q function is obtained by plugging the expectations by(k) i into (8). 3.3 M-Step It’s clear from (8) that the maximization w.r.t. φ can be performed separately w.r.t. each φ(k), bφ (k) new = arg max φ (k) n X i=1 by(k) i log p(xi|φ(k)). (10) This is the familiar weighted maximum likelihood criterion, exactly as it appears in EM for standard mixtures.
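In code, the E-step (9) is a one-liner: multiply the likelihoods by the local prior probabilities and renormalize each row. A sketch with illustrative names, where lik[i, k] = p(xi|φ(k)) and prior[i, k] = P[y(k) i = 1|zi]:

```python
import numpy as np

def e_step(lik, prior):
    """E-step of Eq. (9): elementwise product of the n x K likelihood
    and local-prior matrices, renormalized so each row sums to one;
    returns the expected labels (responsibilities) yhat[i, k]."""
    num = lik * prior
    return num / num.sum(axis=1, keepdims=True)
```

With a spatially flat prior this reduces to the usual mixture-model E-step, exactly as remarked in the text.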
The explicit form of this update depends on the choice of p(·|φ(k)); e.g., this step can be easily applied to any finite mixture of exponential family densities [3]. In supervised image segmentation, these parameters are known (e.g., previously estimated from training data) and thus it’s not necessary to estimate them; the M-step reduces to the estimation of Z. In unsupervised image segmentation, φ is unknown and (10) will have to be applied. To update the estimate of Z, we need to maximize (or at least improve, see (7)) L(Z| bY) ≡ n X i=1 " K X k=1 by(k) i z(k) i −log K X k=1 ez(k) i # + log p(Z). (11) Without the prior, this would be a simple logistic regression (LR) problem, with an identity design matrix [5]; however, instead of the usual hard labels y(k) i ∈{0, 1}, we have “soft” labels by(k) i ∈[0, 1]. Arguably, the two standard approaches to maximum likelihood LR are the Newton-Raphson algorithm (a.k.a. iteratively reweighted least squares – IRLS [7]) and the minorize-maximize (MM) approach (formerly known as bound optimization) [5, 9]. We will show below that the MM approach can be easily modified to accommodate the presence of a prior. Let’s briefly review the MM approach for maximizing a twice differentiable concave function E(θ) with bounded Hessian [5, 9]. Let the Hessian H(θ) of E(θ) be bounded below by −B (that is, H(θ) ⪰−B, in the matrix sense, meaning that H(θ)+B is positive definite), where B is a positive definite matrix. It’s trivial to show that E(θ) −R(θ, bθ) has a minimum at θ = bθ, where R(θ, bθ) = −1 2 θ −bθ −B−1g(bθ) T B θ −bθ −B−1g(bθ) , (12) with g(bθ) denoting the gradient of E(θ) at bθ. Thus, the iteration bθnew = arg max θ R(θ, bθ) = bθ + B−1g(bθ) (13) is guaranteed to monotonically improve E(θ), i.e., E(bθnew) ≥E(bθ). 
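A one-dimensional toy run makes the monotonicity of the MM iteration (13) concrete: for E(θ) = −(θ − 3)²/2 the Hessian is the constant −1, so any B ≥ 1 is a valid bound and the update θ ← θ + g(θ)/B provably never decreases E (illustrative sketch, not from the paper):

```python
# E(theta) = -(theta - 3)^2 / 2 has constant Hessian -1; taking B = 2
# satisfies H >= -B, so the MM update (13) must monotonically improve E.
def E(theta):
    return -0.5 * (theta - 3.0) ** 2

def grad(theta):
    return -(theta - 3.0)

B = 2.0
theta, values = 0.0, []
for _ in range(50):
    values.append(E(theta))
    theta = theta + grad(theta) / B   # the MM update (13)
```

Because B over-bounds the curvature, each step is shorter than the exact Newton step: the price of the guarantee E(θnew) ≥ E(θ) is a slower, but always safe, ascent.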
It was shown in [5] that the gradient and the Hessian of the logistic log-likelihood function, i.e., (11) without the log-prior, verify (with Ia denoting an a × a identity matrix and 1a a vector of a ones) g(z) = by −η(z) and H(z) ⪰−(1/2) (IK−1 −(1/K) 1K−1 1T K−1) ⊗In ≡−B, (14) where z = [z(1) 1 , ..., z(1) n , z(2) 1 , ..., z(K−1) n ]T denotes the lexicographic vectorization of Z, by denotes the corresponding lexicographic vectorization of bY, and η(z) = [p(1) 1 , ..., p(1) n , p(2) 1 , ..., p(K−1) n ]T with p(k) i = P[y(k) i = 1|zi]. Defining v = bz + B−1(by −η(bz)), the MM update equation for solving (11) is thus bznew(v) = arg min z { (1/2) (z −v)T B (z −v) −log p(z) }, (15) where p(z) is equivalent to p(Z), because z is simply the lexicographic vectorization of Z. We now summarize our GEM algorithm: E-step: compute by, using (9), for all i = 1, ..., n and k = 1, ..., K −1. (Generalized) M-step: Apply one or more iterations (15), keeping by fixed, that is, loop through the following two steps: v ←bz + B−1(by −η(bz)) and bz ←bznew(v). 3.4 Speeding Up the Algorithm In image segmentation, the MM update equation (15) is formally equivalent to the MAP estimation of an image with n pixels in IRK−1, under prior p(z), where v plays the role of observed image, and B is the inverse covariance matrix of the noise. Due to the structure of B, even if the prior models the several z(k) as independent, i.e., if log p(z) = log p(z(1)) + · · · + log p(z(K−1)), (15) cannot be decoupled into the several components {z(1), ..., z(K−1)}. We sidestep this difficulty, at the cost of using a less tight bound in (14), based on the following lemma: Lemma 1 Let ξK = 1/2, if K > 2, and ξK = 1/4, if K = 2. Then, B ⪯ξK In(K−1). Proof: Inserting K = 2 in (14) yields B = I/4, which proves the case K = 2. For K > 2, the inequality I/2 ⪰B is equivalent to λmin(I/2 −B) ≥0, which is equivalent to λmax(B) ≤(1/2).
Since the eigenvalues of the Kronecker product are the products of the eigenvalues of the matrices, λmax(B) = λmax(I −(1/K) 1 1T )/2. Since 1 1T is a rank-1 matrix with eigenvalues {0, ..., 0, K −1}, the eigenvalues of (I −(1/K) 1 1T ) are {1, ..., 1, 1/K}, thus λmax(I −(1/K) 1 1T ) = 1, and λmax(B) = 1/2. This lemma allows replacing B with ξK In(K−1) in (15) which (assuming independent priors, as is the case of (3)) becomes decoupled, leading to bz(k) new(v(k)) = arg min z(k) { (ξK/2) ∥z(k) −v(k)∥2 −log p(z(k)) }, for k = 1, ..., K −1, (16) where v(k) = bz(k) + (1/ξK)(by(k) −η(k)(bz(k))). Moreover, the “noise” in each of these “denoising” problems is white and Gaussian, of variance 1/ξK. 3.5 Stationary Gaussian Field Priors Consider a Gaussian prior of form (3), where Ai,j only depends on the relative position of i and j and the neighborhood system defined by A has periodic boundary conditions. In this case, both A and ∆ are block-circulant matrices, with circulant blocks [2], thus diagonalizable by a 2D discrete Fourier transform (2D-DFT). Formally, ∆ = UHDU, where D is diagonal, U is the orthogonal matrix representing the 2D-DFT, and (·)H denotes conjugate transpose. The log-prior is then expressed in the DFT domain, log p(z(k)) .= −1 2(Uz(k))HD(Uz(k)), and the solution of (16) is bz(k) new(v(k)) = ξK UH [ξKIn + D]−1 U v(k), for k = 1, ..., K −1. (17) Observe that (17) corresponds to filtering each image v(k), in the DFT domain, with a fixed filter with frequency response [ξKIn + D]−1; this inversion can be computed off-line and is trivial because ξKIn + D is diagonal. Finally, it’s worth stressing that the matrix-vector products by U and UH are not carried out explicitly but more efficiently via the FFT algorithm, with cost O(n log n). 3.6 Wavelet-Based Priors for Segmentation It’s known that piece-wise smooth images have sparse wavelet-based representations (see [12] and the many references therein); this fact underlies the state-of-the-art denoising performance of wavelet-based methods. Piece-wise smoothness of the z(k) translates into segmentations in which pixels in each class tend to form connected regions. Consider a wavelet expansion of each z(k): z(k) = Wθ(k), k = 1, ..., K −1, (18) where the θ(k) are sets of coefficients and W is the matrix representation of an inverse wavelet transform; W may be orthogonal or have more columns than rows (over-complete representations) [12].
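Before continuing with wavelets, the closed-form Gaussian-field update (17) of Section 3.5 is worth a concrete sketch: with periodic boundaries, ∆ is diagonalized by the 2D-DFT, so the whole M-step is one fixed filter applied with the FFT (NumPy sketch; the function name and default parameters are ours):

```python
import numpy as np

def gaussian_field_update(v, lam=1.0, xi=0.5):
    """M-step filter of Eq. (17) for the periodic first-order Gaussian
    field prior: z_new = xi * U^H (xi*I + D)^{-1} U v, computed via the
    FFT. v is an (h, w) image; D holds the DFT eigenvalues of Delta."""
    h, w = v.shape
    kernel = np.zeros((h, w))
    kernel[0, 0] = 4 * lam                  # degree term of Delta
    kernel[1, 0] = kernel[-1, 0] = -lam     # vertical neighbors
    kernel[0, 1] = kernel[0, -1] = -lam     # horizontal neighbors
    D = np.fft.fft2(kernel).real            # eigenvalues of Delta (all >= 0)
    return np.real(np.fft.ifft2(xi * np.fft.fft2(v) / (xi + D)))
```

The DC gain of the filter is one, so the mean of v(k) is preserved while higher frequencies are shrunk, which is exactly the denoising behavior described above.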
A wavelet-based prior for z(k) is induced by placing a prior on the coefficients θ(k). A classical choice for p(θ(k)) is a generalized Gaussian [14]. Without going into details, under this class of priors (and others), (16) becomes a non-linear wavelet-based denoising step, which has been widely studied in the image processing literature. For several choices of p(θ(k)) and W, this denoising step has a very simple closed form, which essentially corresponds to computing a wavelet transform of the observations, applying a coefficient-wise non-linear shrinkage/thresholding operation, and applying the inverse transform to the processed coefficients. This is computationally very efficient, due to the existence of fast algorithms for computing direct and inverse wavelet transforms; e.g., O(n) for an orthogonal wavelet transform or O(n log n) for a shift-invariant redundant transform. 4 Extensions 4.1 Semi-Supervised Segmentation For semi-supervised image segmentation, the user defines regions in the image for which the true label is known. Our GEM algorithm is trivially modified to handle this case: if at location i the label is known to be (say) k, we freeze by(k) i = 1, and by(j) i = 0, for j ̸= k. The E-step is only applied to those locations for which the label is unknown. The M-step remains unchanged. 4.2 Discriminative Features Our formulation (as most probabilistic segmentation methods) adopts a generative perspective, where each p(·|φ(k)) models the data generation mechanism in the corresponding class. However, discriminative methods (such as support vector machines) are seen as the current state-of-the-art in classification [7]. We will now show how a pre-trained discriminative classifier can be used in our GEM algorithm instead of the generative likelihoods. 
The E-step (see (9)) obtains the posterior probability that xi was generated by the k-th model, by combining (via Bayes law) the corresponding likelihood p(xi|bφ (k)) with the local prior probability P[y(k) i = 1|bzi]. Consider that, instead of likelihoods derived from generative models, we have a discriminative classifier, i.e., one that directly provides estimates of the posterior class probabilities P[y(k) i = 1|xi]. To use these values in our segmentation algorithm, we need a way to bias these estimates according to the local prior probabilities P[y(k) i = 1|bzi], which are responsible for encouraging spatial coherence. Let us assume that we know that the discriminative classifier was trained using mk samples from the k-th class. It can thus be assumed that these posterior class probabilities verify P[y(k) i = 1|xi] ∝mk p(xi|y(k) i = 1). It is then possible to “bias” these classifiers, with the local prior probabilities P[y(k) i = 1|bzi], simply by computing P[y(k) i = 1|xi, bzi] = (P[y(k) i = 1|xi] P[y(k) i = 1|bzi] / mk) ( PK j=1 P[y(j) i = 1|xi] P[y(j) i = 1|bzi] / mj )−1. 5 Experiments In this section we will show experimental results of image segmentation in supervised, unsupervised, semi-supervised, and discriminative modes. Assessing the performance of a segmentation method is not a trivial task. Moreover, the performance of segmentation algorithms depends more critically on the adopted features (which is not the focus of this paper) than on the spatial coherence prior. For these reasons, we will not present any careful comparative study, but simply a set of experimental examples attesting to the promising behavior of the proposed approach. 5.1 Supervised and Unsupervised Image Segmentation The first experiment, reported in Fig. 1, illustrates the algorithm on a synthetic gray scale image with four Gaussian classes of means 1, 2, 3, and 4, and standard deviation 0.6.
For this image, both supervised and unsupervised segmentation lead to almost visually indistinguishable results, so we only show the supervised segmentation results. In the Gaussian prior, matrix A corresponds to a first order neighborhood, that is, Ai,j = γ if and only if j is one of the four nearest neighbors of i. For wavelet-based segmentation, we have used undecimated Haar wavelets and the Bayes-shrink denoising procedure [6]. Figure 1: From left to right: observed image, maximum likelihood segmentation, GEM result with Gaussian prior, GEM result with wavelet-based prior. 5.2 Semi-supervised Image Segmentation We illustrate the semi-supervised mode of our approach on two real RGB images, shown in Fig. 2. Each region is modelled by a single multivariate Gaussian density in RGB space. In the example in the first row, the goal is to segment the image into skin, cloth, and background regions; in the second example, the goal is to segment the horses from the background. These examples show how the semi-supervised mode of our algorithm is able to segment the image into regions which “look like” the seed regions provided by the user. Figure 2: From left to right (in each row): observed image with regions indicated by the user as belonging to each class, segmentation result, region boundaries. 5.3 Discriminative Texture Segmentation Finally, we illustrate the behavior of the algorithm when used with discriminative classifiers by applying it to texture segmentation. We build on the work in [8], where SVM classifiers are used for texture classification (see [8] for complete details about the kernels and texture features used). Fig. 3 shows two experiments; one with a two-texture 256×512 image and the other with a 5-texture 256×256 image. In the two-class case, one binary SVM was trained on 1000 random patterns from each class. For the 5-class case, 5 binary SVMs were trained in the “1-vs-all” mode, with 500 samples from each class. 
In the 2-class and 5-class cases, the error rates of the SVM classifier are 12.69% and 13.92%, respectively. Our GEM algorithm achieves 0.51% and 2.22%, respectively. These examples show that our method is able to take class predictions produced by a classifier lacking any spatial prior and produce segmentations with a high degree of spatial coherence. 6 Conclusions We have introduced an approach to probabilistic semi-supervised clustering which is particularly suited for image segmentation. The formulation allows supervised, unsupervised, semi-supervised, and discriminative modes, and can be used with classical standard image priors (such as Gaussian fields, or wavelet-based priors). Unlike the usual Markov random field approaches, which involve combinatorial optimization, our segmentation algorithm consists of a simple generalized EM algorithm. Several experimental examples illustrated the promising behavior of our method. Ongoing work includes a thorough experimental comparison with state-of-the-art segmentation algorithms, namely, spectral methods [16] and techniques based on “graph-cuts” [19]. Acknowledgement: This work was partially supported by the (Portuguese) Fundação para a Ciência e Tecnologia (FCT), grant POSC/EEA-SRI/61924/2004. Figure 3: From left to right (in each row): observed image, direct SVM segmentation, segmentation produced by our algorithm. References [1] M. Figueiredo. “Bayesian image segmentation using wavelet-based priors”, Proc. IEEE Conf. Computer Vision and Pattern Recognition - CVPR’2005, San Diego, CA, 2005. [2] N. Balram, J. Moura. “Noncausal Gauss-Markov random fields: parameter structure and estimation”, IEEE Trans. Information Theory, vol. 39, pp. 1333–1355, 1993. [3] A. Banerjee, S. Merugu, I. Dhillon, J. Ghosh. “Clustering with Bregman divergences.” Proc. SIAM Intern. Conf. Data Mining – SDM’2004, Lake Buena Vista, FL, 2004. [4] S. Basu, M. Bilenko, R. Mooney. “A probabilistic framework for semi-supervised clustering.” Proc.
of the KDD-2004, Seattle, WA, 2004. [5] D. Böhning. “Multinomial logistic regression”, Annals Inst. Stat. Math., vol. 44, pp. 197–200, 1992. [6] G. Chang, B. Yu, M. Vetterli. “Adaptive wavelet thresholding for image denoising and compression.” IEEE Trans. Image Proc., vol. 9, pp. 1532–1546, 2000. [7] T. Hastie, R. Tibshirani, J. Friedman. The Elements of Statistical Learning, Springer, 2001. [8] K. I. Kim, K. Jung, S. H. Park, H. J. Kim. “Support vector machines for texture classification.” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, pp. 1542–1550, 2002. [9] D. Hunter, K. Lange. “A tutorial on MM algorithms”, The American Statistician, vol. 58, pp. 30–37, 2004. [10] M. Law, A. Topchy, A. K. Jain. “Model-based clustering with probabilistic constraints.” In Proc. of the SIAM Conf. on Data Mining, pp. 641–645, Newport Beach, CA, 2005. [11] Z. Lu, T. Leen. “Probabilistic penalized clustering.” In NIPS 17, MIT Press, 2005. [12] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press, San Diego, CA, 1998. [13] G. McLachlan, T. Krishnan. The EM Algorithm and Extensions, Wiley, New York, 1997. [14] P. Moulin, J. Liu. “Analysis of multiresolution image denoising schemes using generalized-Gaussian and complexity priors,” IEEE Trans. Inform. Theory, vol. 45, pp. 909–919, 1999. [15] N. Shental, A. Bar-Hillel, T. Hertz, D. Weinshall. “Computing Gaussian mixture models with EM using equivalence constraints.” In NIPS 15, MIT Press, Cambridge, MA, 2003. [16] J. Shi, J. Malik, “Normalized cuts and image segmentation.” IEEE-TPAMI, vol. 22, pp. 888–905, 2000. [17] K. Wagstaff, C. Cardie, S. Rogers, S. Schrödl. “Constrained K-means clustering with background knowledge.” In Proc. of ICML’2001, Williamstown, MA, 2001. [18] C. Wu. “On the convergence properties of the EM algorithm,” Ann. Statistics, vol. 11, pp. 95–103, 1983. [19] R. Zabih, V. Kolmogorov, “Spatially coherent clustering with graph cuts.” Proc. IEEE-CVPR, vol. II, pp. 437–444, 2004. [20] X. Zhu.
“Semi-Supervised Learning Literature Survey”, TR-1530, Comp. Sci. Dept., Univ. of Wisconsin, Madison, 2006. Available at www.cs.wisc.edu/~jerryzhu/pub/ssl_survey.pdf
Inferring Network Structure from Co-Occurrences Michael G. Rabbat Electrical and Computer Eng. University of Wisconsin Madison, WI 53706 rabbat@cae.wisc.edu Mário A.T. Figueiredo Instituto de Telecomunicações Instituto Superior Técnico Lisboa, Portugal mtf@lx.it.pt Robert D. Nowak Electrical and Computer Eng. University of Wisconsin Madison, WI 53706 nowak@ece.wisc.edu Abstract We consider the problem of inferring the structure of a network from co-occurrence data: observations that indicate which nodes occur in a signaling pathway but do not directly reveal node order within the pathway. This problem is motivated by network inference problems arising in computational biology and communication systems, in which it is difficult or impossible to obtain precise time ordering information. Without order information, every permutation of the activated nodes leads to a different feasible solution, resulting in combinatorial explosion of the feasible set. However, physical principles underlying most networked systems suggest that not all feasible solutions are equally likely. Intuitively, nodes that co-occur more frequently are probably more closely connected. Building on this intuition, we model path co-occurrences as randomly shuffled samples of a random walk on the network. We derive a computationally efficient network inference algorithm and, via novel concentration inequalities for importance sampling estimators, prove that a polynomial complexity Monte Carlo version of the algorithm converges with high probability. 1 Introduction The study of complex networked systems is an emerging field impacting nearly every area of engineering and science, including the important domains of biology, cognitive science, sociology, and telecommunications. Inferring the structure of signaling networks from experimental data precedes any such analysis and is thus a basic and fundamental task.
Measurements which directly reveal network structure are often beyond experimental capabilities or are excessively expensive. This paper addresses the problem of inferring the structure of a network from co-occurrence data: observations which indicate nodes that are activated in each of a set of signaling pathways but do not directly reveal the order of nodes within each pathway. Co-occurrence observations arise naturally in a number of interesting contexts, including biological and communication networks, and networks of neuronal colonies. Biological signal transduction networks describe fundamental cell functions and responses to environmental stress [1]. Although it is possible to test for individual, localized interactions between gene pairs, this approach (called genetic epistatic analysis) is expensive and time-consuming. High-throughput measurement techniques such as microarrays have successfully been used to identify the components of different signal transduction pathways [2]. However, microarray data only reflects order information at a very coarse, unreliable level. Developing computational techniques for inferring pathway orders is a largely unexplored research area [3]. A similar problem has been studied in telecommunication networks [4]. In this context, each path corresponds to a transmission between an origin and destination. The origin and destination are observed, in addition to the activated switches/routers carrying the transmission through the network. However, due to the geographically distributed nature of the measurement infrastructure and the rapidity at which transmissions are completed, it is not possible to obtain precise ordering information. Another exciting potential application arises in neuroimaging [5,6]. Functional magnetic resonance imaging provides images of brain activity with high spatial resolution but has relatively poor temporal resolution.
Treating distinct brain regions that co-activate when a subject performs different tasks as nodes in a functional brain network may lead to a similar network inference problem. Given a collection of co-occurrences, a feasible network (consistent with the observations) is easily obtained by assigning an order to the elements of each co-occurrence, thereby specifying a path through the hypothesized network. Since any arbitrary order of each co-occurrence leads to a feasible network, the number of feasible solutions is proportional to the number of permutations of all the co-occurrence observations. Consequently we are faced with combinatorial explosion of the feasible set, and without additional assumptions or side information there is no reason to prefer one particular feasible network over the others. See the supplementary document [7] for further discussion. Despite the apparent intractability of the problem, physical principles governing most networks suggest that not all feasible solutions are equally plausible. Intuitively, nodes that co-occur more frequently are more likely to be connected in the underlying network. This intuition has been used as a stepping stone by recent approaches proposed in the context of telecommunications [4], and in learning networks of collaborators [8]. However, because of their heuristic nature, these approaches do not produce easily interpreted results and do not readily lend themselves to analysis or to the incorporation of side information. In this paper, we model co-occurrences as randomly permuted samples of a random walk on the underlying network. The random permutation accounts for lack of observed order. We refer to this process as the shuffled Markov model. In this framework, network inference amounts to maximum likelihood estimation of the parameters governing the random walk (initial state distribution and transition matrix).
Direct maximization is intractable due to the highly non-convex log-likelihood function and exponential feasible set arising from simultaneously considering all permutations of all co-occurrences. Instead, we derive a computationally efficient EM algorithm, treating the random permutations as hidden variables. In this framework the likelihood factorizes with respect to each pathway/observation, so that the computational complexity of the EM algorithm is determined by the E-step which is only exponential in the longest path. In order to handle networks with long paths, we propose a Monte Carlo E-step based on a simple, linear complexity importance sampling scheme. Whereas the exact E-step has computational complexity which is exponential in path length, we prove that a polynomial number of importance samples suffices to retain desirable convergence properties of the EM algorithm with high probability. In this sense, our Monte Carlo EM algorithm breaks the curse of dimensionality using randomness. It is worth noting that the approach described here differs considerably from that of learning the structure of a directed graphical model or Bayesian network [9, 10]. The aim of graphical modelling is to find a graph corresponding to a factorization of a high-dimensional distribution which predicts the observations well. These probabilistic models do not directly reflect physical structures, and applying such an approach to co-occurrences would ignore physical constraints inherent to the observations: co-occurring vertices must lie along a path in the network. 2 Model Formulation and EM Algorithm 2.1 The Shuffled Markov Model We model a network as a directed graph G = (V, E), where V = {1, . . . , |V |} is the vertex (node) set and E ⊆V 2 is the set of edges (direct connections between vertices). 
An observation, y ⊂V , is a subset of vertices co-activated when a particular stimulus is applied to the network (e.g., collection of signaling proteins activated in response to an environmental stress). Given a set of T observations, Y = {y(1), . . . , y(T )}, each corresponding to a path, where y(m) = {y(m) 1 , . . . , y(m) Nm }, we say that a graph (V, E) is feasible w.r.t. Y if for each y(m) ∈Y there is an ordered path z(m) = (z(m) 1 , . . . , z(m) Nm ) and a permutation τ (m)= (τ (m) 1 , . . . , τ (m) Nm ) such that z(m) t = y(m) τ (m) t , and (zt−1, zt) ∈E, for t = 2, ..., Nm. The (unobserved) ordered paths, Z = {z(1), ..., z(T )}, are modelled as T independent samples of a first-order Markov chain with state set V . The Markov chain is parameterized by the initial state distribution π and the (stochastic) transition matrix A. We assume that the support of the transition matrix is determined by the adjacency structure of the graph; i.e., Ai,j > 0 ⇔(i, j) ∈E. Each observation y(m) results from shuffling the elements of z(m) via an unobserved permutation τ (m), drawn uniformly from SNm (the set of all permutations of Nm objects); i.e., z(m) t = y(m) τ (m) t , for t = 1, . . . , Nm. All the τ (m) are assumed mutually independent and independent of all the z(m). Under this model, the log-likelihood of the set of observations Y is log P[Y|A, π] = T X m=1 log X τ ∈SNm P[y(m)|τ, A, π] −log(Nm!) . (1) where P[y|τ, A, π] = πyτ1 QN t=2 Ayτt−1,yτt, and network inference consists in computing the maximum likelihood (ML) estimates (AML, πML) = arg maxA,π log P[Y|A, π]. With the ML estimates in hand, we may determine the most likely permutation for each y(m) and obtain a feasible reconstruction from the ordered paths. In general, log P[Y|A, π] is a non-concave function of (A, π), so finding (AML, πML) is not easy. Next, we derive an EM algorithm for this purpose, by treating the permutations as missing data. 
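For short paths, the marginal likelihood (1) can be evaluated exactly by brute force, which is useful as a sanity check while following the derivation; a sketch (only practical for small Nm, which is precisely why Section 3 introduces sampling):

```python
import itertools
from math import factorial

import numpy as np

def shuffled_loglik(y, A, pi):
    """Log-likelihood of one shuffled observation y = (y_1, ..., y_N)
    under Eq. (1): sum P[y | tau, A, pi] over all N! permutations tau,
    then subtract log(N!) for the uniform permutation prior."""
    N = len(y)
    total = 0.0
    for tau in itertools.permutations(range(N)):
        z = [y[t] for t in tau]            # candidate unshuffled path
        p = pi[z[0]]
        for t in range(1, N):
            p *= A[z[t - 1], z[t]]         # Markov transition factors
        total += p
    return np.log(total) - np.log(factorial(N))
```

Summing this quantity over the T observations gives the objective whose maximizers are (AML, πML).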
2.2 EM Algorithm Let w(m) = (w(m) 1 , ..., w(m) Nm ) be a binary representation of z(m), defined by w(m) t = (w(m) t,1 , ..., w(m) t,|V |) ∈{0, 1}|V |, with (w(m) t,i = 1) ⇔(z(m) t = i); let W = {w(1), ..., w(T )}. Let X = {x(1), . . . , x(T )} be the binary representation for Y, defined in a similar way: x(m) = (x(m) 1 , ..., x(m) Nm), where x(m) t = (x(m) t,1 , ..., x(m) t,|V |) ∈{0, 1}|V |, with (x(m) t,i = 1) ⇔(y(m) t = i). Finally, let R = {r(1), . . . , r(T )} be the collection of permutation matrices corresponding to T = {τ (1), . . . , τ (T )}; i.e., (r(m) t,t′ = 1) ⇔(τ (m) t = t′). With this notation in place, the complete log-likelihood can be written as log P[X, R|A, π] = log P[X|R, A, π] + log P[R], where log P[X|R, A, π] = T X m=1 log P[x(m)|r(m), A, π] = T X m=1 |V | X i,j=1 Nm X t′,t′′=1 Nm X t=2 r(m) t,t′ r(m) t−1,t′′x(m) t′′,ix(m) t′,j log Ai,j + T X m=1 |V | X i=1 Nm X t′=1 r(m) 1,t′ x(m) t′,i log πi, (2) and P[R] is the probability of the set of permutations, which is constant and thus dropped, since the permutations are independent and equiprobable. The EM algorithm proceeds by (the E-step) computing Q A, π; Ak, πk = E log P[X, R|A, π] X, Ak, πk , the expected value of log P[X, R|A, π] w.r.t. the missing R, conditioned on the observations and on the current model estimate (Ak, πk). Examining log P[X, R|A, π] reveals that it is linear w.r.t. simple functions of R: (a) the first row of each r(m), i.e., r(m) 1,t′ ; (b) sums of transition indicators, i.e., α(m) t′,t′′ ≡PNm t=2 r(m) t,t′ r(m) t−1,t′′. Consequently, the E-step reduces to computing the conditional expectations of r(m) 1,t′ and α(m) t′,t′′, denoted ¯r(m) 1,t′ and ¯α(m) t′,t′′, respectively, and plugging them into the complete log-likelihood (2), which yields Q A, π; Ak, πk . Since the permutations are (a priori) equiprobable, we have P[r(m)] = (Nm!)−1, P r(m) 1,t′ = 1] = (Nm −1)!/Nm! = 1/Nm, and P[r(m)|r(m) 1,t′ = 1] = 1/(Nm −1)!. 
Using these facts, the mutual independence among different observations, and Bayes law, it is not hard to show that ¯r(m) 1,t′ = γ(m) t′ PNm t′=1 γ(m) t′ with γ(m) t′ = X r: r1,t′=1 P x(m)r, Ak, πk , (3) where each term P x(m)r, Ak, πk is easily computed after using r to “unshuffle” x(m): P x(m)r, Ak, πk = P y(m)τ, Ak, πk = πk y(m) τ1 Nm Y t=2 Ak y(m) τt−1,y(m) τt . The computation of ¯α(m) t′,t′′ is similar to that of ¯r(m) 1,t′ ; the key observations are that P[r(m) t,t′ r(m) t−1,t′′ = 1] = (Nm −2)!/Nm! and P[r(m)|r(m) t,t′ r(m) t−1,t′′ = 1] = 1/(Nm −2)!, leading to ¯α(m) t′,t′′ = γ(m) t′,t′′ PNm t′=1 γ(m) t′ , with γ(m) t′,t′′ = X r P[x(m)|r, Ak, πk] Nm X t=2 rt,t′rt−1,t′′. (4) Computing {¯r(m) 1,t′ } and {¯α(m) t′,t′′} requires O Nm! operations. For large Nm, this is a heavy load; in Section 3, we describe a sampling approach for computing approximations to ¯r1,t′ and ¯αt′,t′′. Maximization of Q A, π; Ak, πk w.r.t. A and π, under the normalization constraints, leads to the M-step: Ak+1 i,j = PT m=1 PNm t′,t′′=1 ¯α(m) t′,t′′x(m) t′′,ix(m) t′,j P|S| j=1 PT m=1 PNm t′,t′′=1 ¯α(m) t′,t′′x(m) t′′,ix(m) t′,j and πk+1 i = PT m=1 PNm t′=1 ¯r(m) 1,t′ x(m) t′,i P|S| i=1 PT m=1 PNm t′=1 ¯r(m) 1,t′ x(m) t′,i . (5) Standard convergence results for the EM algorithm due to Boyles and Wu [11,12] guarantee that the sequence {(Ak, πk)} converges monotonically to a local maximum of the likelihood. 2.3 Handling Known Endpoints In some applications, (one or both of) the endpoints of each path are known and only the internal nodes are shuffled. For example, in telecommunications problems, the origin and destination of each transmission are known, but not the network connectivity. 
In estimating biological signal transduction pathways, a physical stimulus (e.g., hypotonic shock) causes a sequence of protein interactions, resulting in another observable physical response (e.g., a change in cell wall structure); in this case, the stimulus and response act as fixed endpoints, and the goal is to infer the order of the sequence of protein interactions. Knowledge of the endpoints of each path imposes the constraints $r^{(m)}_{1,1} = 1$ and $r^{(m)}_{N_m,N_m} = 1$. Under the first constraint, estimates of the initial state probabilities are simply given by $\pi_i = \frac{1}{T}\sum_{m=1}^{T} x^{(m)}_{1,i}$. Thus, EM only needs to be used to estimate $A$. In this setup, the E-step has a form similar to (4), but with sums over $r$ replaced by sums over permutation matrices satisfying $r_{1,1} = 1$ and $r_{N,N} = 1$. The M-step update for $A^{k+1}$ remains unchanged.

3 Large Scale Inference via Importance Sampling

For long paths, the combinatorial nature of the exact E-step – summing over all permutations of each sequence in (3) and (4) – may render exact computation intractable. This section presents a Monte Carlo importance sampling (see, e.g., [13]) version of the E-step, along with finite sample bounds guaranteeing that a polynomial complexity Monte Carlo EM algorithm retains desirable convergence properties of the EM algorithm, i.e., monotonic convergence to a local maximum.

3.1 Monte Carlo E-Step by Importance Sampling

To lighten notation in this section we drop the superscripts from $(A^k, \pi^k)$, using simply $(A, \pi)$ for the current parameter estimates. Moreover, since the statistics $\bar\alpha^{(m)}_{t',t''}$ and $\bar r^{(m)}_{1,t'}$ depend only on the $m$th co-activation observation, $y^{(m)}$, we focus on a particular length-$N$ path observation $y = (y_1, y_2, \ldots, y_N)$ and drop the superscript $(m)$. A naïve Monte Carlo approximation would be based on random permutations sampled from the uniform distribution on $S_N$.
However, the reason we resort to approximation techniques in the first place is that $S_N$ is large, but typically only a small fraction of its elements have non-negligible posterior probability, $P[\tau \mid y, A, \pi]$. Although we would ideally sample directly from the posterior, this would require determining its value for all $N!$ permutations. Instead, we propose the following sequential scheme for sampling a permutation using the current parameter estimates, $(A, \pi)$. To ensure the same element is not sampled twice we introduce a vector of binary flags, $f = (f_1, f_2, \ldots, f_{|V|}) \in \{0,1\}^{|V|}$. Given a probability distribution $p = (p_1, p_2, \ldots, p_{|V|})$ on the vertex set, $V$, denote by $p|_f$ the restriction of $p$ to those elements $i \in V$ for which $f_i = 1$; i.e.,

$$(p|_f)_i = \frac{p_i f_i}{\sum_{j=1}^{|V|} p_j f_j}, \quad\text{for } i = 1, 2, \ldots, |V|. \quad (6)$$

Our sampling scheme proceeds as follows:

Step 1: Initialize $f$ so that $f_i = 1$ if $y_t = i$ for some $t = 1, \ldots, N$, and $f_i = 0$ otherwise. Sample an element $v$ from $V$ according to the distribution $\pi|_f$ on $V$. Find $t$ such that $y_t = v$. Set $\tau_1 = t$. Set $f_v = 0$ to prevent $y_t$ from being sampled again (ensuring $\tau$ is a permutation). Set $i = 2$.

Step 2: Let $A_v$ denote the $v$th row of the transition matrix. Sample an element $v'$ from $V$ according to the distribution $A_v|_f$ on $V$. Find $t$ such that $y_t = v'$. Set $\tau_i = t$. Set $f_{v'} = 0$.

Step 3: While $i < N$, update $v \leftarrow v'$ and $i \leftarrow i + 1$ and repeat Step 2; otherwise, stop.

Repeating this sampling procedure $L$ times yields a collection of iid permutations $\tau^1, \tau^2, \ldots, \tau^L$, where the superscript now identifies the sample number; the corresponding permutation matrices are $r^1, r^2, \ldots, r^L$. Samples generated according to the scheme described above are drawn from a distribution $R[\tau \mid x, A, \pi]$ on $S_N$ which is different from the posterior $P[\tau \mid x, A, \pi]$.
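Steps 1–3 can be sketched as follows. This is an illustrative implementation, not the authors' code: it tracks unused positions of $y$ rather than vertex flags, which is equivalent when the observed vertices are distinct, and it assumes the relevant entries of $A$ and $\pi$ are strictly positive:

```python
import numpy as np

def sample_permutation(y, A, pi, rng):
    """One draw from the sequential proposal over permutations of y.

    First underlying element ~ pi restricted to the values in y; each
    subsequent element ~ the current row of A restricted to the values
    not yet placed. Returns the permutation tau (positions into y) and
    the running product of restriction denominators, a byproduct needed
    later as the importance weight.
    """
    unused = list(range(len(y)))         # positions of y not yet placed
    # Step 1: first element ~ pi | f
    p = np.array([pi[y[t]] for t in unused])
    i = rng.choice(len(unused), p=p / p.sum())
    tau = [unused.pop(i)]
    weight = 1.0
    # Steps 2-3: next element ~ A[v] | f, until all positions are placed
    while unused:
        v = y[tau[-1]]
        p = np.array([A[v, y[t]] for t in unused])
        denom = p.sum()
        weight *= denom                  # denominator of A_v|f
        i = rng.choice(len(unused), p=p / denom)
        tau.append(unused.pop(i))
    return tau, weight
```

The first-step restriction denominator is common to all samples from the same observation, so it cancels in self-normalized estimates; only the product of the Step 2 denominators matters.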
Importance sampling estimates correct for this disparity and are given by

$$\hat r_{1,t'} = \frac{\sum_{\ell=1}^{L} u^\ell\, r^\ell_{1,t'}}{\sum_{\ell=1}^{L} u^\ell} \quad\text{and}\quad \hat\alpha_{t',t''} = \frac{\sum_{\ell=1}^{L} u^\ell \sum_{t=2}^{N} r^\ell_{t,t'}\, r^\ell_{t-1,t''}}{\sum_{\ell=1}^{L} u^\ell}, \quad (7)$$

where the correction factor (or weight) for sample $r^\ell$ is given by

$$u^\ell = \frac{P[r^\ell \mid x, A, \pi]}{R[r^\ell \mid x, A, \pi]} = \frac{P[\tau^\ell \mid y, A, \pi]}{R[\tau^\ell \mid y, A, \pi]} = \prod_{t=2}^{N} \sum_{t'=t}^{N} A_{y_{\tau^\ell_{t-1}},\, y_{\tau^\ell_{t'}}}. \quad (8)$$

A detailed derivation of the exact form of the induced distribution, $R$, and the correction factor, $u^\ell$, based on the sequential nature of the sampling scheme, along with further discussion and comparison with alternative sampling schemes, can be found in the supplementary document [7]. In fact, the terms in the product (8) are readily available as a byproduct of Step 2 (the denominator of $A_v|_f$).

3.2 Monotonicity and Convergence

Standard EM convergence results directly apply when the exact E-step is used [11, 12]. Let $\theta^k = (A^k, \pi^k)$. By choosing $\theta^{k+1}$ according to (5) we have $\theta^{k+1} = \arg\max_\theta Q(\theta; \theta^k)$, and the monotonicity property, $Q(\theta^{k+1}; \theta^k) \ge Q(\theta^k; \theta^k)$, is satisfied. Together with the fact that the marginal log-likelihood (1) is continuous in $\theta$ and bounded above, the monotonicity property guarantees that the exact EM iterates converge monotonically to a local maximum of $\log P[Y \mid \theta]$. When the Monte Carlo E-step is used, we no longer have monotonicity, since now the M-step solves $\hat\theta^{k+1} = \arg\max_\theta \hat Q(\theta; \hat\theta^k)$, where $\hat Q$ is defined analogously to $Q$ but with $\bar\alpha^{(m)}_{t',t''}$ and $\bar r^{(m)}_{1,t'}$ replaced by $\hat\alpha^{(m)}_{t',t''}$ and $\hat r^{(m)}_{1,t'}$; for monotonicity we need $Q(\hat\theta^{k+1}; \hat\theta^k) \ge Q(\hat\theta^k; \hat\theta^k)$. To ensure that the Monte Carlo EM algorithm (MCEM) converges, the number of importance samples, $L$, must be chosen carefully so that $\hat Q$ approximates $Q$ well enough; otherwise the MCEM may be swamped with error. Recently, Caffo et al. [14] proposed a method, based on central limit theorem-like arguments, for automatically adapting the number of Monte Carlo samples used at each EM iteration.
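The self-normalized form of the estimates (7) can be sketched as follows. For brevity this sketch uses the naïve uniform proposal mentioned above, for which the importance weight is simply proportional to $P[y \mid \tau, A, \pi]$; the paper's sequential proposal only changes how the weight $u^\ell$ is computed. Function and variable names are invented for the example:

```python
import numpy as np

def is_e_step_uniform(y, A, pi, L, rng):
    """Self-normalized importance-sampling E-step with a uniform
    proposal: draw L random permutations, weight each by the
    (unnormalized) posterior P[y | tau, A, pi], and average."""
    N = len(y)
    r1 = np.zeros(N)
    alpha = np.zeros((N, N))
    total = 0.0
    for _ in range(L):
        tau = rng.permutation(N)
        u = pi[y[tau[0]]]
        for t in range(1, N):
            u *= A[y[tau[t - 1]], y[tau[t]]]
        total += u
        r1[tau[0]] += u
        for t in range(1, N):
            alpha[tau[t], tau[t - 1]] += u
    return r1 / total, alpha / total
```

By construction of the self-normalization, the estimated first-row expectations sum to one and the estimated transition counts sum to $N-1$, just like the exact statistics.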
They guarantee what we refer to as an $(\epsilon, \delta)$-probably approximately monotonic (PAM) update, stating that $Q(\hat\theta^{k+1}; \hat\theta^k) \ge Q(\hat\theta^k; \hat\theta^k) - \epsilon$ with probability at least $1 - \delta$. Rather than resorting to asymptotic approximations, we take advantage of the specific form of $Q$ in our problem to obtain the finite-sample PAM result below. Because $\hat Q(\hat\theta^{k+1}; \hat\theta^k)$ involves terms $\log \hat A^k_{i,j}$ and $\log \hat\pi^k_i$, in practice we bound $\hat A^k_{i,j}$ and $\hat\pi^k_i$ away from zero to ensure that $\hat Q$ does not blow up. Specifically, we assume a small positive constant $\theta_{\min}$ so that $\hat A^k_{i,j} \ge \theta_{\min}$ and $\hat\pi^k_i \ge \theta_{\min}$.

Theorem 1. Let $\epsilon, \delta > 0$ be given. There exist finite constants $b_m > 0$, independent of $N_m$, so that if

$$L_m = \frac{2 b_m^2\, T^2 N_m^4\, |\log \theta_{\min}|^2}{\epsilon^2}\, \log\!\left(\frac{2 N_m^2}{1 - (1-\delta)^{1/T}}\right) \quad (9)$$

importance samples are used for the $m$th observation, then $Q(\hat\theta^{k+1}; \hat\theta^k) \ge Q(\hat\theta^k; \hat\theta^k) - \epsilon$, with probability greater than $1 - \delta$.

The proof involves two key steps. First, we derive finite-sample concentration-style bounds for the importance sample estimates, showing, e.g., that $\hat\alpha^{(m)}_{t',t''}$ converges to $\bar\alpha^{(m)}_{t',t''}$ at a rate which is exponential in the number of importance samples used. These bounds are based on rather novel concentration inequalities for importance sampling estimators, which may be of interest in their own right (see the supplementary document [7] for details). Then, accounting for the explicit form of $Q$ in our problem, the result follows from an application of the union bound and the assumptions $\hat A^k_{i,j}, \hat\pi^k_i \ge \theta_{\min}$. In fact, by making a slightly stronger assumption it can be shown that the MCEM update is probably monotonic (i.e., $(0,\delta)$-PAM, not merely approximately monotonic) if $L'_m$ importance samples are used for the $m$th observation, where $L'_m$ also depends polynomially on $N_m$ and $T$. See the supplementary document [7] for further discussion and for the full proof of Theorem 1. Recall that exact E-step computation requires $N_m!$ operations for the $m$th observation (enumerating all permutations).
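The prescription (9) is easy to evaluate once its constants are fixed. The sketch below uses an arbitrary placeholder for the problem-dependent constant $b_m$, which the theorem does not specify numerically:

```python
import math

def pam_sample_size(N, T, eps, delta, theta_min, b=1.0):
    """Number of importance samples L_m from (9) for an
    (eps, delta)-PAM update. The constant b (standing in for b_m) is an
    illustrative placeholder, not a value given in the paper."""
    return math.ceil(
        2 * b**2 * T**2 * N**4 * abs(math.log(theta_min))**2 / eps**2
        * math.log(2 * N**2 / (1 - (1 - delta)**(1 / T)))
    )
```

The point of the bound is the scaling: the required $L_m$ grows polynomially in $N_m$, in contrast to the $N_m!$ cost of the exact E-step.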
The bound above stipulates that the number of importance samples required for a PAM update is on the order of $N_m^4 \log N_m^2$. Generating one importance sample using the sequential procedure described above requires $N_m$ operations. In contrast to the (exponential complexity) exact EM algorithm, this clearly demonstrates that the MCEM converges with high probability while having only polynomial computational complexity; in this sense, the MCEM meaningfully breaks the curse of dimensionality by using randomness to preserve the monotonic convergence property.

4 Experimental Results

The performance of our algorithm for network inference from co-occurrences (NICO, pronounced "nee-koh") has been evaluated on both simulated data and on a biological data set. In these experiments, network structure is inferred by first executing the EM algorithm to infer the parameters $(A, \pi)$ of a Markov chain. Then, inserting edges in the inferred graph based on the most likely order of each path according to $(A, \pi)$ ensures the resulting graph is feasible with respect to the observations. Because the EM algorithm is only guaranteed to converge to a local maximum, we rerun the algorithm from multiple random initializations and choose the most likely of these solutions. To gauge the performance of our algorithm we use the edge symmetric difference error: the total number of false positives (edges in the inferred network which do not exist in the true network) plus the number of false negatives (edges in the true network not appearing in the inferred network). We simulate co-occurrence observations in the following fashion. A random graph on 50 vertices is sampled.
Disjoint sets of vertices are randomly chosen as path origins and destinations, paths are generated between each origin-destination pair using the shortest path algorithm with either unit weight per edge ("shortest path") or a random weight on each edge ("random routing"), and then co-occurrence observations are formed from each path. We keep the number of origins fixed at 5 and vary the number of destinations between 5 and 40 to see how the number of observations affects performance. NICO performance is compared against the frequency method (FM) described in [4]. Figure 1 plots the edge error for synthetic data generated using (a) shortest path routing, and (b) random routing. Each curve is the average performance over 100 different network and path realizations.

Figure 1: Edge symmetric differences between inferred networks and the network one would obtain using co-occurrence measurements arranged in the correct order, plotted against the number of destinations for (a) shortest path routes and (b) random routes (curves: Freq. Method (Sparsest), Freq. Method (Best), NICO (ML)). Performance is averaged over 100 different network realizations. For each configuration 10 NICO and FM solutions are obtained via different initializations. We then choose the NICO solution yielding the largest likelihood, and compare with both the sparsest (fewest edges) and clairvoyant best (lowest error) FM solutions.

For each network/path realization, the EM algorithm is executed with 10 random initializations. Exact E-step calculation is used for observations with $N_m \le 12$, and importance sampling is used for longer paths. The longest observation in our data has $N_m = 19$. The FM uses simple pairwise frequencies of co-occurrence to assign an order independently to each path observation.
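The edge symmetric difference error reported in these plots is straightforward to compute; a minimal sketch, assuming edges are given as vertex pairs and treated as undirected (drop the sorting for a directed graph):

```python
def edge_symmetric_difference(inferred, true):
    """False positives plus false negatives between two edge sets.
    Pairs are sorted so that (u, v) and (v, u) denote the same edge."""
    norm = lambda edges: {tuple(sorted(e)) for e in edges}
    inf_set, true_set = norm(inferred), norm(true)
    return len(inf_set - true_set) + len(true_set - inf_set)
```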
Of the 10 NICO solutions (from the different random initializations), we use the one whose parameter estimates yield the highest likelihood score, which also always gives the best performance. Because it is a heuristic, the FM does not provide a similar mechanism for ranking solutions from different initializations. We plot FM performance for two schemes: one based on choosing the sparsest FM solution (the one with the fewest edges), and one based on clairvoyantly choosing the FM solution with the lowest error. NICO consistently outperforms even the clairvoyant best FM solution. Our method has also been applied to infer the stress-activated protein kinase (SAPK)/Jun N-terminal kinase (JNK) and NFκB signal transduction pathways¹ (biological networks). The clustering procedure described in [2] is applied to microarray data in order to identify 18 co-occurrences arising from different environmental stresses or growth factors (path source) and terminating in the production of SAPK/JNK or NFκB proteins. The reconstructed network (combined SAPK/JNK and NFκB signal transduction pathways) is depicted in Figure 2. This structure agrees with the signalling pathways identified using traditional experimental techniques which test individually for each possible edge (e.g., "MAPK" and "NF-κB Signaling" on http://www.cellsignal.com).

5 Conclusion

This paper describes a probabilistic model and statistical inference procedure for inferring network structure from incomplete "co-occurrence" measurements. Co-occurrences are modelled as samples of a first-order Markov chain subjected to a random permutation. We describe exact and Monte Carlo EM algorithms for calculating maximum likelihood estimates of the Markov chain parameters (initial state distribution and transition matrix), treating the random permutations as hidden variables. Standard results for the EM algorithm guarantee convergence to a local maximum.
Although our exact EM algorithm has exponential computational complexity, we provide finite-sample bounds guaranteeing convergence of the Monte Carlo EM variation to a local maximum with high probability and with only polynomial complexity. Our algorithm is easily extended to compute maximum a posteriori estimates, applying a Dirichlet prior to the initial state distribution and to each row of the Markov transition matrix.

¹NFκB proteins control genes regulating a broad range of biological processes, including innate and adaptive immunity, inflammation, and B cell development. The NFκB pathway is a collection of paths activated by various environmental stresses and growth factors, and terminating in the production of NFκB.

Figure 2: Inferred topology of the combined SAPK/JNK and NFκB signal transduction pathways (nodes include IL1, TNF, dsRNA, UV, PI3K, PKC, TRAF6, TAK1, IKK, MEKK, MKK, and JNK, among others, terminating in NFκB). Co-occurrences are obtained from gene expression data via the clustering algorithm described in [2], and the network is then inferred using NICO.

Acknowledgments

The authors would like to thank D. Zhu and A. O. Hero for providing the data and collaborating on the biological network experiment reported in Section 4. This work was supported in part by the Portuguese Foundation for Science and Technology grant POSC/EEA-SRI/61924/2004, the Directorate of National Intelligence, and National Science Foundation grants CCF-0353079 and CCR-0350213.

References

[1] E. Klipp, R. Herwig, A. Kowald, C. Wierling, and H. Lehrach. Systems Biology in Practice: Concepts, Implementation and Application. John Wiley & Sons, 2005.

[2] D. Zhu, A. O. Hero, H. Cheng, R. Khanna, and A. Swaroop. Network constrained clustering for gene microarray data. Bioinformatics, 21(21):4014–4020, 2005.

[3] Y. Liu and H. Zhao.
A computational approach for ordering signal transduction pathway components from genomics and proteomics data. BMC Bioinformatics, 5(158), October 2004.

[4] M. G. Rabbat, J. R. Treichler, S. L. Wood, and M. G. Larimore. Understanding the topology of a telephone network via internally-sensed network tomography. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005.

[5] O. Sporns and G. Tononi. Classes of network connectivity and dynamics. Complexity, 7(1):28–38, 2002.

[6] O. Sporns, D. R. Chialvo, M. Kaiser, and C. C. Hilgetag. Organization, development and function of complex brain networks. Trends in Cognitive Science, 8(9), 2004.

[7] M. G. Rabbat, M. A. T. Figueiredo, and R. D. Nowak. Supplement to "Inferring network structure from co-occurrences". Technical report, University of Wisconsin-Madison, October 2006.

[8] J. Kubica, A. Moore, D. Cohn, and J. Schneider. cGraph: A fast graph-based method for link analysis and queries. In Proc. IJCAI Text-Mining and Link-Analysis Workshop, Acapulco, Mexico, August 2003.

[9] D. Heckerman, D. Geiger, and D. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197–243, 1995.

[10] N. Friedman and D. Koller. Being Bayesian about Bayesian network structure: A Bayesian approach to structure discovery in Bayesian networks. Machine Learning, 50(1–2):95–125, 2003.

[11] R. A. Boyles. On the convergence of the EM algorithm. J. Royal Statistical Society B, 45(1):47–50, 1983.

[12] C. F. J. Wu. On the convergence properties of the EM algorithm. Ann. of Statistics, 11(1):95–103, 1983.

[13] C. Robert and G. Casella. Monte Carlo Statistical Methods. Springer-Verlag, New York, 1999.

[14] B. S. Caffo, W. Jank, and G. L. Jones. Ascent-based Monte Carlo EM. J. Royal Statistical Society B, 67(2):235–252, 2005.
Logarithmic Online Regret Bounds for Undiscounted Reinforcement Learning

Peter Auer, Ronald Ortner
University of Leoben, Franz-Josef-Strasse 18, 8700 Leoben, Austria
{auer,rortner}@unileoben.ac.at

Abstract

We present a learning algorithm for undiscounted reinforcement learning. Our interest lies in bounds for the algorithm's online performance after some finite number of steps. In the spirit of similar methods already successfully applied to the exploration-exploitation tradeoff in multi-armed bandit problems, we use upper confidence bounds to show that our UCRL algorithm achieves logarithmic online regret, in the number of steps taken, with respect to an optimal policy.

1 Introduction

1.1 Preliminaries

Definition 1. A Markov decision process (MDP) $M$ on a finite set of states $S$ with a finite set of actions $A$ available in each state $s \in S$ consists of (i) an initial distribution $\mu_0$ over $S$, (ii) the transition probabilities $p(s,a,s')$ that specify the probability of reaching state $s'$ when choosing action $a$ in state $s$, and (iii) the payoff distributions with mean $r(s,a)$ and support in $[0,1]$ that specify the random reward for choosing action $a$ in state $s$.

A policy on an MDP $M$ is a mapping $\pi : S \to A$. We will mainly consider unichain MDPs, in which under any policy any state can be reached (after a finite number of transitions) from any state. For a policy $\pi$ let $\mu_\pi$ be the stationary distribution induced by $\pi$ on $M$.¹ The average reward of $\pi$ then is defined as

$$\rho(M, \pi) := \sum_{s \in S} \mu_\pi(s)\, r(s, \pi(s)). \quad (1)$$

A policy $\pi^*$ is called optimal on $M$ if $\rho(M, \pi) \le \rho(M, \pi^*) =: \rho^*(M) =: \rho^*$ for all policies $\pi$. Our measure for the quality of a learning algorithm is the total regret after some finite number of steps. When a learning algorithm $\mathcal A$ executes action $a_t$ in state $s_t$ at step $t$ obtaining reward $r_t$, then $R_T := \sum_{t=0}^{T-1} r_t - T\rho^*$ denotes the total regret of $\mathcal A$ after $T$ steps. The total regret $R^\varepsilon_T$ with respect to an $\varepsilon$-optimal policy (i.e.
a policy whose return differs from $\rho^*$ by at most $\varepsilon$) is defined accordingly.

1.2 Discussion

We would like to compare this approach with the various PAC-like bounds in the literature, as given for the $E^3$ algorithm of Kearns and Singh [1] and the R-Max algorithm of Brafman and Tennenholtz [2] (cf. also [3]). Both take as inputs (among others) a confidence parameter $\delta$ and an accuracy parameter $\varepsilon$.

¹Every policy $\pi$ induces a Markov chain $C_\pi$ on $M$. If $C_\pi$ is ergodic with transition matrix $P$, then there exists a unique invariant and strictly positive distribution $\mu_\pi$ such that, independently of $\mu_0$, one has $\mu_n = \mu_0 \bar P_n \to \mu_\pi$, where $\bar P_n = \frac{1}{n}\sum_{j=1}^{n} P^j$. If $C_\pi$ is not ergodic, $\mu_\pi$ will depend on $\mu_0$.

The algorithms then are shown to yield $\varepsilon$-optimal return after time polynomial in $\frac{1}{\delta}$ and $\frac{1}{\varepsilon}$ (among others) with probability $1-\delta$. In contrast, our algorithm has no such input parameters and converges to an optimal policy with expected logarithmic online regret in the number of steps taken. Obviously, by using a decreasing sequence $\varepsilon_t$, online regret bounds for $E^3$ and R-Max can be achieved. However, it is not clear whether such a procedure can give logarithmic online regret bounds. We rather conjecture that these bounds either will not be logarithmic in the total number of steps (if $\varepsilon_t$ decreases quickly) or that the dependency on the parameters of the MDP – in particular on the distance between the reward of the best and a second-best policy – won't be polynomial (if $\varepsilon_t$ decreases slowly). Moreover, although our UCRL algorithm shares the "optimism under uncertainty" maxim with R-Max, our mechanism for the exploitation-exploration tradeoff is implicit, while $E^3$ and R-Max have to distinguish between "known" and "unknown" states explicitly. Finally, in their original form both $E^3$ and R-Max need a policy's $\varepsilon$-return mixing time $T_\varepsilon$ as an input parameter. The knowledge of this parameter then is eliminated by calculating the $\varepsilon$-optimal policy for $T_\varepsilon = 1, 2, \ldots$
, so that sooner or later the correct $\varepsilon$-return mixing time is reached. This is sufficient to obtain polynomial PAC bounds, but seems to be intricate for practical purposes. Moreover, as noted in [2], at some time step the assumed $T_\varepsilon$ may be exponential in the true $T_\varepsilon$, which makes policy computation exponential in $T_\varepsilon$. Unlike that, we need our mixing time parameter only in the analysis. This makes our algorithm rather simple and intuitive. Recently, more refined performance measures such as the sample complexity of exploration [3] were introduced. Strehl and Littman [4] showed that in the discounted setting, efficiency in the sample complexity implies efficiency in the average loss. However, average loss is defined with respect to the actually visited states, so that small average loss does not guarantee small total regret, which is defined with respect to the states visited by an optimal policy. For this average loss, polylogarithmic online bounds were shown for the MBIE algorithm [4], while more recently logarithmic bounds for delayed Q-learning were given in [5]. However, discounted reinforcement learning is a bit simpler than undiscounted reinforcement learning, as depending on the discount factor only a finite number of steps is relevant. This makes discounted reinforcement learning similar to the setting with trials of constant length from a fixed initial state [6]. For this case, logarithmic online regret bounds in the number of trials have already been given in [7]. Since we measure performance during exploration, the exploration vs. exploitation dilemma becomes an important issue. In the multi-armed bandit problem, similar exploration-exploitation tradeoffs were handled with upper confidence bounds for the expected immediate returns [8, 9]. This approach has been shown to allow good performance during the learning phase, while still converging fast to a nearly optimal policy.
Our UCRL algorithm takes into account the state structure of the MDP, but is still based on upper confidence bounds for the expected return of a policy. Upper confidence bounds have been applied to reinforcement learning in various places and different contexts, e.g., interval estimation [10, 11], action elimination [12], or PAC-learning [6]. Our UCRL algorithm is similar to Strehl and Littman's MBIE algorithm [10, 4], but our confidence bounds are different, and we are interested in the undiscounted case. Another paper with a similar approach is that of Burnetas and Katehakis [13]. The basic idea of their rather complex index policies is to choose the action with maximal return in some specified confidence region of the MDP's probability distributions. The online regret of their algorithm is asymptotically logarithmic in the number of steps, which is best possible. Our UCRL algorithm is simpler and achieves logarithmic regret not only asymptotically but uniformly over time. Moreover, unlike in the approach of [13], knowledge about the MDP's underlying state structure is not needed. More recently, online reinforcement learning with changing rewards chosen by an adversary was considered under the presumption that the learner has full knowledge of the transition probabilities [14]. The given algorithm achieves the best possible regret of $O(\sqrt{T})$ after $T$ steps. In the subsequent Sections 2 and 3 we introduce our UCRL algorithm and show that its expected online regret in unichain MDPs is $O(\log T)$ after $T$ steps. In Section 4 we consider problems that arise when the underlying MDP is not unichain.

2 The UCRL Algorithm

To select good policies, we keep track of estimates for the average rewards and the transition probabilities.
For each step $t$ let

$$N_t(s,a) = |\{0 \le \tau < t : s_\tau = s,\ a_\tau = a\}|, \qquad R_t(s,a) = \sum_{0 \le \tau < t:\ s_\tau = s,\ a_\tau = a} r_\tau, \qquad P_t(s,a,s') = |\{0 \le \tau < t : s_\tau = s,\ a_\tau = a,\ s_{\tau+1} = s'\}|$$

be the number of steps when action $a$ was chosen in state $s$, the sum of rewards obtained when choosing this action, and the number of times the transition was to state $s'$, respectively. From these numbers we immediately get estimates for the average rewards and transition probabilities,

$$\hat r_t(s,a) := \frac{R_t(s,a)}{N_t(s,a)}, \qquad \hat p_t(s,a,s') := \frac{P_t(s,a,s')}{N_t(s,a)},$$

provided that the number of visits in $(s,a)$, $N_t(s,a)$, is positive. In general, these estimates will deviate from the respective true values. However, together with appropriate confidence intervals they may be used to define a set $\mathcal M_t$ of plausible MDPs. Our algorithm then chooses an optimal policy $\tilde\pi_t$ for an MDP $\tilde M_t$ with maximal average reward $\tilde\rho^*_t := \rho^*(\tilde M_t)$ among the MDPs in $\mathcal M_t$. That is,

$$\tilde\pi_t := \arg\max_\pi \{\rho(M,\pi) : M \in \mathcal M_t\}, \qquad \tilde M_t := \arg\max_{M \in \mathcal M_t} \rho(M, \tilde\pi_t).$$

More precisely, we want $\mathcal M_t$ to be a set of plausible MDPs in the sense that

$$P\{\rho^* > \tilde\rho^*_t\} < t^{-\alpha} \quad (2)$$

for some $\alpha > 2$. Essentially, condition (2) means that it is unlikely that the true MDP $M$ is not in $\mathcal M_t$. Actually, $\mathcal M_t$ is defined to contain exactly those unichain MDPs $M'$ whose transition probabilities $p'(\cdot,\cdot,\cdot)$ and rewards $r'(\cdot,\cdot)$ satisfy, for all states $s, s'$ and actions $a$,

$$r'(s,a) \le \hat r_t(s,a) + \sqrt{\frac{\log(2 t^\alpha |S||A|)}{2 N_t(s,a)}}, \quad\text{and} \quad (3)$$

$$|p'(s,a,s') - \hat p_t(s,a,s')| \le \sqrt{\frac{\log(4 t^\alpha |S|^2 |A|)}{2 N_t(s,a)}}. \quad (4)$$

Conditions (3) and (4) describe confidence bounds on the rewards and transition probabilities of the true MDP $M$ such that (2) is implied (cf. Section 3.1 below). The intuition behind the algorithm is that if a non-optimal policy is followed, then this is eventually observed and something about the MDP is learned. In the proofs we show that this learning happens sufficiently fast to approach an optimal policy with only logarithmic regret.
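The confidence interval half-widths in (3) and (4) can be computed directly; a minimal sketch, including the truncation at 1 that appears later in the analysis (function and parameter names are invented for the example):

```python
import numpy as np

def confidence_radii(N_t, t, S, A_count, alpha=3.0):
    """Half-widths of the reward and transition confidence intervals
    from (3) and (4), truncated at 1. N_t is the visit count N_t(s, a);
    S and A_count are |S| and |A|; alpha > 2 as required."""
    conf_r = np.sqrt(np.log(2 * t**alpha * S * A_count) / (2 * N_t))
    conf_p = np.sqrt(np.log(4 * t**alpha * S**2 * A_count) / (2 * N_t))
    return min(1.0, conf_r), min(1.0, conf_p)
```

Both radii shrink as $1/\sqrt{N_t(s,a)}$, and the transition radius is the wider of the two because its logarithm has the larger argument.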
As switching policies too often may be harmful, and estimates don't change very much after few steps, our algorithm discards the policy $\tilde\pi_t$ only if there was considerable progress concerning the estimates $\hat p(s, \tilde\pi_t(s), s')$ or $\hat r(s, \tilde\pi_t(s))$. That is, UCRL sticks to a policy until the length of some of the confidence intervals given by conditions (3) and (4) is halved. Only then is a new policy calculated. We will see below (cf. Section 3.3) that this condition limits the number of policy changes without paying too much for not changing to an optimal policy earlier. Summing up, Figure 1 displays our algorithm.

Remark 1. The optimal policy $\tilde\pi$ in the algorithm can be efficiently calculated by a modified version of value iteration (cf. [15]).

3 Analysis for Unichain MDPs

3.1 An Upper Bound on the Optimal Reward

We show that with high probability the true MDP $M$ is contained in the set $\mathcal M_t$ of plausible MDPs. Notation: Set $\mathrm{conf}_p(t,s,a) := \min\Big\{1, \sqrt{\frac{\log(4 t^\alpha |S|^2 |A|)}{2 N_t(s,a)}}\Big\}$ and $\mathrm{conf}_r(t,s,a) := \min\Big\{1, \sqrt{\frac{\log(2 t^\alpha |S||A|)}{2 N_t(s,a)}}\Big\}$.

Initialization:
• Set $t = 0$.
• Set $N_0(s,a) := R_0(s,a) := P_0(s,a,s') := 0$ for all $s, a, s'$.
• Observe the first state $s_0$.

For rounds $k = 1, 2, \ldots$ do

Initialize round $k$:
1. Set $t_k := t$.
2. Recalculate the estimates $\hat r_t(s,a)$ and $\hat p_t(s,a,s')$ according to $\hat r_t(s,a) := \frac{R_t(s,a)}{N_t(s,a)}$ and $\hat p_t(s,a,s') := \frac{P_t(s,a,s')}{N_t(s,a)}$, provided that $N_t(s,a) > 0$. Otherwise set $\hat r_t(s,a) := 1$ and $\hat p_t(s,a,s') := \frac{1}{|S|}$.
3. Calculate a new policy $\tilde\pi_{t_k} := \arg\max_\pi \{\rho(M,\pi) : M \in \mathcal M_t\}$, where $\mathcal M_t$ consists of plausible unichain MDPs $M'$ with rewards $r'(s,a) - \hat r_t(s,a) \le \mathrm{conf}_r(t,s,a)$ and transition probabilities $|p'(s,a,s') - \hat p_t(s,a,s')| \le \mathrm{conf}_p(t,s,a)$.

Execute the chosen policy $\tilde\pi_{t_k}$:
4. While $\mathrm{conf}_r(t, S, A) > \mathrm{conf}_r(t_k, S, A)/2$ and $\mathrm{conf}_p(t, S, A) > \mathrm{conf}_p(t_k, S, A)/2$ do
(a) Choose action $a_t := \tilde\pi_{t_k}(s_t)$.
(b) Observe the obtained reward $r_t$ and the next state $s_{t+1}$.
(c) Update:
• Set $N_{t+1}(s_t, a_t) := N_t(s_t, a_t) + 1$.
• Set $R_{t+1}(s_t, a_t) := R_t(s_t, a_t) + r_t$.
• Set $P_{t+1}(s_t, a_t, s_{t+1}) := P_t(s_t, a_t, s_{t+1}) + 1$.
• All other values $N_{t+1}(s,a)$, $R_{t+1}(s,a)$, and $P_{t+1}(s,a,s')$ are set to $N_t(s,a)$, $R_t(s,a)$, and $P_t(s,a,s')$, respectively.
(d) Set $t := t + 1$.

Figure 1: The UCRL algorithm.

Lemma 1. For any $t$, any reward $r(s,a)$ and any transition probability $p(s,a,s')$ of the true MDP $M$ we have

$$P\left\{\hat r_t(s,a) < r(s,a) - \sqrt{\tfrac{\log(2 t^\alpha |S||A|)}{2 N_t(s,a)}}\right\} < \frac{t^{-\alpha}}{2|S||A|}, \quad (5)$$

$$P\left\{|\hat p_t(s,a,s') - p(s,a,s')| > \sqrt{\tfrac{\log(4 t^\alpha |S|^2 |A|)}{2 N_t(s,a)}}\right\} < \frac{t^{-\alpha}}{2|S|^2|A|}. \quad (6)$$

Proof. By the Chernoff-Hoeffding inequality.

Using the definition of $\mathcal M_t$ as given by (3) and (4) and summing over all $s$, $a$, and $s'$, Lemma 1 shows that $M \in \mathcal M_t$ with high probability. This implies that the maximal average reward $\tilde\rho^*_t$ assumed by our algorithm when calculating a new policy at step $t$ is an upper bound on $\rho^*(M)$ with high probability.

Corollary 1. For any $t$: $P\{\rho^* > \tilde\rho^*_t\} < t^{-\alpha}$.

3.2 Sufficient Precision and Mixing Times

In order to upper bound the loss, we consider the precision needed to guarantee that the policy calculated by UCRL is ($\varepsilon$-)optimal. This sufficient precision will of course depend on $\varepsilon$ or – in case one wants to compete with an optimal policy – on the minimal difference between $\rho^*$ and the average reward of some suboptimal policy,

$$\Delta := \min_{\pi:\ \rho(M,\pi) < \rho^*} \rho^* - \rho(M,\pi).$$

It is sufficient that the difference between $\rho(\tilde M_t, \tilde\pi_t)$ and $\rho(M, \tilde\pi_t)$ is small in order to guarantee that $\tilde\pi_t$ is an ($\varepsilon$-)optimal policy. For if $|\rho(\tilde M_t, \tilde\pi_t) - \rho(M, \tilde\pi_t)| < \varepsilon$, then by Corollary 1, with high probability,

$$\varepsilon > |\rho(\tilde M_t, \tilde\pi_t) - \rho(M, \tilde\pi_t)| \ge |\rho^*(M) - \rho(M, \tilde\pi_t)|, \quad (7)$$

so that $\tilde\pi_t$ is already an $\varepsilon$-optimal policy on $M$. For $\varepsilon = \Delta$, (7) implies the optimality of $\tilde\pi_t$. Thus, we consider bounds on the deviation of the transition probabilities and rewards of the assumed MDP $\tilde M_t$ from the true values such that (7) is implied. This is handled in the subsequent proposition, where we use the notion of the MDP's mixing time, which will play an essential role throughout the analysis.

Definition 2.
Given an ergodic Markov chain $C$, let $T_{s,s'}$ be the first passage time for two states $s, s'$, that is, the time needed to reach $s'$ when starting in $s$. Furthermore, let $T_{s,s}$ be the return time to state $s$. Let $T_C := \max_{s,s' \in S} E(T_{s,s'})$, and $\kappa_C := \max_{s \in S} \max_{s' \ne s} \frac{E(T_{s',s})}{2\, E(T_{s,s})}$. Then the mixing time of a unichain MDP $M$ is $T_M := \max_\pi T_{C_\pi}$, where $C_\pi$ is the Markov chain induced by $\pi$ on $M$. Furthermore, we set $\kappa_M := \max_\pi \kappa_{C_\pi}$.

Our notion of mixing time is different from the notion of $\varepsilon$-return mixing time given in [1, 2], which depends on an additional parameter $\varepsilon$. However, it serves a similar purpose.

Proposition 1. Let $p(\cdot,\cdot)$, $\tilde p(\cdot,\cdot)$ and $r(\cdot)$, $\tilde r(\cdot)$ be the transition probabilities and rewards of the MDPs $M$ and $\tilde M$ under the policy $\tilde\pi$, respectively. If for all states $s, s'$

$$|\tilde r(s) - r(s)| < \varepsilon_r := \frac{\varepsilon}{2} \quad\text{and}\quad |\tilde p(s,s') - p(s,s')| < \varepsilon_p := \frac{\varepsilon}{2 \kappa_M |S|^2},$$

then $|\rho(\tilde M, \tilde\pi) - \rho(M, \tilde\pi)| < \varepsilon$.

The proposition is an easy consequence of the following result about the difference in the stationary distributions of ergodic Markov chains.

Theorem 1 (Cho, Meyer [16]). Let $C$, $\tilde C$ be two ergodic Markov chains on the same state space $S$ with transition probabilities $p(\cdot,\cdot)$, $\tilde p(\cdot,\cdot)$ and stationary distributions $\mu$, $\tilde\mu$. Then the difference in the distributions $\mu$, $\tilde\mu$ can be upper bounded by the difference in the transition probabilities as follows:

$$\max_{s \in S} |\mu(s) - \tilde\mu(s)| \le \kappa_C \max_{s \in S} \sum_{s' \in S} |p(s,s') - \tilde p(s,s')|, \quad (8)$$

where $\kappa_C$ is as given in Definition 2.

Proof of Proposition 1. By (8),

$$\sum_{s \in S} |\mu(s) - \tilde\mu(s)| \le |S|\, \kappa_M \max_{s \in S} \sum_{s' \in S} |\tilde p(s,s') - p(s,s')| \le \kappa_M |S|^2 \varepsilon_p.$$

As the rewards are in $[0,1]$ and $\sum_s \mu(s) = 1$, we have by (1)

$$|\rho(\tilde M, \tilde\pi) - \rho(M, \tilde\pi)| \le \sum_{s \in S} |\tilde\mu(s) - \mu(s)|\, \tilde r(s) + \sum_{s \in S} |\tilde r(s) - r(s)|\, \mu(s) < \kappa_M |S|^2 \varepsilon_p + \varepsilon_r = \varepsilon.$$

Since $\varepsilon_r > \varepsilon_p$ and the confidence intervals for rewards are smaller than those for transition probabilities (cf. Lemma 1), in the following we only consider the precision needed for the transition probabilities.
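As a numerical sanity check of the perturbation bound (8), one can compare two nearby two-state chains, for which $\kappa_C$ of Definition 2 has a closed form: the first passage time from state 2 to state 1 is geometric with mean $1/p(2,1)$, and the return time to $s$ is $1/\mu(s)$. The chains and the perturbation below are arbitrary illustrative choices:

```python
import numpy as np

def stationary(P):
    """Stationary distribution: left eigenvector for eigenvalue 1."""
    evals, evecs = np.linalg.eig(P.T)
    mu = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    return mu / mu.sum()

P = np.array([[0.9, 0.1], [0.2, 0.8]])    # original two-state chain
Q = np.array([[0.88, 0.12], [0.2, 0.8]])  # slightly perturbed chain
mu_P, mu_Q = stationary(P), stationary(Q)
# kappa_C for the two-state chain, via closed-form passage/return times
kappa = max((1 / P[1, 0]) * mu_P[0] / 2,
            (1 / P[0, 1]) * mu_P[1] / 2)
lhs = np.max(np.abs(mu_P - mu_Q))
rhs = kappa * np.max(np.abs(P - Q).sum(axis=1))
assert lhs <= rhs + 1e-12  # inequality (8) holds for this example
```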
3.3 Bounding the Regret As can be seen from the description of the algorithm, we split the sequence of steps into rounds, where a new round starts whenever the algorithm recalculates its policy. The following facts follow immediately from the form of our confidence intervals and Lemma 1, respectively. Proposition 2. For halving a confidence interval of a reward or transition probability for some (s, a) ∈S × A, the number Nt(s, a) of visits in (s, a) has to be at least doubled. Corollary 2. The number of rounds after T steps cannot exceed |S||A| log2 T |S||A|. Proposition 3. If Nt(s, a) ≥log(4tα|S|2|A|) 2θ2 , then the confidence intervals for (s, a) are smaller than θ. We need to consider three sources of regret: first, by executing a suboptimal policy in a round of length τ, we may lose reward up to τ within this round; second, there may be some loss when changing policies; third, we have to consider the error probabilities with which some of our confidence intervals fail. 3.3.1 Regret due to Suboptimal Rounds Proposition 3 provides an upper bound on the number of visits needed in each (s, a) in order to guarantee that a newly calculated policy is optimal. This can be used to upper bound the total number of steps in suboptimal rounds. Consider all suboptimal rounds with |ˆptk(s, a, s′) −p(s, a, s′)| ≥εp for some s′, where a policy ˜πtk with ˜πtk(s) = a is played. Let m(s, a) be the number of these rounds and τi(s, a) (i = 1, . . . , m(s, a)) their respective lengths. The mean passage time between any state s′′ and s is upper bounded by TM. Then by Markov’s inequality, the probability that it takes more than 2TM steps to reach s from s′′ is smaller than 1 2. Thus we may separate each round i into τi(s,a) 2TM intervals of length ≥2TM, in each of which the probability of visiting state s is at least 1 2. 
Thus we may lower bound the number of visits Ns,a(n) in (s, a) within n such intervals by an application of Chernoff-Hoeffding’s inequality: P n Ns,a(n) ≥n 2 − p n log T o ≥1 −1 T . (9) Since by Proposition 3, Nt(s, a) < 2 log(4T α|S|2|A|) εp2 , we get m(s,a) X i=1 τi(s, a) 2TM < clog(4T α|S|2|A|) εp2 with probability 1 −1 T for a suitable constant c < 11. This gives for the expected regret in these rounds E m(s,a) X i=1 τi(s, a) < 2 c · TM log(4T α|S|2|A|) εp2 + 2 m(s, a) TM + 1 T T. Applying Corollary 2 and summing up over all (s, a), one sees that the expected regret due to suboptimal rounds cannot exceed 2 c |S||A|TM log(4T α|S|2|A|) εp2 + 2TM|S|2|A|2 log2 T |S||A| + |S||A|. 3.3.2 Loss by Policy Changes For any policy ˜πt there may be some states from which the expected average reward for the next τ steps is larger than when starting in some other state. This does not play a role if τ →∞. However, as we are playing our policies only for a finite number of steps before considering a change, we have to take into account that every time we switch policies, we may need a start-up phase to get into such a favorable state. In average, this cannot take more than TM steps, as this time is sufficient to reach any “good” state from some “bad” state. This is made more precise in the following lemma. We omit a detailed proof. Lemma 2. For all policies π, all starting states s0 and all T ≥0 E T −1 X t=0 r(st, π(st)) ≥Tρ(π, M) −TM. By Corollary 2, the corresponding expected regret after T steps is ≤|S||A|TM log2 T |S||A|. 3.3.3 Regret if Confidence Intervals Fail Finally, we have to take into account the error probabilities, with which in each round a transition probability or a reward, respectively, is not contained in its confidence interval. According to Lemma 1, the probability that this happens at some step t for a given state-action pair is < t−α 2|S||A| + |S| t−α 2|S|2|A| = t−α |S||A|. Now let t1 = 1, t2, . . . , tN ≤T be the steps in which a new round starts. 
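Proposition 3 above can be read off directly as a sufficient visit count: once $N_t(s,a)$ reaches the stated threshold, all confidence intervals for (s, a) are smaller than θ. A small sketch (function name ours; α as in the analysis):

```python
import math

def sufficient_visits(t, theta, n_states, n_actions, alpha=3.0):
    """Visit count N_t(s,a) that Proposition 3 requires so that all
    confidence intervals for (s,a) are smaller than theta."""
    return math.ceil(math.log(4 * t**alpha * n_states**2 * n_actions)
                     / (2 * theta**2))
```

As expected, the required number of visits grows as $1/\theta^2$ when the target precision θ shrinks.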
As the regret in each round can be upper bounded by its length, one obtains for the regret caused by failure of confidence intervals
$\sum_{i=1}^{N-1} \tfrac{t_i^{-\alpha}}{|S||A|}(t_{i+1} - t_i) \le \sum_{i=1}^{N-1} \tfrac{t_i^{-\alpha}}{|S||A|}\, c\, t_i < \sum_{t=1}^{\infty} \tfrac{c\, t^{1-\alpha}}{|S||A|} < c'$,
using that $t_{i+1} - t_i < c\, t_i$ for a suitable constant $c = c(|S|, |A|, T_M)$ and provided that α > 2.

3.3.4 Putting Everything Together

Summing up over all the sources of regret and replacing for εp yields the following theorem, which is a generalization of similar results that were achieved for the multi-armed bandit problem in [8].

Theorem 2. On unichain MDPs, the expected total regret of the UCRL algorithm with respect to an (ε-)optimal policy after T > 1 steps can be upper bounded by
$E(R^{\varepsilon}_T) < \mathrm{const} \cdot \tfrac{|A|\, T_M \kappa_M^2 |S|^5}{\varepsilon^2} \log T + 3 T_M |S|^2 |A|^2 \log_2 \tfrac{T}{|S||A|}$, and
$E(R_T) < \mathrm{const} \cdot \tfrac{|A|\, T_M \kappa_M^2 |S|^5}{\Delta^2} \log T + 3 T_M |S|^2 |A|^2 \log_2 \tfrac{T}{|S||A|}$.

4 Remarks and Open Questions on Multichain MDPs

In a multichain MDP a policy π may split up the MDP into ergodic subchains $S^{\pi}_i$. Thus it may happen during the learning phase that one goes wrong and ends up in a part of the MDP that gives suboptimal return but cannot be left under any policy whatsoever. As already observed by Kearns and Singh [1], in this case it seems fair to compete with $\rho^*(M) := \max_{\pi} \min_{S^{\pi}_i} \rho(S^{\pi}_i, \pi)$. Unfortunately, the original UCRL algorithm may not work very well in this setting, as it is impossible for the algorithm to distinguish between a very low probability for a transition and its impossibility. Here the "optimism in the face of uncertainty" idea fails, as there is no way to falsify the wrong belief in a possible transition. Obviously, if we knew for each policy which subchains it induces on M (the MDP's ergodic structure), UCRL could choose an MDP $\tilde M_t$ and a policy $\tilde\pi_t$ that maximize the reward among all plausible MDPs with the given ergodic structure. However, only the empirical ergodic structure (based on the observations so far) is known.
As the empirical ergodic structure may not be reliable, one may additionally explore the ergodic structures of all policies. Alas, the number of additional exploration steps will depend on the smallest positive transition probability. If the latter is not known, it seems that logarithmic online regret bounds can no longer be guaranteed. However, we conjecture that for a slightly modified algorithm the logarithmic online regret bounds still hold for communicating MDPs, in which for any two states s, s' there is a suitable policy π such that s is reachable from s' under π (i.e., s, s' are contained in the same subchain $S^{\pi}_i$). As Theorem 1 does not hold for communicating MDPs in general, a proof would need a different analysis.

5 Conclusion and Outlook

Besides the open problems on multichain MDPs, it is an interesting question whether our results also hold when the mixing time is defined via the fastest rather than the slowest policy for reaching any state. Another research direction is to consider value function approximation and continuous reinforcement learning problems. For practical purposes, using the variance of the estimates will reduce the width of the upper confidence bounds and will make the exploration even more focused, improving learning speed and regret bounds. In this setting, we have experimental results comparable to those of the MBIE algorithm [10], which clearly outperforms other learning algorithms like R-Max or ε-greedy.

Acknowledgements. This work was supported in part by the Austrian Science Fund FWF (S9104-N04 SP4) and the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.

References
[1] Michael J. Kearns and Satinder P. Singh. Near-optimal reinforcement learning in polynomial time. Mach. Learn., 49:209–232, 2002.
[2] Ronen I. Brafman and Moshe Tennenholtz. R-max – a general polynomial time algorithm for near-optimal reinforcement learning. J. Mach.
Learn. Res., 3:213–231, 2002.
[3] Sham M. Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
[4] Alexander L. Strehl and Michael L. Littman. A theoretical analysis of model-based interval estimation. In Proc. 22nd ICML, pages 857–864, 2005.
[5] Alexander L. Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L. Littman. PAC model-free reinforcement learning. In Proc. 23rd ICML, pages 881–888, 2006.
[6] Claude-Nicolas Fiechter. Efficient reinforcement learning. In Proc. 7th COLT, pages 88–97. ACM, 1994.
[7] Peter Auer and Ronald Ortner. Online regret bounds for a new reinforcement learning algorithm. In Proc. 1st ACVW, pages 35–42. ÖCG, 2005.
[8] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. J. Mach. Learn. Res., 3:397–422, 2002.
[9] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multi-armed bandit problem. Mach. Learn., 47:235–256, 2002.
[10] Alexander L. Strehl and Michael L. Littman. An empirical evaluation of interval estimation for Markov decision processes. In Proc. 16th ICTAI, pages 128–135. IEEE Computer Society, 2004.
[11] Leslie P. Kaelbling. Learning in Embedded Systems. MIT Press, 1993.
[12] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for reinforcement learning. In Proc. 20th ICML, pages 162–169. AAAI Press, 2003.
[13] Apostolos N. Burnetas and Michael N. Katehakis. Optimal adaptive policies for Markov decision processes. Math. Oper. Res., 22(1):222–255, 1997.
[14] Eyal Even-Dar, Sham M. Kakade, and Yishay Mansour. Experts in a Markov decision process. In Proc. 17th NIPS, pages 401–408. MIT Press, 2004.
[15] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, 1994.
[16] Grace E. Cho and Carl D. Meyer. Markov chain sensitivity measured by mean first passage times. Linear Algebra Appl., 316:21–28, 2000.
|
2006
|
156
|
2,984
|
Fast Iterative Kernel PCA

Nicol N. Schraudolph, Simon Günter, S. V. N. Vishwanathan
{nic.schraudolph,simon.guenter,svn.vishwanathan}@nicta.com.au
Statistical Machine Learning, National ICT Australia, Locked Bag 8001, Canberra ACT 2601, Australia
Research School of Information Sciences & Engineering, Australian National University, Canberra ACT 0200, Australia

Abstract

We introduce two methods to improve convergence of the Kernel Hebbian Algorithm (KHA) for iterative kernel PCA. KHA has a scalar gain parameter which is either held constant or decreased as 1/t, leading to slow convergence. Our KHA/et algorithm accelerates KHA by incorporating the reciprocal of the current estimated eigenvalues as a gain vector. We then derive and apply Stochastic Meta-Descent (SMD) to KHA/et; this further speeds convergence by performing gain adaptation in RKHS. Experimental results for kernel PCA and spectral clustering of USPS digits as well as motion capture and image de-noising problems confirm that our methods converge substantially faster than conventional KHA.

1 Introduction

Principal Components Analysis (PCA) is a standard linear technique for dimensionality reduction. Given a matrix $X \in \mathbb{R}^{n \times l}$ of l centered, n-dimensional observations, PCA performs an eigendecomposition of the covariance matrix $Q := XX^\top$. The $r \times n$ matrix W whose rows are the eigenvectors of Q associated with the $r \le n$ largest eigenvalues minimizes the least-squares reconstruction error
$\|X - W^\top W X\|_F$,   (1)
where $\|\cdot\|_F$ is the Frobenius norm. As it takes $O(n^2 l)$ time to compute Q and up to $O(n^3)$ time to eigendecompose it, PCA can be prohibitively expensive for large amounts of high-dimensional data. Iterative methods exist that do not compute Q explicitly and thereby reduce the computational cost to $O(rn)$ per iteration. One such method is Sanger's [1] Generalized Hebbian Algorithm (GHA), which updates W as
$W_{t+1} = W_t + \eta_t[\, y_t x_t^\top - \mathrm{lt}(y_t y_t^\top) W_t \,]$.
(2) Here xt ∈Rn is the observation at time t, yt := Wtxt, and lt(·) makes its argument lower triangular by zeroing all elements above the diagonal. For an appropriate scalar gain ηt, Wt will generally tend to converge to the principal component solution as t →∞; though its global convergence is not proven [2]. One can do better than PCA in minimizing the reconstruction error (1) by allowing nonlinear projections of the data into r dimensions. Unfortunately such approaches often pose difficult nonlinear optimization problems. Kernel methods [3] provide a way to incorporate nonlinearity without unduly complicating the optimization problem. Kernel PCA [4] performs an eigendecomposition on the kernel expansion of the data, an l×l matrix. To reduce the attendant O(l2) space and O(l3) time complexity, Kim et al. [2] introduced the Kernel Hebbian Algorithm (KHA) kernelizing GHA. Both GHA and KHA are examples of stochastic approximation algorithms, whose iterative updates employ individual observations in place of — but, in the limit, approximating — statistical properties of the entire data. By interleaving their updates with the passage through the data, stochastic approximation algorithms can greatly outperform conventional methods on large, redundant data sets, even though their convergence is comparatively slow. Both the GHA and KHA updates incorporate a scalar gain parameter ηt, which is either held fixed or annealed according to some predefined schedule. Robbins and Monro [5] established conditions on the sequence of ηt that guarantee the convergence of many stochastic approximation algorithms; a widely used annealing schedule that obeys these conditions is ηt ∝τ/(t + τ), for any τ > 0. Here we propose the inclusion of a gain vector in the KHA, which provides each estimated eigenvector with its individual gain parameter. 
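Before turning to the gain vector, note that the GHA update (2) is only a few lines of numpy; the sketch below is an illustration of the update rule, not the authors' code, and the function name is ours:

```python
import numpy as np

def gha_step(W, x, eta):
    """One Generalized Hebbian Algorithm update, eq. (2).

    W   : (r, n) current eigenvector estimates (one per row)
    x   : (n,) centered observation
    eta : scalar gain
    """
    y = W @ x                        # projections y_t = W_t x_t, shape (r,)
    lt = np.tril(np.outer(y, y))     # lt(y y^T): zero everything above the diagonal
    return W + eta * (np.outer(y, x) - lt @ W)
```

Run on data whose variance is dominated by one axis, the first row of W tends toward the leading principal component, as the text describes.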
We present two methods for setting these gains: In the KHA/et algorithm, the gain of an eigenvector is reciprocal to its estimated eigenvalue as well as the iteration number t [6]. Our second method, KHA-SMD, additionally employs Schraudolph’s [7] Stochastic Meta-Descent (SMD) technique for adaptively controlling a gain vector for stochastic gradient descent, derived and applied here in Reproducing Kernel Hilbert Space (RKHS), cf. [8]. The following section summarizes Kim et al.’s [2] KHA. Sections 3 and 4 describe our KHA/et and KHA-SMD algorithms, respectively. We report our experiments with these algorithms in Section 5 before concluding with a discussion. 2 Kernel Hebbian Algorithm (KHA and KHA/t) Kim et al. [2] apply Sanger’s [1] GHA to data mapped into a reproducing kernel Hilbert space (RKHS) H via the function Φ : Rn →H. H and Φ are implicitly defined via the kernel k : Rn × Rn →H with the property ∀x, x′ ∈Rn : k(x, x′) = ⟨Φ(x), Φ(x′)⟩H, where ⟨·, ·⟩H denotes the inner product in H. Let Φ denote the transposed mapped data: Φ := [Φ(x1), Φ(x2), . . . Φ(xl)]⊤. (3) This assumes a fixed set of l observations whereas GHA relies on an infinite sequence of observations for convergence. Following Kim et al. [2], we use an indexing function p : N →Zl which concatenates random permutations of Zl to reconcile this discrepancy. PCA, GHA, and hence KHA all assume that the data is centered. Since the mapping into feature space performed by kernel methods does not necessarily preserve such centering, we must re-center the mapped data: Φ′ := Φ −MΦ, (4) where M denotes the l × l matrix with entries all equal to 1/l. This is achieved by replacing the kernel matrix K := ΦΦ⊤(i.e., [K]ij := k(xi, xj)) by its centered version K′ := Φ′Φ′⊤= (Φ −MΦ)(Φ −MΦ)⊤= K −MK −(MK)⊤+ MKM. (5) Since all rows of MK are identical (as are all elements of MKM) we can precalculate that row in O(l2) time and store it in O(l) space to efficiently implement operations with the centered kernel. 
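The centering trick just described never needs the full matrix M: since K is symmetric, (5) reduces to subtracting the row and column means of K and adding back the grand mean. A minimal sketch (the function name is ours):

```python
import numpy as np

def center_kernel(K):
    """Centered kernel K' = K - MK - (MK)^T + MKM, with M = (1/l) 11^T, eq. (5)."""
    row_mean = K.mean(axis=0)      # one row of MK (all rows of MK are identical)
    total_mean = K.mean()          # any element of MKM
    return K - row_mean[None, :] - row_mean[:, None] + total_mean
```

For a linear kernel K = X X^T this reproduces exactly the Gram matrix of the mean-subtracted data, which is a convenient sanity check.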
The kernel centered on the training data is also used when testing the trained system on new data. From Kernel PCA [4] it is known that the principal components must lie in the span of the centered mapped data; we can therefore express the GHA weight matrix as Wt = AtΦ′, where A is an r × l matrix of expansion coefficients, and r the number of principal components. The GHA weight update (2) thus becomes At+1Φ′ = AtΦ′ + ηt[ytΦ′(xp(t))⊤−lt(yty⊤ t )AtΦ′], (6) where yt := WtΦ′(xp(t)) = AtΦ′Φ′(xp(t)) = Atk′ p(t), (7) using k′ i to denote the ith column of the centered kernel matrix K′. Since we have Φ′(xi)⊤= e⊤ i Φ′, where ei is the unit vector in direction i, (6) can be rewritten solely in terms of expansion coefficients as At+1 = At + ηt[yte⊤ p(t) −lt(yty⊤ t )At]. (8) Introducing the update coefficient matrix Γt := yte⊤ p(t) −lt(yty⊤ t )At (9) we obtain the compact update rule At+1 = At + ηtΓt. (10) In their experiments, Kim et al. [2] employed the KHA update (8) with a constant scalar gain, ηt = const. They also proposed letting the gain decay as ηt = 1/t for stationary data. 3 Gain Decay with Reciprocal Eigenvalues (KHA/et) Consider the term ytx⊤ t = Wtxtx⊤ t appearing on the right-hand side of the GHA update rule (2). At the desired solution, the rows of Wt contain the principal components, i.e., the leading eigenvectors of Q = XX⊤. The elements of yt thus scale with the associated eigenvalues of Q. Wide spreads of eigenvalues can therefore lead to ill-conditioning, hence slow convergence, of the GHA; the same holds for the KHA. In our KHA/et algorithm, we counteract this problem by furnishing KHA with a gain vector ηt that provides each eigenvector estimate with its individual gain parameter. The update rule (10) thus becomes At+1 = At + diag(ηt) Γt, (11) where diag(·) turns a vector into a diagonal matrix. 
To condition KHA, we set the gain parameters proportional to the reciprocal of both the iteration number t and the current estimated eigenvalue; a similar approach was used by Chen and Chang [6] for neural network feature selection. Let λt be the vector of eigenvalues associated with the current estimate (as stored in At) of the first r eigenvectors. KHA/et sets the ith element of ηt to
$[\eta_t]_i = \frac{\|\lambda_t\|}{[\lambda_t]_i} \, \frac{l}{t + l} \, \eta_0$,   (12)
where η0 is a free scalar parameter, and l the size of the data set. This conditions the KHA update (8) by proportionately decreasing (increasing) the gain for rows of At associated with large (small) eigenvalues. The norm $\|\lambda_t\|$ in the numerator of (12) is maximized by the principal components; its growth serves to counteract the l/(t + l) gain decay while the leading eigenspace is identified. This achieves an effect comparable to an adaptive "search then converge" gain schedule [9] without introducing any tuning parameters. As the goal of KHA is to find the eigenvectors in the first place, we don't know the true eigenvalues while running the algorithm. Instead we use the eigenvalues associated with KHA's current eigenvector estimate, computed as
$[\lambda_t]_i = \frac{\|[A_t]_{i*} K'\|}{\|[A_t]_{i*}\|}$,   (13)
where $[A_t]_{i*}$ denotes the i-th row of At. This can be stated compactly as
$\lambda_t = \sqrt{\frac{\mathrm{diag}[A_t K' (A_t K')^\top]}{\mathrm{diag}(A_t A_t^\top)}}$,   (14)
where the division and square root operations are performed element-wise, and diag(·) (when applied to a matrix) extracts the vector of elements along the matrix diagonal. Note that naive computation of AK′ is quite expensive: $O(rl^2)$. Since the eigenvalues evolve gradually, it suffices to re-estimate them only occasionally; we determine λt and ηt once for each pass through the training data set, i.e., every l iterations. Below we derive a way to maintain AK′ incrementally in an affordable $O(rl)$ via Equations (17) and (18).
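Equations (12) through (14) translate directly into a few lines of numpy. The sketch below recomputes A K' from scratch for clarity (the expensive step the text warns about); the function name is ours:

```python
import numpy as np

def kha_et_gains(A, Kc, t, eta0):
    """KHA/et gain vector, eqs. (12)-(14).

    A  : (r, l) expansion coefficients
    Kc : (l, l) centered kernel matrix
    """
    AK = A @ Kc
    # eq. (13)/(14): per-row eigenvalue estimates, element-wise sqrt and division
    lam = np.sqrt((AK * AK).sum(axis=1) / (A * A).sum(axis=1))
    l = Kc.shape[0]
    return np.linalg.norm(lam) / lam * l / (t + l) * eta0     # eq. (12)
```

In the algorithm proper, AK' would instead be the incrementally maintained matrix of Equations (17) and (18), making this step O(rl).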
4 KHA with Stochastic Meta-Descent (KHA-SMD) While KHA/et makes reasonable assumptions about how the gains of a KHA update should be scaled, it is by no means clear how close the resulting gains are to being optimal. To explore this question, we now derive and implement the Stochastic Meta-Descent (SMD [7]) algorithm for KHA/et. SMD controls gains adaptively in response to the observed history of parameter updates so as to optimize convergence. Here we focus on the specifics of applying SMD to KHA/et; please refer to [7, 8] for more general derivations and discussion of SMD. Using the KHA/et gains as a starting point, the KHA-SMD update is At+1 = At + ediag(ρt) diag(ηt) Γt, (15) where the log-gain vector ρt is adjusted by SMD. (Note that the exponential of a diagonal matrix is obtained simply by exponentiating the individual diagonal entries.) In an RKHS, SMD adapts a scalar log-gain whose update is driven by the inner product between the gradient and a differential of the system parameters, all in the RKHS [8]. Note that ΓtΦ′ can be interpreted as the gradient in the RKHS of the (unknown) merit function maximized by KHA, and that (15) can be viewed as r coupled updates in RKHS, one for each row of At, each associated with a scalar gain. SMD-KHA’s adaptation of the log-gain vector is therefore driven by the diagonal entries of ⟨ΓtΦ′, BtΦ′⟩H, where Bt := dAt denotes the r × l matrix of expansion coefficients for SMD’s differential parameters: ρt = ρt−1 + µ diag(⟨ΓtΦ′, BtΦ′⟩H) = ρt−1 + µ diag(ΓtΦ′Φ′⊤B⊤ t ) = ρt−1 + µ diag(ΓtK′B⊤ t ), (16) where µ is a scalar tuning parameter. Naive computation of ΓtK′ in (16) would cost O(rl2) time, which is prohibitively expensive for large l. We can, however, reduce this cost to O(rl) by noting that (9) implies ΓtK′ = yte⊤ p(t)K′ −lt(yty⊤ t )AtK′ = ytk′⊤ p(t) −lt(yty⊤ t )AtK′, (17) where the r × l matrix AtK′ can be stored and updated incrementally via (15): At+1K′ = AtK′ + ediag(ρt) diag(ηt) ΓtK′. 
(18) The initial computation of A1K′ still costs O(rl2) in general but is affordable as it is performed only once. Alternatively, the time complexity of this step can easily be reduced to O(rl) by making A1 suitably sparse. Finally, we apply SMD’s standard update of the differential parameters: Bt+1 = ξBt + ediag(ρt) diag(ηt) (Γt + ξdΓt), (19) where the decay factor 0 ≤ξ ≤1 is another scalar tuning parameter. The differential dΓt of the gradient is easily computed by routine application of the rules of calculus: dΓt = d[yte⊤ p(t) −lt(yty⊤ t )At] = (dAt)k′ p(t)e⊤ p(t) −lt(yty⊤ t )(dAt) −[d lt(yty⊤ t )]At (20) = Btk′ p(t)e⊤ p(t) −lt(yty⊤ t )Bt −lt(Btk′ p(t)y⊤ t + ytk′⊤ p(t)B⊤ t )At. Inserting (9) and (20) into (19) yields the update rule Bt+1 = ξBt + ediag(ρt) diag(ηt)[(At+ ξBt) k′ p(t)e⊤ p(t) (21) −lt(yty⊤ t )(At+ ξBt) −ξ lt(Btk′ p(t)y⊤ t + ytk′⊤ p(t)B⊤ t )At]. In summary, the application of SMD to KHA/et comprises Equations (16), (21), and (15), in that order. The complete KHA-SMD algorithm is given as Algorithm 1. We initialize A1 to an isotropic normal density with suitably small variance, B1 to all zeroes, and ρ0 to all ones. The worst-case time complexity of non-trivial initialization steps is given explicitly; all steps in the repeat loop have a time complexity of O(rl) or less. Algorithm 1 KHA-SMD 1. Initialize: (a) calculate MK, MKM — O(l2) (b) ρ0 := [1 . . . 1]⊤ (c) B1 := 0 (d) A1 ∼N(0, (rl)−1I) (e) calculate A1K′ — O(rl2) 2. Repeat for t = 1, 2, . . . 
(a) calculate λt (13)
(b) calculate ηt (12)
(c) select observation x_{p(t)}
(d) calculate yt (7)
(e) calculate Γt (9)
(f) calculate ΓtK′ (17)
(g) update ρ_{t−1} → ρt (16)
(h) update Bt → Bt+1 (21)
(i) update At → At+1 (15)
(j) update AtK′ → At+1K′ (18)

5 Experiments

We compared our KHA/et and KHA-SMD algorithms with KHA using either a fixed gain (ηt = η0) or a scheduled gain decay (ηt = η0 l/(t + l), denoted KHA/t) in a number of different settings: performing kernel PCA and spectral clustering on the well-known USPS dataset [10], replicating an image denoising experiment of Kim et al. [2], and denoising human motion capture data. In all experiments the Kernel Hebbian Algorithm (KHA) and our enhanced variants are used to find the first r eigenvectors of the centered kernel matrix K′. To assess the quality of the result, we reconstruct the kernel matrix from the found eigenvectors and measure the reconstruction error
$E(A) := \|K' - (AK')^\top AK'\|_F$,   (22)
where $\|\cdot\|_F$ is the Frobenius norm. The minimal reconstruction error from r eigenvectors, $E_{\min} := \min_A E(A)$, can be calculated by an eigendecomposition. This allows us to report reconstruction errors as excess errors relative to the optimal reconstruction, i.e., $E(A)/E_{\min} - 1$. To compare algorithms we plot the excess reconstruction error on a logarithmic scale after each pass through the entire data set. This is a fair comparison since the overhead for KHA/et and KHA-SMD is negligible compared to the time required by the KHA base algorithm. The most expensive operation, the calculation of a row of the kernel matrix, is shared by all algorithms. We manually tuned η0 for KHA, KHA/t, and KHA/et; for KHA-SMD we hand-tuned µ, used the same η0 as KHA/et, and the value ξ = 0.99 (set a priori) throughout. Thus a comparable amount of tuning effort went into each algorithm. Parameters were tuned by a local search over values in the set $\{a \cdot 10^b : a \in \{1, 2, 5\},\ b \in \mathbb{Z}\}$.
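The evaluation metric (22) and its optimum are straightforward to compute for small l; an illustrative sketch (function names ours), which obtains $E_{\min}$ by setting the rows of A to the top eigenvectors of K' scaled by the reciprocal square roots of their eigenvalues:

```python
import numpy as np

def recon_error(A, Kc):
    """E(A) = ||K' - (A K')^T A K'||_F, eq. (22)."""
    AK = A @ Kc
    return np.linalg.norm(Kc - AK.T @ AK, 'fro')

def excess_error(A, Kc, r):
    """Excess relative reconstruction error E(A)/E_min - 1."""
    w, V = np.linalg.eigh(Kc)                 # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:r]
    A_opt = (V[:, top] / np.sqrt(w[top])).T   # rows v_i / sqrt(lambda_i)
    return recon_error(A, Kc) / recon_error(A_opt, Kc) - 1.0
```

With A_opt so defined, $(A K')^\top A K' = \sum_{i \le r} \lambda_i v_i v_i^\top$, the rank-r eigentruncation of K', so $E(A_{\mathrm{opt}}) = E_{\min}$ by the Eckart-Young theorem.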
5.1 USPS Digits

Our first set of experiments was performed on a subset of the well-known USPS dataset [10], namely the first 100 samples of each digit in the USPS training data. KHA with both a dot-product kernel and a Gaussian kernel with σ = 8 (the value of σ used by Mika et al. [11]) was used to extract the first 16 eigenvectors. The results are shown in Figure 1. KHA/et clearly outperforms KHA/t for both kernels, and KHA-SMD is able to increase the convergence speed even further.

Figure 1: Excess relative reconstruction error for kernel PCA (16 eigenvectors) on USPS data, using a dot-product (left) vs. Gaussian kernel with σ = 8 (right).

5.2 Multipatch Image PCA

For our second set of experiments we replicated the image de-noising problem used by Kim et al. [2], the idea being that reconstructing image patches from their r leading eigenvectors will eliminate most of the noise. The image considered here is the famous Lena picture [12], which was divided into four sub-images. From each sub-image, 11×11 pixel windows were sampled on a grid with two-pixel spacing to produce 3844 vectors of 121 pixel intensity values each. The KHA with Gaussian kernel (σ = 1) was used to find the 20 best eigenvectors for each sub-image. Results averaged over all four sub-images are shown in Figure 2 (left), including KHA with the constant gain of η0 = 0.05 employed by Kim et al. [2] for comparison. After 50 passes through the training data, KHA/et achieves an excess reconstruction error two orders of magnitude better than conventional KHA; KHA-SMD yields an additional order of magnitude improvement. KHA/t, while superior to a constant gain, is comparatively ineffective here. Kim et al. [2] performed 800 passes through the training data. Replicating this approach we obtain a reconstruction error of 5.64%, significantly worse than KHA/et and KHA-SMD after 50 passes.
The signal-to-noise ratio (SNR) of the reconstruction after 800 passes with constant gain is 13.46 (Kim et al. [2] reported an SNR of 14.09; the discrepancy is due to different reconstruction methods), while KHA/et achieves comparable performance much faster, reaching an SNR of 13.49 in 50 passes.

Figure 2: Excess relative reconstruction error (left) for multipatch image PCA on a noisy Lena image (center), using a Gaussian kernel with σ = 1; denoised image obtained by KHA-SMD (right).

5.3 Spectral Clustering

Spectral clustering [13] is a clustering method which includes the extraction of the first kernel PCs. In this section we present results of the spectral clustering of all 7291 patterns of the USPS data [10], where 10 kernel PCs were obtained by KHA. We used the spectral clustering method presented in [13], and evaluate our results via the Variation of Information (VI) metric [14], which compares the clustering obtained by spectral clustering to that induced by the class labels. On the USPS data, a VI of 4.54 corresponds to random performance, while clustering in perfect accordance with the class labels would give a VI of zero. Our results are shown in Figure 3. Again KHA-SMD dominates KHA/et in both convergence speed and quality of reconstruction (left); KHA/et in turn outperforms KHA/t. The quality of the resulting clustering (right) reflects the quality of reconstruction. KHA/et and KHA-SMD produce a clustering as good as that obtained from a (computationally expensive) full kernel PCA within 10 passes through the data; KHA/t after more than 30 passes.

Figure 3: Excess relative reconstruction error (left) and quality of clustering as measured by variation of information (right) for spectral clustering of the USPS data with a Gaussian kernel (σ = 8).
5.4 Human motion denoising In our final set of experiments we employed KHA to denoise a human walking motion trajectory from the CMU motion capture database (http://mocap.cs.cmu.edu), converted to Cartesian coordinates via Neil Lawrence’s Matlab Motion Capture Toolbox (http://www.dcs.shef. ac.uk/∼neil/mocap/). The experimental setup was similar to that of Tangkuampien and Suter [15]: Gaussian noise was added to the frames of the original motion, then KHA with 25 PCs was used to denoise them. The results are shown in Figure 4. As in the other experiments, KHA-SMD clearly outperformed KHA/et, which in turn was better than KHA/t. KHA-SMD managed to reduce the mean-squared error by 87.5%; it is hard to visually Figure 4: From left to right: Excess relative reconstruction error on human motion capture data with Gaussian kernel (σ = √ 1.5), one frame of the original data, a superposition of this original and the noisy data, and a superposition of the original and reconstructed (denoised) data. detect a difference between the denoised frames and the original ones — see Figure 4 (right) for an example. We include movies of the original, noisy, and denoised walk in the supporting material. 6 Discussion We modified Kim et al.’s [2] Kernel Hebbian Algorithm (KHA) by providing a separate gain for each eigenvector estimate. We then presented two methods, KHA/et and KHA-SMD, to set those gains. KHA/et sets them inversely proportional to the estimated eigenvalues and iteration number; KHA-SMD enhances that further by applying Stochastic Meta-Descent (SMD [7]) to perform gain adaptation in RKHS [8]. In four different experimental settings both methods were compared to a conventional gain decay schedule. As measured by relative reconstruction error, KHA-SMD clearly outperformed KHA/et, which in turn outperformed the scheduled decay, in all our experiments. 
Acknowledgments National ICT Australia is funded by the Australian Government’s Department of Communications, Information Technology and the Arts and the Australian Research Council through Backing Australia’s Ability and the ICT Center of Excellence program. This work is supported by the IST Program of the European Community, under the Pascal Network of Excellence, IST-2002-506778.
| 2006 | 157 | 2,985 |
Uncertainty, phase and oscillatory hippocampal recall Máté Lengyel and Peter Dayan Gatsby Computational Neuroscience Unit University College London 17 Queen Square, London WC1N 3AR, United Kingdom {lmate,dayan}@gatsby.ucl.ac.uk Abstract Many neural areas, notably, the hippocampus, show structured, dynamical, population behavior such as coordinated oscillations. It has long been observed that such oscillations provide a substrate for representing analog information in the firing phases of neurons relative to the underlying population rhythm. However, it has become increasingly clear that it is essential for neural populations to represent uncertainty about the information they capture, and the substantial recent work on neural codes for uncertainty has omitted any analysis of oscillatory systems. Here, we observe that, since neurons in an oscillatory network need not only fire once in each cycle (or even at all), uncertainty about the analog quantities each neuron represents by its firing phase might naturally be reported through the degree of concentration of the spikes that it fires. We apply this theory to memory in a model of oscillatory associative recall in hippocampal area CA3. Although it is not well treated in the literature, representing and manipulating uncertainty is fundamental to competent memory; our theory enables us to view CA3 as an effective uncertainty-aware, retrieval system. 1 Introduction In a network such as hippocampal area CA3 that shows prominent oscillations during memory retrieval and other functions [1], there are apparently three, somewhat separate, ways in which neurons might represent information within a single cycle: they must choose how many spikes to fire; what the mean phase of those spikes is; and how concentrated those spikes are about that mean.
Most groups working on the theory of spiking oscillatory networks have considered only the second of these – this is true, for instance, of Hopfield’s work on olfactory representations [2] and Yoshioka’s [3] and Lengyel & Dayan’s work [4] on analog associative memories in CA3. Since neurons do really fire more or less than one spike per cycle, and furthermore in a way that can be informationally rich [5, 6], this poses a key question as to what the other dimensions convey. The number of spikes per cycle is an obvious analog of a conventional firing rate. Recent sophisticated models of firing rates of single neurons and neural populations treat them as representing uncertainty about the quantities coded, partly driven by the strong psychophysical and computational evidence that uncertainty plays a key role in many aspects of neural processing [7, 8, 9]. Single neurons can convey the certainty of a binary proposition by firing more or less strongly [10, 11]; a whole population can use firing rates to convey uncertainty about a collectively-coded analog quantity [12]. However, if neurons can fire multiple spikes per cycle, then the degree to which the spikes are concentrated around a mean phase is an additional channel for representing information. Concentration is not merely an abstract quantity; rather we can expect that the effect of the neuron on its postsynaptic partners will be strongly influenced by the burstiness of the spikes, an effect apparent, for instance, in the complex time-courses of short term synaptic dynamics. Here, we suggest that concentration codes for the uncertainty about phase – highly concentrated spiking represents high certainty about the mean phase in the cycle. One might wonder whether uncertainty is actually important for the cases of oscillatory processing that have been identified. One key computation for spiking oscillatory networks is memory retrieval [3, 4]. 
Although it is not often viewed this way, memory retrieval is a genuinely probabilistic task [13, 14], with the complete answer to a retrieval query not being a single memory pattern, but rather a distribution over memory patterns. This is because at the time of the query the memory device only has access to incomplete information regarding the memory trace that needs to be recalled. Most importantly, the way memory traces are stored in the synaptic weight matrix implies a lossy data compression algorithm, and therefore the original patterns cannot be decompressed at retrieval with absolute certainty. In this paper, we first describe how oscillatory structures can use all three activity characteristics at their disposal to represent two pieces of information and two forms of uncertainty (Section 2). We then suggest that this representational scheme is appropriate as a model of uncertainty-aware probabilistic recall in CA3. We derive the recurrent neural network dynamics that manipulate these firing characteristics such that by the end of the retrieval process neurons represent a good approximation of the posterior distribution over memory patterns given the information in the recall cue and in the synaptic weights between neurons (Section 3). We show in numerical simulations that the derived dynamics lead to competent memory retrieval, supplemented by uncertainty signals that are predictive of retrieval errors (Section 4). 2 Representation Single cell The heart of our proposal is a suggestion for how to interpret the activity of a single neuron in a single oscillatory cycle (such as a theta-cycle in the hippocampus) as representing a probability distribution. This is a significant extension of standard work on single-neuron representations of probability [12].
We consider a distribution over two random variables: z ∈ {0, 1}, a Bernoulli variable (for the case of memory, representing the participation of the neuron in the memory pattern), and x ∈ [0, T), where T is the period of the underlying oscillation, a real-valued phase variable (representing an analog quantity associated with that neuron if it participates in that pattern). This distribution is based on three quantities associated with the neuron’s activity (figure 1A): r, the number of spikes in a cycle; φ, the circular mean phase of those spikes, under the assumption that there is at least one spike; and c, the concentration of the spikes (mean resultant length of their phases, [15]), which measures how tightly clustered they are about φ. In keeping with conventional single-neuron models, we treat r, via a (monotonically increasing) probabilistic activation function 0 ≤ ρ(r) ≤ 1, as describing the probability that z = 1 (figure 1B), so the distribution is $q(z; r) = \rho(r)^z (1 - \rho(r))^{1-z}$. We treat the implied distribution over the true phase x as being conditional on z. If z = 0, then the phase is undefined. However, if z = 1, then the distribution over x is a mixture of q⊓(x), a uniform distribution on [0, T), and a narrow, quasi-delta, distribution q⊥(x; φ) (of width ϵ ≪ T) around the mean firing phase (φ) of the spikes. The mixing proportion in this case is determined by a (monotonically increasing) function 0 ≤ γ(c) ≤ 1 of the concentration of the spikes. In total:
$$q(x, z; \phi, c, r) = \left[\rho(r)\left(\gamma(c)\, q_\perp(x; \phi) + (1 - \gamma(c))\, q_\sqcap(x)\right)\right]^z \left(1 - \rho(r)\right)^{1-z} \quad (1)$$
as shown in figure 1C. The marginal confidence in φ being correct is thus $\lambda(c, r) = \gamma(c) \cdot \rho(r)$, which we call ‘burst strength’.
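The single-cell code of equation 1 can be made concrete in a short sketch. All functional forms below (the sigmoid ρ, the identity-like γ, a box-shaped stand-in for the quasi-delta q⊥) and all numerical values are illustrative assumptions, not choices made in the paper:

```python
import numpy as np

T = 1.0      # oscillation period (normalized; assumption)
EPS = 0.02   # width of the quasi-delta component q_perp (assumption, EPS << T)

def rho(r):
    # probabilistic activation function: P(z = 1) from spike count r
    # (a sigmoid is one monotonically increasing choice)
    return 1.0 / (1.0 + np.exp(-(r - 1.0)))

def gamma(c):
    # mixing weight from the concentration c in [0, 1] (identity, for illustration)
    return np.clip(c, 0.0, 1.0)

def circular_stats(spike_phases):
    # circular mean phase phi and mean resultant length c of the spikes [15]
    ang = 2 * np.pi * np.asarray(spike_phases) / T
    z = np.mean(np.exp(1j * ang))
    phi = (np.angle(z) % (2 * np.pi)) * T / (2 * np.pi)
    return phi, np.abs(z)

def q_density(x, z, phi, c, r):
    # equation 1: Bernoulli over z; for z = 1, a mixture of a narrow box
    # around phi (standing in for the quasi-delta) and a uniform on [0, T)
    if z == 0:
        return 1.0 - rho(r)
    d = np.abs((x - phi + T / 2) % T - T / 2)   # circular distance to phi
    q_peak = (1.0 / EPS) if d < EPS / 2 else 0.0
    q_uniform = 1.0 / T
    return rho(r) * (gamma(c) * q_peak + (1.0 - gamma(c)) * q_uniform)
```

Under this sketch the burst strength λ(c, r) = γ(c)·ρ(r) is exactly the probability mass the neuron places on the peaked component.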
We can rewrite equation 1 in a more convenient form:
$$q(x, z; \phi, c, r) = \left[\lambda(c, r)\, q_\perp(x; \phi) + \left(\rho(r) - \lambda(c, r)\right) q_\sqcap(x)\right]^z \left(1 - \rho(r)\right)^{1-z} \quad (2)$$
Population In the case of a population of neurons, the complexity of representing a full joint distribution P[x, z] over random variables x = {xi}, z = {zi} associated with each neuron i grows exponentially with the number of neurons N. The natural alternative is to consider an approximation in which neurons make independent contributions, with marginals as in equation 2.
Figure 1: Representing uncertainty. A) A neuron’s firing times during a period [0, T) are described by three parameters: r, the number of spikes; φ, the mean phase of those spikes; and c, the phase concentration. B) The firing rate r determines the probability ρ(r) that a Bernoulli variable associated with the unit takes the value z = 1. C) If z = 1, then φ and c jointly define a distribution over phase which is a mixture (weighted by γ(c)) of a distribution peaked at φ and a uniform distribution.
The joint distribution is then
$$Q(\mathbf{x}, \mathbf{z}; \boldsymbol{\phi}, \mathbf{c}, \mathbf{r}) = \prod_i q(x_i, z_i; \phi_i, c_i, r_i) \quad (3)$$
whose complexity scales linearly with N.
Dynamics When the actual distribution P the population has to represent lies outside the class of representable distributions Q in equation 3 with independent marginals, a key computational step is to find activity parameters φ, c, r for the neurons that make Q as close to P as possible. One way to formalize the discrepancy between the two distributions is the KL-divergence
$$F(\boldsymbol{\phi}, \mathbf{c}, \mathbf{r}) = \mathrm{KL}\left[Q(\mathbf{x}, \mathbf{z}; \boldsymbol{\phi}, \mathbf{c}, \mathbf{r}) \,\|\, P(\mathbf{x}, \mathbf{z})\right] \quad (4)$$
Minimizing this by gradient descent,
$$\tau \frac{d\phi_i}{dt} = -\frac{\partial F(\boldsymbol{\phi}, \mathbf{c}, \mathbf{r})}{\partial \phi_i} \qquad \tau \frac{dc_i}{dt} = -\frac{\partial F(\boldsymbol{\phi}, \mathbf{c}, \mathbf{r})}{\partial c_i} \qquad \tau \frac{dr_i}{dt} = -\frac{\partial F(\boldsymbol{\phi}, \mathbf{c}, \mathbf{r})}{\partial r_i} \quad (5)$$
defines dynamics for the evolution of the parameters.
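Equation 5 is simply gradient descent on the free energy F. A toy numerical instance of this idea (dropping the phase variables and using only two Bernoulli units so that everything can be enumerated; the target P below is an arbitrary made-up joint, not from the paper) looks like:

```python
import numpy as np

# Target joint P(z1, z2) over the four binary states 00, 01, 10, 11 (arbitrary example)
P = np.array([0.1, 0.2, 0.3, 0.4])

def kl(theta):
    # F = KL[Q || P] for a factorized Q with sigmoid-parameterized marginals
    p = 1.0 / (1.0 + np.exp(-theta))            # marginal firing probabilities
    Q = np.array([(1 - p[0]) * (1 - p[1]), (1 - p[0]) * p[1],
                  p[0] * (1 - p[1]), p[0] * p[1]])
    return float(np.sum(Q * np.log(Q / P)))

def grad(theta, h=1e-5):
    # numerical gradient of F (the paper derives the analytic form instead)
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta); e[i] = h
        g[i] = (kl(theta + e) - kl(theta - e)) / (2 * h)
    return g

theta = np.zeros(2)                              # start from uniform marginals
for _ in range(2000):
    theta -= 0.1 * grad(theta)                   # Euler step of tau * dtheta/dt = -dF/dtheta
```

At convergence the gradient vanishes and the factorized Q is the best product-form approximation reachable from the starting point; in the full model the analogous updates couple neurons through the network, since F depends jointly on all activity parameters.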
In general, this couples the activities of neurons, defining recurrent interactions within the network.¹ We have thus suggested a general representational framework, in which the specification of a computational task amounts to defining a P distribution which the network should represent as best as possible. Equation 5 then defines the dynamics of the interaction between the neurons that optimizes the network’s approximation.
¹Of course, the firing rate is really an integer variable, since it is an actual number of spikes per cycle. For simplicity, in the simulations below, we considered real-valued firing rates – an important next step is to drop this assumption.
3 CA3 memory One of the most widely considered tasks that recurrent neural networks need to solve is that of autoassociative memory storage and retrieval. Moreover, hippocampal area CA3, which is thought to play a key role in memory processing, exhibits oscillatory dynamics in which firing phases are known to play an important functional role. It is therefore an ideal testbed for our theory. We characterize the activity in CA3 neurons during recall as representing the probability distribution over memories being recalled. Treating storage from a statistical perspective, we use Bayes rule to define a posterior distribution over the memory pattern implied by a noisy and partial cue. This distribution is represented approximately by the activities φi, ri, ci of the neurons in the network as in equation 3. Recurrent dynamics among the neurons as in equation 5 find appropriate values of these parameters, and model network interactions during recall in CA3. Storage We consider CA3 as storing patterns in which some neurons are quiet ($z_i^m = 0$, for the ith neuron in the mth pattern) and other neurons are active ($z_i^m = 1$); their activity then defining a firing phase ($x_i^m \in [0, T)$, where T is the period of the population oscillation).
M such memory traces, each drawn from an (iid) prior distribution
$$P[\mathbf{x}, \mathbf{z}] = \prod_i \left[p_z P(x_i)\right]^{z_i} (1 - p_z)^{1 - z_i} \quad (6)$$
(where pz is the prior probability of firing in a memory pattern and P(x) is the prior distribution for firing phases), are stored locally and additively in the recurrent synaptic weight matrix W of a network of N neurons, according to learning rule Ω:
$$W_{ij} = \sum_{m=1}^{M} z_i^m z_j^m \,\Omega\!\left(x_i^m, x_j^m\right) \text{ for } i \neq j, \quad W_{ii} = 0 \quad (7)$$
We assume that Ω is Toeplitz and periodic in T, and either symmetric or anti-symmetric: Ω(x1, x2) = Ω(x1 − x2) = Ω(x1 − x2 mod T) = ±Ω(x2 − x1).
Posterior for memory recall Following [14, 4], we characterize retrieval in terms of the posterior distribution over x, z given three sources of information: a recall cue (˜x, ˜z), the synaptic weight matrix, and the prior over the memories. Under some basic independence assumptions, this factorizes into three terms
$$P[\mathbf{x}, \mathbf{z} \mid \tilde{\mathbf{x}}, \tilde{\mathbf{z}}, W] \propto P[\mathbf{x}, \mathbf{z}] \cdot P[\tilde{\mathbf{x}}, \tilde{\mathbf{z}} \mid \mathbf{x}, \mathbf{z}] \cdot P[W \mid \mathbf{x}, \mathbf{z}] \quad (8)$$
The first term is the prior (equation 6). The second term is the likelihood of receiving a noisy or partial recall cue (˜x, ˜z) if the true pattern to be recalled was (x, z):
$$P[\tilde{\mathbf{x}}, \tilde{\mathbf{z}} \mid \mathbf{x}, \mathbf{z}] = \prod_i \left[\left(\eta_1 \tilde{P}_1(\tilde{x}_i \mid x_i)\right)^{\tilde{z}_i} (1 - \eta_1)^{1 - \tilde{z}_i}\right]^{z_i} \left[\left((1 - \eta_0)\, \tilde{P}_0(\tilde{x}_i)\right)^{\tilde{z}_i} \eta_0^{1 - \tilde{z}_i}\right]^{1 - z_i} \quad (9)$$
where η1 = P[˜z = 1 | z = 1] and η0 = P[˜z = 0 | z = 0] are the probabilities of the presence or absence of a spike in the input given the presence or absence of a spike in the memory to be recalled, and ˜P1(˜x | x) and ˜P0(˜x) are the distributions of the phase of an input spike if there was or was not a spike in the memory to be recalled. The last term in equation 8 is the likelihood that weight matrix W arose from M patterns including (x, z), for which we make the factorized approximation $P[W \mid \mathbf{x}, \mathbf{z}] \simeq \prod_{i, j \neq i} P[W_{ij} \mid x_i, z_i, x_j, z_j]^{1/2}$.
Since the learning rule is additive and memory traces are drawn iid, the likelihood of a synaptic weight is approximately Gaussian for large M, with a quadratic log-likelihood (up to an additive constant) [4]:
$$\log P[W_{ij} \mid x_i, z_i, x_j, z_j] = \frac{z_i z_j}{\sigma_W^2} \left[(W_{ij} - \mu_W)\,\Omega(x_i, x_j) - \frac{1}{2}\Omega^2(x_i, x_j)\right] + \text{const} \quad (10)$$
where $\mu_W$ and $\sigma_W^2$ are the mean and variance of the distribution of synaptic weights after storing M − 1 random memory traces ($\mu_W = 0$ for antisymmetric Ω).
Dynamics for memory recall Plugging the posterior from equation 8 into the general dynamics of equation 5 yields the neuronal update rules that are appropriate for uncertainty-aware memory recall, and which we treat as a model of recurrent dynamics in CA3. We give the exact formulæ for the dynamics in the supplementary material. They can be shown to couple together the various activity parameters of the neurons in appropriate ways, for instance weighting changes to φi for neuron i according to the burst strength of its presynaptic inputs, and increasing the concentration when the log posterior of the firing phase of the neuron, given that it should fire, log P[φi | zi = 1, ˜x, ˜z, W], is greater than the average of the log posterior. These dynamics generalize, and thus inherit, some of the characteristics of the purely phase-based network suggested in [4]. This means that they also inherit the match with physiologically-measured phase response curves (PRCs) from in vitro CA3 neurons that were measured to test this suggestion [16]. The key difference here is that we expect the magnitude (though not the shape) of the influence of a presynaptic neuron on the phase of a postsynaptic one to scale with its rate, for high concentration. Preliminary in vitro results show that PRCs recorded in response to burst stimulation are not qualitatively different from PRCs induced by single spikes; however, it remains to be seen if their magnitude scales in the way implied by the dynamics here.
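The storage step of equation 7 can be illustrated by building the weight matrix directly from sampled traces. The sinusoidal Ω, the uniform phase prior, and all numerical parameters here are assumptions made for the sketch, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T, p_z = 50, 10, 1.0, 0.5     # network size, pattern count, period, firing prior

# Draw M iid memory traces from the prior of equation 6 (uniform phase prior assumed)
Z = (rng.random((M, N)) < p_z).astype(float)   # z_i^m: participation in pattern m
X = rng.random((M, N)) * T                     # x_i^m: firing phases

def omega(dx):
    # learning rule Omega: periodic in T and antisymmetric (a sine is one valid choice)
    return np.sin(2 * np.pi * dx / T)

# Equation 7: W_ij = sum_m z_i^m z_j^m Omega(x_i^m - x_j^m), with zero diagonal
W = np.zeros((N, N))
for m in range(M):
    W += np.outer(Z[m], Z[m]) * omega(X[m][:, None] - X[m][None, :])
np.fill_diagonal(W, 0.0)
```

With an antisymmetric Ω the resulting W is itself antisymmetric, consistent with the remark in the text that μ_W = 0 in that case.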
Figure 2: A single retrieval trial in the network. Time evolution of firing phases (left panels), concentrations (middle panels), and rates (right panels) of neurons that should (top row) or should not (bottom row) participate in the memory pattern being retrieved. Note that firing phases in the top row are plotted as a difference from the stored firing phases so that φ = 0 means perfect retrieval. Color code shows precision (blue: low, yellow: high) of the phase of the input to neurons, with red lines showing cells receiving incorrect input rate.
4 Simulations Figure 2 shows the course of recall in the full network (with N = 100 neurons, and 10 stored patterns with pz = 0.5). For didactic convenience, we consider the case that the noise in the phase input was varied systematically for different neurons within a recall cue (a fact known to the network, ie incorporated into its dynamics), so that it is possible to see how the differential certainty evolves over the course of the network’s dynamics. The top left panel shows that neurons that should fire in the memory trace (ie for which z = 1) quickly converge on their correct phase, and that this convergence usually takes a longer time for neurons receiving more uncertain input. This is paralleled by the way their firing concentrations change (top middle panel): neurons with reliable input immediately increase their concentrations from the initial γ(c) = 0.5 value to γ(c) = 1, while for those having more unreliable input it takes a longer time to build up confidence about their firing phases (and by the time they become confident their phases are indeed correct).
Neurons that should not fire (z = 0) build up their confidence even more slowly, and more often remain fairly uncertain or only moderately certain about their firing phases, as expressed by their concentrations (middle bottom panel) – quite righteously. Finally, since the firing rate input to the network is correct 90% of the time, most neurons that should or should not fire do or do not fire, respectively, with maximal certainty about their rate (top and bottom right panels). Various other metrics are important for providing insight into the operation of the network. In particular, we may expect there to be a relationship between the actual error in the phase of firing of the neurons recalled by the memory, and the firing rates and concentrations (in the form of burst strengths) of the associated neurons themselves. Neurons which are erring should whisper rather than shout. Figure 3A shows just this for the network. Here, we have sorted the neurons according to their burst strengths λ, and plotted histograms of errors in firing phase for each group. The lower the burst strength, the more likely are large errors – at least to an approximation. A similar relationship exists between recalled (analogue) and stored (binary) firing rates, where extreme values of the recalled firing rate indicate that the stored firing rate was 0 or 1 with higher certainty (Figure 3B). Figure 3C shows the results of a related analysis of experimental data kindly donated by Francesco Battaglia. He recorded neurons in hippocampal area CA1 (not CA3, although we may hope for some similar properties) whilst rats were shuttling on a linear track for food reward. CA1 neurons have place fields – locations in the environment where they respond with spikes – and the phases of these spikes relative to the ongoing theta oscillation in the hippocampus are also known to convey information about location in space [5].
To create the plot, we first selected epochs with high-quality and high-power theta activity in the hippocampus (to ensure that phase is well estimated). We then computed the mean firing phase within the theta cycle, φ, of each neuron as a function of the location of the rat, separately for each visit to the same location. We assumed that the ‘true’ phase x a neuron should recall at a given location is the average of these phases across different visits. We then evaluated the error a neuron was making at a given location on a given visit as the difference between its φ in that trial at that location and the ‘true’ phase x associated with that location. This allowed us to compute statistics of the error in phase as a function of the burst strength.
Figure 3: Uncertainty signals are predictive of the error a cell is making both in simulation (A, B), and as recorded from behaving animals (C). Burst strength signals overall uncertainty about, and thus predicts error in, mean firing phase (A, C), while graded firing rates signal certainty about whether to fire or not (B).
The curves in the figure show that, as for the simulation, burst strength is at least partly inversely correlated with actual phase error, defined in terms of the overall activity in the population. Of course, this does not constitute a proof of our representational theory. One further way to evaluate the memory is to compare it to two existing associative memories that have previously been studied, and can be seen as special cases. On one hand, our memory adds the dimension of phase to the uncertainty-aware rate-based memory that Sommer & Dayan [14] studied.
This memory made a somewhat similar variational approximation, but, as for the mean-field Boltzmann machine [17], only involving r and ρ(r) and no phases. On the other hand, the memory device can be seen as adding the dimension of rate to the phase-based memory that Lengyel & Dayan [4] treated. Note, however, that although this phase-based network used superficially similar probabilistic principles to the one we have developed here, in fact it did not operate according to uncertainty, since it made the key simplification that all neurons participate in all memories, and that they also fire exactly one spike on every cycle during recall. This restricted the dynamics of that network to perform maximum a posteriori (MAP) inference to find the single recalled pattern of activity that best accommodated the probabilistic constraints of the cue, the prior and the synaptic weights, rather than being able to work in the richer space of probabilistic recall of the dynamics we are suggesting here. Given these roots, we can follow the logic in figure 4 and compare the performance of our memory with these precursors in the cases for which they are designed. For instance, to compare with the rate-based network, we construct memories which include phase information. During recall, we present cues with relatively accurate rates, but relatively inaccurate phases, and evaluate the extent to which the network is perturbed by the presence of the phases (which, of course, it has to store in the single set of synaptic weights). Figure 4A shows exactly this comparison. Here, a relatively small network (N = 100) was used, with memories that are dense (pz = 0.5), and it is therefore a stringent test of the storage capacity. Performance is evaluated by calculating the average error made in recalled firing rates.
In the figure, the two blue curves are for the full model (with the phase information in the input being relatively unreliable, its circular concentration parameter distributed uniformly between 0.1 and 10 across cells); the two yellow curves are for a network with only rates (which is similar to that described, but not simulated, by Sommer & Dayan [14]). Exactly the same rate information is provided to all networks, and is 10% inaccurate (a degree known to the dynamics in the form of η0 and η1). The two flat dashed lines show the performance in the case that there are no recurrent synaptic weights at all. This is an important control, since we are potentially presenting substantial information in the cues themselves. The two solid curves show that the full model tracks the reduced, rate-based, model almost perfectly until the performance totally breaks down. This shows that the phase information, and the existence of phase uncertainty and processing during recall, does not corrupt the network’s capacity to recall rates. Given its small size, the network is quite competent as an auto-associator.
Figure 4: Recall performance compared with a rate-only network (A) and a phase-only network (B). The full model (blue lines) performs just as well as the reduced ‘specialist’ models (yellow lines) in comparable circumstances (when the information provided to the networks in the dimension they shared is exactly the same). All models (solid lines) outperform the standard control of using the input and the prior alone (dashed lines).
Figure 4B shows a similar comparison between this network and a network that only has to deal with uncertainty in firing phases but not in rates. Again, its performance at recalling phase, given uncertain and noisy phase cues, but good rate-cues, is exactly on a par with the pure, phase-based network. Further, the average errors are only modest, so the capacity of the network for storing analog phases is also impressive. 5 Discussion We have considered an interpretation of the activities of neurons in oscillating structures such as area CA3 of the hippocampus as representing distributions over two underlying quantities, one binary and one analogue. We also showed how this representational capacity can be used to excellent effect in the key, uncertainty-sensitive computation of memory recall, an operation in which CA3 is known to be involved. The resulting network model of CA3 encompasses critical aspects of its physiological properties, notably information-bearing firing rates and phases. Further, since it generalizes earlier theories of purely phase-based memories, this model is also consistent with the measured phase response curves of CA3 neurons, which characterize their actual dynamical interactions. Various aspects of this new theory are amenable to experimental investigation. First, the full dynamics (see the supplementary material) imply that firing rate and firing phase should be coupled together both pre-synaptically, in terms of the influence of timed input spikes, and post-synaptically, in terms of how changes in the activity of a neuron should depend on its own activity. In vitro experiments along the lines of those carried out before [16], in which we have precise experimental control over pre- and post-synaptic activity, can be used to test these predictions.
Further, making the sort of assumptions that underlie figure 3C, we can use data from awake behaving rats to see if the gross statistics of the changes in the activity of the neurons fit the expectations licensed by the theory. From a computational perspective, we have demonstrated that the network is a highly competent associative memory, correctly recalling both binary and analog information, along with certainty about it, and degrading gracefully in the face of overload. In fact, compared with the representation of other analogue quantities (such as the orientation of a visually presented bar), analogue memory actually poses a particularly tough problem for the representation of uncertainty. This is because for variables like orientation, a whole population is treated as being devoted to the representation of the distribution of a single scalar value. By contrast, for analogue memory, each neuron has an independent analogue value, and so the dimensionality of the distribution scales with the number of neurons involved. This extra representational power comes from the ability of neurons to distribute their spikes within a cycle to indicate their uncertainty about phase (using the dimension of time in just the same way that distributional population codes [12] used the dimension of neural space). This dimension for representing analogue uncertainty is coupled to that of the firing rate for representing binary uncertainty, since neurons have to fire multiple times in a cycle to have a measurable lack of concentration. However, this coupling is exactly appropriate given the form of the distribution assumed in equation 2, since weakly firing neurons express only weak certainty about phase in any case. In fact, it is conceivable that we could combine a different model for the firing rate uncertainty with this model for analogue uncertainty, if, for instance, it is found that neuronal firing rates covary in ways that are not anticipated from equation 2.
Finally, the most important direction for future work is understanding the uncertainty-sensitive coupling between multiple oscillating memories, where the oscillations, though dynamically coordinated, need not have the same frequencies. Exactly this seems to characterize the interaction between the hippocampus and the neocortex during both consolidation and retrieval [18, 19]. Acknowledgments Funding from the Gatsby Charitable Foundation. We are very grateful to Francesco Battaglia for allowing us to use his data to produce figure 3C, and to him, and Ole Paulsen and Jeehyun Kwag for very helpful discussions. References [1] Szalisznyó K, Érdi P. In The Handbook of Brain Theory and Neural Networks, 533, 2003. [2] Hopfield JJ. Nature 376:33, 1995. [3] Yoshioka M. Physical Review E 65, 2001. [4] Lengyel M, Dayan P. In Advances in Neural Information Processing Systems 17, 769, Cambridge, MA, 2005. MIT Press. [5] O’Keefe J, Recce ML. Hippocampus 3:317, 1993. [6] Huxter J, et al. Nature 425:828, 2003. [7] Ernst M, Banks M. Nature 415:429, 2002. [8] Körding K, Wolpert D. Nature 427:244, 2004. [9] Gold JI, Shadlen MN. Neuron 36:299, 2002. [10] Hinton G. Neural Comput 1:143, 1990. [11] Peterson C, Anderson J. Complex Systems 1:995, 1987. [12] Pouget A, et al. Annu Rev Neurosci 26:381, 2003. [13] MacKay DJC. In Maximum Entropy and Bayesian Methods, Laramie, 1990, 237, 1991. [14] Sommer FT, Dayan P. IEEE Trans Neural Netw 9:705, 1998. [15] Fisher NI. Statistical analysis of circular data. Cambridge University Press, 1995. [16] Lengyel M, et al. Nat Neurosci 8:1677, 2005. [17] Dayan P, Abbott LF. Theoretical Neuroscience. MIT Press, 2001. [18] Siapas AG, Wilson MA. Neuron 21:1123, 1998. [19] Jones M, Wilson M. PLoS Biol 3:e402, 2005.
| 2006 | 158 | 2,986 |
Speakers optimize information density through syntactic reduction Roger Levy Department of Linguistics UC San Diego 9500 Gilman Drive La Jolla, CA 92093-0108, USA rlevy@ling.ucsd.edu T. Florian Jaeger Department of Linguistics & Department of Psychology Stanford University & UC San Diego 9500 Gilman Drive La Jolla, CA 92093-0109, USA tiflo@csli.stanford.edu Abstract If language users are rational, they might choose to structure their utterances so as to optimize communicative properties. In particular, information-theoretic and psycholinguistic considerations suggest that this may include maximizing the uniformity of information density in an utterance. We investigate this possibility in the context of syntactic reduction, where the speaker has the option of either marking a higher-order unit (a phrase) with an extra word, or leaving it unmarked. We demonstrate that speakers are more likely to reduce less information-dense phrases. In a second step, we combine a stochastic model of structured utterance production with a logistic-regression model of syntactic reduction to study which types of cues speakers employ when estimating the predictability of upcoming elements. We demonstrate that the trend toward predictability-sensitive syntactic reduction (Jaeger, 2006) is robust in the face of a wide variety of control variables, and present evidence that speakers use both surface and structural cues for predictability estimation. 1 Introduction One consequence of the expressive richness of natural languages is that usually more than one means exists of expressing the same (or approximately the same) message. As a result, speakers are often confronted with choices as to how to structure their intended message into an utterance. 
At the same time, linguistic communication takes place under a host of cognitive and environmental constraints: speakers and addressees have limited cognitive resources to bring to bear, speaker and addressee have incomplete knowledge of the world and of each other’s state of knowledge, the environment of communication is noisy, and so forth. Under these circumstances, if speakers are rational then we can expect them to attempt to optimize the communicative properties of their utterances. But what are the communicative properties that speakers choose to optimize? The prevalence of ambiguity in natural language—the fact that many structural analyses are typically available for a given utterance—might lead one to expect that speakers seek to minimize structural ambiguity, but both experimental (Arnold et al., 2004, inter alia) and corpus-based (Roland et al., 2006, inter alia) investigations have found little evidence for active use of ambiguity-avoidance strategies. In this paper we argue for a different locus of optimization: that speakers structure utterances so as to optimize information density. Here we use the term “information” in its most basic information-theoretic sense—the negative log-probability of an event—and by “information density” we mean the amount of information per unit comprising the utterance. If speakers behave optimally, they should structure their utterances so as to avoid peaks and troughs in information density (see also Aylett and Turk, 2004; Genzel and Charniak, 2002). For example, this principle of uniform information density (UID) as an aspect of rational language production predicts that speakers should modulate phonetic duration in accordance with the predictability of the unit expressed. This has been shown by Bell et al. (2003, inter alia) for words and by Aylett and Turk (2004) for syllables.
If UID is a general principle of communicative optimality, however, its effects should be apparent at higher levels of linguistic production as well. In line with this prediction are the results of Genzel and Charniak (2002); Keller (2004), who found that sentences taken out of context have more information the later they occur in a discourse. For phonetic reduction, choices about word duration can directly modulate information density. However, it is less clear how the effects of UID at higher levels of language production observed by Genzel and Charniak (2002) and Keller (2004) come about. Genzel and Charniak (2002) show that at least part of their result is driven by the repetition of open-class words, but it is unclear how this effect relates to a broader range of choice points within language production. In particular, it is unclear whether any choices above the lexical level are affected by information density (as expected if UID is general). In this paper we present the first evidence that speakers’ choices during syntactic planning are affected by information density optimization. This evidence comes from syntactic reduction—a phenomenon in which speakers have the choice of either marking a phrase with an optional word, or leaving it unmarked (Section 3). We show that in cases where the phrase is marked, the marking reduces the phrase’s information density, and that the phrases that get marked are the ones that would otherwise be the most information-dense (Section 4). This provides crucial support for UID as a general principle of language production. The possibility that speakers’ use of syntactic reduction optimizes information density leads to questions as to how speakers estimate the probability of an upcoming syntactic event. In particular, one can ask what types of cues language users employ when estimating these probabilities.
For example, speakers could compute information density using only surface cues (such as the words immediately preceding a phrase). On the other hand, they might also take structural features of the utterance into account. We investigate these issues in Section 5 using an incremental model of structured utterance production. In this model, the predictability of the upcoming phrase markable by the optional word is taken as a measure of the phrase’s information density. The resulting predictability estimate, in turn, becomes a covariate in a separate model of syntactic reduction. Through this two-step modeling approach we show that predictability is able to explain a significant part of the variability in syntactic reduction, and that evidence exists for speakers using both structural and surface cues in estimating phrasal predictability. 2 Optimal information density in linguistic utterances We begin with the information-theoretic definition that the information conveyed by a complete utterance u is u’s Shannon information content (also called its surprisal), $\log_2 \frac{1}{P(u)}$. If the complete utterance u is realized in n units (for example, words $w_i$), then the information conveyed by u is the sum of the information conveyed by each unit of u:

$$\log \frac{1}{P(u)} = \log \frac{1}{P(w_1)} + \log \frac{1}{P(w_2 \mid w_1)} + \cdots + \log \frac{1}{P(w_n \mid w_1 \cdots w_{n-1})} \qquad (1)$$

For simplicity we assume that each $w_i$ occupies an equal amount of time (for spoken language) or space (written language). Optimization of information density entails that the information conveyed by each $w_i$ should be as uniform and close to an ideal value as possible. There are at least two ways in which UID may be optimal. First, the transmission of a message via spoken or written language can be viewed as a noisy channel.
From this assumption it follows that information density is optimized near the channel capacity, where speakers maximize the rate of information transmission while minimizing the danger of a mistransmitted message (see also Aylett (2000); Aylett and Turk (2004); Genzel and Charniak (2002)). That is, UID is an optimal solution to the problem of rapid yet error-free communication in a noisy environment. Second and independently of whether linguistic communication is viewed as a noisy channel, UID can be seen as minimizing comprehension difficulty. The difficulty incurred by a comprehender in processing a word $w_i$ is positively correlated with its surprisal (Hale, 2001; Levy, 2006). If the effect of surprisal on difficulty is superlinear, then the total difficulty of the utterance u ($\sum_{i=1}^{n} [\log \frac{1}{P(w_i \mid w_1 \cdots w_{i-1})}]^k$ with $k > 1$) is minimized when information density is uniform (for proof see appendix; see also Levy 2005, ch. 2).1 That is, UID is also an optimal solution to the problem of low-effort comprehension. 3 Syntactic reduction UID would be optimal in several ways, but do speakers actually consider UID as a factor when making choices during online syntactic production? We address this question by directly linking a syntactic choice point to UID. If information density optimization is general, i.e. if it applies to all aspects of language production, we should find its effects even in structural choices. We use variation in the form of certain types of English relative clauses (henceforth RCs) to test this hypothesis. At the onset of an RC speakers can, but do not have to, utter the relativizer that.2 We refer to the omission of that as syntactic REDUCTION. (1) How big is [NP the familyi [RC (that) you cook for __i ]]? Our dataset consists of a set of 3,452 RCs compatible with the above variation, extracted from the Switchboard corpus of spontaneous American English speech.
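The superlinear-difficulty argument from Section 2 can be checked with a quick toy computation (the numbers below are invented for illustration; this is not the paper's formal proof, which is in the appendix):

```python
# Toy numerical check: hold the total information of an utterance fixed
# and compare a uniform surprisal profile against a peaked one under a
# superlinear difficulty function sum_i (info_i)**k with k > 1.
def total_difficulty(profile, k=2.0):
    return sum(info ** k for info in profile)

uniform = [4.0, 4.0, 4.0]   # 12 bits spread evenly over three words
peaked = [10.0, 1.0, 1.0]   # the same 12 bits, concentrated on one word

assert sum(uniform) == sum(peaked)
print(total_difficulty(uniform), total_difficulty(peaked))  # 48.0 102.0
```

With any superlinear exponent the peaked profile costs strictly more, which is the sense in which UID minimizes total comprehension difficulty.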
All RCs were automatically annotated for a variety of control factors that are known to influence syntactic reduction of RCs, including RC size, distance of the RC from the noun it modifies, data about the speaker including gender and speech rate, local measures of speech disfluency, and formal and animacy properties of the RC subject (a full list is given in the appendix; see also (Jaeger, 2006)). These control factors are used in the logistic regression models presented in Section 5. 4 Reduction as a means of information density modulation From a syntactic perspective, the choice to omit a relativizer means that the first word of an RC conveys two pieces of information simultaneously: the onset of a relative clause and part of its internal contents (usually part of its subject, as you in Example (1)). Using the notation $w_{\cdots-1}$ for the context preceding the RC and $w_1$ for the RC’s first word (excluding the relativizer, if any), these two pieces of information can be expressed as a Markov decomposition of $w_1$’s surprisal:

$$\log \frac{1}{P(w_1 \mid w_{\cdots-1})} = \log \frac{1}{P(\mathrm{RC} \mid w_{\cdots-1})} + \log \frac{1}{P(w_1 \mid \mathrm{RC}, w_{\cdots-1})} \qquad (2)$$

Conversely, the choice to use a relativizer separates out these two pieces of information, so that the only information carried by $w_1$ is measured as

$$\log \frac{1}{P(w_1 \mid \mathrm{RC}, \textit{that}, w_{\cdots-1})} \qquad (3)$$

If the overall distribution of syntactic reduction is in accordance with principles of information-density optimization, we should expect that full forms (overt relativizers) should be used more often when the information density of the RC would be high if the relativizer were omitted. The information density of the RC and subsequent parts of the sentence can be quantified by their Shannon information content. As a first test of this prediction, we use n-gram language models to measure the relationship between the Shannon information content of the first word of an RC and the tendency toward syntactic reduction.
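The additivity behind Equations (1) and (2) is just the chain rule of probability, which a short sketch with invented probabilities can confirm (the variable names and numbers below are illustrative assumptions, not values from the paper):

```python
import math

# With the relativizer omitted, the first RC word w1 carries both the
# surprisal of the RC onset and the surprisal of w1 given that an RC has
# begun. All probabilities here are invented for illustration.
p_rc_given_ctx = 0.2    # P(RC | w...-1)
p_w1_given_rc = 0.1     # P(w1 | RC, w...-1)

# Surprisal of w1 in the reduced form (joint event: RC onset AND w1):
reduced = math.log2(1 / (p_rc_given_ctx * p_w1_given_rc))
# Equation (2): the same quantity as the sum of the two component surprisals.
split = math.log2(1 / p_rc_given_ctx) + math.log2(1 / p_w1_given_rc)

assert abs(reduced - split) < 1e-9
print(round(reduced, 3))  # 5.644 bits
```

Uttering an overt relativizer strips the first term off the RC's first word, which is exactly the sense in which the full form lowers the information density of what follows.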
We examined the relationship between rate of syntactic reduction and the surprisal that $w_1$ would have if no relativizer had been used—that is, $\log \frac{1}{P(w_1 \mid w_{\cdots-1})}$—as estimated by a trigram language model.3 To eliminate circularity from this test (the problem that for an unreduced RC token, $P(w_1 \mid w_{\cdots-1})$ may be low precisely because that is normally inserted between $w_{\cdots-1}$ and $w_1$), we estimated $P(w_1 \mid w_{\cdots-1})$ from a version of the Switchboard corpus in which all optional relativizers were omitted. That is, if we compare actual English with a hypothetical pseudo-English differing only in the absence of optional relativizers, are the overt relativizers in actual English distributed in a way such that they occur more in the contexts that would be of highest information density in the pseudo-English?4 For every actual instance of an RC onset $\cdots w_{-2} w_{-1} (that) w_1 \cdots$ we calculated the trigram probability $P(w_1 \mid w_{-2} w_{-1})$: that is, an estimate of the probability that $w_1$ would have if no relativizer had been used, regardless of whether a relativizer was actually used or not. We then examined the relationship between this probability and the outcome event: whether or not a relativizer was actually used. Figure 1 shows the relationship between the different quantiles of the log-probability of $w_1$ and the likelihood of syntactic reduction. As can be seen, reduction is more common when the probability $P(w_1 \mid w_{-n} \cdots w_{-1})$ is high.

1 Superlinearity would be a natural consequence of limited cognitive resources, although the issue awaits further empirical investigation.

2 To be precise, standard American English restricts omission of that to finite, restrictive, non-pied-piped, non-extraposed, non-subject-extracted relative clauses. Only such RCs are considered here.

[Figure 1: RC n-gram-estimated information density and syntactic reduction. x-axis: $\log P(w_1 \mid w_{-2} w_{-1})$; y-axis: likelihood of full form (N = 1674). Dotted green line indicates lowess fit.]
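The n-gram scheme described in footnote 3 (condition on the preceding bigram; if that context was never seen, back off to the preceding unigram; apply no other smoothing) can be sketched with toy counts. The corpus and function name below are inventions for illustration, and the hold-one-out aspect is omitted:

```python
from collections import Counter

# Count trigrams, bigrams, and context occurrences from a toy corpus.
corpus = "you cook for the family you cook for".split()
tri, bi, ctx1 = Counter(), Counter(), Counter()
for i, w in enumerate(corpus):
    if i >= 1:
        bi[(corpus[i - 1], w)] += 1   # also serves as bigram-context counts
        ctx1[corpus[i - 1]] += 1      # unigram-context counts
    if i >= 2:
        tri[(corpus[i - 2], corpus[i - 1], w)] += 1

def p_next(w, w2, w1):
    """Estimate P(w | w2 w1), backing off from bigram to unigram context."""
    if bi[(w2, w1)] > 0:
        return tri[(w2, w1, w)] / bi[(w2, w1)]
    if ctx1[w1] > 0:
        return bi[(w1, w)] / ctx1[w1]
    return None  # conditioning unigram unseen: such cases were omitted

print(p_next("for", "you", "cook"))  # 1.0 in this toy corpus
```

Applied to the relativizer-stripped corpus, this kind of estimate plays the role of $P(w_1 \mid w_{-2} w_{-1})$ above.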
This inverse relationship between $w_1$ surprisal and relativizer use matches the predictions of UID.5 5 Structural predictability and speaker choice Section 4 provides evidence that speakers’ choices about syntactic reduction are correlated with information density: RC onsets that would be more informationally dense in reduced form are less likely to be reduced. This observation does not, however, provide strong evidence that speakers are directly sensitive to information density in their choices about reduction. Furthermore, if speakers are sensitive to information density in their reduction choices, it raises a new question: what kind of information is taken into account in speakers’ estimation of information density? This section addresses the questions of whether reduction is directly sensitive to information density, and what information might be used in estimates of information density, using a two-step modeling approach. The first step involves an incremental stochastic model of structured utterance production. This model is used to construct estimates of the first term in Equation (2) contributing to an RC onset’s information density: the predictability (conditional probability) of an RC beginning at a

3 In cases where the conditioning bigram was not found, we backed off to a conditioning unigram, and omitted cases where the conditioning unigram could not be found; no other smoothing was applied. We used hold-one-out estimation of n-gram probabilities to prevent bias.

4 Omitting optional relativizers in the language model can alternatively be interpreted as assuming that speakers equate (3) with the second term of (2)—that is, the presence or absence of the relativizer is ignored in estimating the probability of the first word of a relative clause.

5 We also calculated the relationship for estimates of RC information density using a trigram model of the Switchboard corpus as-is.
By this method, there is a priori reason to expect a correlation, and indeed reduction is (more strongly than in Figure 1) negatively correlated with this measure.

[Figure 2: A flattened-tree representation of a sentence containing an RC: the parse of “it’s one of the last few things in the world [RC you’d ever want to do . . . ]”, with candidate RC attachment sites NP(1) “one of the last few things in the world”, NP(2) “the last few things in the world”, and NP(3) “the world”. The incremental parse through world consists of everything to the left of the dashed line.]

given point in a sentence, given an incremental structural representation for the sentence up to that point. Because the target event space of this term is small, a wide variety of cues, or features, can be included in the model, and the reliability of the resulting predictability estimates is relatively high. This model is described in Section 5.1. The resulting predictability estimates serve as a crucial covariate in the second step: a logistic regression model including a number of control factors (see Section 3 and appendix). This model is used in Section 5.3 as a stringent test of the explanatory power of UID for speakers’ reduction choices, and in Section 5.4 to determine whether evidence exists for speakers using structural as well as surface cues in their predictability estimates. 5.1 A structural predictability model In this section we present a method of estimating the predictability of a relative clause in its sentential context, contingent on the structural analysis of that context. For simplicity, we assume that structural analyses are context-free trees, and that the complete, correct incremental analysis of the sentential context is available for conditioning.6 In general, the task is to estimate

$$P(\mathrm{RC}_{n+1\cdots} \mid w_{1\cdots n}, T_{1\cdots n}) \qquad (4)$$

that is, the probability that a phrase of type RC appears in the utterance beginning at $w_{n+1}$, given the incremental structured utterance $\langle w_{1\cdots n}, T_{1\cdots n} \rangle$.
To estimate these probabilities, we model production as a fully incremental, top-down stochastic tree generation process similar to that used for parsing in Roark (2001). Tree production begins by expanding the root node, and the expansion process for each non-terminal node N consists of the following steps: (a) choosing a leftmost daughter event $D_1$ for N, and making it the active node; (b) recursively expanding the active node; and (c) choosing the next right-sister event $D_{i+1}$, and making it the active node. Steps (b) and (c) are repeated until a special right-sister event ∗END∗ is chosen in step (c), at which point expansion of N is complete. As in Collins (2003) and Roark (2001), this type of directed generative process allows conditioning on arbitrary features of the incremental utterance.

6 If predictability from the perspective of the comprehender rather than the producer is taken to be of primary interest, this assumption may seem controversial. Nevertheless, there is little evidence that incremental structural misanalysis is a pervasive phenomenon in naturally occurring language (Roland et al., 2006), and the types of incremental utterances occurring immediately before relative clauses do not seem to be good candidates for local misanalysis. From a practical perspective, assuming access to the correct incremental analysis avoids the considerable difficulty involved in the incremental parsing of speech.

After each word $w_n$, the bottom-right preterminal of the incremental parse is taken as the currently active node $N_0$; if its i-th ancestor is $N_i$ then we have:7

$$P(\mathrm{RC}_{n+1\cdots} \mid w_{1\cdots n}, T_{1\cdots n}) = \sum_{i=0}^{k} \left[ P(\mathrm{RC} \mid N_i) \prod_{j=0}^{i-1} P({*\mathrm{END}*} \mid N_j) \right] \qquad (5)$$

Figure 2 gives an example of an incremental utterance just before an RC, and illustrates how Equation (5) might be applied.8 At this point, NN would be the active node, and step (b) of expanding NP(3) would have just been completed.
An RC beginning after $w_n$ (world in Figure 2) could conceivably modify any of the NPs marked (1)-(3), and all three of those attachments may contribute probability mass to $P(\mathrm{RC}_{n+1\cdots})$, but an attachment at NP(2) can only do so if NP(1) and PP-LOC make no further expansion. 5.2 Model parameters and estimation What remains is to define the relevant event space and estimate the parameters of the tree-generation model. For RC predictability estimation, the only relevant category distinctions are between RC, ∗END∗, and any other non-null category, so we limit our event space to these three outcomes. Furthermore, because RCs are never leftmost daughters, we can ignore the parameters determining first-daughter event outcome probabilities (step (a) in Section 5.1). We estimate event probabilities using log-linear models (Berger et al., 1996; Della Pietra et al., 1997).9 We included five classes of features in our models, chosen by linguistic considerations of what is likely to help predict the next event given an active node in an incremental utterance (see Wasow et al. (in press)):

• NGRAM features: the last one, two, and three words in the incremental utterance;
• HEAD features: the head word and head part of speech (if yet seen), and animacy (for NPs) of the currently expanded node;
• HISTory features: the incremental constituent structure of the currently expanded node N, and the number of words and sister nodes that have appeared to the right of N’s head daughter;
• PRENOMinal features: when the currently expanded node is an NP, the prenominal adjectives, determiners, and possessors it contains;
• EXTernal features: when the currently expanded node is an NP, its external grammatical function, and the verb in the clause it governs.

The complete set of features used is listed in a supplementary appendix.

7 Equation (5) relies on the fact that an RC can never be the first daughter of a node expansion; the possibility of RC generation through left-recursion can thus be ignored.
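Once per-node event probabilities are available, Equation (5) itself is a small computation: walk up from the active node, accumulating the probability that every lower node has closed before an RC can attach at the current ancestor. The sketch below uses invented probabilities for three candidate attachment sites, as for NP(1)-NP(3) in Figure 2:

```python
# Sketch of Equation (5). p_rc[i] = P(RC | N_i) and p_end[i] = P(*END* | N_i)
# for the active node N_0 and its ancestors N_1..N_k; all values invented.
def p_rc_next(p_rc, p_end):
    total, close_below = 0.0, 1.0
    for i in range(len(p_rc)):
        total += p_rc[i] * close_below   # RC attaches at ancestor N_i
        close_below *= p_end[i]          # N_i must close before N_{i+1} can host the RC
    return total

# Three candidate attachment sites:
print(p_rc_next(p_rc=[0.05, 0.02, 0.01], p_end=[0.6, 0.7, 0.9]))
```

Each higher attachment site contributes less, discounted by the probability that all intervening constituents end, which mirrors the observation that NP(2) can only host the RC if NP(1) and PP-LOC expand no further.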
8 The phrase structures found in the Penn Treebank were flattened and canonicalized to ensure that the incremental parse structures do not contain implicit information about upcoming constituents. For example, RC structures are annotated with a nested NP structure, such as [NP [NP something else] [RC we could have done]]. Tree canonicalization consisted of ensuring that each phrasal node had a preterminal head daughter, and that each preterminal node headed a phrasal node, according to the head-finding algorithm of Collins (2003). VP and S nodes without a verbal head child were given special null-copula head daughters, so that the NP-internal constituency of predicative nouns without overt copulas was distinguished from sentence-level constituency.

9 The predictability models were heavily overparameterized, and to prevent overfitting were regularized with a quadratic Bayesian prior. For each trained model the value of the regularization parameter (constant for all features) was chosen to optimize held-out data likelihood. RC probabilities were estimated using ten-fold cross-validation over the entire Switchboard corpus, so that a given RC was never contained in the training data of the model used to determine its probability.

5.3 Explanatory power of phrasal predictability We use the same statistical procedures as in (Jaeger, 2006, Chapter 4) to put the predictions of the information-density hypothesis to a more stringent test. We evaluate the explanatory power of phrasal predictability in logistic regression models of syntactic reduction that include all the control variables otherwise known to influence relativizer omission (Section 3).
To avoid confounds due to clusters of data points from the same speaker, the model was bootstrapped (10,000 iterations) with random replacement of speaker clusters.10 Phrasal predictability of the RC (based on the full feature set listed in Section 5.2) was entered into this model as a covariate to test whether RC predictability co-determines syntactic reduction after other factors are controlled for. Phrasal predictability makes a significant contribution to the relativizer omission model (χ2(1) = 54.3; p < 0.0001). This demonstrates that phrasal predictability has explanatory power in this case of syntactic reduction. 5.4 Surface and structural conditioning of phrasal predictability The structural predictability model puts us in a position to ask whether empirically observed patterns of syntactic reduction give evidence for speakers’ use of some types of cues but not others. In particular, there is a question of whether predictability based on surface cues alone (the NGRAM features of Section 5.2) provides a complete description of information-density effects on speakers’ choices in syntactic reduction. We tested this by building a syntactic-reduction model containing two predictability covariates: one using NGRAM features alone, and one using all other (i.e., structural, or all-but-NGRAM) feature types listed in Section 5.2. We can then test whether the parameter weight in the reduction model for each predictability measure differs significantly from zero. It turns out that both predictability measures matter: all-but-NGRAM predictability is highly significant (χ2(1) = 23.55, p < 0.0001), but NGRAM predictability is also significant (χ2(1) = 5.28, p < 0.025). While NGRAM and all-but-NGRAM probabilities are strongly correlated (r2 = 0.70), they evidently exhibit enough differences to contribute non-redundant information in the reduction model.
We interpret this as evidence that speakers may be using both surface and structural cues for phrasal predictability estimation in utterance structuring. 6 Conclusion Using a case study in syntactic reduction, we have argued that information-density optimization—the tendency to maximize the uniformity of upcoming-event probabilities at each part of a sentence—plays an important role in speakers’ choices about structuring their utterances. This question has been previously addressed in the context of phonetic reduction of highly predictable words and syllables (Aylett and Turk, 2004; Bell et al., 2003), but not in the case of word omission. Using a stochastic tree-based model of incremental utterance production combined with a logistic regression model of syntactic reduction, we have found evidence that when speakers have the choice between using or omitting an optional function word that marks the onset of a phrase, they use the function word more often when the phrase it introduces is less predictable. We have found evidence that speakers may be using both surface and structural information to calculate upcoming-event predictabilities. The overall distribution of syntactic reduction has the effect of smoothing the information profile of the sentence: when the function word is not omitted, the information density of the immediately following words is reduced. The fact that our case study involves the omission of a single word with little to no impact on utterance meaning made the data particularly amenable to analysis, but we believe that this method is potentially applicable to a wider range of variable linguistic phenomena, such as word ordering and lexical choice.
More generally, we believe that the ensuing view of constraints on situated linguistic communication as limits on the information-transmission capacity of the environment, or on information-processing capacity of human language processing faculties, can serve as a useful framework for the study of language use.

10 Our data comes from approximately 350 speakers contributing 1 to 40 RCs (MEAN = 10, MEDIAN = 8, SD = 8.5) to the data set. Ignoring such clusters in the modeling process would cause the models to be overly optimistic. Post-hoc tests conducted on the models presented here revealed no signs of over-fitting, which means that the model is likely to generalize beyond the corpus to the population of American English speakers. The significance levels reported in this paper are based on a normal-theory interpretation of the unbootstrapped model parameter estimate, using a bootstrapped estimate of the parameter’s standard error.

On this view, syntactic reduction is available to the speaker as a pressure valve to regulate information density when it is dangerously high. Equivalently, the presence of a function word can be interpreted as a signal to the comprehender to expect the unexpected, a rational exchange of time for reduced information density, or a meaningful delay (Jaeger, 2005). More generally, reduction at different levels of linguistic form (phonetic detail, detail of referring expressions, as well as omission of words, as in the case examined here) provides a means for speakers to smooth the information-density profile of their utterances (Aylett and Turk, 2004; Genzel and Charniak, 2002). This raises important questions about the specific motivations of speakers’ choices: are these choices made for the sake of facilitating production, or as part of audience design? Finally, this view emphasizes the connection between grammatical optionality and communicative optimality.
The availability of more than one way to express a given meaning grants speakers the choice to select the optimal alternative for each communicative act. Acknowledgments This work has benefited from audience feedback at the Language Evolution and Computation research group at the University of Edinburgh, and at the Center for Research on Language at UC San Diego. The idea to derive estimates of RC predictability based on multiple cues originated in discussion with T. Wasow, P. Fontes, and D. Orr. RL’s work on this paper was supported by an ESRC postdoctoral fellowship at the School of Informatics at the University of Edinburgh (award PTA-026-27-0944). FJ’s work was supported by a research assistantship at the Linguistics Department, Stanford University (sponsored by T. Wasow and D. Jurafsky) and a post-doctoral fellowship at the Department of Psychology, UC San Diego (V. Ferreira’s NICHD grant R01 HD051030). References Arnold, J. E., Wasow, T., Asudeh, A., and Alrenga, P. (2004). Avoiding attachment ambiguities: The role of constituent ordering. Journal of Memory and Language, 51:55–70. Aylett, M. (2000). Stochastic Suprasegmentals: Relationships between Redundancy, Prosodic Structure and Care of Articulation in Spontaneous Speech. PhD thesis, University of Edinburgh. Aylett, M. and Turk, A. (2004). The Smooth Signal Redundancy Hypothesis: A functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech. Language and Speech, 47(1):31–56. Bell, A., Jurafsky, D., Fosler-Lussier, E., Girand, C., Gregory, M., and Gildea, D. (2003). Effects of disfluencies, predictability, and utterance position on word form variation in English conversation. Journal of the Acoustical Society of America, 113(2):1001–1024. Berger, A. L., Pietra, S. A. D., and Pietra, V. J. D. (1996). A Maximum Entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. Collins, M. (2003). 
Head-driven statistical models for natural language parsing. Computational Linguistics, 29:589–637. Della Pietra, S., Della Pietra, V., and Lafferty, J. (1997). Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380–393. Genzel, D. and Charniak, E. (2002). Entropy rate constancy in text. In Proceedings of ACL. Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of NAACL, volume 2, pages 159–166. Jaeger, T. F. (2005). Optional that indicates production difficulty: evidence from disfluencies. In Proceedings of Disfluency in Spontaneous Speech Workshop. Jaeger, T. F. (2006). Redundancy and Syntactic Reduction in Spontaneous Speech. PhD thesis, Stanford University, Stanford, CA. Keller, F. (2004). The entropy rate principle as a predictor of processing effort: An evaluation against eyetracking data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 317–324, Barcelona. Levy, R. (2005). Probabilistic Models of Word Order and Syntactic Discontinuity. PhD thesis, Stanford University. Levy, R. (2006). Expectation-based syntactic comprehension. Ms., University of Edinburgh. Roark, B. (2001). Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249–276. Roland, D., Elman, J. L., and Ferreira, V. S. (2006). Why is that? Structural prediction and ambiguity resolution in a very large corpus of English sentences. Cognition, 98:245–272. Wasow, T., Jaeger, T. F., and Orr, D. (in press). Lexical variation in relativizer frequency. In Wiese, H. and Simon, H., editors, Proceedings of the Workshop on Expecting the unexpected: Exceptions in Grammar at the 27th Annual Meeting of the German Linguistic Association, University of Cologne, Germany. DGfS.
The Robustness-Performance Tradeoff in Markov Decision Processes Huan Xu, Shie Mannor Department of Electrical and Computer Engineering McGill University Montreal, Quebec, Canada, H3A 2A7 xuhuan@cim.mcgill.ca shie@ece.mcgill.ca Abstract Computation of a satisfactory control policy for a Markov decision process when the parameters of the model are not exactly known is a problem encountered in many practical applications. The traditional robust approach is based on a worst-case analysis and may lead to an overly conservative policy. In this paper we consider the tradeoff between nominal performance and the worst-case performance over all possible models. Based on parametric linear programming, we propose a method that computes the whole set of Pareto efficient policies in the performance-robustness plane when only the reward parameters are subject to uncertainty. In the more general case when the transition probabilities are also subject to error, we show that the strategy with the “optimal” tradeoff might be non-Markovian and hence is in general not tractable. 1 Introduction In many decision problems the parameters of the problem are inherently uncertain. This uncertainty, termed parameter uncertainty, can be the result of estimating the parameters from a finite sample or a specification of the parameters that itself includes uncertainty. The standard approach in decision making to circumvent the adverse effect of the parameter uncertainty is to find a solution that performs best under the worst possible parameters. This approach, termed the “robust” approach, has been used in both single stage ([1]) and multi-stage decision problems (e.g., [2]). In robust optimization problems, it is usually assumed that the constraint parameters are uncertain.
By requiring the solution to be feasible for all possible parameters within the uncertainty set, Soyster ([1]) solved the column-wise independent uncertainty case, and Ben-Tal and Nemirovski ([3]) solved the row-wise independent case. In robust MDP problems, there may be two different types of parameter uncertainty, namely, reward uncertainty and transition probability uncertainty. Under the assumption that the uncertainty is state-wise independent (an assumption made by all papers to date, to the best of our knowledge), the optimality principle holds and this problem can be decomposed as a series of step-by-step mini-max problems solved by backward induction ([2, 4, 5]). The above cited results focus on worst-case analysis. This implies that the vector of nominal parameters (the parameters used as an approximation of the true ones regardless of the uncertainty) is not treated in a special way and is just an element of the set of feasible parameters. The objective of the worst-case analysis is to eliminate the possibility of disastrous performance. There are several disadvantages to this approach. First, worst-case analysis may lead to an overly conservative solution, i.e., a solution which provides mediocre performance under all possible parameters. Second, the desirability of the solution highly depends on the precise modeling of the uncertainty set, which is often based on some ad-hoc criterion. Third, it may happen that the nominal parameters are close to the real parameters, so that the performance of the solution under nominal parameters may provide important information for predicting the performance under the true parameters. Finally, there is a certain tradeoff relationship between the worst-case performance and the nominal performance, that is, if the decision maker insists on maximizing one criterion, the other criterion may decrease dramatically.
On the other hand, relaxing both criteria may lead to a well balanced solution with both satisfactory nominal performance and reasonable robustness to parameter uncertainty. In this paper we capture the Robustness-Performance (RP) tradeoff explicitly. We use the worst-case behavior of a solution as the function representing its robustness, and formulate the decision problem as an optimization of both the robustness criterion and the performance under nominal parameters simultaneously. Here, “simultaneously” is achieved by optimizing the weighted sum of the performance criterion and the robustness criterion. To the best of our knowledge, this is the first attempt to address the over-conservativeness of worst-case analysis in robust MDPs. Instead of optimizing the weighted sum of robustness and performance for some specific weights, we show how to efficiently find the solutions for all possible weights. We prove that the set of these solutions is in fact equivalent to the set of all Pareto efficient solutions in the robustness-performance space. Therefore, we solve the tradeoff problem without choosing a specific tradeoff parameter, and leave the subjective decision of determining the exact tradeoff to the decision maker. Instead of arbitrarily claiming that a certain solution is a good tradeoff, our algorithm computes the whole tradeoff relationship, so that the decision maker can choose the most desirable solution according to her preference, which is usually complicated and rarely available in explicit form. Our approach thus avoids the tuning of tradeoff parameters, for which generally no good a-priori method exists. This is in contrast to certain relaxations of the worst-case robust optimization approach such as [6] (for the single stage only), where some explicit tradeoff parameters have to be chosen. Unlike risk sensitive learning approaches [7, 8, 9], which aim to tune a strategy online, our approach computes a robust strategy offline without trial and error.
The paper is organized as follows. Section 2 is devoted to the RP tradeoff for Linear Programming. In Section 3 and Section 4 we discuss the RP tradeoff for MDPs with uncertain rewards and uncertain transition probabilities, respectively. In Section 5 we present a computational example. Some concluding remarks are offered in Section 6.

2 Parametric linear programming and RP tradeoffs in optimization

In this section, we briefly recall Parametric Linear Programming (PLP) [10, 11, 12], and show how it can be used to find the whole set of Pareto efficient solutions for RP tradeoffs in Linear Programming. This serves as the basis for the discussion of RP tradeoffs in MDPs.

2.1 Parametric Linear Programming

A Parametric Linear Program is the following set of infinitely many optimization problems. For all λ ∈ [0, 1]:

$$\text{Minimize: } \lambda c^{(1)\top} x + (1-\lambda) c^{(2)\top} x \qquad \text{Subject to: } Ax = b, \quad x \ge 0. \quad (1)$$

We call $c^{(1)\top} x$ the first objective, and $c^{(2)\top} x$ the second objective. We assume that the Linear Program (LP) is feasible and bounded for both objectives. Although there are uncountably many possible λ, Problem (1) can be solved by a simplex-like algorithm. Here, “solve” means that for each λ, we find at least one optimal solution. An outline of the PLP algorithm is given in Algorithm 1, which is essentially a tableau simplex algorithm in which the entering variable is determined in a specific way. See [10] for a precise description.

Algorithm 1.
1. Find a basic feasible optimal solution for λ = 0. If multiple solutions exist, choose one among those with minimal $c^{(1)\top} x$.
2. Record the current basic feasible solution. Check the reduced costs (i.e., the zero row of the simplex tableau) of the first objective, denoted $\bar{c}^{(1)}_j$. If none of them is negative, end.
3. Among all columns with negative $\bar{c}^{(1)}_j$, choose the one with the largest ratio $|\bar{c}^{(1)}_j / \bar{c}^{(2)}_j|$ as the entering variable.
4. Pivot the basis, go to 2.
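The observation driving Algorithm 1 is that for every λ an optimum lies at a vertex of the (λ-independent) feasible set, so only finitely many solutions are needed and the optimal value is piecewise linear in λ. A minimal sketch of this idea follows; the polytope, vertices, and cost vectors are illustrative inventions, and the code sweeps a λ grid over an explicit vertex list rather than performing the paper's pivoting scheme:

```python
# Sketch of the PLP idea: for each weight lam, the optimum of
# lam*c1.x + (1-lam)*c2.x over a polytope is attained at a vertex,
# so minimizing over the (finite) vertex set while sweeping lam
# traces out a piecewise linear optimal-value curve.
# The polytope and cost vectors below are illustrative, not from the paper.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Vertices of the standard simplex {x >= 0, x1 + x2 + x3 = 1}.
vertices = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
c1 = (1.0, 2.0, 3.0)   # first objective
c2 = (3.0, 1.0, 2.0)   # second objective

def plp_optimum(lam):
    """Minimize lam*c1.x + (1-lam)*c2.x over the vertex set."""
    return min((lam * dot(c1, v) + (1 - lam) * dot(c2, v), v)
               for v in vertices)

# The optimizing vertex changes only at finitely many breakpoints.
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    val, v = plp_optimum(lam)
    print(lam, round(val, 3), v)
```

Here the minimizing vertex switches only at a breakpoint between λ = 0.5 and λ = 0.75, which is exactly the finite-pivot structure Algorithm 1 exploits.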
This algorithm is based on the observation that for any λ, there exists an optimal basic feasible solution. Hence, by finding a suitable subset of all vertices of the feasible region, we can solve the PLP. Furthermore, we can find this subset by sequentially pivoting among neighboring extreme points, as the simplex algorithm does. This algorithm terminates after finitely many iterations. It is also known that the optimal value of the PLP is a continuous piecewise linear function of λ. The theoretical computational cost is exponential, although in practice the algorithm works well; this property is shared by all simplex-based algorithms. A detailed discussion of PLP can be found in [10, 11, 12].

2.2 RP tradeoffs in Linear Programming

Consider the following LP:

NOMINAL PROBLEM:
$$\text{Minimize: } c^\top x \qquad \text{Subject to: } Ax \le b. \quad (2)$$

Here $A \in \mathbb{R}^{n \times m}$, $x \in \mathbb{R}^m$, $b \in \mathbb{R}^n$, $c \in \mathbb{R}^m$. Suppose that the constraint matrix A is only a guess of the unknown true parameter $A^r$, which is known to belong to a set $\mathcal{A}$ (we call $\mathcal{A}$ the uncertainty set). We assume that $\mathcal{A}$ is constraint-wise independent and polyhedral for each of the constraints. That is, $\mathcal{A} = \prod_{i=1}^n \mathcal{A}_i$, and for each i, there exist a matrix $T^{(i)}$ and a vector $v^{(i)}$ such that $\mathcal{A}_i = \{ a^{(i)\top} \mid T^{(i)} a^{(i)} \le v^{(i)} \}$. To quantify how a solution x behaves with respect to the parameter uncertainty, we define the following criterion, to be minimized, as its robustness measure (more accurately, non-robustness measure):

$$p(x) \triangleq \sup_{\tilde A \in \mathcal{A}} \left\| \left[ \tilde A x - b \right]^+ \right\|_1 = \sup_{\tilde A \in \mathcal{A}} \sum_{i=1}^n \max\left( \tilde a^{(i)\top} x - b_i,\, 0 \right) = \sum_{i=1}^n \max\left\{ \left[ \sup_{\tilde a^{(i)} :\, T^{(i)} \tilde a^{(i)} \le v^{(i)}} \tilde a^{(i)\top} x \right] - b_i,\, 0 \right\}. \quad (3)$$

Here $[\cdot]^+$ stands for the positive part of a vector, $\tilde a^{(i)\top}$ is the ith row of the matrix $\tilde A$, and $b_i$ is the ith element of b. In words, the function p(x) is the largest possible sum of constraint violations. Using the weighted sum of the performance objective and the robustness objective as the minimization objective, we formulate the explicit tradeoff between robustness and performance as:

GENERAL PROBLEM: for all λ ∈ [0, 1],
$$\text{Minimize: } \lambda c^\top x + (1-\lambda) p(x) \qquad \text{Subject to: } Ax \le b. \quad (4)$$

Here $A \in \mathbb{R}^{n \times m}$, $x \in \mathbb{R}^m$, $b \in \mathbb{R}^n$, $c \in \mathbb{R}^m$. By the duality theorem, for a given x, $\sup_{\tilde a^{(i)} :\, T^{(i)} \tilde a^{(i)} \le v^{(i)}} \tilde a^{(i)\top} x$ equals the optimal value of the following LP in $y^{(i)}$:

$$\text{Minimize: } v^{(i)\top} y^{(i)} \qquad \text{Subject to: } T^{(i)\top} y^{(i)} = x, \quad y^{(i)} \ge 0.$$

Thus, by adding slack variables, we rewrite the GENERAL PROBLEM as the following PLP and solve it using Algorithm 1:

GENERAL PROBLEM (PLP): for all λ ∈ [0, 1],
$$\text{Minimize: } \lambda c^\top x + (1-\lambda) \mathbf{1}^\top z \qquad \text{Subject to: } Ax \le b, \quad T^{(i)\top} y^{(i)} = x, \quad v^{(i)\top} y^{(i)} - b_i \le z_i, \quad z \ge 0, \quad y^{(i)} \ge 0; \quad i = 1, 2, \cdots, n. \quad (5)$$

Here, $\mathbf{1}$ stands for a vector of ones of length n, $z_i$ is the ith element of z, and x, $y^{(i)}$, z are the optimization variables.

3 The robustness-performance tradeoff for MDPs with uncertain rewards

A (finite) MDP is defined as a 5-tuple $\langle T, S, A_s, p(\cdot|s,a), r(s,a) \rangle$ where: T is the (possibly infinite) set of decision stages; S is the state set; $A_s$ is the action set of state s; $p(\cdot|s,a)$ is the transition probability; and $r(s,a)$ is the expected reward in state s under action $a \in A_s$. We use r to denote the vector combining the rewards of all state-action pairs and $r_s$ to denote the vector combining all rewards of state s. Thus, $r(s,a) = r_s(a)$. Both S and $A_s$ are assumed finite. Both p and r are time invariant. In this section, we consider the case where r is not known exactly. More specifically, we have a nominal parameter $\bar r(s,a)$ which is believed to be a reasonably good guess of the true reward.
The reward r is known to belong to a bounded set $\mathcal{R}$. We further assume that the uncertainty set $\mathcal{R}$ is state-wise independent and a polytope for each state. That is, $\mathcal{R} = \prod_{s \in S} \mathcal{R}_s$, and for each $s \in S$ there exist a matrix $C_s$ and a vector $d_s$ such that $\mathcal{R}_s = \{ r_s \mid C_s r_s \ge d_s \}$. We assume that for different visits of one state, the realization of the reward need not be identical and may take different values within the uncertainty set. The set of admissible control policies for the decision maker is the set of randomized history dependent policies, which we denote by $\Pi_{HR}$. In the following three subsections we discuss different standard reward criteria: cumulative reward with a finite horizon, discounted reward with infinite horizon, and limiting average reward with infinite horizon under a unichain assumption.

3.1 Finite horizon case

In the finite horizon case ($T = \{1, \cdots, N\}$), we assume without loss of generality that each state belongs to only one stage, which is equivalent to the assumption of non-stationary reward realization, and use $S_i$ to denote the set of states at the ith stage. We also assume that the first stage consists of only one state $s_1$, and that there are no terminal rewards. We define the following two functions as the performance measure and the robustness measure of a policy $\pi \in \Pi_{HR}$:

$$P(\pi) \triangleq \mathbb{E}^\pi \left\{ \sum_{i=1}^{N-1} \bar r(s_i, a_i) \right\}, \qquad R(\pi) \triangleq \min_{r \in \mathcal{R}} \mathbb{E}^\pi \left\{ \sum_{i=1}^{N-1} r(s_i, a_i) \right\}, \quad (6)$$

where $\bar r$ denotes the nominal reward. The minimum is attainable, since $\mathcal{R}$ is compact and the total expected reward is a continuous function of r. We say that a strategy π is Pareto efficient if it obtains the maximum of P(π) among all strategies that have a certain value of R(π). The following result is straightforward; the proof can be found in the full version of the paper.

Proposition 1.
1. If $\pi^*$ is a Pareto efficient strategy, then there exists a $\lambda \in [0, 1]$ such that $\pi^* \in \arg\max_{\pi \in \Pi_{HR}} \{ \lambda P(\pi) + (1-\lambda) R(\pi) \}$.
2. If $\pi^* \in \arg\max_{\pi \in \Pi_{HR}} \{ \lambda P(\pi) + (1-\lambda) R(\pi) \}$ for some $\lambda \in (0, 1)$, then $\pi^*$ is a Pareto efficient strategy.
For 0 ≤ t ≤ N, s ∈ $S_t$, and λ ∈ [0, 1] define:

$$P_t(\pi, s) \triangleq \mathbb{E}^\pi \left\{ \sum_{i=t}^{N-1} \bar r(s_i, a_i) \,\Big|\, s_t = s \right\}, \qquad R_t(\pi, s) \triangleq \min_{r \in \mathcal{R}} \mathbb{E}^\pi \left\{ \sum_{i=t}^{N-1} r(s_i, a_i) \,\Big|\, s_t = s \right\}, \qquad c^\lambda_t(s) \triangleq \max_{\pi \in \Pi_{HR}} \left\{ \lambda P_t(\pi, s) + (1-\lambda) R_t(\pi, s) \right\}, \quad (7)$$

where $\bar r$ denotes the nominal reward. We set $P_N \equiv R_N \equiv c_N \equiv 0$, and note that $c^\lambda_1(s_1)$ is the optimal RP tradeoff with weight λ. The following theorem shows that the principle of optimality holds for c. The proof is omitted since it follows similarly to standard backward induction in finite horizon robust decision problems.

Theorem 1. For $s \in S_t$, $t < N$, let $\Delta_s$ be the probability simplex on $A_s$. Then

$$c^\lambda_t(s) = \max_{q \in \Delta_s} \min_{r_s \in \mathcal{R}_s} \left\{ \lambda \sum_{a \in A_s} \bar r(s,a)\, q(a) + (1-\lambda) \sum_{a \in A_s} r_s(a)\, q(a) + \sum_{s' \in S_{t+1}} \sum_{a \in A_s} p(s'|s,a)\, q(a)\, c^\lambda_{t+1}(s') \right\}.$$

We now consider the maximin problem in each state and show how to find the solutions for all λ in one pass. We also prove that $c^\lambda_t(s)$ is piecewise linear in λ. Let $S_{t+1} = \{s_1, \cdots, s_k\}$. Assume that for all $j \in \{1, \cdots, k\}$, $c^\lambda_{t+1}(s_j)$ are continuous piecewise linear functions. Thus, we can divide [0, 1] into finitely many (say n) intervals $[0, \lambda_1], \cdots, [\lambda_{n-1}, 1]$ such that on each interval, all $c_{t+1}$ functions are linear. That is, there exist constants $l^j_i$ and $m^j_i$ such that $c^\lambda_{t+1}(s_j) = l^j_i \lambda + m^j_i$ for $\lambda \in [\lambda_{i-1}, \lambda_i]$. By the duality theorem, $c^\lambda_t(s)$ equals the optimal value of the following LP in y and q:

$$\text{Maximize: } (1-\lambda)\, d_s^\top y + \lambda\, \bar r_s^\top q + \sum_{j=1}^k \sum_{a \in A_s} p(s_j|s,a)\, q(a)\, c^\lambda_{t+1}(s_j) \qquad \text{Subject to: } C_s^\top y = q, \quad \mathbf{1}^\top q = 1, \quad q, y \ge 0. \quad (8)$$

Observe that the feasible set is the same for all λ. Substituting $c^\lambda_{t+1}(s_j)$ and rearranging, it follows that for $\lambda \in [\lambda_{i-1}, \lambda_i]$ the objective function equals

$$(1-\lambda) \left\{ \sum_{a \in A_s} \left[ \sum_{j=1}^k p(s_j|s,a)\, m^j_i \right] q(a) + d_s^\top y \right\} + \lambda \left\{ \sum_{a \in A_s} \left[ \bar r(s,a) + \sum_{j=1}^k p(s_j|s,a)\, (l^j_i + m^j_i) \right] q(a) \right\}.$$

Thus, for $\lambda \in [\lambda_{i-1}, \lambda_i]$, starting from the optimal solution for $\lambda_{i-1}$, we can solve for all λ using Algorithm 1. Furthermore, we need not re-initialize for each interval, since the optimal solution at the end of the ith interval is also the optimal solution at the beginning of the next interval.
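To make the backward induction of Theorem 1 concrete, the sketch below specializes to interval (box) reward uncertainty, a special case of the polytopes R_s for which the inner minimization is attained at the lower endpoints and the outer maximization at a deterministic action. The tiny MDP, function names, and numbers are hypothetical illustrations, not the paper's PLP machinery:

```python
# Backward induction for the weighted criterion of Theorem 1, in the
# special case of interval (box) reward uncertainty
#   r(s, a) in [r_bar(s, a) - delta(s, a), r_bar(s, a) + delta(s, a)].
# For a box, min_r sum_a r(s,a) q(a) = sum_a (r_bar - delta)(s,a) q(a)
# since q >= 0, so the inner minimization is linear in q and the
# outer maximization is attained at a deterministic action.
# The 2-state, 2-action MDP below is illustrative only.

def robust_value(stages, r_bar, delta, p, lam):
    """c[t][s] = max_a lam*r_bar + (1-lam)*(r_bar - delta) + E[c[t+1]]."""
    n_states = len(r_bar)
    c = [0.0] * n_states          # terminal values c_N = 0
    for _ in range(stages):
        c_new = []
        for s in range(n_states):
            best = max(
                lam * r_bar[s][a]
                + (1 - lam) * (r_bar[s][a] - delta[s][a])
                + sum(p[s][a][s2] * c[s2] for s2 in range(n_states))
                for a in range(len(r_bar[s]))
            )
            c_new.append(best)
        c = c_new
    return c

# Nominal rewards, uncertainty radii, and transitions (made up):
r_bar = [[1.0, 2.0], [0.5, 0.0]]
delta = [[0.0, 3.0], [0.0, 0.0]]   # action 1 in state 0 is risky
p = [[[1.0, 0.0], [0.0, 1.0]],     # p[s][a][s'] sums to 1 over s'
     [[1.0, 0.0], [0.0, 1.0]]]

# lam = 1: nominal criterion prefers the risky action (reward 2);
# lam = 0: worst case prefers the safe action (1 versus 2 - 3 = -1).
print(robust_value(1, r_bar, delta, p, 1.0)[0])  # 2.0
print(robust_value(1, r_bar, delta, p, 0.0)[0])  # 1.0
```

Sweeping lam between 0 and 1 then traces a small RP tradeoff curve; the general polytope case replaces the closed-form inner minimum with the dual LP of equation (8).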
It is obvious that the resulting $c^\lambda_t(s)$ is also continuous and piecewise linear. Thus, since $c_N = 0$, the assumption of continuous and piecewise linear value functions holds by backward induction.

3.2 Discounted reward infinite horizon case

In this section we address the RP tradeoff for infinite horizon MDPs with a discounted reward criterion. For a fixed λ, the problem is equivalent to a zero-sum game, with the decision maker trying to maximize the weighted sum and Nature trying to minimize it by selecting an adversarial reward realization. A well known result in discounted zero-sum stochastic games states that, even if non-stationary policies are admissible, a Nash equilibrium in which both players choose a stationary policy exists; see Proposition 7.3 in [13]. Given an initial state distribution α(s), it is also a known result [14] that there exists a one-to-one correspondence between the state-action frequencies $\sum_{i=1}^\infty \gamma^{i-1} \mathbb{E}(1_{s_i = s,\, a_i = a})$ of stationary strategies and the vectors belonging to the following polytope X:

$$\sum_{a \in A_{s'}} x(s', a) - \sum_{s \in S} \sum_{a \in A_s} \gamma\, p(s'|s,a)\, x(s,a) = \alpha(s'), \qquad x(s,a) \ge 0, \quad \forall s,\ \forall a \in A_s. \quad (9)$$

Since it suffices to consider a stationary policy for Nature, the tradeoff problem becomes:

$$\text{Maximize: } \inf_{r \in \mathcal{R}} \sum_{s \in S} \sum_{a \in A_s} \left[ \lambda\, \bar r(s,a)\, x(s,a) + (1-\lambda)\, r(s,a)\, x(s,a) \right] \qquad \text{Subject to: } x \in X, \quad (10)$$

where $\bar r$ denotes the nominal reward. By LP duality, Problem (10) can be rewritten as the following PLP and solved by Algorithm 1:

$$\text{Maximize: } \lambda \sum_{s \in S} \sum_{a \in A_s} \bar r(s,a)\, x(s,a) + (1-\lambda) \sum_{s \in S} d_s^\top y_s$$
$$\text{Subject to: } \sum_{a \in A_{s'}} x(s',a) - \sum_{s \in S} \sum_{a \in A_s} \gamma\, p(s'|s,a)\, x(s,a) = \alpha(s'),\ \forall s'; \quad x(s,a) \ge 0,\ \forall s, \forall a; \quad C_s^\top y_s = x_s,\ \forall s; \quad y_s \ge 0,\ \forall s. \quad (11)$$

3.3 Limiting average reward case (unichain)

In the unichain case, the set of limiting average state-action frequency vectors (that is, all limit points of sequences $\frac{1}{T} \sum_{n=1}^T \mathbb{E}^\pi[1_{s_n = s,\, a_n = a}]$ for $\pi \in \Pi_{HR}$) is the following polytope X:

$$\sum_{a \in A_{s'}} x(s',a) - \sum_{s \in S} \sum_{a \in A_s} p(s'|s,a)\, x(s,a) = 0, \quad \forall s' \in S; \qquad \sum_{s \in S} \sum_{a \in A_s} x(s,a) = 1; \qquad x(s,a) \ge 0, \quad \forall s,\ \forall a \in A_s. \quad (12)$$

As before, there exists an optimal maximin stationary policy. By a similar argument as for the discounted case, the tradeoff problem can be converted to the following PLP:

$$\text{Maximize: } \lambda \sum_{s \in S} \sum_{a \in A_s} \bar r(s,a)\, x(s,a) + (1-\lambda) \sum_{s \in S} d_s^\top y_s$$
$$\text{Subject to: } \sum_{a \in A_{s'}} x(s',a) - \sum_{s \in S} \sum_{a \in A_s} p(s'|s,a)\, x(s,a) = 0,\ \forall s'; \quad \sum_{s \in S} \sum_{a \in A_s} x(s,a) = 1; \quad C_s^\top y_s = x_s,\ \forall s; \quad y_s \ge 0,\ \forall s; \quad x(s,a) \ge 0,\ \forall s, \forall a. \quad (13)$$

4 The RP tradeoff in MDPs with uncertain transition probabilities

In this section we provide a counterexample demonstrating that the weighted sum criterion in the most general case, i.e., the uncertain transition probability case, may lead to non-Markovian optimal policies. In the finite horizon MDP shown in Figure 1, S = {s1, s2, s3, s4, s5, t1, t2, t3, t4}; As1 = {a(1,1)}; As2 = {a(2,1)}; As3 = {a(3,1)}; As4 = {a(4,1)}; and As5 = {a(5,1), a(5,2)}. Rewards are only available at the final stage, and are perfectly known. The nominal transition probabilities are p(s2|s1, a(1,1)) = 0.5, p(s4|s2, a(2,1)) = 1, and p(t3|s5, a(5,2)) = 1. The sets of possible realizations are p(s2|s1, a(1,1)) ∈ {0.5}, p(s4|s2, a(2,1)) ∈ [0, 1], and p(t3|s5, a(5,2)) ∈ [0, 1]. Observe that the worst parameter realization is p(s4|s2, a(2,1)) = p(t3|s5, a(5,2)) = 0. We look for the strategy that maximizes the sum of the nominal reward and the worst-case reward (i.e., λ = 0.5). Since multiple actions exist only in state s5, a strategy is determined by the action chosen at s5. Let the probabilities of choosing actions a(5,1) and a(5,2) be p and 1 − p, respectively. Consider the history “s1 → s2”. In this case, under the nominal transition probabilities, this trajectory will reach t1 with a reward of 10, regardless of the choice of p. The worst transition is that action a(2,1) leads to s5 and action a(5,2) leads to t4, hence the worst-case expected reward is 5p + 4(1 − p). Therefore the optimal p equals 1, i.e., the optimal action is to choose a(5,1) deterministically.
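The computation for this history can be checked numerically; a sketch with the rewards of Figure 1 and λ = 0.5 (the function name and grid are illustrative):

```python
# After history s1 -> s2 (lam = 0.5): the nominal reward is 10
# regardless of p, while the worst case routes a(2,1) to s5 and
# a(5,2) to t4, giving 5p + 4(1 - p). The weighted objective is
# therefore increasing in p, so p = 1 is optimal.

def objective(p, lam=0.5):
    nominal = 10.0
    worst = 5.0 * p + 4.0 * (1.0 - p)
    return lam * nominal + (1.0 - lam) * worst

best_p = max(range(11), key=lambda k: objective(k / 10)) / 10
print(best_p)  # 1.0
```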
[Figure 1: Example of a non-Markovian best strategy. States s1 to s5 with terminal rewards r(t1) = 10, r(t2) = 5, r(t3) = 8, r(t4) = 4; edge labels give the nominal (worst-case) transition probabilities.]

Consider the history “s1 → s3”. In this case, the nominal reward is 5p + 8(1 − p), and the worst-case reward is 5p + 4(1 − p). Thus p = 0 optimizes the weighted sum, i.e., the optimal strategy is to choose a(5,2). The unique optimal strategy for this example is thus non-Markovian. This non-Markovian property implies a possibility that past actions affect the choice of future actions, and hence could render the problem intractable. The optimal strategy is non-Markovian because we are taking expectations over two different probability measures, hence the smoothing property of conditional expectation cannot be used in finding the optimal strategy.

5 A computational example

We apply our algorithm to a T-stage machine maintenance problem. Let S ≜ {1, · · · , n} denote the state space for each stage. In state h, the decision maker can choose either to replace the machine, which leads to state 1 deterministically, or to continue running, which with probability p leads to state h + 1. If the machine is in state n, then the decision maker has to replace it. The replacement cost is perfectly known to be $c_r$, and the nominal running cost in state h is $c_h$. We assume that the realization of the running cost lies in the interval $[c_h - \delta_h, c_h + \delta_h]$. We set $c_h = \sqrt{h-1}$ and $\delta_h = 2h/n$. The objective is to minimize the total cost, in a risk-averse attitude. Figure 2(a) shows the tradeoff curve of this MDP. For each solution found, we sample the reward 300 times according to a uniform distribution. We normalize the cost of each simulation, i.e., we divide the cost by the smallest expected nominal cost. Denoting the normalized cost of the ith simulation under strategy j by $s_i(j)$, we use the following function to compare the solutions:

$$v_j(\alpha) = \sqrt[\alpha]{\frac{\sum_{i=1}^{300} |s_i(j)|^\alpha}{300}}.$$
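A small sketch of this statistic (the cost samples are made up): v_j(α) is the α-th root of the mean of |s_i(j)|^α, so α = 1 gives the plain mean while large α approaches the largest cost, penalizing deviations:

```python
# v_j(alpha) = ( mean_i |s_i(j)|^alpha )^(1/alpha): alpha = 1 is the
# plain mean, and as alpha grows the statistic approaches the largest
# cost, which penalizes deviations (a risk-averse comparison).
# The cost samples below are invented for illustration.

def v(costs, alpha):
    return (sum(abs(c) ** alpha for c in costs) / len(costs)) ** (1.0 / alpha)

costs = [1.0, 1.1, 0.9, 1.6, 1.0]   # normalized simulated costs

print(v(costs, 1))      # the mean, approximately 1.12
print(v(costs, 100))    # close to max(costs) = 1.6
```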
Note that α = 1 gives the mean of the simulated costs, whereas larger α puts a higher penalty on deviations, representing a risk-averse decision maker. Figure 2(b) shows that the solutions that focus on the nominal parameters (i.e., λ close to 1) achieve good performance for small α, but worse performance for large α. That is, if the decision maker is risk neutral, then solutions based on the nominal parameters are good. However, these solutions are not robust and are not good choices for risk-averse decision makers. Note that, in this example, the nominal cost is the expected cost of each stage, i.e., the parameters are exactly formulated. Even in this case, we see that risk-averse decision makers can benefit from considering the RP tradeoff.

[Figure 2: The machine maintenance problem: (a) the RP tradeoff, plotting nominal performance against worst-case performance from λ = 1 to λ = 0; (b) the normalized modified mean of the simulations for α ∈ {1, 10, 100, 1000}.]

6 Concluding remarks

In this paper we proposed a method that directly addresses the robustness versus performance tradeoff by treating robustness as an optimization objective. Based on PLP, for MDPs where only the rewards are uncertain, we presented an efficient algorithm that computes the whole set of optimal RP tradeoffs for MDPs with finite horizon, infinite horizon discounted reward, and limiting average reward (unichain). For MDPs with uncertain transition probabilities, we showed an example where the solution may be non-Markovian and hence may in general be intractable. The main advantage of the presented approach is that it addresses robustness directly. This frees the decision maker from the need to make probabilistic assumptions about the problem's parameters.
It also allows the decision maker to determine the desired robustness-performance tradeoff based on observing the whole curve of possible tradeoffs rather than guessing a single value.

References
[1] A. L. Soyster. Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res., 1973.
[2] A. Bagnell, A. Ng, and J. Schneider. Solving uncertain Markov decision processes. Technical Report CMU-RI-TR-01-25, Carnegie Mellon University, August 2001.
[3] A. Ben-Tal and A. Nemirovski. Robust solutions of uncertain linear programs. Oper. Res. Lett., 25(1):1–13, August 1999.
[4] C. C. White III and H. K. El-Deib. Markov decision process with imprecise transition probabilities. Oper. Res., 42(4):739–748, July 1992.
[5] A. Nilim and L. El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Oper. Res., 53(5):780–798, September 2005.
[6] D. Bertsimas and M. Sim. The price of robustness. Oper. Res., 52(1):35–53, January 2004.
[7] M. Heger. Consideration of risk in reinforcement learning. In Proc. 11th International Conference on Machine Learning, pages 105–111. Morgan Kaufmann, 1994.
[8] R. Neuneier and O. Mihatsch. Risk sensitive reinforcement learning. In Advances in Neural Information Processing Systems 11, pages 1031–1037, Cambridge, MA, USA, 1999. MIT Press.
[9] P. Geibel. Reinforcement learning with bounded risk. In Proc. 18th International Conf. on Machine Learning, pages 162–169. Morgan Kaufmann, San Francisco, CA, 2001.
[10] D. Bertsimas and J. N. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997.
[11] M. Ehrgott. Multicriteria Optimization. Springer-Verlag, Berlin Heidelberg, 2000.
[12] K. G. Murty. Linear Programming. John Wiley & Sons, 1983.
[13] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[14] M. L. Puterman. Markov Decision Processes. John Wiley & Sons, Inc., 1994.
|
2006
|
16
|
2,988
|
Learning Structural Equation Models for fMRI Amos J. Storkey School of Informatics University of Edinburgh Enrico Simonotto Division of Psychiatry University of Edinburgh Heather Whalley Division of Psychiatry University of Edinburgh Stephen Lawrie Division of Psychiatry University of Edinburgh Lawrence Murray School of Informatics University of Edinburgh David McGonigle Centre for Functional Imaging Studies University of Edinburgh Abstract Structural equation models can be seen as an extension of Gaussian belief networks to cyclic graphs, and we show they can be understood generatively as the model for the joint distribution of long term average equilibrium activity of Gaussian dynamic belief networks. Most use of structural equation models in fMRI involves postulating a particular structure and comparing learnt parameters across different groups. In this paper it is argued that there are situations where priors about structure are not firm or exhaustive, and given sufficient data, it is worth investigating learning network structure as part of the approach to connectivity analysis. First we demonstrate structure learning on a toy problem. We then show that for particular fMRI data the simple models usually assumed are not supported. We show that it is possible to learn sensible structural equation models that can provide modelling benefits, but that are not necessarily going to be the same as a true causal model, and suggest that the combination of prior models and learning, or the use of temporal information from dynamic models, may provide more benefits than learning structural equations alone. 1 Introduction Structural equation modelling (SEM) is a technique widely used in the behavioural sciences.
It has also appeared as a standard approach for the analysis of what has become known as effective connectivity in the functional magnetic resonance imaging (fMRI) literature, and is still in common use despite the increasing interest in dynamical methods such as dynamic causal models [6]. Simply put, effective connectivity analysis involves looking at the possible causal influences between brain regions given measurements of the activity of those regions. Structural equation models are a Gaussian modelling tool, and are similar to Gaussian belief networks. In fact Gaussian belief networks can be seen as a subset of valid structural equation models. However structural equation models do not have the same acyclicity constraints as belief networks. It should be noted that the graphical form used in this paper is at odds with traditional SEM representations, and consistent with that used for belief networks, as those will be more familiar to the expected audience. Within the fMRI context, the use of structural equation modelling generally takes the following form. First, certain regions of interest (commonly called seeds) are chosen according to some understanding of what brain regions might be of interest or of importance. Then neurobiological knowledge is used to propose a connectivity model. This connectivity model states which regions are connected to which other regions, and the direction of the connectivity. This connectivity model is used to define a structural equation model. The parameters of this model are then typically estimated using maximum likelihood methods, and then comparison of connection parameters is made across subject classes. In this paper we consider what can be done when it is hard to specify connectivity a priori, and ask how much we can achieve by learning network structures from the fMRI data itself.
The novel developments of this paper include the examination of various generative representations for structural equation models, which allow straightforward comparisons with belief networks and other models such as dynamic causal models. We implement Bayesian Information Criterion approximations to the evidence and use this in a Metropolis-Hastings sampling scheme for learning structural equation models. These models are then applied to toy data, and to fMRI data, which allows the examination of the types of assumptions typically made.

1.1 Related Work: Structural Equation Models

Structural equation models and path analysis have a long history. The methods were introduced in the context of genetics in [20], and in econometrics in [7]. They have been used extensively in the social sciences in a variety of ways. Linear Gaussian structural equation models can be split into path analysis [20], where all the variables are directly measurable, and structural equation models with latent variables [1], where latent variables are allowed. Factor analysis is a special case of the latter. Furthermore, structural equation models can also be characterised by the inclusion of exogenous influences. Structural equation models have been analysed and understood in Bayesian terms before. They form a part of the causal modelling framework of Pearl [11], and have been discussed within that context, as well as a number of others [11, 4, 13, 10]. Approaches to learning structural equation models have not played a significant part in fMRI methods. One approach is described in [3], where a genetic algorithm is used for the search. In [21], the authors look at learning Bayesian networks but do not consider cyclic networks. For dynamic causal models (rather than structural equation models) the issue of model comparison was dealt with in [12], but large scale structure learning was not considered.
In fMRI literature, SEMs have generally been used to model ‘effective connectivity’, or rather modelling the causal relationships between different brain regions. They were first applied to imaging data by [9], and there have been many further applications [2, 5, 14]. The first analysis on data from schizophrenia studies was detailed in [15]. In fact it seems SEMs have been the most widely used model for connectivity analyses in neuroimaging. In all of the studies cited above the underlying structure was presumed known or presumed to be one of a small number of possibilities. There has been some discussion of how best to obtain reasonable structures from neuro-anatomical data, but this approach is currently used only very rarely. 2 Why Learn SEMs? The presumption in much fMRI connectivity analysis is that we can obtain models for activity dependence from neuro-anatomical sources. The problem with this is that it fails to account for the fact that connectivity analysis is usually done with a limited number of regions. It is highly possible that a connection from one region to another is mediated via a third region, which is not included in the SEM model. The strength of that mediation is unknown from neuro-anatomical data and is generally ignored: most connectivity models focus only on direct anatomical connections, with the accompanying implicit assumption that there are no other regions involved in the network under study, or that these regions would contribute only minimally to the model. Furthermore, just because regions are physically connected does not mean there is any actual functional influence in a particular context. Hence it has to be accepted that neuro-anatomically derived connectivity is a first guess at best. It is not the purpose of this paper to propose that anatomical connectivity be ignored, but instead it asks what happens if we go to the other extreme: can we say something about connectivity from the data? 
In reality anatomical connectivity models are needed, and can be used to provide good priors for the connections and even for the relative connection strengths. Statistically there are huge equivalences in structural equation models that will not be determined by the data alone.

3 Understanding Structural Equation Models

In this section two generative views of structural equation modelling are presented. The idea behind structural equation modelling is that it represents causal dependence between different variables. The fact that cyclic structures are allowed in structural equation models can be seen as an implicit assumption of some underlying dynamic of which the structural equation model is an equilibrium representation. Indeed that is commonly how effective connectivity models are interpreted in an fMRI context. Two linear models, both of which produce a structural equation model prior, are presented here. Though these models have the same statistical properties, they have different generative motivations and different non-linear extensions, so they are both potentially instructive.

3.1 The Traditional Model

The standard SEM view is that the core SEM structure is a covariance produced by the solution to a set of linear equations $x = Ax + \omega$ with Gaussian term ω. This does not have any direct generative elucidation, but can instead be thought of as relating to a deterministic dynamical system subject to uncertain fixed input. Suppose we have a dynamical system $x_{t+1} = A x_t + \omega$, subject to some input ω, where we presume the system input is unknown and Gaussian distributed. To generate from the model, we sample ω, run the dynamical system to its fixed point, and use that fixed point as a sample of x. This fixed point is given by $x = (I - A)^{-1}\omega$, which produces the standard SEM covariance structure for x. This requires A to be a contraction map to obtain stable fixed points.
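A quick sanity check of this view, with a hypothetical 2×2 contraction A and a fixed input ω: iterating the deterministic system converges to the same point as solving x = (I − A)^{-1}ω directly.

```python
# The traditional SEM view: for a fixed input w, iterate the
# deterministic system x <- A x + w to its fixed point and compare
# with the closed form x = (I - A)^{-1} w. A must be a contraction;
# the 2x2 matrix below is an arbitrary illustration (spectral radius < 1).

A = [[0.2, 0.3],
     [0.1, 0.4]]
w = [1.0, 2.0]

def step(x):
    return [A[0][0] * x[0] + A[0][1] * x[1] + w[0],
            A[1][0] * x[0] + A[1][1] * x[1] + w[1]]

# Iterate to (numerical) convergence.
x = [0.0, 0.0]
for _ in range(200):
    x = step(x)

# Closed form: x = (I - A)^{-1} w, inverting the 2x2 matrix by hand.
s00, s01 = 1 - A[0][0], -A[0][1]
s10, s11 = -A[1][0], 1 - A[1][1]
det = s00 * s11 - s01 * s10
x_closed = [( s11 * w[0] - s01 * w[1]) / det,
            (-s10 * w[0] + s00 * w[1]) / det]

print(x, x_closed)  # the two agree to numerical precision
```

Sampling ω from a Gaussian and repeating this gives draws of x with the standard SEM covariance structure.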
All the other aspects of the general form of SEM are either inputs to or measurements from this system.

3.2 Average Activity of a Gaussian Dynamic Bayesian Network

An alternative and potentially appealing view is that the SEM represents the distribution of the long term activity of the nodes in a Gaussian dynamic Bayesian network (Kalman filter). Suppose we have $x_t = A x_{t-1} + \omega_t$, where the $\omega_t$ are IID Gaussian variables, and $x_0, x_1, \ldots$ is a series of real variables. This defines a Markov chain, and is the evolution equation of a Gaussian dynamic Bayesian network. Suppose we are at the equilibrium distribution of this Markov chain. Then setting $\tilde x = (1/\sqrt{N}) \sum_{t=1}^N x_t$ for large N, we can use the Kalman filter to see that

$$\frac{1}{\sqrt{N}} \sum_{t=1}^N x_t = \frac{1}{\sqrt{N}} \left[ A(x_0 - x_N) + A \sum_{t=1}^N x_t \right] + \frac{1}{\sqrt{N}} \sum_{t=1}^N \omega_t.$$

Presuming A is a contraction map, $(1/\sqrt{N})[A(x_0 - x_N)]$ becomes negligibly small and so $\tilde x \approx A \tilde x + \omega$, where ω is distributed identically to $\omega_t$ due to the fact that the variance of a sum of Gaussians is the sum of the variances. The approximation becomes an equality in the large-N limit. Again this is the required form for obtaining the covariance of the SEM. This interpretation says that if we have some latent system running as a Gaussian dynamic Bayesian network, but our measuring equipment is only capable of capturing longer term averages of the network activity, then our measurements are distributed according to an SEM. This generative interpretation is appealing in the context of fMRI acquisition. Note in both of these interpretations that it is important that A is a contraction. By formulating the generative framework we see it is important to restrict the form of the connectivity model in this way.

4 Model Structure

The standard formalism for Structural Equation Models is now outlined.
A structural equation model for observational variables y, latent variables x, and sometimes for latent input variables φ and observations of the input variables z, is given by the following equations:

x = (I − A)^{-1}(Rφ + ω),  y = Bx + σ,  z = Cφ + δ,   (1)

where σ, ω, φ and δ are Gaussian, and A is presumed to be zero diagonal. For S = I − A, the resulting covariance for the observed variables (y, z) is given by the block matrix

[ B S^{-1}(R K_φ R^T + K_ω)[S^{-1}]^T B^T + K_σ     B S^{-1} R K_φ C^T
  C K_φ R^T [S^{-1}]^T B^T                          C K_φ C^T + K_δ ],   (2)

where K_ω is the covariance of ω, K_σ the covariance of σ, etc. There are a number of common simplifications to this framework. The first case involves presuming no inputs and a fully visible system. Hence we marginalise the observations of the input variables z, and set K_δ = ∞, C = 0, R = 0, B = I, σ = 0. Then the covariance K_1 of y is K_1 = (I − A)^{-1} K_ω [(I − A)^{-1}]^T. The next simplest case again presumes that there are no inputs, but that the observations are stochastic functions of the latent variables. This involves setting K_δ = ∞, C = 0, R = 0, B = I. We then have K_2 = (I − A)^{-1} K_ω [(I − A)^{-1}]^T + K_σ. If we view the observations as noisy versions of the latent variables then K_σ is diagonal. This will be the most general case considered in this paper. Adding any of the remaining components is not particularly demanding, as it simply requires a conditional rather than unconditional model. Suppose we denote by K the covariance corresponding to the required model. For most of this paper we presume K = K_2. We then have the following probability for the whole data Y = {y_1, y_2, . . . , y_N}:

P(Y | K, ȳ) = ∏_j (2π)^{-m/2} |K|^{-1/2} exp( −(1/2)(y_j − ȳ)^T K^{-1} (y_j − ȳ) ),   (3)

where the observable model mean is ȳ = x̄ + σ̄, the latent mean is x̄ = (I − A)^{-1} ω̄, and where σ̄ and ω̄, along with the elements of the matrix A and the covariances K_ω and K_σ, are parameters.
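The K_2 covariance and the likelihood (3) are straightforward to compute. A minimal NumPy sketch, with illustrative values for A, K_ω and K_σ:

```python
import numpy as np

def sem_covariance(A, K_omega, K_sigma):
    """K2 = (I - A)^{-1} K_omega [(I - A)^{-1}]^T + K_sigma."""
    S_inv = np.linalg.inv(np.eye(A.shape[0]) - A)
    return S_inv @ K_omega @ S_inv.T + K_sigma

def log_likelihood(Y, K, y_bar):
    """Sum of the Gaussian log-densities in equation (3) over the rows of Y."""
    m = K.shape[0]
    K_inv = np.linalg.inv(K)
    _, logdet = np.linalg.slogdet(K)
    diffs = Y - y_bar
    quad = np.einsum('ij,jk,ik->i', diffs, K_inv, diffs)  # per-row quadratic form
    return np.sum(-0.5 * (m * np.log(2 * np.pi) + logdet + quad))

# Illustrative two-node example; the parameter values are made up.
A = np.array([[0.0, 0.4], [0.0, 0.0]])
K = sem_covariance(A, K_omega=np.eye(2), K_sigma=0.1 * np.eye(2))
rng = np.random.default_rng(0)
Y = rng.multivariate_normal(np.zeros(2), K, size=500)
ll = log_likelihood(Y, K, np.zeros(2))
print(ll)
```

The same log-likelihood (plus the log priors of the next section) is what a conjugate gradient optimiser would maximise over the SEM parameters.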
5 Priors, Maximum Posterior and Bayesian Information Criterion

The previous section outlined the basic model of the data given the various parameters. In this section we provide prior distributions for the parameters of the structural equation model. Independent Gaussian priors are put on the parameters:

P(A_ij | T) = (T/(2π))^{1/2} exp( −(T/2)(A_ij − Ā_ij)² ),   (4)

with regularisation parameter T. For the purposes of this paper we take Ā_ij = 0, presume we have no particular a priori bias towards positive or negative connections, and use a uniform prior over structures. An independent prior over connections seems reasonable, as two separate connections between different brain regions would have no a priori reason to be related; any relationship is due to functional purpose and is therefore a posteriori. The use of a uniform prior over all structures is an extreme position, which we have taken in this paper to contrast with using only one structure. In reality we would want to use neurobiologically guided priors over structures. Inverse gamma priors were also originally specified for K_ω and K_σ, along with a prior for the mean ω̄. It was found that these typically had no effect on the experiments, and they were dropped for simplicity. Hence K_ω and K_σ are optimised without regularisation, and ω̄ is set to zero. T is chosen by 10-fold cross-validation from a set of 10 possible values. We can calculate all the relevant derivatives for the SEM straightforwardly, and adapt the parameters to maximise the posterior of the structural equation model. In this paper we use a conjugate gradient approach. By adding a Bayesian Information Criterion term [16], (−0.5 m log N) for m parameters and N data points, to the log posterior at the maximum posterior solution, we obtain an approximation of the evidence P(Y | M), where M encodes the structural information we are interested in and consists of indicator variables M_ij indicating a connection from node j to node i.
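The BIC-penalised structure score is simple to state in code. A sketch; the numbers and the extra-parameter count are purely illustrative:

```python
import numpy as np

def approx_log_evidence(log_post_max, M, n_extra_params, N):
    """BIC approximation to log P(Y | M): the log posterior at its maximum,
    penalised by 0.5 * m * log(N), where m counts the free parameters.
    Here that is one connection weight per indicated entry of M plus any
    extra parameters (noise covariances, means, etc.)."""
    m = int(np.sum(M)) + n_extra_params
    return log_post_max - 0.5 * m * np.log(N)

# Hypothetical indicator matrix and score, purely for illustration.
M = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
score = approx_log_evidence(log_post_max=-1234.5, M=M,
                            n_extra_params=6, N=2000)
print(score)
```

Such scores for different structures M are what the Metropolis-Hastings sampler of the next section compares.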
This will enable us to sample from an approximate posterior distribution of structures to find a sample which best represents the data.

6 Sampling From SEMs

In order to represent the posterior distribution over network structures, we resort to a sampling approach. Because there are no acyclicity constraints, MCMC proposals are simpler than in the comparable situation for belief networks, in that no acyclicity checking needs to be done for the proposals. A simple proposal scheme is to randomly generate a swap matrix MS which is XORed with M. We choose highly sparse swap matrices, but to reduce the possibility of wandering randomly about the larger graphs without ever considering smaller networks, we introduce a bias towards removing connections rather than adding them when generating the swap matrix. This means the proposal is no longer symmetric, so a corresponding Hastings factor needs to be included in the acceptance probability; the result is then still a sample from the original posterior.

7 Tests On A Toy Problem

We tested the approach on a toy problem with 8 variables. We sampled 800 data points from y = (I − A)^{-1}ϵ + ρ, for ϵ Gaussian with unit diagonal covariance, ρ Gaussian with 0.2 diagonal covariance, and with A given by

 0      0      0      −0.26   0      0      −0.03   0
 0      0      0.47   0       −0.36  0.55   0       0
 0      0      0      0       0.34   0      0       0
 0      0      −0.36  0       0      −0.03  −0.08   0.25
 0      0      0.27   0       0      0      −0.25   0
 −0.17  0.49   0      0       −0.18  0      0       0
 0.31   0.42   −0.13  0       0      0      0       −0.22
 0.385  0      0      0       0.16   0.5    0       0

This connectivity matrix is represented graphically in Figure 1a. In modelling this we used T = 10. This prior ensures that any A that is not of very low prior probability is a contraction; contraction constraints were also added to the optimisation. Priors on the other parameters were set to be broad. An annealed sampling procedure was used for the first 4000 samples from the Metropolis-Hastings Monte Carlo procedure. After that a further 4000 burn-in samples were used, and the next 4000 samples were used for analysis.
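A single-edge version of the biased swap proposal, with its Hastings correction, can be sketched as follows. This is an illustrative simplification of the paper's scheme (one flipped entry per proposal rather than a sparse swap matrix), and the removal probability is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(0)
P_REMOVE = 0.7  # illustrative bias towards removing connections

def propose(M):
    """Flip one off-diagonal entry of the indicator matrix M, preferring
    removals. Returns (M_new, log Hastings factor log q(M|M') - log q(M'|M)).
    Corner cases (empty or full graphs) are handled only approximately."""
    d = M.shape[0]
    off_diag = ~np.eye(d, dtype=bool)
    present = np.argwhere((M == 1) & off_diag)
    absent = np.argwhere((M == 0) & off_diag)
    n_pres, n_abs = len(present), len(absent)
    if n_pres > 0 and (n_abs == 0 or rng.random() < P_REMOVE):
        i, j = present[rng.integers(n_pres)]
        fwd = P_REMOVE / n_pres                # probability of this removal
        rev = (1 - P_REMOVE) / (n_abs + 1)     # probability of adding it back
    else:
        i, j = absent[rng.integers(n_abs)]
        fwd = (1 - P_REMOVE) / n_abs
        rev = P_REMOVE / (n_pres + 1)
    M_new = M.copy()
    M_new[i, j] ^= 1                           # the XOR swap
    return M_new, np.log(rev) - np.log(fwd)

M = (rng.random((8, 8)) < 0.3).astype(int)
np.fill_diagonal(M, 0)
M_new, log_hastings = propose(M)
```

The log Hastings factor is then added to the difference of BIC scores inside the Metropolis-Hastings acceptance test.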
We assess which edges are common in the samples, and which sampled graphs have the highest posterior. Figure 1b illustrates the edges which are present in more than 0.15 of the samples. It can be seen that many of the critical edges are present in most samples (indeed some are always there). Those which are missing in both cases tend to be either low-magnitude connections or are due to directional confusion.

Figure 1: (a) Graphical structure of the ground truth model for the toy data, (b) edges present in more than 0.15 of the cases, (c) the highest posterior structure from the sample, and (d) a random sample.

The graphs for the maximum posterior sample and a random sample are shown in Figure 1. We can see that in the maximum posterior sample there is a misplaced edge (the edge from 5 to 6 is replaced by one from 5 to 1), and a number of edges are missing or have exchanged direction. The samples generally have likelihoods which are very similar to the likelihood of the true model. We can conclude that we can gain some information from learning SEM structures, but, as with learning any graphical model, there are many symmetries and equivalences, so it is vital not to infer too much from the learnt structures.

8 Tests On fMRI Data

The approach of this paper was tested on two different fMRI datasets. The first dataset (D1) was taken from a dataset that had previously been used to examine inter-session variance in a single subject [8, 17]. We used the auditory-paced finger-tapping task: briefly, a single subject tapped his right index finger, paced by an auditory tone (1.5Hz). Each activation epoch was alternated with a rest epoch, in which the pacing tone was delivered to control for auditory activation. Thirteen blocks were collected per session (seven rest and six active). Each block was 24s/6 scans long, making 78 scans in total for each of 33 sessions.
The subject maintained fixation on a cross that was back-projected onto a transparent screen by an LCD video projector, as in previous experiments. The subject was a healthy 23 year old right-handed male. The data were acquired on a Siemens MAGNETOM Vision (Siemens, Erlangen, Germany) at 2T. Each BOLD-EPI volume scan consisted of 48 transverse slices (in-plane matrix 64x64; voxel size 3x3x3mm; TE=40ms; TR=4.1s). A T1-weighted high-resolution MRI of the subject (1 x 1 x 1.5mm resolution) was acquired to facilitate anatomical localisation of the functional data. The data were processed with the statistical parametric mapping (SPM) software SPM5 (Wellcome Department of Cognitive Neurology; www.fil.ion.ucl.ac.uk/spm). After removal of the first two volumes to account for T1 saturation effects, cerebral volumes were realigned to correct for both within- and between-session subject motion. The data were filtered with a 128s high-pass filter, and an AR(1) model was used to account for serial correlation in the data. Experimental effects were estimated using session design matrices modelling the hemodynamically convolved time-course of the active movement condition, and 6 subject movement parameters. Note that no spatial smoothing was applied to this dataset, in an attempt to preserve single-voxel timeseries. Seeds were selected from significantly active voxels identified using a random effects analysis in SPM5 (one-sample t-test across 33 sessions; p < 0.05 FWE corrected for multiple comparisons). For comparison with previous work, the most significant voxel in each cluster was chosen as a seed, giving 13 seeds representing 13 separate anatomical regions. When it was obvious that a given cluster encompassed more than one distinct anatomical region, seeds were also selected for the other regions covered by the cluster. 2000 data points were used for training; the remaining 574 were reserved as a test set.
The second dataset (D2) was from a long-term study of subjects who are at genetically enhanced risk of schizophrenia. Imaging was carried out on 90 subjects at the Brain Imaging Research Centre for Scotland (Edinburgh, Scotland, UK) on a GE 1.5 T Signa scanner. A high-resolution structural scan was acquired using a 3D T1-weighted sequence (TI = 600 ms). Functional data were acquired using an EPI sequence. A total of 204 volumes were acquired, and the first four volumes of each acquisition were discarded. Preliminary analysis was carried out using SPM99. Data were first realigned to correct for head movement, normalized to the standard EPI template and smoothed. The resulting data consist of an image-volume series of 200 time points for each of the remaining 90 patients. The voxel time courses were temporally filtered. In order to reduce task-related effects, we modelled the task conditions with standard block effects (Hayling), all convolved with canonical hemodynamic response functions, and fitted a general linear model (which also included regressors for the estimated head movement) to the time-filtered data; the residuals of this procedure were used as the data for all the work described in this paper. The full data set was split into two halves, a training and a test set: data from 45 of the subjects were used for training and 45 for testing. For an effective connectivity analysis, a number of brain regions (seeds) were chosen on the basis of the results of a functional connectivity study [19], taking account of areas which may be of particular clinical interest. In total 14 regions were chosen, along with their 14 cross-hemisphere counterparts. Hence we are interested in learning a 28 by 28 connectivity matrix.

8.1 Learning SEM Structure

For both datasets a procedure similar to the toy example was followed for learning structure from the fMRI data.
The stability of the log posterior, along with estimates of cross-correlation against lag, was used as a heuristic to determine convergence prior to obtaining 10000 sample points. Assuming a fully visible path analysis model (covariance K1), where no measurement noise is included, is typical in fMRI analysis (e.g. [15] for a schizophrenia study). We found that samples from the posterior of this model were in fact so highly connected that displaying them would be infeasible. For D2 a connectivity of 350 of the 752 total possible connections was typical. Note, however, that only 376 connections are needed to fully specify a general covariance. Hence we can assume that in this situation the data is not suggesting any particular structure which is reasonably amenable to path analysis. We can generalise the path analysis model by making the region activities latent variables, and allowing the measured variables to be noisy versions of those regions. In SEM terms this is equivalent to assuming the covariance structure given by K2. A repeat of the whole procedure with this covariance results in much smaller structures. We focus on this approach. For dataset D1, we sample posterior structures given the training data with T = 100. There is notable variation in these structures, although some key links (e.g. Left Motor Cortex (L M1) to Left Posterior Parietal Cortex (L PPC)) are included in most samples.

Figure 2: Structure for (a) the hand-specified model and (b) the highest posterior sample.

Figure 3: Graphical structure for (a) the highest posterior structure from the sample, (b) a random sample, and (c) a sample from the two-tier model. The regions are Inferior Frontal Gyrus, Medial Frontal Gyrus, Ant. Cingulate, Frontal Operculum, Superior Frontal Gyrus, Middle Frontal Gyrus, Superior Temporal Gyrus, Middle Temporal Gyrus, Insula, Thalamus, Amygdala Hippocampal Region, Cuneus/Precuneus, Inferior Parietal Lobule and Posterior Cerebellum.

In addition, an a priori connectivity structure is proposed for the regions in the study, taking into account the task involved. This was obtained using knowledge of neuroanatomical connectivity drawn from tract-tracing studies in non-human primates. It was produced independently of the connectivity analysis and without knowledge of its results, but taking into account the seed locations and their corresponding activities. Note that this is a simple finger-tapping motor task with seeds corresponding to the associated regions. Though not trivial, we would expect the specification to be easier and more accurate here than for more complicated cognitive tasks, due to the high number of papers using this task in functional neuroimaging. Task D1 is also of note due to its focus on repeated scanning of a single individual, thus negating any problems in seed selection that may arise from inter-subject spatial variance. These two cases are specified as different hypothesised models. We denote the hand-specified structure MH, and we select the maximum a posteriori sample ML (for "Learnt Model") as a potential alternative. The two structures are illustrated in Figure 2. The maximum a posteriori parameters are then estimated for the two models using the same conjugate gradient procedure on the same dataset. These two models are then used predictively on the remaining unseen test data. We compute the predictive log-likelihoods for each model and find that the best predictive log-likelihoods are the same (to 3 significant figures) for both models. They are also the same as the predictive likelihood under the full sample covariance, which given the large data sizes used is well specified.
Both of these models perform better than random models with equivalent numbers of connections. In reality, learnt models are going to be used in new situations and in situations with less data. One test of the appropriateness of a model is to assess its predictive capability when trained on little data. By estimating the model parameters on 100 data points instead of 2000, we find that the learnt model performs very slightly better than the hand-specified model (log odds ratio of 63 on a 574-point test set), and both perform better than the full covariance (log odds of 292). This indicates that both MH and ML provide salient reduced representations which capture useful characteristics of the data. We also ran tests on D2. The maximum posterior sample and a random sample are illustrated in Figure 3. Note that although these samples appear to still be quite highly connected, they in fact have about 130 connections. Even so, this is significantly greater than the idealised connectivity structures typically used in most studies. One further approach is to assume a fully connected structure, but where the connectivity falls into two categories: we put priors on connectivity with the same values of T_ij as before for the strong connections, and much larger values for the weaker connections. When this is added to the form of the model (where we make the incorrect but practical assumption that the BIC assumption still holds for the stronger connections), we obtain even simpler structures. Following this procedure we find that models of the form of Figure 3c are typical samples from the posterior, where only the larger connections are shown. Again, connections such as those between the Cuneus/Precuneus and the Superior Frontal Gyrus, the Thalamic connections, and some of the cross-hemispheric connections are amongst those that would be expected. This approach is related to recent work on the use of sparse priors for effective connectivity [18].
9 Future Directions

This work demonstrates that if we learn structural equation models from data, we find little evidence for the simple forms of path analysis model in common use in the fMRI literature. We suggest that learning connectivity can be a reasonable complement to current procedures where prior specification is hard. Learning on its own does discover useful parameterised representations, but these parameterisations are not the same as reasonable prior specifications. This is unsurprising, due to the statistical equivalence of many SEM structures. It should be expected that combining learnt structures with prior anatomical models will help in the specification of more accurate connectivity assumptions, as it will reduce the number of equivalences and focus on more reasonable structural forms. Furthermore, future comparisons can be made using a sample of reasonable models instead of a single a priori chosen model. We would also expect that the major gains in learning models will come from a focus on dynamical networks, which do not suffer from these specificity problems. Even if the level of temporal information is small, any temporal information provides handles for inferring causality that are unavailable with static equilibrium models.

References

[1] K. A. Bollen. Structural Equations with Latent Variables. John Wiley and Sons, 1989.
[2] C. Buchel, J.T. Coull, and K.J. Friston. The predictive value of changes in effective connectivity for human learning. Science, 283:1528–1541, 1999.
[3] E. Bullmore, B. Howitz, G. Honey, M. Brammer, S. Williams, and T. Sharma. How good is good enough in path analysis of fMRI data? Neuroimage, 11:289–301, 2000.
[4] D. Dash. Restructuring dynamic causal systems in equilibrium. In Proc. Uncertainty in AI 2005, 2005.
[5] K.J. Friston and C. Buchel. Attentional modulation of effective connectivity from V2 to V5/MT in humans. Proceedings of the National Academy of Sciences, 97:7591–7596, 2000.
[6] K.J. Friston, L. Harrison, and W.D. Penny. Dynamic causal modelling. NeuroImage, 19:1273–1302, 2003.
[7] T. Haavelmo. The statistical implications of a system of simultaneous equations. Econometrica, 11:1–12, 1943.
[8] D. McGonigle, A. Howseman, B. Athwal, K.J. Friston, R. Frackowiak, and A. Holmes. Variability in fMRI: An examination of intersession differences. Neuroimage, 11:708–734, 2000.
[9] A. R. McIntosh and F. Gonzalez-Lima. Structural equation modelling and its application to network analysis in functional brain imaging. Human Brain Mapping, 2:2–22, 1994.
[10] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction and Search. MIT Press, 2nd edition, 2001.
[11] J. Pearl. Causality. Cambridge University Press, 2000.
[12] W.D. Penny, K.E. Stephan, A. Mechelli, and K.J. Friston. Comparing dynamic causal models. Neuroimage, 22:1157–1172, 2004.
[13] T. Richardson. A discovery algorithm for directed cyclic graphs. In Proceedings of the 12th Conference on Uncertainty in Artificial Intelligence, 1996.
[14] J. Rowe, K.E. Stephan, K. Friston, R. Frackowiak, A. Lees, and R. Passingham. Attention to action in Parkinson's disease. Brain, 125:276–289, 2002.
[15] R. Schlosser, T. Gesierich, B. Kauffman, G. Vucurevic, S. Hunsche, J. Gawehn, and P. Stoeter. Altered effective connectivity during working memory performance in schizophrenia: a study with fMRI and structural equation modeling. Neuroimage, 19:751–763, 2003.
[16] G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461–464, 1978.
[17] S.M. Smith, C.F. Beckmann, N. Ramnani, M.W. Woolrich, P.R. Bannister, M. Jenkinson, P.M. Matthews, and D. McGonigle. Variability in fMRI: A re-examination of intersession differences. Human Brain Mapping, 24:248–257, 2005.
[18] P.A. Valdes Sosa, J.M. Sanchez-Bornot, A. Lage-Castellanos, M. Vega-Hernandez, J. Bosch Bayard, L. Melie-Garcia, and E. Canales-Rodriguez. Estimating brain functional connectivity with sparse multivariate autoregression. Philosophical Transactions of the Royal Society of London B Biological Sciences, 360:969–981, 2005.
[19] H.C. Whalley, E. Simonotto, I. Marshall, D.G.C. Owens, N.H. Goddard, E.C. Johnstone, and S.M. Lawrie. Functional disconnectivity in subjects at high genetic risk of schizophrenia. Brain, 128:2097–2108, 2005.
[20] S. Wright. Correlation and causation. Journal of Agricultural Research, 20:557–585, 1921.
[21] X. Zheng and J. C. Rajapakse. Learning functional structure from fMR images. Neuroimage, 31:1601–1613, 2006.
Simplifying Mixture Models through Function Approximation

Kai Zhang, James T. Kwok
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
{twinsen, jamesk}@cse.ust.hk

Abstract

The finite mixture model is a powerful tool in many statistical learning problems. In this paper, we propose a general, structure-preserving approach to reduce its model complexity, which can bring significant computational benefits in many applications. The basic idea is to group the original mixture components into compact clusters, and then minimize an upper bound on the approximation error between the original and simplified models. By adopting the L2 norm as the distance measure between mixture models, we can derive closed-form solutions that are more robust and reliable than those based on KL-type distance measures. Moreover, the complexity of our algorithm is only linear in the sample size and dimensionality. Experiments on density estimation and clustering-based image segmentation demonstrate its outstanding performance in terms of both speed and accuracy.

1 Introduction

In many statistical learning problems, it is useful to obtain an estimate of the underlying probability density given a set of observations. Such a density model can facilitate discovery of the underlying data structure in unsupervised learning, and can also yield, asymptotically, optimal discriminant procedures [7]. In this paper, we focus on the finite mixture model, which describes the distribution by a mixture of simple parametric functions φ(·) as f(x) = Σ_{j=1}^n α_j φ(x, θ_j). Here, θ_j is the parameter of the jth component, and the mixing parameters α_j satisfy Σ_{j=1}^n α_j = 1. The most common parametric form of φ is the Gaussian, leading to the well-known Gaussian mixture.
The mixture model has been widely used in clustering and density estimation, where the model parameters can be estimated by the standard Expectation-Maximization (EM) algorithm. However, EM can be prohibitively expensive on large problems [12]. On the other hand, note that in many learning processes using mixture models (such as particle filtering [6] and non-parametric belief propagation [13]), the computational requirement is also very demanding due to the large number of components involved in the model. In this situation, our interest is more in reducing the number of components for prospective computational savings. Previous works typically employ spatial data structures, such as the kd-tree [8, 9], for acceleration. Recently, [5] proposed reducing a large Gaussian mixture into a smaller one by minimizing a KL-based distance between the two mixtures. This has been applied with success to hierarchical clustering of scenery images and handwritten digits. In this paper, we propose a new algorithm for simplifying a given finite mixture model while preserving its component structure, with application to nonparametric density estimation and clustering. The idea is to minimize an upper bound on the approximation error between the original and simplified mixture models. By adopting the L2 norm as the error criterion, we can derive closed-form solutions that are more robust and reliable than those based on KL-type distance measures. At the same time, our algorithm can be applied to general Gaussian kernels, and its complexity is only linear in the sample size and dimensionality. The rest of the paper is organized as follows. In Section 2 we describe the proposed approach in detail and illustrate its advantages over existing ones. In Section 3, we report experimental results on simplifying the Parzen window estimator, and on color image segmentation through the mean shift clustering procedure. Section 4 gives some concluding remarks.
2 Approximation Algorithm

Given a mixture model

f(x) = Σ_{j=1}^n α_j φ_j(x),   (1)

we assume that the jth component φ_j(x) is of the form

φ_j(x) = |H_j|^{-1/2} K_{H_j}(x − x_j),   (2)

with weight α_j, center x_j and covariance matrix H_j. Here, K_H(x) = K(H^{-1/2}x), where K(x) is a kernel that is bounded and has compact support. Note that for radially symmetric kernels, it suffices to define K by the profile k such that K(x) = k(∥x∥²). With this notation, the gradient of the kernel function K_H(x) can be conveniently written as ∂_x K_H(x) = k′(r) ∂_x r = 2k′(r) H^{-1} x, where r = x′H^{-1}x. Our task is to approximate f with a simpler mixture model

g(x) = Σ_{i=1}^m w_i g_i(x),   (3)

with m ≪ n, where each component g_i also takes the form

g_i(x) = |H̃_i|^{-1/2} K_{H̃_i}(x − t_i),   (4)

with weight w_i, center t_i, and covariance matrix H̃_i. Note that direct approximation of f by g is not feasible, because they involve a large number of components. Given a distance measure D(·, ·) between functions, the approximation error

E = D(f, g) = D( Σ_{j=1}^n α_j φ_j, Σ_{i=1}^m w_i g_i )   (5)

is usually difficult to optimize. However, the problem can be very much simplified by minimizing an upper bound of E. Consider the L2 distance D(φ, φ′) = ∫ (φ(x) − φ′(x))² dx, and suppose that the mixture components {φ_j}_{j=1}^n are divided into disjoint clusters S_1, . . . , S_m. Then, it is easy to see that the approximation error E is bounded by

E = ∫ ( Σ_{j=1}^n α_j φ_j(x) − Σ_{i=1}^m w_i g_i(x) )² dx ≤ m Σ_{i=1}^m ∫ ( w_i g_i(x) − Σ_{j∈S_i} α_j φ_j(x) )² dx.

Denote this upper bound by Ê = m Σ_{i=1}^m E_i, where

E_i = ∫ ( w_i g_i(x) − Σ_{j∈S_i} α_j φ_j(x) )² dx.   (6)

Note that Ê is the sum of the "local" approximation errors E_i. Hence, if we can find a good representative w_i g_i for each cluster by minimizing the local approximation error E_i, the overall approximation performance can also be guaranteed. This suggests partitioning the original mixture components into compact clusters, within which approximation can then be done much more easily. Our basic algorithm proceeds as follows:

1.
(Section 2.1.1) Partition the set of mixture components φ_j into m clusters, where m ≪ n. Let S_i be the set that indexes all components belonging to the ith cluster.

2. (Section 2.1.2) For each cluster, approximate the local mixture model Σ_{j∈S_i} α_j φ_j by a single component w_i g_i, where g_i is defined in (4).

3. The simplified model g is obtained as g(x) = Σ_{i=1}^m w_i g_i(x).

These steps are discussed in more detail in the following sections.

2.1 Procedure

2.1.1 Partitioning of Components

In this section, we consider how to group similar components into the same cluster, so that the subsequent local approximation can be more accurate. A useful algorithm for this task is classic vector quantization (VQ) [4], where one iterates between partitioning a set of vectors and finding the best prototype for each partition until the distortion error converges. By defining a distance D(·, ·) between the mixture components φ_j, we can partition them in a similar way. However, vector quantization is sensitive to the initial partitioning, so we first introduce a simple but highly efficient partitioning method called sequential sampling (SS):

1. Randomly select a φ_j and add it to the set of representatives R.

2. For each component φ_j (j = 1, 2, . . . , n), do the following:
• Compute the distance D(φ_j, R_i) for R_i ∈ R.
• If D(φ_j, R_i) ≤ r, where r is a predefined threshold, assign φ_j to the representative R_i, and then process the next component.
• If D(φ_j, R_i) > r for all R_i ∈ R, add φ_j as a new representative to R.

3. Terminate when all the components have been processed.

This procedure partitions the components by choosing as representatives those φ_j that are far enough away from each other, with a user-defined resolution r. It is therefore less sensitive to initialization.
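The SS procedure above can be sketched in a few lines. This version works on any objects with a pairwise distance; the 1-D centers and Euclidean distance below are purely illustrative stand-ins for mixture components and D(·, ·):

```python
import numpy as np

def sequential_sampling(components, dist, r):
    """Sequential sampling (SS): assign each component to the first
    representative within distance r, otherwise make it a new representative.
    Returns the representatives and a cluster label per component."""
    reps, labels = [], []
    for c in components:
        for idx, rep in enumerate(reps):
            if dist(c, rep) <= r:
                labels.append(idx)
                break
        else:                          # no representative is close enough
            reps.append(c)
            labels.append(len(reps) - 1)
    return reps, labels

# Toy illustration with 1-D component centers.
centers = np.array([0.0, 0.1, 5.0, 5.2, 9.9, 0.05])
reps, labels = sequential_sampling(centers, lambda a, b: abs(a - b), r=1.0)
print(reps)    # three representatives: 0.0, 5.0, 9.9
print(labels)  # [0, 0, 1, 1, 2, 0]
```

The resulting partition would then be refined by the VQ iterations described next.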
In practice, we will first initialize by sequential sampling, and then perform the iterative VQ procedure to further refine the partition, i.e., find the best representative R_i for each cluster, reassign each component φ_j to the closest representative R_{π(j)}, and iterate until the error Σ_j α_j D(φ_j, R_{π(j)}) converges.

2.1.2 Local Approximation

In this part, we consider how to obtain a good representative, w_i g_i in (4), for each local cluster S_i. The task is to determine the unknown variables w_i, t_i and H̃_i associated with g_i. Using the L2 norm, the upper bound (6) of the local approximation error can be written as

E_i = ∫ ( w_i g_i(x) − Σ_{j∈S_i} α_j φ_j(x) )² dx = w_i² C_K / |2H̃_i|^{1/2} − w_i Σ_{j∈S_i} 2 C_K α_j k(r_ij) / |H_j + H̃_i|^{1/2} + c_i.

Here, C_K = ∫ k(x′x) dx is a kernel-dependent constant, c_i = ∫ ( Σ_{j∈S_i} α_j φ_j(x) )² dx is a data-dependent constant (irrelevant to the unknown variables), and r_ij = (t_i − x_j)′(H_j + H̃_i)^{-1}(t_i − x_j). Here we have assumed that k(a) · k(b) = k(a + b), which is valid for the Gaussian and negative exponential kernels. Without this assumption, solutions can still be obtained but are less compact. To minimize E_i w.r.t. w_i, t_i and H̃_i, one can set the corresponding partial derivatives of E_i to zero. However, this leads to a nonlinear system that is quite difficult to solve. Here, we decouple the relations among these three parameters. First, observe that E_i is a quadratic function of w_i. Therefore, given H̃_i and t_i, the minimum value of E_i can be easily obtained as

E_i^min = |H̃_i|^{1/2} ( Σ_{j∈S_i} α_j k(r_ij) |H_j + H̃_i|^{-1/2} )².   (7)

The remaining task is to minimize E_i^min w.r.t. t_i and H̃_i. By setting ∂_{t_i} E_i^min = 0, we have

t_i = M_i^{-1} Σ_{j∈S_i} α_j k′(r_ij) (H_j + H̃_i)^{-1} x_j / |H_j + H̃_i|^{1/2},   (8)

where

M_i = Σ_{j∈S_i} α_j k′(r_ij) (H_j + H̃_i)^{-1} / |H_j + H̃_i|^{1/2}.

This is an iterative contraction mapping: if H̃_i is fixed, we can obtain t_i by starting with an initial t_i^{(0)} and iterating (8) until convergence.
Now, to solve for H̃_i, we set ∂_{H̃_i} E_i^min = 0 and obtain

H̃_i = P_i^{-1} Σ_{j∈S_i} [ α_j (H̃_i + H_j)^{-1} / |H_j + H̃_i|^{1/2} ] [ k(r_ij) H_j + 4(−k′(r_ij))(x_j − t_i)(x_j − t_i)′ (H̃_i + H_j)^{-1} H̃_i ],   (9)

where

P_i = Σ_{j∈S_i} [ (H̃_i + H_j)^{-1} / |H_j + H̃_i|^{1/2} ] α_j k(r_ij).

In summary, we first initialize

t_i^{(0)} = Σ_{j∈S_i} α_j x_j / Σ_{j∈S_i} α_j,  H̃_i^{(0)} = Σ_{j∈S_i} α_j [ H_j + (t_i^{(0)} − x_j)(t_i^{(0)} − x_j)′ ] / Σ_{j∈S_i} α_j,

and then iterate (8) and (9) until convergence. The converged values of t_i and H̃_i are substituted into ∂_{w_i} E_i = 0 to obtain w_i as

w_i = |2H̃_i|^{1/2} Σ_{j∈S_i} α_j k(r_ij) / |H_j + H̃_i|^{1/2}.   (10)

2.2 Complexity

In the partitioning step, sequential sampling has a complexity of O(dmn), where n is the original model size, m is the number of clusters, and d the dimension. By using a hierarchical scheme [2], this can be reduced to O(dn log(m)). The VQ takes O(dnm) time. In the local approximation step, the complexity is l Σ_{i=1}^m n_i d³ = l n d³, where l is the maximum number of iterations needed. In practice, we can enforce a diagonal structure on the covariance matrices H̃_i while still obtaining a closed-form solution; the complexity then becomes linear in the dimension d instead of cubic. Summing up these three terms, the overall complexity is O(dn log(m) + dnm + lnd) = O(dn(m + l)), which is linear in both the data size and the dimension (in practice m and l are quite small).

2.3 Remarks

In this section, we discuss some interesting properties of the approximation scheme proposed in Section 2.1.2. For better intuition, we examine the special case of a Parzen window density estimator [11], where all φ_j have the same weights and bandwidths (H_j = H for j = 1, 2, . . . , n). Equation (9) then reduces to

H̃_i = H + 4 H̃_i (H̃_i + H)^{-1} V_i,   (11)

where

V_i = Σ_{j∈S_i} α_j (−k′(r_ij)) (x_j − t_i)(x_j − t_i)′ / Σ_{j∈S_i} α_j k(r_ij).

It shows that the bandwidth H̃_i of g_i can be decomposed into two parts: the bandwidth H of the original kernel density estimator, and the covariance V_i of the local cluster S_i with an adjusting matrix Γ_i = 4 H̃_i (H̃_i + H)^{-1}.
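In the 1-D Parzen case, equation (11) becomes a scalar fixed point that is easy to iterate. A minimal sketch with illustrative values of h² and V_i, numerically checking the bounds h_i² ≥ h² + V_i and γ_i ≥ 2 used in the discussion that follows:

```python
# Scalar sketch of the 1-D bandwidth fixed point h_i^2 = h^2 + gamma_i * V_i
# with gamma_i = 4 h_i^2 / (h^2 + h_i^2); the values of h^2 and V are
# illustrative, not taken from the paper's experiments.
def bandwidth_fixed_point(h2, V, iters=100):
    u = h2 + V                           # initialisation analogous to H^(0)
    for _ in range(iters):
        u = h2 + 4 * u * V / (h2 + u)    # one step of equation (11) in 1-D
    return u

h2, V = 1.0, 0.5
u = bandwidth_fixed_point(h2, V)
gamma = 4 * u / (h2 + u)
assert abs(u - (h2 + gamma * V)) < 1e-9  # u is a fixed point of (11)
assert gamma >= 2 and u >= h2 + V        # the bounds noted in the text
print(u, gamma)
```

With these values the fixed point converges quickly because the update is a contraction for moderate V.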
As an illustration, consider the 1-D case where $H = h^2$ and $\tilde H_i = h_i^2$. Then $\gamma_i = \frac{4 h_i^2}{h^2 + h_i^2}$ and $h_i^2 = h^2 + \gamma_i V_i$. Since $V_i \ge 0$ and $\gamma_i \ge 2$, we can see that $h_i^2 \ge h^2 + V_i$. Moreover, $h_i$ is closely related to the spread of the local cluster. If all the points in $S_i$ are located at the same position (i.e., $V_i = 0$), then $h_i^2 = h^2$. Otherwise, the larger the spread of the local cluster, the larger is $h_i$. In other words, the bandwidths $\tilde H_i$ are adaptive to the local data distribution. Related work on simplifying mixture models (such as [5]) simply chooses $\tilde H_i = H + \mathrm{Cov}[S_i]$. In comparison, our covariance term $V_i$ is more reliable in that it incorporates distance-based weighting. Interestingly, this is somewhat similar to the bandwidth matrix used in the manifold Parzen windows [14], which is designed for handling sparse, high-dimensional data more robustly. Note that our choice of $\tilde H_i$ is derived rigorously by minimizing the $L_2$ approximation error. Therefore, this coincidence naturally indicates the robustness of $L_2$-norm based distance measures. Moreover, note that the adjusting matrix $\Gamma_i$ changes not only the scale of the bandwidth, but also its eigen-structure in an iterative manner. This will be very beneficial in multivariate cases.

Second, in determining the center of $g_i$, (8) can be reduced to
$$t_i = \frac{\sum_{j \in S_i} \alpha_j k'_{H + \tilde H_i}(x_j - t_i)\, x_j}{\sum_{j \in S_i} \alpha_j k'_{H + \tilde H_i}(x_j - t_i)}. \qquad (12)$$
This can be regarded as a mean-shift procedure [1] in the $d$-dimensional space with kernel $K$. It is easy to verify that this iterative procedure is indeed locating the peak of the density function $p_i(x) = |H + \tilde H_i|^{-1/2} \sum_{j \in S_i} K_{H + \tilde H_i}(x - x_j)$. Note, on the other hand, that what we originally want to approximate is the local density $f_i(x) = |H|^{-1/2} \sum_{j \in S_i} K_H(x - x_j)$. In the 1-D case (with $H = h^2$ and $\tilde H_i = h_i^2$), the bandwidth of $p_i$ (i.e., $h^2 + h_i^2$) is larger than that of $f_i$ (i.e., $h^2$).
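For concreteness, the 1-D Parzen specialization above (the mean-shift center update (12) and the bandwidth update (11), followed by the weight formula (10)) can be sketched as below. This is a minimal illustration assuming a Gaussian profile $k(r) = \exp(-r/2)$, so that $-k'(r) = k(r)/2$; the function name and iteration defaults are our own choices, not the authors' implementation.

```python
import numpy as np

def fit_local_gaussian_1d(x, alpha, h, n_iter=100):
    """Fit one weighted Gaussian w * N(t, ht2) to a 1-D cluster of Parzen
    kernels N(x_j, h^2) with weights alpha_j, by iterating the fixed-point
    updates for the center (eq. 12) and the bandwidth (eq. 11)."""
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    t = np.average(x, weights=alpha)                    # t^(0): weighted mean
    ht2 = np.average(h**2 + (t - x)**2, weights=alpha)  # H^(0): local covariance
    for _ in range(n_iter):
        r = (t - x) ** 2 / (h**2 + ht2)
        k = np.exp(-r / 2.0)
        # Mean-shift step (eq. 12): for a Gaussian profile the weights
        # reduce to alpha_j * k(r_j) since the constant factors cancel.
        t = np.sum(alpha * k * x) / np.sum(alpha * k)
        # Bandwidth update (eq. 11): ht2 = h^2 + gamma * V.
        V = np.sum(alpha * (k / 2.0) * (x - t) ** 2) / np.sum(alpha * k)
        ht2 = h**2 + 4.0 * ht2 / (h**2 + ht2) * V
    # Optimal weight given the converged t and ht2 (eq. 10).
    r = (t - x) ** 2 / (h**2 + ht2)
    w = np.sqrt(2.0 * ht2) * np.sum(alpha * np.exp(-r / 2.0)) / np.sqrt(h**2 + ht2)
    return w, t, ht2
```

On a symmetric cluster the center stays at the weighted mean, while the fitted bandwidth inflates beyond $h^2$ to absorb the cluster's spread, as the decomposition $h_i^2 = h^2 + \gamma_i V_i$ predicts.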
It appears intriguing that, to fit a kernel density $f_i(x)$ estimated on the sample set $\{x_j\}_{j \in S_i}$, one needs to locate the maximum of another density function $p_i(x)$, instead of the maximum of $f_i(x)$ itself or, simply, the mean of the sample set $\{x_j\}_{j \in S_i}$ as chosen in [5]. Indeed, these three choices coincide when the distribution of $S_i$ is symmetric and uni-modal, but they differ otherwise. Intuitively, when the data is asymmetric, the center $t_i$ should be biased towards the heavier side of the data distribution. The maximum of $f_i(x)$ thus fails to meet this requirement. On the other hand, the mean of $S_i$, though biased towards the heavier side, still lacks an accurate control on the degree of bias. In comparison, our method provides a principled way of selecting the center. Note that $p_i(x)$ has a larger bandwidth than the original $f_i(x)$. Therefore, its maximum will move towards the heavier side of the distribution compared with that of $f_i(x)$, with the degree of bias automatically controlled by the mean-shift iterations in (12).

Here, we give an illustration of the performance of the three center selection schemes. Figure 1(a) shows the histogram of a local cluster $S_i$, whose Parzen window estimator ($f_i$) is asymmetric. Figure 1(b) plots the corresponding approximation error $E_i$ (6) at different bandwidths $h_i$ (the remaining parameter, $w_i$, is set to the optimal value by (10)). As can be seen, the approximation error of our method is consistently lower than those of the other two schemes (local maximum and local mean). Moreover, the resulting optimum is also much lower.

Figure 1: Approximation of an asymmetric density using different center selection schemes. (a) The histogram of a local cluster $S_i$ and its density $f_i$. (b) Approximation error versus bandwidth $h_i^2$ for the local maximum, local mean, and our method.

3 Experiments

In this section, we perform experiments to evaluate the performance of our mixture simplification scheme.
We focus on the Parzen window estimator which, given a set of samples $S = \{x_i\}_{i=1}^n$ in $\mathbb{R}^d$, can be written as $\hat f(x) = \frac{1}{n} |H|^{-1/2} \sum_{j=1}^n K_H(x - x_j)$. Note that the Parzen window estimator is a limiting form of the mixture model, where the number of components equals the data size and can be quite large. In Section 3.1, we use the proposed approach to reduce the number of components in the kernel density estimator, and compare its performance with the algorithm in [5]. Then, in Section 3.2, we perform color image segmentation by running the mean shift clustering algorithm on the simplified density model.

3.1 Simplifying Nonparametric Density Models

In this section, we reduce the number of kernels in the Parzen window estimator by using the proposed approach and the method in [5]. Experiments are performed on a 1-D set with 1800 samples drawn from the Gaussian mixture $\frac{8}{18} N(-2.6, 0.09) + \frac{6}{18} N(-0.8, 0.36) + \frac{4}{18} N(1.7, 0.64)$, where $N(\mu, \sigma^2)$ denotes the normal distribution with mean $\mu$ and variance $\sigma^2$. The Gaussian kernel with fixed bandwidth $h = 0.3$ is used for density estimation. To make the problem more challenging, we choose $m = 5$, i.e., only 5 kernels are used to approximate the density. The k-means algorithm is used for initialization. As can be seen from Figure 2(b), the third Gaussian component has been broken into two by the method in [5]. In comparison, our result in Figure 2(c) is more reliable.

Figure 2: Approximating the Parzen window estimator by simplifying mixture models. (a) Histogram. (b) Result by [5]. (c) Our result. Green: Parzen window estimator; black: simplified mixture model; blue-dashed: components of the mixture model.
For a quantitative evaluation, we randomly generate the 3-Gaussian data 100 times, and compare the two algorithms (ours and [5]) using the following error criteria: 1) the $L_2$ error (5); 2) the standard KL-distance; 3) the local KL-distance used in [5]. The local KL-distance between two mixtures, $f = \sum_{j=1}^n \alpha_j \phi_j$ and $g = \sum_{i=1}^m w_i g_i$, is defined as
$$d(f, g) = \sum_{j=1}^n \alpha_j\, \mathrm{KL}(\phi_j \| g_{\pi(j)}),$$
where $\pi(j)$ is the function that maps each component $\phi_j$ to the closest representative component $g_{\pi(j)}$, i.e., $\pi(j) = \arg\min_{i=1,2,\ldots,m} \mathrm{KL}(\phi_j \| g_i)$. Results are plotted in Figure 3, where for clarity we order the results by increasing error of [5]. We can see that under the $L_2$ norm, the error of our algorithm is significantly lower than that of [5]. Quantitatively, our error is only about 36.61% of that by [5]. Under the standard KL-distance, our error is about 87.34% of that by [5]; the improvement is less significant here because the KL-distance is sensitive to the tail of the distribution, i.e., a small difference in the low-density regions may induce a huge KL-distance. As for the local KL-distance, our error is about 99.35% of that by [5].

Figure 3: Quantitative comparison of the approximation errors. (a) The $L_2$ distance error. (b) Standard KL-distance. (c) Local KL-distance defined by [5].

3.2 Image Segmentation

The Parzen window estimator can be used to reveal important clustering information, namely that its modes (or local maxima) correspond to dominant clusters in the data. This property is utilized in the
mean shift clustering algorithm [1, 3], where every data point is moved along the density gradient until it reaches the nearest local density maximum. The mean shift algorithm is robust, and can identify arbitrarily-shaped clusters in the feature space. Recently, mean shift has been applied to color image segmentation and has proven to be quite successful [1]. The idea is to identify homogeneous image regions through clustering in a properly selected feature space (such as color, texture, or shape). However, mean shift can be quite expensive due to the large number of kernels involved in the density estimator. To reduce the computational requirement, we first reduce the density estimator $\hat f(x)$ to a simpler model $g(x)$ using our simplification scheme, and then apply the iterative mean shift procedure on the simplified model $g(x)$.

Experiments are performed on a number of benchmark images¹ used in [1]. We use the Gaussian kernel with bandwidth $h = 20$. The partition parameter is $r = 25$. For comparison, we also implement the standard mean shift and its fast version using kd-trees (using the ANN library [10]). The code is written in C++ and run on a 2.26GHz Pentium-III machine. As the "true" segmentation of an image is subjective, only a visual comparison is intended here.

Table 1: Total wall time (in seconds) on various segmentation tasks, and the number of components in $g(x)$.

image    | data size         | standard mean shift (s) | kd-tree mean shift (s) | our method: # components | our method: time (s)
squirrel | 60,192 (209×288)  | 1215.8                  | 11.94                  | 81                       | 0.18
hand     | 73,386 (243×302)  | 1679.7                  | 12.92                  | 120                      | 0.35
house    | 48,960 (192×255)  | 1284.5                  | 5.16                   | 159                      | 0.22
lake     | 262,144 (512×512) | 3343.0                  | 85.65                  | 440                      | 3.67

Segmentation results are shown in Figure 4. The rows, from top to bottom, are: the original image, segmentation results by standard mean shift, and our approach.
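To make the mode-seeking step concrete, here is a minimal 1-D sketch of the mean-shift iteration run on a simplified Gaussian mixture $g(x) = \sum_i w_i N(t_i, \sigma_i^2)$ rather than the full $n$-kernel estimator; the function name and convergence defaults are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mean_shift_mode(x0, centers, weights, var, n_iter=500, tol=1e-10):
    """Move a starting point x0 uphill on a 1-D Gaussian mixture density
    sum_i w_i * N(t_i, var_i) via fixed-point mean-shift iterations and
    return the mode it converges to."""
    centers = np.asarray(centers, dtype=float)
    weights = np.asarray(weights, dtype=float)
    var = np.asarray(var, dtype=float)
    x = float(x0)
    for _ in range(n_iter):
        # Each component's density at x, divided by its variance: these are
        # the coefficients in the stationarity condition of the gradient.
        k = weights * np.exp(-0.5 * (x - centers) ** 2 / var) \
            / (np.sqrt(2.0 * np.pi * var) * var)
        x_new = float(np.sum(k * centers) / np.sum(k))
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x
```

With only $m$ components, each iteration costs $O(m)$ instead of $O(n)$, which is the source of the speedups reported in Table 1.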
We can see that our results are closer to those of the standard mean shift (applied to the original density estimator), with the number of components (Table 1) dramatically smaller than the data size $n$. This demonstrates the success of our approximation scheme in maintaining the structure of the data distribution using highly compact models. Our algorithm is also much faster than the standard mean shift and its fast version using kd-trees. The reason is that kd-trees only facilitate range searching but do not reduce the expensive computations associated with the large number of kernels.

4 Conclusion

Finite mixtures are a powerful model in many statistical learning problems. However, the large model size can be a major hindrance in many applications. In this paper, we reduce the model complexity by first grouping the components into compact clusters, and then performing a local function approximation that minimizes an upper bound of the approximation error. Our algorithm has low complexity, and demonstrates more reliable performance compared with methods using KL-based distances.

¹http://www.caip.rutgers.edu/∼comanici/MSPAMI/msPamiResults.html

Figure 4: Image segmentation by standard mean shift (2nd row), and ours (bottom).

References
[1] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, 2002.
[2] T. Feder and D. Greene. Optimal algorithms for approximate clustering. In Proceedings of ACM Symposium on Theory of Computing, pages 434–444, 1988.
[3] K. Fukunaga and L. Hostetler. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Transactions on Information Theory, 21:32–40, 1975.
[4] A. Gersho and R.M. Gray. Vector Quantization and Signal Compression. Kluwer Academic Press, Boston, 1992.
[5] J. Goldberger and S. Roweis. Hierarchical clustering of a mixture model.
In Advances in Neural Information Processing Systems 17, pages 505–512. 2005. [6] B. Han, D. Comaniciu, Y. Zhu, and L. Davis. Incremental density approximation and kernel-based Bayesian filtering for object tracking. In Proceedings of the International Conference on Computer Vision and Pattern Recognition, pages 638–644, 2004. [7] A.J. Izenman. Recent developments in nonparametric density estimation. Journal of the American Statistical Association, 86(413):205–224, 1991. [8] T. Kanungo, D.M. Mount, N.S. Netanyahu, C.D. Piatko, R. Silverman, and A.Y. Wu. An efficient kmeans clustering algorithm: Analysis and implementation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):881–892, 2002. [9] A.W. Moore. Very fast EM-based mixture model clustering using multiresolution kd-trees. In Advances in Neural Information Processing Systems 11, pages 543–549, 1998. [10] D.M. Mount and S. Arya. ANN: A library for approximate nearest neighbor searching. In Proceedings of Center for Geometric Computing Second Annual Fall Workshop Computational Geometry (available from http://www.cs.umd.edu/∼mount/ANN), 1997. [11] E. Parzen. On estimation of a probability density function and mode. Annals of Mathematical Statistics, 33:1065–1075, 1962. [12] K. Popat and R.W. Picard. Cluster-based probability model and its application to image and texture processing. IEEE Transactions on Image Processing, 6(2):268–284, 1997. [13] E.B. Sudderth, A. Torralba, W.T. Freeman, and A.S. Willsky. Describing visual scenes using transformed Dirichlet processes. In Advances in Neural Information Processing Systems 19, 2006. [14] P. Vincent and Y. Bengio. Manifold Parzen windows. In Advances in Neural Information Processing Systems 15, 2003.
2006
Sparse Representation for Signal Classification
Ke Huang and Selin Aviyente
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824
{kehuang, aviyente}@egr.msu.edu

Abstract

In this paper, the application of sparse representation (factorization) of signals over an overcomplete basis (dictionary) to signal classification is discussed. Searching for the sparse representation of a signal over an overcomplete dictionary is achieved by optimizing an objective function that includes two terms: one that measures the signal reconstruction error and another that measures the sparsity. This objective function works well in applications where signals need to be reconstructed, such as coding and denoising. On the other hand, discriminative methods, such as linear discriminant analysis (LDA), are better suited for classification tasks. However, discriminative methods are usually sensitive to corruption in signals because they lack the properties crucial for signal reconstruction. In this paper, we present a theoretical framework for signal classification with sparse representation. The approach combines the discrimination power of discriminative methods with the reconstruction property and the sparsity of the sparse representation, which enables one to deal with signal corruptions: noise, missing data and outliers. The proposed approach is therefore capable of robust classification with a sparse representation of signals. The theoretical results are demonstrated on signal classification tasks, showing that the proposed approach outperforms the standard discriminative methods and the standard sparse representation in the case of corrupted signals.

1 Introduction

Sparse representations of signals have received a great deal of attention in recent years. The problem solved by sparse representation is to search for the most compact representation of a signal as a linear combination of atoms in an overcomplete dictionary.
Recent developments in multi-scale and multi-orientation representations of signals, such as the wavelet, ridgelet, curvelet and contourlet transforms, are an important incentive for research on sparse representation. Compared to methods based on orthonormal transforms or direct time-domain processing, sparse representation usually offers better performance owing to its capacity for efficient signal modelling. Research has focused on three aspects of sparse representation: pursuit methods for solving the optimization problem, such as matching pursuit [1], orthogonal matching pursuit [2], basis pursuit [3], and LARS/homotopy methods [4]; design of the dictionary, such as the K-SVD method [5]; and applications of sparse representation to different tasks, such as signal separation, denoising, coding, and image inpainting [6, 7, 8, 9, 10]. For instance, in [6], sparse representation is used for image separation. The overcomplete dictionary is generated by combining multiple standard transforms, including the curvelet transform, ridgelet transform and discrete cosine transform. In [7], the application of sparse representation to blind source separation is discussed and experimental results on EEG data analysis are demonstrated. In [8], a sparse image coding method with the wavelet transform is presented. In [9], sparse representation with an adaptive dictionary is shown to have state-of-the-art performance in image denoising. The widely used shrinkage method for image denoising is shown to be the first iteration of basis pursuit that solves the sparse representation problem [10]. In the standard framework of sparse representation, the objective is to reduce the signal reconstruction error with as few atoms as possible. On the other hand, discriminative analysis methods, such as LDA, are more suitable for the task of classification.
However, discriminative methods are usually sensitive to corruption in signals because they lack the properties crucial for signal reconstruction. In this paper, we propose the method of sparse representation for signal classification (SRSC), which modifies the standard sparse representation framework for signal classification. We first show that replacing the reconstruction error with discrimination power in the objective function of the sparse representation is more suitable for classification tasks. When the signal is corrupted, the discriminative methods may fail because the discriminative analysis contains too little information to successfully deal with noise, missing data and outliers. To address this robustness problem, the proposed SRSC approach combines discrimination power, signal reconstruction and sparsity in the objective function for classification. With the theoretical framework of SRSC, our objective is to achieve a sparse and robust representation of corrupted signals for effective classification. The rest of this paper is organized as follows. Section 2 reviews the problem formulation and solution for the standard sparse representation. Section 3 discusses the motivations for proposing SRSC by analyzing reconstructive and discriminative methods for signal classification. The formulation and solution of SRSC are presented in Section 4. Experimental results with synthetic and real data are shown in Section 5, and Section 6 concludes the paper with a summary of the proposed work and discussions.

2 Sparse Representation of Signals

The problem of finding the sparse representation of a signal in a given overcomplete dictionary can be formulated as follows.
Given an $N \times M$ matrix $A$ containing the elements of an overcomplete dictionary in its columns, with $M > N$ and usually $M \gg N$, and a signal $y \in \mathbb{R}^N$, the problem of sparse representation is to find an $M \times 1$ coefficient vector $x$ such that $y = Ax$ and $\|x\|_0$ is minimized, i.e.,
$$x = \arg\min_{x'} \|x'\|_0 \quad \text{s.t.} \quad y = Ax', \qquad (1)$$
where $\|x\|_0$ is the $\ell_0$ norm, equal to the number of non-zero components in the vector $x$. Finding the solution to equation (1) is NP-hard due to its combinatorial nature. Suboptimal solutions to this problem can be found by iterative methods like matching pursuit and orthogonal matching pursuit. An approximate solution is obtained by replacing the $\ell_0$ norm in equation (1) with the $\ell_1$ norm:
$$x = \arg\min_{x'} \|x'\|_1 \quad \text{s.t.} \quad y = Ax', \qquad (2)$$
where $\|x\|_1$ is the $\ell_1$ norm. In [11], it is proved that if certain conditions on the sparsity are satisfied, i.e., the solution is sparse enough, the solution of equation (1) is equivalent to the solution of equation (2), which can be efficiently solved by basis pursuit using linear programming. A generalized version of equation (2), which allows for a certain degree of noise, is to find $x$ such that the following objective function is minimized:
$$J_1(x; \lambda) = \|y - Ax\|_2^2 + \lambda \|x\|_1, \qquad (3)$$
where the parameter $\lambda > 0$ is a scalar regularization parameter that balances the tradeoff between reconstruction error and sparsity. In [12], a Bayesian approach is proposed for learning the optimal value of $\lambda$. Beyond the intuitive interpretation as a sparse factorization that minimizes signal reconstruction error, the problem formulated in equation (3) has an equivalent interpretation in the framework of Bayesian decision theory [13]. The signal $y$ is assumed to be generated by the following model:
$$y = Ax + \varepsilon, \qquad (4)$$
where $\varepsilon$ is white Gaussian noise. Moreover, the prior distribution of $x$ is assumed to be super-Gaussian:
$$p(x) \sim \exp\Big( -\lambda \sum_{i=1}^M |x_i|^p \Big), \qquad (5)$$
where $p \in [0, 1]$.
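Returning to the greedy suboptimal solvers for equation (1) mentioned above, a bare-bones orthogonal matching pursuit might look like the following. This is a generic sketch assuming unit-norm dictionary columns, with a helper name of our own; it is not code from the cited references.

```python
import numpy as np

def omp(A, y, n_atoms):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the current residual, then re-fit y by least squares
    on all columns selected so far."""
    y = np.asarray(y, dtype=float)
    residual = y.copy()
    support = []
    for _ in range(n_atoms):
        # Atom with the largest absolute correlation to the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal step: least-squares fit of y on the chosen atoms.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

When $y$ truly lies in the span of a few atoms and the dictionary's coherence is low, the greedy selection recovers them exactly, which is the regime where the $\ell_0$ and $\ell_1$ solutions of (1) and (2) coincide.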
This prior has been shown to encourage sparsity in many situations, due to its heavy tails and sharp peak. Given this prior, the maximum a posteriori (MAP) estimate of $x$ is
$$x_{\mathrm{MAP}} = \arg\max_x p(x|y) = \arg\min_x \big[ -\log p(y|x) - \log p(x) \big] = \arg\min_x \Big( \|y - Ax\|_2^2 + \lambda \sum_{i=1}^M |x_i|^p \Big). \qquad (6)$$
When $p = 0$, equation (6) is equivalent to the generalized form of equation (1); when $p = 1$, equation (6) is equivalent to the $\ell_1$-regularized problem of equation (3), the noise-tolerant form of equation (2).

3 Reconstruction and Discrimination

Sparse representation works well in applications where the original signal $y$ needs to be reconstructed as accurately as possible, such as denoising, image inpainting and coding. However, for applications like signal classification, it is more important that the representation be discriminative for the given signal classes than that it have a small reconstruction error. The difference between reconstruction and discrimination has been widely investigated in the literature. It is known that typical reconstructive methods, such as principal component analysis (PCA) and independent component analysis (ICA), aim at obtaining a representation that enables sufficient reconstruction, and thus are able to deal with signal corruption, i.e., noise, missing data and outliers. On the other hand, discriminative methods, such as LDA [14], generate a signal representation that maximizes the separation between the distributions of signals from different classes. While both types of method have broad applications in classification, the discriminative methods have often outperformed the reconstructive methods for the classification task [15, 16]. However, this comparison between the two types of method assumes that the signals being classified are ideal, i.e., noiseless, complete (without missing data) and without outliers. When this assumption does not hold, classification may suffer from the non-robust nature of discriminative methods, which contain insufficient information to successfully deal with signal corruptions.
Specifically, the representation provided by the discriminative methods for optimal classification does not necessarily contain sufficient information for signal reconstruction, which is necessary for removing noise, recovering missing data and detecting outliers. This performance degradation of discriminative methods on corrupted signals is evident in the examples shown in [17]. On the other hand, reconstructive methods have shown successful performance in addressing these problems. In [9], sparse representation is shown to achieve state-of-the-art performance in image denoising. In [18], missing pixels in images are successfully recovered by an inpainting method based on sparse representation. In [17, 19], a PCA method with subsampling effectively detects and excludes outliers for the subsequent LDA analysis. All of these examples motivate the design of a new signal representation that combines the advantages of both reconstructive and discriminative methods to address the problem of robust classification when the obtained signals are corrupted. The proposed method should generate a representation that contains discriminative information for classification and crucial information for signal reconstruction, and preferably the representation should be sparse. Due to the evident reconstructive properties [9, 18], the available efficient pursuit methods and the sparsity of the representation, we choose sparse representation as the basic framework for SRSC and incorporate a measure of discrimination power into the objective function. Therefore, the sparse representation obtained by the proposed SRSC contains both crucial information for reconstruction and discriminative information for classification, which enables reasonable classification performance in the case of corrupted signals. The three objectives (sparsity, reconstruction and discrimination) may not always be consistent.
Therefore, weighting factors are introduced to adjust the tradeoff among these objectives, like the weighting factor $\lambda$ in equation (3). It should be noted that the aim of SRSC is not to improve upon standard discriminative methods like LDA in the case of ideal signals, but to achieve comparable classification results when the signals are corrupted. A recent work [17] that aims at robust classification shares some common ideas with the proposed SRSC. In [17], PCA with subsampling, proposed in [19], is applied to detect and exclude outliers in images, and the remaining pixels are used for calculating the LDA.

4 Sparse Representation for Signal Classification

In this section, the SRSC problem is formulated mathematically and a pursuit method is proposed to optimize the objective function. We first replace the term measuring reconstruction error with a term measuring discrimination power, to show the different effects of reconstruction and discrimination. Further, we incorporate the measure of discrimination power into the framework of standard sparse representation to effectively address the problem of classifying corrupted signals. Fisher's discrimination criterion [14], as used in LDA, is applied to quantify the discrimination power. Other well-known discrimination criteria could easily be substituted.

4.1 Problem Formulation

Given $y = Ax$ as discussed in Section 2, we view $x$ as the feature extracted from the signal $y$ for classification. The extracted feature should be as discriminative as possible between the different signal classes. Suppose that we have a set of $K$ signals in a signal matrix $Y = [y_1, y_2, \ldots, y_K]$ with the corresponding representation in the overcomplete dictionary $X = [x_1, x_2, \ldots, x_K]$, of which $K_i$ samples are in class $C_i$, for $1 \le i \le C$. The mean $m_i$ and variance $s_i^2$ for class $C_i$ are computed in the feature space as follows:
$$m_i = \frac{1}{K_i} \sum_{x \in C_i} x, \qquad s_i^2 = \frac{1}{K_i} \sum_{x \in C_i} \|x - m_i\|_2^2. \qquad (7)$$
The mean of all samples is defined as $m = \frac{1}{K} \sum_{i=1}^K x_i$.
Finally, Fisher's discrimination power can then be defined as
$$F(X) = \frac{S_B}{S_W} = \frac{\big\| \sum_{i=1}^C K_i (m_i - m)(m_i - m)^T \big\|_2^2}{\sum_{i=1}^C s_i^2}. \qquad (8)$$
The scatter of the sample means, $S_B = \big\| \sum_{i=1}^C K_i (m_i - m)(m_i - m)^T \big\|_2^2$, can be interpreted as the 'inter-class distance', and the sum of variances $S_W = \sum_{i=1}^C s_i^2$ can similarly be interpreted as the 'inner-class scatter'. Fisher's criterion is motivated by the intuitive idea that the discrimination power is maximized when the distributions of different classes are as far apart as possible and the samples from the same class are as close together as possible. Replacing the reconstruction error with the discrimination power, the objective function that focuses only on classification can be written as
$$J_2(X, \lambda) = F(X) - \lambda \sum_{i=1}^K \|x_i\|_0, \qquad (9)$$
where $\lambda$ is a positive scalar weighting factor chosen to adjust the tradeoff between discrimination power and sparsity. Maximizing $J_2(X, \lambda)$ generates a sparse representation that has good discrimination power. When the discrimination power, reconstruction error and sparsity are combined, the objective function can be written as
$$J_3(X, \lambda_1, \lambda_2) = F(X) - \lambda_1 \sum_{i=1}^K \|x_i\|_0 - \lambda_2 \sum_{i=1}^K \|y_i - Ax_i\|_2^2, \qquad (10)$$
where $\lambda_1$ and $\lambda_2$ are positive scalar weighting factors chosen to adjust the tradeoff among the discrimination power, sparsity and the reconstruction error. Maximizing $J_3(X, \lambda_1, \lambda_2)$ ensures that a representation with discrimination power, reconstruction property and sparsity is extracted for robust classification of corrupted signals. In the case that the signals are corrupted, the two terms $\sum_{i=1}^K \|x_i\|_0$ and $\sum_{i=1}^K \|y_i - Ax_i\|_2^2$ robustly recover the signal structure, as in [9, 18]. On the other hand, the inclusion of the term $F(X)$ requires that the obtained representation contain discriminative information for classification. In the following discussion, we refer to the solution of the objective function $J_3(X, \lambda_1, \lambda_2)$ as the features for the proposed SRSC.
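A small numerical check of $F(X)$ in eq. (8) helps fix the notation. In this sketch, samples are columns of $X$, and the matrix norm on the between-class scatter is taken as a squared Frobenius norm, which is our reading of the notation rather than something spelled out above.

```python
import numpy as np

def fisher_power(X, labels):
    """Fisher discrimination power F(X) = S_B / S_W of eq. (8),
    computed over the columns of X grouped by class labels."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    m = X.mean(axis=1)                                  # overall mean
    S_B = np.zeros((X.shape[0], X.shape[0]))
    S_W = 0.0
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mc = Xc.mean(axis=1)
        # K_i * (m_i - m)(m_i - m)^T, summed over classes.
        S_B += Xc.shape[1] * np.outer(mc - m, mc - m)
        # s_i^2: mean squared distance of class samples to the class mean.
        S_W += float(np.mean(np.sum((Xc - mc[:, None]) ** 2, axis=0)))
    return float(np.sum(S_B ** 2)) / S_W                # ||S_B||_F^2 / S_W
```

As expected from eq. (8), features whose class means sit far apart relative to the within-class spread score much higher than features whose class distributions overlap.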
4.2 Problem Solution

Both the objective function $J_2(X, \lambda)$ defined in equation (9) and the objective function $J_3(X, \lambda_1, \lambda_2)$ defined in equation (10) have forms similar to the objective function of the standard sparse representation, $J_1(x; \lambda)$ in equation (3). However, the key difference is that the evaluation of $F(X)$ in $J_2(X, \lambda)$ and $J_3(X, \lambda_1, \lambda_2)$ involves not only a single sample, as in $J_1(x; \lambda)$, but all samples. Therefore, not all pursuit methods applicable to the standard sparse representation, such as basis pursuit and LARS/homotopy methods, can be directly applied to optimize $J_2(X, \lambda)$ and $J_3(X, \lambda_1, \lambda_2)$. However, the iterative optimization employed in matching pursuit and orthogonal matching pursuit provides a direct reference for the optimization of $J_2(X, \lambda)$ and $J_3(X, \lambda_1, \lambda_2)$. In this paper, we propose an algorithm similar to orthogonal matching pursuit and inspired by the simultaneous sparse approximation algorithm described in [20, 21]. Taking the optimization of $J_3(X, \lambda_1, \lambda_2)$ as an example, the pursuit algorithm can be summarized as follows:

1. Initialize the residue matrix $R_0 = Y$ and the iteration counter $t = 0$.
2. Choose the atom from the dictionary $A$ that maximizes the objective function:
$$g = \arg\max_{g \in A} J_3(g^T R_t, \lambda_1, \lambda_2). \qquad (11)$$
3. Determine the orthogonal projection matrix $P_t$ onto the span of the chosen atoms, and compute the new residue:
$$R_t = Y - P_t Y. \qquad (12)$$
4. Increment $t$ and return to Step 2 until $t$ reaches a pre-determined number.

The pursuit algorithm for optimizing $J_2(X, \lambda)$ follows the same steps. A detailed analysis of this pursuit algorithm can be found in [20, 21].

5 Experiments

Two sets of experiments are conducted. In Section 5.1, synthesized signals are generated to show the difference between the features extracted by $J_1(X, \lambda)$ and $J_2(X, \lambda)$, which reflects the properties of reconstruction and discrimination. In Section 5.2, classification on real data is shown.
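The four steps of the Section 4.2 pursuit can be sketched as follows. For brevity, this illustration scores candidate atoms only by the Fisher term of $J_2$/$J_3$ (dropping the $\lambda$ penalties), and all names are our own; the test below mirrors the two-class sine/cosine construction of the synthetic experiment that follows.

```python
import numpy as np

def discriminative_pursuit(A, Y, labels, n_atoms):
    """Greedy atom selection in the spirit of the Section 4.2 pursuit:
    score each atom g by the Fisher ratio of the 1-D features g^T R_t
    (discrimination term only; sparsity/reconstruction penalties omitted),
    then recompute the residual of Y by orthogonal projection (eq. 12)."""
    def fisher_1d(f):
        # Scalar Fisher ratio: between-class mean scatter / within-class variance.
        m, num, den = f.mean(), 0.0, 0.0
        for c in np.unique(labels):
            fc = f[labels == c]
            num += len(fc) * (fc.mean() - m) ** 2
            den += fc.var()
        return num / (den + 1e-12)

    R, support = Y.astype(float), []
    for _ in range(n_atoms):
        scores = [fisher_1d(A[:, j] @ R) if j not in support else -np.inf
                  for j in range(A.shape[1])]
        support.append(int(np.argmax(scores)))          # eq. (11), Fisher term
        B = A[:, support]
        P = B @ np.linalg.pinv(B)                       # projection onto chosen atoms
        R = Y - P @ Y                                   # eq. (12): R_t = Y - P_t * Y
    return support
```

On signals where the cosine coefficient carries the class label and the sine coefficient carries most of the energy, this discrimination-driven selection picks the cosine atom first, whereas energy-driven selection (the $J_1$ criterion) would pick the sine atom.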
Random noise and occlusion are added to the original signals to test the robustness of SRSC.

5.1 Synthetic Example

Two simple signal classes, $f_1(t)$ and $f_2(t)$, are constructed with the Fourier basis. The signals are constructed to show the difference between reconstructive and discriminative methods:
$$f_1(t) = g_1 \cos t + h_1 \sin t, \qquad (13)$$
$$f_2(t) = g_2 \cos t + h_2 \sin t. \qquad (14)$$

Figure 1: Distributions of the projections of signals from the two classes onto the first atom selected by $J_1(X, \lambda)$ (left) and $J_2(X, \lambda)$ (right).

The scalar $g_1$ is uniformly distributed in the interval $[0, 5]$, and the scalar $g_2$ is uniformly distributed in the interval $[5, 10]$. The scalars $h_1$ and $h_2$ are uniformly distributed in the interval $[10, 20]$. Therefore, most of the energy of the signal is carried by the sine function and most of the discrimination power is in the cosine function. The signal component with the most energy is not necessarily the component with the most discrimination power. Constructing a dictionary as $\{\sin t, \cos t\}$, optimizing the objective function $J_1(X, \lambda)$ with the pursuit method described in Section 4.2 selects $\sin t$ as the first atom. On the other hand, optimizing the objective function $J_2(X, \lambda)$ selects $\cos t$ as the first atom. In the simulation, 100 samples are generated for each class and the pursuit algorithm stops after the first run. The projections of the signals from both classes onto the first atom selected by $J_1(X, \lambda)$ and $J_2(X, \lambda)$ are shown in Fig. 1. The difference shown in the figures has a direct impact on the classification.

5.2 Real Example

Classification with $J_1$, $J_2$ and $J_3$ (SRSC) is also conducted on the database of USPS handwritten digits [22].
The database contains 8-bit grayscale images of “0” through “9” with a size of 16 × 16, and there are 1100 examples of each digit. Following the conclusion of [23], 10-fold stratified cross validation is adopted. Classification is conducted with the decomposition coefficients (X in equation (10)) as features and a support vector machine (SVM) as the classifier. In this implementation, the overcomplete dictionary is a combination of the Haar wavelet basis and a Gabor basis. The Haar basis is good at modelling discontinuities in a signal, while the Gabor basis is good at modelling continuous signal components. In this experiment, noise and occlusion are added to the signals to test the robustness of SRSC. First, white Gaussian noise with increasing levels of energy, and thus decreasing signal-to-noise ratio (SNR), is added to each image. Table 1 summarizes the classification error rates obtained at different SNRs. Second, black squares of different sizes are overlaid on each image at random locations to generate occlusion (missing data). For the image size of 16 × 16, black squares of size 3 × 3, 5 × 5, 7 × 7, 9 × 9 and 11 × 11 are overlaid as occlusion. Table 2 summarizes the classification error rates obtained with occlusion. The results in Table 1 and Table 2 show that when the signals are ideal (noiseless and without missing data) or nearly ideal, J2(X, λ) is the best criterion for classification. This is consistent with the well-known conclusion that discriminative methods outperform reconstructive methods in classification. However, as the noise increases or more data is missing (larger areas of occlusion), the accuracy based on J2(X, λ) degrades faster than the accuracy based on J1(X, λ).
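The corruption protocol can be reproduced with two small helpers; this is a sketch under our own conventions (the paper specifies only the SNR levels and square sizes, not the exact noise scaling or occlusion placement):

```python
import numpy as np

def add_noise_at_snr(img, snr_db, rng):
    """Add white Gaussian noise scaled so that
    10*log10(signal_power / noise_power) equals snr_db."""
    p_signal = np.mean(img.astype(float) ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return img + rng.normal(0.0, np.sqrt(p_noise), img.shape)

def occlude(img, k, rng):
    """Overlay a k-by-k black square at a random location (missing data)."""
    out = img.copy()
    y = rng.integers(0, img.shape[0] - k + 1)
    x = rng.integers(0, img.shape[1] - k + 1)
    out[y:y + k, x:x + k] = 0
    return out
```

Each corrupted image would then be decomposed over the dictionary and the resulting coefficients fed to the SVM, as described above.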
Table 1: Classification error rates with different levels of white Gaussian noise

                     Noiseless   20 dB    15 dB    10 dB    5 dB
J1 (Reconstruction)  0.0855      0.0975   0.1375   0.1895   0.2310
J2 (Discrimination)  0.0605      0.0816   0.1475   0.2065   0.2785
J3 (SRSC)            0.0727      0.0803   0.1025   0.1490   0.2060

Table 2: Classification error rates with different sizes of occlusion

                     no occlusion  3 × 3   5 × 5   7 × 7   9 × 9   11 × 11
J1 (Reconstruction)  0.0855        0.0930  0.1270  0.1605  0.2020  0.2990
J2 (Discrimination)  0.0605        0.0720  0.1095  0.1805  0.2405  0.3305
J3 (SRSC)            0.0727        0.0775  0.1135  0.1465  0.1815  0.2590

This indicates that the signal structures recovered by the standard sparse representation are more robust to noise and occlusion, and thus yield less performance degradation. The SRSC, on the other hand, achieves lower error rates when signals are noisy or occluded, by combining the reconstruction property with the discrimination power. 6 Discussions In summary, sparse representation for signal classification (SRSC) is proposed. SRSC is motivated by ongoing research on sparse representation in the signal processing area. SRSC incorporates reconstruction properties, discrimination power, and sparsity for robust classification. In the current implementation of SRSC, the weighting factors are set empirically to optimize performance. Approaches to determining optimal values for the weighting factors are being investigated, following methods similar to those introduced in [12]. It is interesting to compare SRSC with the relevance vector machine (RVM) [24]. RVM has shown performance comparable to that of the widely used support vector machine (SVM), but with substantially fewer relevance/support vectors. Both SRSC and RVM take sparsity and reconstruction error into consideration. For SRSC, the two terms are explicitly included in the objective function; for RVM, the two terms are included in the Bayesian formulation.
In RVM, the “dictionary” used for signal representation is the collection of values from the “kernel function”. SRSC, on the other hand, is rooted in the standard sparse representation and in recent developments in harmonic analysis, such as the curvelet, bandlet, and contourlet transforms, which show excellent properties in signal modelling. It would be interesting to see how RVM works when the kernel functions are replaced with these harmonic transforms. Another difference between SRSC and RVM is how the discrimination power is incorporated. The nature of RVM is function regression; when used for classification, RVM simply changes the target function value to class membership. For SRSC, the discrimination power is explicitly incorporated through a measure based on Fisher’s discrimination. The adjustment of the weighting factors in SRSC (in equation (10)) may give the algorithm some flexibility when facing various noise levels in the signals. A thorough and systematic study of the connections and differences between SRSC and RVM would be an interesting topic for future research. References [1] S. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. on Signal Processing, vol. 41, pp. 3397–3415, 1993. [2] Y. Pati, R. Rezaiifar, and P. Krishnaprasad, “Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition,” in 27th Annual Asilomar Conference on Signals, Systems, and Computers, 1993. [3] S. Chen, D. Donoho, and M. Saunders, “Atomic decomposition by basis pursuit,” SIAM J. Scientific Computing, vol. 20, no. 1, pp. 33–61, 1999. [4] I. Drori and D. Donoho, “Solution of L1 minimization problems by LARS/Homotopy methods,” in ICASSP, 2006, vol. 3, pp. 636–639. [5] M. Aharon, M. Elad, and A. Bruckstein, “The K-SVD: An algorithm for designing of overcomplete dictionaries for sparse representation,” IEEE Trans. on Signal Processing, to appear. [6] J. Starck, M. Elad, and D.
Donoho, “Image decomposition via the combination of sparse representation and a variational approach,” IEEE Trans. on Image Processing, vol. 14, no. 10, pp. 1570–1582, 2005. [7] Y. Li, A. Cichocki, and S. Amari, “Analysis of sparse representation and blind source separation,” Neural Computation, vol. 16, no. 6, pp. 1193–1234, 2004. [8] B. Olshausen, P. Sallee, and M. Lewicki, “Learning sparse image codes using a wavelet pyramid architecture,” in NIPS, 2001, pp. 887–893. [9] M. Elad and M. Aharon, “Image denoising via learned dictionaries and sparse representation,” in CVPR, 2006. [10] M. Elad, B. Matalon, and M. Zibulevsky, “Image denoising with shrinkage and redundant representation,” in CVPR, 2006. [11] D. Donoho and X. Huo, “Uncertainty principles and ideal atomic decomposition,” IEEE Trans. on Information Theory, vol. 47, no. 7, pp. 2845–2862, 2001. [12] Y. Lin and D. Lee, “Bayesian L1-Norm sparse learning,” in ICASSP, 2006, vol. 5, pp. 605–608. [13] D. Wipf and B. Rao, “Sparse bayesian learning for basis selection,” IEEE Trans. on Signal Processing, vol. 52, no. 8, pp. 2153–2164, 2004. [14] R. Duda, P. Hart, and D. Stork, Pattern classification (2nd ed.), Wiley-Interscience, 2000. [15] P. Belhumeur, J. Hespanha, and D. Kriegman, “Eigenfaces vs. fisherfaces: Recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997. [16] A. Martinez and A. Kak, “PCA versus LDA,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228–233, 2001. [17] S. Fidler, D. Skocaj, and A. Leonardis, “Combining reconstructive and discriminative subspace methods for robust classification and regression by subsampling,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 28, no. 3, pp. 337–350, 2006. [18] M. Elad, J. Starck, P. Querre, and D.L. 
Donoho, “Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA),” Journal on Applied and Computational Harmonic Analysis, vol. 19, pp. 340–358, 2005. [19] A. Leonardis and H. Bischof, “Robust recognition using eigenimages,” Computer Vision and Image Understanding, vol. 78, pp. 99–118, 2000. [20] J. Tropp, A. Gilbert, and M. Strauss, “Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit,” Signal Processing, special issue on Sparse approximations in signal and image processing, vol. 86, no. 4, pp. 572–588, 2006. [21] J. Tropp, A. Gilbert, and M. Strauss, “Algorithms for simultaneous sparse approximation. Part II: Convex relaxation,” Signal Processing, special issue on Sparse approximations in signal and image processing, vol. 86, no. 4, pp. 589–602, 2006. [22] USPS Handwritten Digit Database, available at http://www.cs.toronto.edu/~roweis/data.html. [23] R. Kohavi, “A study of cross-validation and bootstrap for accuracy estimation and model selection,” in IJCAI, 1995, pp. 1137–1145. [24] M. Tipping, “Sparse Bayesian learning and the relevance vector machine,” Journal of Machine Learning Research, vol. 1, pp. 211–244, 2001.
2006
Similarity by Composition Oren Boiman Michal Irani Dept. of Computer Science and Applied Math The Weizmann Institute of Science 76100 Rehovot, Israel Abstract We propose a new approach for measuring similarity between two signals, which is applicable to many machine learning tasks, and to many signal types. We say that a signal S1 is “similar” to a signal S2 if it is “easy” to compose S1 from few large contiguous chunks of S2. Obviously, if we use small enough pieces, then any signal can be composed of any other. Therefore, the larger those pieces are, the more similar S1 is to S2. This induces a local similarity score at every point in the signal, based on the size of its supported surrounding region. These local scores can in turn be accumulated in a principled information-theoretic way into a global similarity score of the entire S1 to S2. “Similarity by Composition” can be applied between pairs of signals, between groups of signals, and also between different portions of the same signal. It can therefore be employed in a wide variety of machine learning problems (clustering, classification, retrieval, segmentation, attention, saliency, labelling, etc.), and can be applied to a wide range of signal types (images, video, audio, biological data, etc.). We show a few such examples. 1 Introduction A good measure for similarity between signals is necessary in many machine learning problems. However, the notion of “similarity” between signals can be quite complex. For example, observing Fig. 1, one would probably agree that Image-B is more “similar” to Image-A than Image-C is. But why...? The configurations appearing in Image-B are different from the ones observed in Image-A. What is it that makes those two images more similar than Image-C? Commonly used similarity measures would not be able to detect this type of similarity. For example, standard global similarity measures (e.g., Mutual Information [12], Correlation, SSD, etc.)
require prior alignment or prior knowledge of dense correspondences between signals, and are therefore not applicable here. Distance measures that are based on comparing empirical distributions of local features, such as “bags of features” (e.g., [11]), will not suffice either, since all three images contain similar types of local features (and therefore Image-C will also be determined similar to Image-A). In this paper we present a new notion of similarity between signals, and demonstrate its applicability to several machine learning problems and to several signal types. Observing the right side of Fig. 1, it is evident that Image-B can be composed relatively easily from few large chunks of Image-A (see color-coded regions). Obviously, if we use small enough pieces, then any signal can be composed of any other (including Image-C from Image-A). We would like to employ this idea to indicate high similarity of Image-B to Image-A, and lower similarity of Image-C to Image-A. In other words, regions in one signal (the “query” signal) which can be composed using large contiguous chunks of data from the other signal (the “reference” signal) are considered to have high local similarity. On the other hand, regions in the query signal which can be composed only by using small fragmented pieces are considered locally dissimilar. This induces a similarity score at every point in the signal based on the size of its largest surrounding region which can be found in the other signal (allowing for some distortions). This approach provides the ability to generalize and infer about new configurations in the query signal that were never observed in the reference signal, while preserving structural information. For instance, even though the two ballet configurations observed in Image-B (the “query” signal) were never observed in Image-A (the “reference” signal), they can be inferred from Image-A via composition (see Fig. 1), whereas the configurations in Image-C are much harder to compose.

Figure 1: Inference by Composition – Basic concept. Left: What makes “Image-B” look more similar to “Image-A” than “Image-C” does? (None of the ballet configurations in “Image-B” appear in “Image-A”!) Right: Image-B (the “query”) can be composed using few large contiguous chunks from Image-A (the “reference”), whereas it is more difficult to compose Image-C this way. The large shared regions between B and A (indicated by colors) provide high evidence to their similarity.

Note that the shared regions between similar signals are typically irregularly shaped, and therefore cannot be restricted to a predefined, regularly shaped partitioning of the signal. The shapes of those regions are data dependent, and cannot be predefined. Our notion of signal composition is “geometric” and data-driven. In that sense it is very different from standard decomposition methods (e.g., PCA, ICA, wavelets, etc.), which seek a linear decomposition of the signal, but not a geometric one. Other attempts to combine the benefits of local similarity with global structural information have recently been proposed [8]. These have been shown to improve upon simple “bags of features”, but are restricted to a preselected partitioning of the image into rectangular sub-regions. In our previous work [5] we presented an approach for detecting irregularities in images/video as regions that cannot be composed from large pieces of data from other images/video. That approach was restricted to detecting local irregularities. In this paper we extend it to a general principled theory of “Similarity by Composition”, from which we derive local and global similarity and dissimilarity measures between signals. We further show that this framework extends to a wider range of machine learning problems and to a wider variety of signals (1D, 2D, 3D, ... signals).
More formally, we present a statistical (generative) model for composing one signal from another. Using this model we derive information-theoretic measures for local and global similarities induced by shared regions. The local similarities of shared regions (“local evidence scores”) are accumulated into a global similarity score (“global evidence score”) of the entire query signal relative to the reference signal. We further prove upper and lower bounds on the global evidence score, which are computationally tractable. We present both a theoretical and an algorithmic framework to compute, accumulate and weight those gathered “pieces of evidence”. Similarity-by-Composition is not restricted to pairs of signals. It can also be applied to compute similarity of a signal to a group of signals (i.e., compose a query signal from pieces extracted from multiple reference signals). Similarly, it can be applied to measure similarity between two different groups of signals. Thus, Similarity-by-Composition is suitable for detection, retrieval, classification, and clustering. Moreover, it can also be used for measuring similarity or dissimilarity between different portions of the same signal. Intra-signal dissimilarities can be used for detecting irregularities or saliency, while intra-signal similarities can be used as affinity measures for sophisticated intra-signal clustering and segmentation. The importance of large shared regions between signals has been recognized by biologists for determining similarities between DNA sequences, amino acid chains, etc. Tools for finding large repetitions in biological data have been developed (e.g., “BLAST” [1]). In principle, results of such tools can be fed into our theoretical framework, to obtain similarity scores between biological data sequences in a principled information-theoretic way. The rest of the paper is organized as follows: In Sec.
2 we derive information-theoretic measures for local and global “evidence” (similarity) induced by shared regions. Sec. 3 describes an algorithmic framework for computing those measures. Sec. 4 demonstrates the applicability of the derived local and global similarity measures to various machine learning tasks and several signal types. 2 Similarity by Composition – Theoretical Framework We derive principled information-theoretic measures for local and global similarity between a “query” Q (one or more signals) and a “reference” ref (one or more signals). Large shared regions between Q and ref provide high statistical evidence for their similarity. In this section we show how to quantify this statistical evidence. We first formulate the notion of “local evidence” for local regions within Q (Sec. 2.1). We then show how these pieces of local evidence can be integrated to provide “global evidence” for the entire query Q (Sec. 2.2). 2.1 Local Evidence Let R ⊆Q be a connected region within Q. Assume that a similar region exists in ref. We would like to quantify the statistical significance of this region co-occurrence, and show that it increases with the size of R. To do so, we will compare the likelihood that R was generated by ref, versus the likelihood that it was generated by some random process. More formally, we denote by Href the hypothesis that R was “generated” by ref, and by H0 the hypothesis that R was generated by a random process, or by any other application-dependent PDF (referred to as the “null hypothesis”). Href assumes the following model for the “generation” of R: a region was taken from somewhere in ref, was globally transformed by some global transformation T, possibly followed by some small local distortions, and then put into Q to generate R. T can account for shifts, scaling, rotations, etc. In the simplest case (only shifts), T is the corresponding location in ref.
We can compute the likelihood ratio: LR(R) = P(R|Href) / P(R|H0) = [ Σ_T P(R|T, Href) P(T|Href) ] / P(R|H0) (1) where P(T|Href) is the prior probability on the global transformations T (shifts, scaling, rotations), and P(R|T, Href) is the likelihood that R was generated from ref at that location, scale, etc. (up to some local distortions which are also modelled by P(R|T, Href) – see algorithmic details in Sec. 3). If there are multiple corresponding regions in ref (i.e., multiple Ts), all of them contribute to the estimation of LR(R). We define the Local Evidence Score of R to be the log likelihood ratio: LES(R|Href) = log2(LR(R)). LES is referred to as a “local evidence score”, because the higher LES is, the smaller the probability that R was generated at random (H0). In fact, P( LES(R|Href) > l | H0) < 2^{-l}, i.e., the probability of getting a score LES(R) > l for a randomly generated region R is smaller than 2^{-l} (this is due to LES being a log-likelihood ratio [3]). High LES therefore provides higher statistical evidence that R was generated from ref. Note that the larger the region R ⊆Q is, the higher its evidence score LES(R|Href) (and therefore it will also provide higher statistical evidence to the hypothesis that Q was composed from ref). For example, assume for simplicity that R has a single identical copy in ref, and that T is restricted to shifts with uniform probability (i.e., P(T|Href) = const); then P(R|Href) is constant, regardless of the size of R. On the other hand, P(R|H0) decreases exponentially with the size of R. Therefore, the likelihood ratio of R increases, and so does its evidence score LES. LES can also be interpreted as the number of bits saved by describing the region R using ref, instead of describing it using H0: Recall that the optimal average code length of a random variable y with probability function P(y) is length(y) = −log(P(y)). Therefore we can write the evidence score as LES(R|Href) = length(R|H0) − length(R|Href).
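As a toy illustration of how the evidence score grows with |R|, consider exact matches of a discrete 1D region, with T restricted to shifts under a uniform prior and an i.i.d. uniform null hypothesis (our own simplification of the model above; the paper's model additionally allows local distortions):

```python
import numpy as np

def les_exact_match(R, ref, alphabet_size):
    """Toy LES for exact matches under shift-only T with a uniform prior.
    P(R|Href) = (#exact occurrences of R in ref) / (#possible shifts);
    P(R|H0)   = alphabet_size ** -len(R)  (i.i.d. uniform null).
    Returns log2 of the likelihood ratio, i.e., bits of evidence."""
    n, m = len(ref), len(R)
    shifts = n - m + 1
    hits = sum(np.array_equal(ref[t:t + m], R) for t in range(shifts))
    if hits == 0:
        return -np.inf          # R never occurs: no evidence from ref
    p_ref = hits / shifts
    p_null = alphabet_size ** -float(m)
    return np.log2(p_ref / p_null)
```

Here P(R|Href) is roughly constant in |R| while P(R|H0) shrinks exponentially, so larger planted regions yield strictly more bits of evidence, as argued in the text.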
Therefore, larger regions provide higher savings (in bits) in the description length of R. A region R induces “average savings per point” for every point q ∈R, namely, LES(R|Href) / |R| (where |R| is the number of points in R). However, a point q ∈R may also be contained in other regions generated by ref, each with its own local evidence score. We can therefore define the maximal possible savings per point (which we will refer to in short as PES = “Point Evidence Score”): PES(q|Href) = max_{R⊆Q s.t. q∈R} LES(R|Href) / |R| (2) For any point q ∈Q we define R[q] to be the region which provides this maximal score for q. Fig. 1 shows such maximal regions found in Image-B (the query Q) given Image-A (the reference ref). In practice, many points share the same maximal region. Computing an approximation of LES(R|Href), PES(q|Href), and R[q] can be done efficiently (see Sec. 3). 2.2 Global Evidence We now proceed to accumulate multiple local pieces of evidence. Let R1, ..., Rk ⊆Q be k disjoint regions in Q, which have been generated independently from the examples in ref. Let R0 = Q \ ∪_{i=1}^k Ri denote the remainder of Q. Namely, S = {R0, R1, ..., Rk} is a segmentation/division of Q. Assuming that the remainder R0 was generated i.i.d. by the null hypothesis H0, we can derive a global evidence score for the hypothesis that Q was generated from ref via the segmentation S (for simplicity of notation we use the symbol Href also to denote the global hypothesis): GES(Q|Href, S) = log [ P(Q|Href, S) / P(Q|H0) ] = log [ P(R0|H0) ∏_{i=1}^k P(Ri|Href) / ∏_{i=0}^k P(Ri|H0) ] = Σ_{i=1}^k LES(Ri|Href) Namely, the global evidence induced by S is the accumulated sum of the local evidences provided by the individual segments of S. The statistical significance of such accumulated evidence is expressed by: P( GES(Q|Href, S) > l | H0) = P( Σ_{i=1}^k LES(Ri|Href) > l | H0) < 2^{-l}.
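The accumulation rule just derived is trivial to compute once per-segment LES values are available. A small wrapper (our own, not from the paper) returns both the score and the corresponding 2^{-l} tail bound under H0:

```python
import numpy as np

def ges_for_segmentation(les_values):
    """Global evidence for a fixed segmentation S: the sum of its segments'
    local evidence scores, together with the tail bound P(GES > l | H0) < 2**-l
    evaluated at l = GES."""
    ges = float(np.sum(les_values))
    return ges, 2.0 ** -ges
```

For instance, five segments each carrying log2(10) bits of evidence (i.e., each with probability below 10% under H0) yield a combined H0 bound of 10^-5, matching the (10%)^5 example discussed in the text.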
Consequently, we can accumulate local evidence of non-overlapping regions within Q which have similar regions in ref to obtain global evidence for the hypothesis that Q was generated from ref. Thus, for example, if we found 5 regions within Q with similar copies in ref, each with probability less than 10% of being generated at random, then the probability that Q was generated at random is less than (10%)^5 = 0.001% (and this is despite the unfavorable assumption we made that the rest of Q was generated at random). So far the segmentation S was assumed to be given, and we estimated GES(Q|Href, S). In order to obtain the global evidence score of Q, we marginalize over all possible segmentations S of Q: GES(Q|Href) = log [ P(Q|Href) / P(Q|H0) ] = log [ Σ_S P(S|Href) P(Q|Href, S) / P(Q|H0) ] (3) Namely, the likelihood P(S|Href) of a segmentation S can be interpreted as a weight for the likelihood ratio score of Q induced by S. Thus, we would like P(S|Href) to reflect the complexity of the segmentation S (e.g., its description length). From a practical point of view, in most cases it would be intractable to compute GES(Q|Href), as Eq. (3) involves summation over all possible segmentations of the query Q. However, we can derive upper and lower bounds on GES(Q|Href) which are easy to compute: Claim 1. Upper and lower bounds on GES: max_S { log P(S|Href) + Σ_{Ri∈S} LES(Ri|Href) } ≤ GES(Q|Href) ≤ Σ_{q∈Q} PES(q|Href) (4) proof: See Appendix at www.wisdom.weizmann.ac.il/˜vision/Composition.html. Practically, this claim implies that we do not need to scan all possible segmentations. The lower bound (left-hand side of Eq. (4)) is achieved by the segmentation of Q with the best accumulated evidence score, Σ_{Ri∈S} LES(Ri|Href) = GES(Q|Href, S), penalized by the description length of the segmentation, log P(S|Href) = −length(S). Obviously, every segmentation provides such a lower (albeit less tight) bound on the total evidence score.
Thus, if we find large enough contiguous regions in Q with supporting regions in ref (i.e., high enough local evidence scores), and define R0 to be the remainder of Q, then S = {R0, R1, ..., Rk} can provide a reasonable lower bound on GES(Q|Href). As for the upper bound on GES(Q|Href), it can be obtained by summing up the maximal point-wise evidence scores PES(q|Href) (see Eq. 2) over all the points in Q (right-hand side of Eq. (4)). Note that the upper bound is computed by finding the maximal evidence regions that pass through every point in the query, regardless of the region complexity length(R). Both bounds can be estimated quite efficiently (see Sec. 3). 3 Algorithmic Framework The local and global evidence scores presented in Sec. 2 provide new local and global similarity measures for signal data, which can be used for various learning and inference problems (see Sec. 4). In this section we briefly describe the algorithmic framework used for computing PES, LES, and GES to obtain the local and global compositional similarity measures. Assume we are given a large region R ⊂Q and would like to estimate its evidence score LES(R|Href). We would like to find regions similar to R in ref that would provide large local evidence for R. However, (i) we cannot expect R to appear as is, and would therefore like to allow for global and local deformations of R, and (ii) we would like to perform this search efficiently. Both requirements can be achieved by breaking R into many small (partially overlapping) data patches, each with its own patch descriptor. This information is maintained via a geometric “ensemble” of local patch descriptors. The search for a similar ensemble in ref is done using efficient inference on a star graphical model, while allowing for small local displacements of each local patch [5].
For example, in images these would be small spatial patches around each pixel contained in the larger image region R, and the displacements would be small shifts in x and y. In video data the region R would be a space-time volumetric region, and it would be broken into lots of small overlapping space-time volumetric patches. The local displacements would be in x, y, and t (time). In audio these patches would be short time-frame windows, etc. In general, for any n-dimensional signal representation, the region R would be a large n-dimensional region within the signal, and the patches would be small n-dimensional overlapping regions within R. The local patch descriptors are signal and application dependent, but can be very simple. (For example, in images we used a SIFT-like [9] patch descriptor computed in each image-patch. See more details in Sec. 4). It is the simultaneous matching of all these simple local patch descriptors with their relative positions that provides the strong overall evidence score for the entire region R. The likelihood of R, given a global transformation T (e.g., a location in ref) and local patch displacements ∆li for each patch i in R (i = 1, 2, ..., |R|), is captured by the following expression: P(R|T, {∆li}, Href) = (1/Z) ∏_i exp(−|∆di|² / 2σ1²) exp(−|∆li|² / 2σ2²), where {∆di} are the descriptor distortions of each patch, and Z is a normalization factor. To estimate P(R|T, Href) we marginalize over all possible local displacements {∆li} within a predefined limited radius. In order to compute LES(R|Href) in Eq. (1), we need to marginalize over all possible global transformations T. In our current implementation we used only global shifts, and assumed a uniform distribution over all shifts, i.e., P(T|Href) = 1/|ref|. However, the algorithm can accommodate more complex global transformations. To compute P(R|Href), we used our inference algorithm described in [5], modified to compute likelihood (sum-product) instead of MAP (max-product).
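The two Gaussian penalty terms in the expression above can be sketched as follows (an illustrative re-implementation with the normalization constant Z dropped and scalar distortion magnitudes assumed; sigma1 and sigma2 correspond to σ1 and σ2 in the text):

```python
import numpy as np

def log_ensemble_likelihood(desc_dists, local_disps, sigma1, sigma2):
    """Unnormalized log P(R | T, {dl_i}, Href): a sum over patches of a
    Gaussian penalty on descriptor distortion |dd_i| and a Gaussian penalty
    on local patch displacement |dl_i| (log Z is omitted)."""
    desc_dists = np.asarray(desc_dists, dtype=float)
    local_disps = np.asarray(local_disps, dtype=float)
    return -np.sum(desc_dists ** 2) / (2 * sigma1 ** 2) \
           - np.sum(local_disps ** 2) / (2 * sigma2 ** 2)
```

A perfect match (all distortions and displacements zero) attains the maximal log-likelihood of 0; any distortion lowers it, which is what makes large, well-matched regions score highly.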
In a nutshell, the algorithm uses a few patches in R (e.g., 2-3), exhaustively searching ref for those patches. These patches restrict the possible locations of R in ref, i.e., the possible candidate transformations T for estimating P(R|T, Href). The search for each new patch is restricted to locations induced by the current list of candidate transformations T. Each new patch further reduces this list of candidate positions of R in ref. This computation of P(R|Href) is efficient: O(|db|) + O(|R|) ≈ O(|db|), i.e., approximately linear in the size of ref. In practice, we are not given a specific region R ⊂Q in advance. For each point q ∈Q we want to estimate its maximal region R[q] and its corresponding evidence score LES(R[q]|Href) (Sec. 2.1).

Figure 2: Detection of defects in grapefruit images. Using the single image (a) as a “reference” of good-quality grapefruits, we can detect defects (irregularities) in an image (b) of different grapefruits in different arrangements. Detected defects are highlighted in red (c).

Figure 3: Detecting defects in fabric images (no prior examples). The left sides of (a) and (b) show fabrics with defects; the right sides show the detected defects in red (points with small intra-image evidence LES). Irregularities are measured relative to other parts of each image.

In order to perform this step efficiently, we start with a small surrounding region around q, break it into patches, and search only for that region in ref (using the same efficient search method described above). Locations in ref where good initial matches were found are treated as candidates, and are gradually ‘grown’ to their maximal possible matching regions (allowing for local distortions in patch position and descriptor, as before). The evidence score LES of each such maximally grown region is computed. Using all these maximally grown regions we approximate PES(q|Href) and R[q] (for all q ∈Q).
In practice, a region found maximal for one point is likely to be the maximal region for many other points in Q. Thus the number of different maximal regions in Q will tend to be significantly smaller than the number of points in Q. Having computed PES(q|Href) ∀q ∈Q, it is straightforward to obtain an upper bound on GES(Q|Href) (right-hand side of Eq. (4)). In principle, in order to obtain a lower bound on GES(Q|Href) we need to perform an optimization over all possible segmentations S of Q. However, any good segmentation can be used to provide a reasonable (although less tight) lower bound. Having extracted a list of distinct maximal regions R1, ..., Rk, we can use these to induce a reasonable (although not optimal) segmentation using the following heuristic: We choose the first segment to be the maximal region with the largest evidence score: R̃1 = argmax_{Ri} LES(Ri|Href). The second segment is chosen to be the largest of all the remaining regions after having removed their overlap with R̃1, etc. This process yields a segmentation of Q: S = {R̃1, ..., R̃l} (l ≤ k). Reevaluating the evidence scores LES(R̃i|Href) of these regions, we can obtain a reasonable lower bound on GES(Q|Href) using the left-hand side of Eq. (4). For evaluating the lower bound, we also need to estimate log P(S|Href) = −length(S|Href). This is done by summing the description lengths of the boundaries of the individual regions within S. For more details see the appendix at www.wisdom.weizmann.ac.il/˜vision/Composition.html. 4 Applications and Results The global similarity measure GES(Q|Href) can be applied between individual signals, and/or between groups of signals (by setting Q and ref accordingly). As such it can be employed in machine learning tasks like retrieval, classification, recognition, and clustering. The local similarity measure LES(R|Href) can be used for local inference problems, such as local classification, saliency, segmentation, etc.
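The greedy heuristic just described can be written down directly; below is a minimal version of our own operating on sets of point indices, with the LES values assumed precomputed:

```python
def greedy_segmentation(regions):
    """regions: list of (les, set_of_points). First pick the region with the
    largest evidence score; then repeatedly pick the region whose remainder
    (after removing points already covered) is largest, as described above.
    Returns a list of disjoint segments."""
    regions = list(regions)
    first = max(regions, key=lambda r: r[0])   # largest-LES region first
    regions.remove(first)
    segments = [set(first[1])]
    covered = set(first[1])
    while regions:
        # largest remaining region after removing overlap with covered points
        best = max(regions, key=lambda r: len(set(r[1]) - covered))
        regions.remove(best)
        remainder = set(best[1]) - covered
        if remainder:
            segments.append(remainder)
            covered |= remainder
    return segments
```

The returned disjoint segments can then be re-scored with LES and plugged into the left-hand side of Eq. (4) to obtain the lower bound.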
For example, the local similarity measure can also be applied between different portions of the same signal (e.g., by setting Q to be one part of the signal, and ref to be the rest of the signal). Such intra-signal evidence can be used for inference tasks like segmentation, while the absence of intra-signal evidence (local dissimilarity) can be used for detecting saliency/irregularities.

Figure 4: Image Saliency and Segmentation. (a) Input image. (b) Detected salient points, i.e., points with low intra-image evidence scores LES (when measured relative to the rest of the image). (c) Image segmentation – results of clustering all the non-salient points into 4 clusters using normalized cuts. Each maximal region R[q] provides high evidence (translated to high affinity scores) that all the points within it should be grouped together (see text for more details).

In this section we demonstrate the applicability of our measures to several of these problems, and apply them to three different types of signals: audio, images, and video. For additional results as well as video sequences see www.wisdom.weizmann.ac.il/~vision/Composition.html

1. Detection of Saliency/Irregularities (in Images): Using our statistical framework, we define a point q ∈ Q to be irregular if its best local evidence score LES(R[q]|Href) is below some threshold. Irregularities can be inferred either relative to a database of examples, or relative to the signal itself. In Fig. 2 we show an example of applying this approach for detecting defects in fruit. Using a single image as a “reference” of good-quality grapefruits (Fig. 2.a, used as ref), we can detect defects (irregularities) in an image of different grapefruits at different arrangements (Fig. 2.b, used as the query Q). The algorithm tried to compose Q from as large as possible pieces of ref. Points in Q with low LES (i.e., points whose maximal regions were small) were determined to be irregular. These are highlighted in red in Fig.
2.c, and correspond to defects in the fruit. Alternatively, local saliency within a query signal Q can also be measured relative to other portions of Q, e.g., by trying to compose each region in Q using pieces from the rest of Q. For each point q ∈ Q we compute its intra-signal evidence score LES(R[q]) relative to the other (non-neighboring) parts of the image. Points with low intra-signal evidence are detected as salient. Examples of using intra-signal saliency to detect defects in fabric can be found in Fig. 3. Another example of using the same algorithm, but for a completely different scenario (a ballet scene), can be found in Fig. 4.b. We used a SIFT-like [9] patch descriptor, but computed densely for all local patches in the image. Points with low gradients were excluded from the inference (e.g., the floor).

2. Signal Segmentation (Images): For each point q ∈ Q we compute its maximal evidence region R[q]. This can be done either relative to a different reference signal, or relative to Q itself (as in the case of saliency). Every maximal region provides evidence that all points within the region should be clustered/segmented together. Therefore, the value LES(R[q]|Href) is added to all entries (i, j) of an affinity matrix, ∀qi, qj ∈ R[q]. Spectral clustering can then be applied to the affinity matrix. Thus, large regions which also appear in ref (in the case of a single image – other regions in Q) are likely to be clustered together in Q. This way we foster the generation of segments based on high evidential co-occurrence in the examples rather than on low-level similarity as in [10]. An example of using this algorithm for image segmentation is shown in Fig. 4.c. Note that we have not explicitly used low-level similarity between neighboring points, as is customary in most image segmentation algorithms. Such additional information would improve the segmentation results.

3.
Signal Classification (Video – Action Classification): We have used the action video database of [4], which contains different types of actions (“run”, “walk”, “jumping-jack”, “jump-forward-on-two-legs”, “jump-in-place-on-two-legs”, “gallop-sideways”, “wave-hand(s)”, “bend”) performed by nine different people (altogether 81 video sequences). We used a leave-one-out procedure for action classification. The number of correct classifications was 79/81 = 97.5%. These sequences contain a single person in the field of view (e.g., see Fig. 5.a). Our method can handle much more complex scenarios. To illustrate the capabilities of our method we further added a few more sequences (e.g., see Fig. 5.b and 5.c), where several people appear simultaneously in the field of view, with partial occlusions, some differences in scale, and more complex backgrounds. The complex sequences were all correctly classified (increasing the classification rate to 98%).

Figure 5: Action Classification in Video. (a) A sample ‘walk’ sequence from the action database of [4]. (b),(c) Other more complex sequences with several walking people in the field of view. Despite partial occlusions, differences in scale, and complex backgrounds, these sequences were all classified correctly as ‘walk’ sequences. For video sequences see www.wisdom.weizmann.ac.il/~vision/Composition.html

In our implementation, 3D space-time video regions were broken into small spatio-temporal video patches (7 × 7 × 4). The descriptor for each patch was a vector containing the absolute values of the temporal derivatives at all pixels of the patch, normalized to unit length. Since stationary backgrounds have zero temporal derivatives, our method is not sensitive to the background, nor does it require foreground/background separation. Image patches and fragments have been employed in the task of class-based object recognition (e.g., [7, 2, 6]).
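The space-time patch descriptor used here can be sketched as follows. The frame-differencing approximation of the temporal derivative and the exact patch-extraction indexing are our assumptions; only the 7×7×4 patch size, the absolute temporal derivatives, and the unit-length normalization are stated in the text:

```python
import numpy as np

def spacetime_descriptor(clip, x, y, t, size=(7, 7, 4)):
    """Descriptor sketch: absolute temporal derivatives over a 7x7x4
    space-time patch, normalized to unit length.
    clip: grayscale video as an ndarray of shape (T, H, W)."""
    h, w, d = size
    # temporal derivative approximated by frame-to-frame differences
    dt = np.abs(np.diff(clip, axis=0))
    patch = dt[t:t + d, y:y + h, x:x + w].ravel().astype(float)
    norm = np.linalg.norm(patch)
    # stationary (zero-derivative) patches stay zero, so the background
    # contributes nothing, as noted in the text
    return patch / norm if norm > 0 else patch
```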
A sparse set of informative fragments is learned for a large class of objects (the training set). These approaches are useful for recognition, but are not applicable to non-class-based inference problems (such as similarity between pairs of signals with no prior data, clustering, etc.).

4. Signal Retrieval (Audio – Speaker Recognition): We used a database of 31 speakers (male and female). Each speaker repeated three times a five-word sentence (2–3 seconds long) in a foreign language, recorded over a phone line. Different repetitions by the same person varied slightly from one another. Altogether the database contained 93 samples of the sentence. Such short speech signals are likely to pose a problem for learning-based (e.g., HMM, GMM) recognition systems. We applied our global measure GES for retrieving the closest database elements. The highest GES recognized the right speaker in 90 out of 93 cases (i.e., 97% correct recognition). Moreover, the second-best GES was correct in 82 out of 93 cases (88%). We used standard mel-frequency cepstrum frame descriptors for time-frames of 25 msec, with overlaps of 50%.

Acknowledgments Thanks to Y. Caspi, A. Rav-Acha, B. Nadler and R. Basri for their helpful remarks. This work was supported by the Israeli Science Foundation (Grant 281/06) and by the Alberto Moscona Fund. The research was conducted at the Moross Laboratory for Vision & Motor Control at the Weizmann Inst.

References [1] S. Altschul, W. Gish, W. Miller, E. Myers, and D. Lipman. Basic local alignment search tool. J. Mol. Biol., 215:403–410, 1990. [2] E. Bart and S. Ullman. Class-based matching of object parts. In VideoRegister04, page 173, 2004. [3] A. Birnbaum. On the foundations of statistical inference. J. Amer. Statist. Assoc, 1962. [4] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. In ICCV05. [5] O. Boiman and M. Irani. Detecting irregularities in images and in video. In ICCV05, pages I: 462–469. [6] P.
Felzenszwalb and D. Huttenlocher. Pictorial structures for object recognition. IJCV, 61, 2005. [7] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In CVPR03. [8] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR06. [9] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004. [10] J. Shi and J. Malik. Normalized cuts and image segmentation. PAMI, 22(8):888–905, August 2000. [11] J. Sivic, B. Russell, A. Efros, A. Zisserman, and W. Freeman. Discovering objects and their localization in images. In ICCV05, pages I: 370–377. [12] P. Viola and W. Wells, III. Alignment by maximization of mutual information. In ICCV95, pages 16–23.
| 2006 | 163 | 2,992 |
An Information Theoretic Framework for Eukaryotic Gradient Sensing Joseph M. Kimmel∗and Richard M. Salter† joekimmel@uchicago.edu, rms@cs.oberlin.edu Computer Science Program Oberlin College Oberlin, Ohio 44074 Peter J. Thomas‡ peter.j.thomas@case.edu Departments of Mathematics, Biology and Cognitive Science Case Western Reserve University Cleveland, Ohio 44106 Abstract Chemical reaction networks by which individual cells gather and process information about their chemical environments have been dubbed “signal transduction” networks. Despite this suggestive terminology, there have been few attempts to analyze chemical signaling systems with the quantitative tools of information theory. Gradient sensing in the social amoeba Dictyostelium discoideum is a well characterized signal transduction system in which a cell estimates the direction of a source of diffusing chemoattractant molecules based on the spatiotemporal sequence of ligand-receptor binding events at the cell membrane. Using Monte Carlo techniques (MCell) we construct a simulation in which a collection of individual ligand particles undergoing Brownian diffusion in a three-dimensional volume interact with receptors on the surface of a static amoeboid cell. Adapting a method for estimation of spike train entropies described by Victor (originally due to Kozachenko and Leonenko), we estimate lower bounds on the mutual information between the transmitted signal (direction of ligand source) and the received signal (spatiotemporal pattern of receptor binding/unbinding events). Hence we provide a quantitative framework for addressing the question: how much could the cell know, and when could it know it? We show that the time course of the mutual information between the cell’s surface receptors and the (unknown) gradient direction is consistent with experimentally measured cellular response times. 
We find that the acquisition of directional information depends strongly on the time constant at which the intracellular response is filtered.

1 Introduction: gradient sensing in eukaryotes

Biochemical signal transduction networks provide the computational machinery by which neurons, amoebae or other single cells sense and react to their chemical environments. The precision of this chemical sensing is limited by fluctuations inherent in reaction and diffusion processes involving a finite quantity of molecules [1, 2]. The theory of communication provides a framework that makes explicit the noise dependence of chemical signaling. For example, in any reaction A + B → C, we may view the time-varying reactant concentrations A(t) and B(t) as input signals to a noisy channel, and the product concentration C(t) as an output signal carrying information about A(t) and B(t). In the present study we show that the mutual information between the (known) state of the cell’s surface receptors and the (unknown) gradient direction follows a time course consistent with experimentally measured cellular response times, reinforcing earlier claims that information theory can play a role in understanding biochemical cellular communication [3, 4]. Dictyostelium is a soil-dwelling amoeba that aggregates into a multicellular form in order to survive conditions of drought or starvation. During aggregation individual amoebae perform chemotaxis, or chemically guided movement, towards sources of the signaling molecule cAMP, secreted by nearby amoebae.

∗Current address: Computational Neuroscience Graduate Program, The University of Chicago. †Oberlin Center for Computation and Modeling, http://occam.oberlin.edu/. ‡To whom correspondence should be addressed. http://www.case.edu/artsci/math/thomas/thomas.html; Oberlin College Research Associate.
Quantitative studies have shown that Dictyostelium amoebae can sense shallow, static gradients of cAMP over long time scales (∼30 minutes), and that gradient steepness plays a crucial role in guiding cells [5]. The chemotactic efficiency (CE), the population average of the cosine between the cell displacement directions and the true gradient direction, peaks at a cAMP concentration of 25 nanoMolar, similar to the equilibrium constant for the cAMP receptor (Keq is the concentration of cAMP at which the receptor is equally likely to be bound or unbound). For smaller or larger concentrations the CE dropped rapidly. Nevertheless, over long times cells were able (on average) to detect gradients as shallow as a 2% change in [cAMP] per cell length. At an early stage of development, when the pattern of chemotactic centers and spirals is still forming, individual amoebae presumably experience an inchoate barrage of weak, noisy and conflicting directional signals. When cAMP binds receptors on a cell’s surface, second messengers trigger a chain of subsequent intracellular events including a rapid spatial reorganization of proteins involved in cell motility. Advances in fluorescence microscopy have revealed that the oriented subcellular response to cAMP stimulation is already well underway within two seconds [6, 7]. In order to understand the fundamental limits to communication in this cell signaling process, we abstract the problem faced by a cell to that of rapidly identifying the direction of origin of a stimulus gradient superimposed on an existing mean background concentration. We model gradient sensing as an information channel in which an input signal – the direction of a chemical source – is noisily transmitted via a gradient of diffusing signaling molecules; and the “received signal” is the spatiotemporal pattern of binding events between cAMP and the cAMP receptors [8].
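The chemotactic efficiency defined above is simply a population-averaged cosine; a minimal sketch (the `displacements` array and its 2D/3D layout are our illustrative assumptions):

```python
import numpy as np

def chemotactic_efficiency(displacements, gradient_dir):
    """CE: population average of the cosine between each cell's
    displacement direction and the true gradient direction.
    displacements: (M, d) array of displacement vectors; gradient_dir: (d,)."""
    d = np.asarray(displacements, dtype=float)
    g = np.asarray(gradient_dir, dtype=float)
    g = g / np.linalg.norm(g)                     # unit gradient direction
    cosines = (d @ g) / np.linalg.norm(d, axis=1)
    return float(np.mean(cosines))
```

CE = 1 corresponds to all cells moving straight up the gradient; CE = 0 to motion uncorrelated with it.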
We neglect downstream intracellular events, which cannot increase the mutual information between the state of the cell and the direction of the imposed extracellular gradient [9]. The analysis of any signal transmission system depends on precise representation of the noise corrupting transmitted signals. We develop a Monte Carlo simulation (MCell, [10, 11]) in which a simulated cell is exposed to a cAMP distribution that evolves from a uniform background to a gradient at low (1 nMol) average concentration. The noise inherent in the communication of a diffusion-mediated signal is accurately represented by this method. Our approach bridges both the transient and the steady-state regimes, and allows us to estimate the amount of stimulus-related information that is in principle available to the cell through its receptors as a function of time after stimulus initiation. Other efforts to address aspects of cell signaling using the conceptual tools of information theory have considered neurotransmitter release [3] and the sensing of temporal signals [4], but not gradient sensing in eukaryotic cells. A typical natural habitat for social amoebae such as Dictyostelium is the complex anisotropic three-dimensional matrix of the forest floor. Under experimental conditions cells typically aggregate on a flat two-dimensional surface. We approach the problem of gradient sensing on a sphere, which is both harder and more natural for the amoeba, while still simple enough to treat analytically and numerically. Directional data are naturally described using unit vectors in spherical coordinates, but the amoebae receive signals as binding events involving intramembrane protein complexes, so we have developed a method for projecting the ensemble of receptor bindings onto coordinates in R3. In loose analogy with the chemotactic efficiency [5], we compare the projected directional estimate with the true gradient direction, represented as a unit vector on S2.
Consistent with the observed timing of the cell’s response to cAMP stimulation, we find that the directional signal converges quickly enough for the cell to make a decision about which direction to move within the first two seconds following stimulus onset.

2 Methods

2.1 Monte Carlo simulations

Using MCell and DReAMM [10, 11] we construct a spherical cell (radius R = 7.5 µm [12]) centered in a cubic volume (side length L = 30 µm). N = 980 triangular tiles partition the surface (mesh generated by DOME^1); each contains one cell-surface receptor for cAMP with binding rate k+ = 4.4 × 10^7 sec^−1 M^−1, first-order cAMP unbinding rate k− = 1.1 sec^−1 [12], and Keq = k−/k+ = 25 nMol cAMP. We established a baseline concentration of approximately 1 nMol by releasing a cAMP bolus at time 0 inside the cube, with zero-flux boundary conditions imposed on each wall. At t = 2 seconds we introduced a steady flux at the x = −L/2 wall of 1 molecule of cAMP per square micron per msec, adding signaling molecules from the left. Simultaneously, the x = +L/2 wall of the cube assumes absorbing boundary conditions. The new boundary conditions lead (at equilibrium) to a linear gradient of 2 nMol/30 µm, ranging from ≈2.0 nMol at the flux-source wall to ≈0 nMol at the absorbing wall (see Figure 1); the concentration profile approaches this new steady state with a time constant of approximately 1.25 msec. Sampling boxes centered along the planes x = ±13.5 µm measured the local concentration, allowing us to validate the expected model behavior.

Figure 1: Gradient sensing simulations performed with MCell (a Monte Carlo simulator of cellular microphysiology, http://www.mcell.cnl.salk.edu/) and rendered with DReAMM (Design, Render, and Animate MCell Models, http://www.mcell.psc.edu/). The model cell comprised a sphere triangulated with 980 tiles, with one cAMP receptor per tile. Cell radius R = 7.5 µm; cube side L = 30 µm. Left: Initial equilibrium condition, before imposition of the gradient. [cAMP] ≈ 1 nMol (c.
15,000 molecules in the volume outside the sphere). Right: Gradient condition after the transient (c. 15,000 molecules; see Methods for details).

2.2 Analysis

2.2.1 Assumptions

We make the following assumptions to simplify the analysis of the distribution of receptor activities at equilibrium, whether pre- or post-stimulus onset:

1. Independence. At equilibrium, the state of each receptor (bound vs. unbound) is independent of the states of the other receptors.

2. Linear Gradient. At equilibrium under the imposed gradient condition, the concentration of ligand molecules varies linearly with position along the gradient axis.

3. Symmetry.

(a) Rotational equivariance of receptor activities. In the absence of an applied gradient signal, the probability distribution describing the receptor states is equivariant with respect to arbitrary rotations of the sphere.

(b) Rotational invariance of gradient direction. The imposed gradient seen by a model cell is equally likely to be coming from any direction; therefore the gradient direction vector is uniformly distributed over S2.

(c) Axial equivariance about the gradient direction. Once a gradient direction is imposed, the probability distribution describing receptor states is rotationally equivariant with respect to rotations about the axis parallel with the gradient.

^1 http://nwg.phy.bnl.gov/~bviren/uno/other/

Berg and Purcell [1] calculate the inaccuracy in concentration estimates due to non-independence of adjacent receptors; for our parameters (effective receptor radius = 5 nm, receptor spacing ∼1 µm) the fractional error in estimating concentration differences due to receptor non-independence is negligible (≲10^−11) [1, 2]. Because we fix receptors to be in 1:1 correspondence with surface tiles, spherical symmetry and uniform distribution of the receptors are only approximate.
The gradient signal communicated via diffusion does not involve sharp spatial changes on the scale of the distance between nearby receptors; therefore spherical symmetry and a uniform, identical receptor distribution are good analytic approximations of the model configuration. By rotational equivariance we mean that combining any rotation of the sphere with a corresponding rotation of the indices labeling the N receptors, {j = 1, · · · , N}, leads to a statistically indistinguishable distribution of receptor activities. This same spherical symmetry is reflected in the a priori distribution of gradient directions, which is uniform over the sphere (with density 1/4π). Spherical symmetry is broken by the gradient signal, which fixes a preferred direction in space. About this axis, however, we assume the system retains the rotational symmetry of the cylinder.

2.2.2 Mutual information of the receptors

In order to quantify the directional information available to the cell from its surface receptors we construct an explicit model for the receptor states and the cell’s estimated direction. We model the receptor states via a collection of random variables {Bj} and develop an expression for the entropy of {Bj}. Then in Section 2.2.3 we present a method for projecting a temporally filtered estimated direction, ĝ, into three (rather than N) dimensions. Let the random variables {Bj}_{j=1}^N represent the states of the N cAMP receptors on the cell surface; Bj = 1 if the receptor is bound to a molecule of cAMP, otherwise Bj = 0. Let ⃗xj ∈ S2 represent the direction from the center of the cell to the jth receptor. Invoking Assumption 2 above, we take the equilibrium concentration of cAMP at ⃗x to be c(⃗x|⃗g) = a + b(⃗x · ⃗g), where ⃗g ∈ S2 is a unit vector in the direction of the gradient. The parameter a is the mean concentration over the cell surface, and b = R|⃗∇c| is half the drop in concentration from one extreme on the cell surface to the other.
Before the stimulus begins, the gradient direction is undefined. It can be shown (see Supplemental Materials) that the entropy of the receptor states given a fixed gradient direction ⃗g, H[{Bj}|⃗g], is given by an integral over the sphere:

H[\{B_j\} \mid \vec{g}] \;\sim\; N \int_{\theta=0}^{\pi}\!\int_{\phi=0}^{2\pi} \Phi\!\left[\frac{a + b\cos\theta}{a + b\cos\theta + K_{eq}}\right] \frac{\sin\theta}{4\pi}\, d\phi\, d\theta \quad (\text{as } N \to \infty). \tag{1}

On the other hand, if the gradient direction remains unspecified, the entropy of the receptor states is given by

H[\{B_j\}] \;\sim\; N\, \Phi\!\left[\int_{\theta=0}^{\pi}\!\int_{\phi=0}^{2\pi} \frac{a + b\cos\theta}{a + b\cos\theta + K_{eq}}\, \frac{\sin\theta}{4\pi}\, d\phi\, d\theta\right] \quad (\text{as } N \to \infty), \tag{2}

where

\Phi[p] = \begin{cases} -(p \log_2 p + (1-p)\log_2(1-p)), & 0 < p < 1 \\ 0, & p = 0 \text{ or } 1 \end{cases}

denotes the entropy of a binary random variable with state probabilities p and (1 − p). In both equations (1) and (2), the argument of Φ is a probability taking values 0 ≤ p ≤ 1. In (1) the values of Φ are averaged over the sphere; in (2) Φ is evaluated after averaging the probabilities. Because Φ[p] is concave for 0 ≤ p ≤ 1, the integral in equation (1) cannot exceed that in equation (2). Therefore the mutual information upon receiving the signal is nonnegative (as expected):

MI[\{B_j\}; \vec{g}] \;\triangleq\; H[\{B_j\}] - H[\{B_j\} \mid \vec{g}] \;\ge\; 0.

The analytic solution for equation (1) involves the polylogarithm function. For the parameters used in the simulation (a = 1.078 nMol, b = 0.512 nMol, Keq = 25 nMol), the mutual information with 980 receptors is 2.16 bits. As one would expect, the mutual information peaks when the mean concentration is close to the Keq of the receptor, exceeding 16 bits when a = 25, b = 12.5 and Keq = 25 (nMol).

2.2.3 Dimension reduction

The estimate obtained above does not tell us how quickly the directional information available to the cell evolves over time. Direct estimation of the mutual information from stochastic simulations is impractical because the aggregate random variables occupy a 980-dimensional space that a limited number of simulation runs cannot sample adequately.
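Equations (1) and (2) are straightforward to evaluate numerically. The sketch below is our own (not the authors' code): the φ-integral is done analytically and the θ-integral by the midpoint rule; for the parameters quoted above it agrees with the reported values (≈2.16 bits, and more than 16 bits at the Keq-matched concentration):

```python
import numpy as np

def receptor_mutual_information(a, b, Keq, N, n=20000):
    """MI = Eq.(2) - Eq.(1): N independent binary receptors with binding
    probability p(theta) = c/(c + Keq), c(theta) = a + b*cos(theta)."""
    def phi(p):  # binary entropy in bits, clipped away from 0 and 1
        p = np.clip(p, 1e-15, 1 - 1e-15)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    theta = (np.arange(n) + 0.5) * np.pi / n        # midpoint nodes
    w = np.sin(theta) * (np.pi / n) / 2.0           # sin(theta)/2 dtheta
    c = a + b * np.cos(theta)
    p = c / (c + Keq)
    H_given_g = N * np.sum(phi(p) * w)              # Eq. (1)
    H = N * phi(np.sum(p * w))                      # Eq. (2)
    return H - H_given_g
```

Concavity of the binary entropy guarantees the returned difference is nonnegative, consistent with MI ≥ 0.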
Instead, we construct a deterministic function from the set of 980 time courses of the receptors, {Bj(t)}, to an aggregate directional estimate in R3. Because of the cylindrical symmetry inherent in the system, our directional estimator ĝ is an unbiased estimator of the true gradient direction ⃗g. The estimator ĝ(t) may be thought of as representing a downstream chemical process that accumulates directional information and decays with some time constant τ. Let {⃗xj}_{j=1}^N be the spatial locations of the N receptors on the cell’s surface. Each vector is associated with a weight wj. Whenever the jth receptor binds a cAMP molecule, wj is incremented by one; otherwise wj decays with time constant τ. We construct an instantaneous estimate of the gradient direction from the linear combination of receptor positions, ĝ_τ(t) = Σ_{j=1}^{N} w_j(t) ⃗x_j. This procedure reflects the accumulation and reabsorption of intracellular second messengers released from the cell membrane upon receptor binding. Before the stimulus is applied, the weighted directional estimates ĝ_τ are small in absolute magnitude, with direction uniformly distributed on S2. In order to determine the information gained as the estimate vector evolves after stimulus application, we wish to determine the change in entropy of an ensemble of such estimates. As the cell gains information about the direction of the gradient signal from its receptors, the entropy of the estimate should decrease, leading to a rise in mutual information. By repeating multiple runs (M = 600) of the simulation we obtain samples from the ensemble of direction estimates, given a particular stimulus direction, ⃗g. In the method of Kozachenko and Leonenko [13], adapted for the analysis of neural spike train data by Victor [14] (“KLV method”), the cumulative distribution function is approximated directly from the observed samples, and the entropy is estimated via a change-of-variables transformation (see below).
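The weighted estimator ĝ_τ can be sketched directly from its definition. The exact exponential form of the decay between binding events is our reading of "decays with time constant τ", and the event-list representation is an illustrative assumption:

```python
import numpy as np

def gradient_estimates(events, positions, t_eval, tau):
    """Leaky-integrator sketch of ĝ_τ(t) = Σ_j w_j(t) x_j.
    events: list of (t_bind, receptor_index) binding events;
    positions: (N, 3) receptor direction vectors x_j on the unit sphere;
    t_eval: times at which to evaluate the estimate."""
    positions = np.asarray(positions, dtype=float)
    out = np.zeros((len(t_eval), positions.shape[1]))
    for k, t in enumerate(t_eval):
        for t_bind, j in events:
            if t_bind <= t:
                # each binding adds weight 1, decaying as exp(-(t-t_bind)/tau)
                out[k] += np.exp(-(t - t_bind) / tau) * positions[j]
    return out
```

Short τ forgets bindings quickly (fast response, noisier estimate); long τ integrates more events, mirroring the trade-off discussed in the Results.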
This method may be formulated in vector spaces Rd for d > 1 ([13]), but it is not guaranteed to be unbiased in the multivariate case [15] and has not been extended to curved manifolds such as the sphere. In the present case, however, we may exploit the symmetries inherent in the model (Assumptions 3a–3c) to reduce the empirical entropy estimation problem to one dimension. Adapting the argument in [14] to the case of spherical data from a distribution with rotational symmetry about a given axis, we obtain an estimate of the entropy based on a series of observations of the angles {θ1, · · · , θM} between the estimates ĝ_τ and the true gradient direction ⃗g (for details, see Supplemental Materials):

H \;\sim\; \frac{1}{M} \sum_{k=1}^{M} \left[ \log_2(\lambda_k) + \log_2(2(M-1)) + \frac{\gamma}{\ln 2} + \log_2(2\pi) + \log_2(\sin(\theta_k)) \right] \quad (\text{as } M \to \infty) \tag{3}

where, after sorting the θk in monotonic order, λ_k ≜ min(|θk − θk±1|) is the distance between each angle and its nearest neighbor in the sample, and γ is the Euler–Mascheroni constant. As shown in Figure 2, this approximation agrees with the analytic result for the uniform distribution, H_unif = log2(4π) ≈ 3.651.

Figure 2: Monte Carlo simulation results and information analysis. A: Average concentration profiles along two planes perpendicular to the gradient, at x = ±13.5 µm. B: Estimated direction vector (x, y, and z components; x = dark blue trace) ĝ_τ, τ = 500 msec. C: Entropy of the ensemble of directional vector estimates for different values of the intracellular filtering time constant τ. Given the directions of the estimates θk, φk on each of M runs, we calculate the entropy of the ensemble using equation (3). All time constants yield uniformly distributed directional estimates in the pre-stimulus period, 0 ≤ t ≤ 2 (sec). After stimulus onset, directional estimates obtained with shorter time constants respond more quickly but achieve smaller gains in mutual information (smaller reductions in entropy). Filtering time constants τ range from lightest to darkest colors: 20, 50, 100, 200, 500, 1000, 2000 msec.

3 Results

Figure 3 shows the results of M = 600 simulation runs. Panel A shows the concentration averaged across a set of 1 µm^3 sample boxes, four in the x = −13.5 µm plane and four in the x = +13.5 µm plane. The initial bolus of cAMP released into the volume at t = 0 sec is not uniformly distributed, but spreads out evenly within 0.25 sec. At t = 2.0 sec the boundary conditions are changed, causing a gradient to emerge along a realistic time course. Consistent with the analytic solution for the mean concentration (not shown), the concentration approaches equilibrium more rapidly near the absorbing wall (descending trace) than at the imposed-flux wall (ascending trace). Panel B shows the evolution of a directional estimate vector ĝ_τ for a single run, with τ = 500 msec. During uniform conditions all vectors fluctuate near the origin. After gradient onset the variance increases and the x component (dark trace) becomes biased towards the gradient source (⃗g = [−1, 0, 0]) while the y and z components still have a mean of zero. Across all 600 runs the mean of the y and z components remains close to zero, while the mean of the x component systematically departs from zero shortly after stimulus onset (not shown). Hence the directional estimator is unbiased (as required by symmetry). See Supplemental Materials for the population average of ĝ. Panel C shows the time course of the entropy of the ensemble of normalized directional estimate vectors ĝ_τ/|ĝ_τ| over M = 600 simulations, for intracellular filtering time constants ranging from 20 msec to 2000 msec (light to dark shading), calculated using equation (3). Following stimulus onset, entropy decreases steadily, showing an increase in the information available to the amoeba about the direction of the stimulus; the mutual information at a given point in time is the difference between the entropy at that time and before stimulus onset.
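The entropy estimate of Eq. (3) is easy to implement from the sorted sample of angles θ_k; a sketch (our handling of the endpoint nearest-neighbor spacings λ_k, and the γ/ln 2 reading of the constant term, are assumptions of this reconstruction):

```python
import numpy as np

def klv_spherical_entropy(thetas):
    """Nearest-neighbor (KLV-style) entropy estimate of Eq. (3) for
    axially symmetric spherical data, from angles theta_k between the
    directional estimates and the true gradient direction (in radians)."""
    th = np.sort(np.asarray(thetas, dtype=float))
    M = len(th)
    gaps = np.diff(th)
    # lambda_k = distance to nearest neighbor; endpoints have one neighbor
    lam = np.minimum(np.r_[gaps[0], gaps], np.r_[gaps, gaps[-1]])
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    return (np.mean(np.log2(lam) + np.log2(np.sin(th)))
            + np.log2(2 * (M - 1)) + gamma / np.log(2) + np.log2(2 * np.pi))
```

For angles drawn from the uniform distribution on the sphere (cos θ uniform on [−1, 1]) the estimate converges to log2(4π) ≈ 3.651, matching the check quoted in the text.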
For a cell with roughly 1000 receptors the mutual information has increased by at most ∼2 bits by one second (for τ = 500 msec), and by at most ∼3 bits by two seconds (for τ = 1000 or 2000 msec), under our stimulation protocol. A one-bit reduction in uncertainty is equivalent to identifying the correct value of the x component (positive versus negative) when the stimulus direction is aligned along the x-axis. Alternatively, note that a one-bit reduction corresponds to going from the uniform distribution on the sphere to the uniform distribution on one hemisphere. For τ ≤ 100 msec, the weighted average with decay time τ never gains more than one bit of information about the stimulus direction, even at long times. This observation suggests that signaling must involve some chemical components with lifetimes longer than 100 msec. The τ = 200 msec filter saturates after about one second, at ∼1 bit of information gain. Longer-lived second messengers would respond more slowly to changes from the background stimulus distribution, but would provide more informative estimates over time. The τ = 500 msec estimate gains roughly two bits of information within 1.5 seconds, but not much more over time. Heuristically, we may think of a two-bit gain in information as corresponding to the change from a uniform distribution to one uniformly covering one quarter of S2, i.e., all points within π/3 of the true direction. Within two seconds the τ = 1000 msec and τ = 2000 msec weighted averages have each gained approximately three bits of information, equivalent to a uniform distribution covering all points within 0.23π radians (about 41°) of the true direction.

4 Discussion & conclusions

Clearly there is an opportunity for more precise control of experimental conditions to deepen our understanding of spatio-temporal information processing at the membranes of gradient-sensitive cells.
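The cap-angle arithmetic in this paragraph follows from the area fraction of a spherical cap of half-angle α, which is (1 − cos α)/2; a small sketch verifying the quoted angles:

```python
import numpy as np

def cap_half_angle(bits):
    """Half-angle of the spherical cap that a uniform distribution on S2
    shrinks to after a gain of `bits` bits: the cap covers a fraction
    2**(-bits) of the sphere, and a cap of half-angle alpha covers
    (1 - cos(alpha)) / 2 of it."""
    frac = 2.0 ** (-bits)
    return np.arccos(1.0 - 2.0 * frac)

# 1 bit -> pi/2 (hemisphere); 2 bits -> pi/3 (quarter sphere);
# 3 bits -> about 0.23*pi radians (about 41 degrees), as quoted in the text
```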
Efforts in this direction are now using microfluidic technology to create carefully regulated spatial profiles for probing cellular responses [16]. Our results suggest that molecular processes relevant to these responses must have lasting effects of ≥100 msec. We use a static, immobile cell. Could cell motion relative to the medium increase sensitivity to changes in the gradient? No: the Dictyostelium velocity required to affect concentration perception is on the order of 1 cm sec^−1 [1], whereas reported velocities are on the order of µm sec^−1 [5]. The chemotactic response mechanism is known to begin modifying the cell membrane on the edge facing up the gradient within two seconds after stimulus initiation [7, 6], suggesting that the cell strikes a balance between gathering data and deciding quickly. Indeed, our results show that the reported activation of the G-protein signaling system on the leading edge of a chemotactically responsive cell [7] rises at roughly the same rate as the available chemotactic information. Results such as these ([7, 6]) are obtained by introducing a pipette into the medium near the amoeba; the magnitude and time course of cAMP release are not precisely known, and where estimated, the cAMP concentration at the cell surface exceeds 25 nMol by a full order of magnitude. Thomson and Kristan [17] show that for discrete probability distributions and for continuous distributions over linear spaces, stimulus discriminability may be better quantified using ideal observer analysis (mean squared error, for continuous variables) than information theory. The machinery of mean squared error (variance, expectation) does not carry over to the case of directional data without fundamental modifications [18]; in particular, the notion of mean squared error is best represented by the mean resultant length 0 ≤ ρ ≤ 1, the expected length of the vector average of a collection of unit vectors representing samples from directional data.
A resultant with length ρ ≈ 1 corresponds to a highly focused probability density function on the sphere. In addition to measuring the mutual information between the gradient direction and an intracellular estimate of direction, we also calculated the time evolution of ρ (see Supplemental Materials). We find that ρ rapidly approaches 1 and can exceed 0.9, depending on τ. We found that, in this case at least, the behavior of the mean resultant length and the mutual information are very similar; there is no evidence of discrepancies of the sort described in [17]. We have shown that the mutual information between an arbitrarily oriented stimulus and the directional signal available at the cell’s receptors evolves with a time course consistent with observed reaction times of Dictyostelium amoeba. Our results reinforce earlier claims that information theory can play a role in understanding biochemical cellular communication. Acknowledgments MCell simulations were run on the Oberlin College Beowulf Cluster, supported by NSF grant CHE0420717. References [1] Howard C. Berg and Edward M. Purcell. Physics of chemoreception. Biophysical Journal, 20:193, 1977. [2] William Bialek and Sima Setayeshgar. Physical limits to biochemical signaling. PNAS, 102(29):10040–10045, July 19 2005. [3] S. Qazi, A. Beltukov, and B. A. Trimmer. Simulation modeling of ligand receptor interactions at non-equilibrium conditions: processing of noisy inputs by ionotropic receptors. Math Biosci., 187(1):93–110, Jan 2004. [4] D. J. Spencer, S. K. Hampton, P. Park, J. P. Zurkus, and P. J. Thomas. The diffusion-limited biochemical signal-relay channel. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004. [5] P. R. Fisher, R. Merkl, and G. Gerisch.
Quantitative analysis of cell motility and chemotaxis in Dictyostelium discoideum by using an image processing system and a novel chemotaxis chamber providing stationary chemical gradients. J. Cell Biology, 108:973–984, March 1989. [6] Carole A. Parent, Brenda J. Blacklock, Wendy M. Froehlich, Douglas B. Murphy, and Peter N. Devreotes. G protein signaling events are activated at the leading edge of chemotactic cells. Cell, 95:81–91, 2 October 1998. [7] Xuehua Xu, Martin Meier-Schellersheim, Xuanmao Jiao, Lauren E. Nelson, and Tian Jin. Quantitative imaging of single live cells reveals spatiotemporal dynamics of multistep signaling events of chemoattractant gradient sensing in Dictyostelium. Molecular Biology of the Cell, 16:676–688, February 2005. [8] Wouter-Jan Rappel, Peter J. Thomas, Herbert Levine, and William F. Loomis. Establishing direction during chemotaxis in eukaryotic cells. Biophys. J., 83:1361–1367, 2002. [9] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley, New York, 1990. [10] J. R. Stiles, D. Van Helden, T. M. Bartol, E. E. Salpeter, and M. M. Salpeter. Miniature endplate current rise times less than 100 microseconds from improved dual recordings can be modeled with passive acetylcholine diffusion from a synaptic vesicle. Proc. Natl. Acad. Sci. U.S.A., 93(12):5747–52, Jun 11 1996. [11] J. R. Stiles and T. M. Bartol. Computational Neuroscience: Realistic Modeling for Experimentalists, chapter Monte Carlo methods for realistic simulation of synaptic microphysiology using MCell, pages 87–127. CRC Press, Boca Raton, FL, 2001. [12] M. Ueda, Y. Sako, T. Tanaka, P. Devreotes, and T. Yanagida. Single-molecule analysis of chemotactic signaling in Dictyostelium cells. Science, 294:864–867, October 2001. [13] L. F. Kozachenko and N. N. Leonenko. Probl. Peredachi Inf. [Probl. Inf. Transm.], 23(9):95, 1987. [14] Jonathan D. Victor. Binless strategies for estimation of information from neural data. Physical Review E, 66:051903, Nov 11 2002.
[15] Marc M. Van Hulle. Edgeworth approximation of multivariate differential entropy. Neural Computation, 17:1903–1910, 2005. [16] Loling Song, Sharvari M. Nadkarni, Hendrik U. Bödeker, Carsten Beta, Albert Bae, Carl Franck, Wouter-Jan Rappel, William F. Loomis, and Eberhard Bodenschatz. Dictyostelium discoideum chemotaxis: Threshold for directed motion. Euro. J. Cell Bio, 85(9-10):981–9, 2006. [17] Eric E. Thomson and William B. Kristan. Quantifying stimulus discriminability: A comparison of information theory and ideal observer analysis. Neural Computation, 17:741–778, 2005. [18] Kanti V. Mardia and Peter E. Jupp. Directional Statistics. John Wiley & Sons, West Sussex, England, 2000.
|
2006
|
164
|
2,993
|
Prediction on a Graph with a Perceptron Mark Herbster, Massimiliano Pontil Department of Computer Science University College London Gower Street, London WC1E 6BT, England, UK {m.herbster, m.pontil}@cs.ucl.ac.uk Abstract We study the problem of online prediction of a noisy labeling of a graph with the perceptron. We address both label noise and concept noise. Graph learning is framed as an instance of prediction on a finite set. To treat label noise we show that the hinge loss bounds derived by Gentile [1] for online perceptron learning can be transformed to relative mistake bounds with an optimal leading constant when applied to prediction on a finite set. These bounds depend crucially on the norm of the learned concept. Often the norm of a concept can vary dramatically with only small perturbations in a labeling. We analyze a simple transformation that stabilizes the norm under perturbations. We derive an upper bound that depends only on natural properties of the graph – the graph diameter and the cut size of a partitioning of the graph – which are only indirectly dependent on the size of the graph. The impossibility of such bounds for the graph geodesic nearest neighbors algorithm will be demonstrated. 1 Introduction We study the problem of robust online learning over a graph. Consider the following game for predicting the labeling of a graph. Nature presents a vertex v_{i_1}; the learner predicts the label of the vertex, ŷ₁ ∈ {−1, 1}; nature presents a label y₁; nature presents a vertex v_{i_2}; the learner predicts ŷ₂; and so forth. The learner’s goal is to minimize the total number of mistakes, |{t : ŷ_t ≠ y_t}|. If nature is adversarial, the learner will always mispredict; but if nature is regular or simple, there is hope that a learner may make only a few mispredictions. Thus, a methodological goal is to give learners whose total mispredictions can be bounded relative to the “complexity” of nature’s labeling.
In this paper, we consider the cut size as a measure of the complexity of a graph’s labeling, where the size of the cut is the number of edges between disagreeing labels. We will give bounds which depend on the cut size and the diameter of the graph and thus do not directly depend on the size of the graph. The problem of learning a labeling of a graph is a natural problem in the online learning setting, as well as a foundational technique for a variety of semi-supervised learning methods [2, 3, 4, 5, 6]. For example, in the online setting, consider a system which serves advertisements on web pages. The web pages may be identified with the vertices of a graph and the edges with links between pages. The online prediction problem is then the following: at a given time t, the system receives a request to serve an advertisement on a particular web page. For simplicity, we assume that there are two alternatives to be served: either advertisement “A” or advertisement “B”. The system serves an advertisement, receives feedback, and interprets that feedback as the label, which it may then use in responding to the next request to predict an advertisement for a requested web page.

Figure 1: Perceptron on set V_M.
    Input: {(v_{i_t}, y_t)}_{t=1}^ℓ ⊆ V_M × {−1, 1}.
    Initialization: w₁ = 0; M_A = ∅.
    for t = 1, . . . , ℓ do
        Predict: receive v_{i_t}; set ŷ_t = sign(e_{i_t}⊤ w_t).
        Update: receive y_t.
            if ŷ_t = y_t then w_{t+1} = w_t
            else w_{t+1} = w_t + y_t v_{i_t}; M_A = M_A ∪ {t}
    end
Figure 2: Barbell. Figure 3: Barbell with concept noise. Figure 4: Flower. Figure 5: Octopus.

1.1 Related work There is a well-developed literature regarding learning on the graph. The early work of Blum and Chawla [2] presented an algorithm which explicitly finds min-cuts of the label set. Bounds have been proven previously with smooth loss functions [6, 7] in a batch setting. Kernels on graph labelings were introduced in [3, 5]. This current work builds upon our work in [8].
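The perceptron of Figure 1 reduces to simple vector updates once the coordinates v_i = M⁺e_i are precomputed, since ŷ_t = sign(e_{i_t}⊤w_t) is just the sign of one component of w_t. A minimal numpy sketch (not the authors' code; the margin-zero trial is counted as a mistake, as in the standard analysis, and the kernel K = G⁺ + 11⊤ used in the toy check anticipates the kernel K_1^0 introduced later in the paper):

```python
import numpy as np

def perceptron_on_set(K, sequence):
    """Run the perceptron of Figure 1 on coordinates v_i = K e_i (K = M^+).

    K: (n, n) kernel matrix; sequence: iterable of (vertex_index, label) pairs.
    Returns the list of mistake trials M_A and the final weight vector w.
    """
    n = K.shape[0]
    w = np.zeros(n)
    mistakes = []
    for t, (i, y) in enumerate(sequence):
        # predict sign(e_i^T w); a zero margin counts as a mistake
        if y * w[i] <= 0:
            w = w + y * K[:, i]          # w <- w + y * v_i
            mistakes.append(t)
    return mistakes, w

# Toy check on a barbell graph: two 5-cliques joined by one edge, labeled +1 / -1.
m = 5
A = np.zeros((2 * m, 2 * m))
A[:m, :m] = 1; A[m:, m:] = 1
np.fill_diagonal(A, 0)
A[m - 1, m] = A[m, m - 1] = 1            # the bridge edge
G = np.diag(A.sum(1)) - A                # graph Laplacian
K = np.linalg.pinv(G) + np.ones((2 * m, 2 * m))
u = np.array([1] * m + [-1] * m)
seq = [(i, u[i]) for i in range(2 * m)] * 2   # two passes over the vertices
mistakes, w = perceptron_on_set(K, seq)
print(len(mistakes))                     # few mistakes: the labeling has cut size 1
```

On this noise-free barbell the mistake count stays small, in line with the cut-size-based bounds the paper develops.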
There it was shown that, given a fixed labeling of a graph, the number of mistakes made by an algorithm similar to the kernel perceptron [9] with a kernel that was the pseudoinverse of the graph Laplacian, could be bounded by the quantity [8, Theorems 3.2, 4.1, and 4.2] 4Φ_G(u) D_G bal(u). (1) Here u ∈ {−1, 1}ⁿ is a binary vector defining the labeling of the graph, Φ_G(u) is the cut size¹ defined as Φ_G(u) := |{(i, j) ∈ E(G) : u_i ≠ u_j}|, that is, the number of edges between positive and negative labels, D_G is the diameter of the graph, and bal(u) := (1 − (1/n)|Σᵢ uᵢ|)⁻² measures the label balance. This bound is interesting in that the mistakes of the algorithm could be bounded in terms of simple properties of a labeled graph. However, there are a variety of shortcomings in this result. First, we observe that the bound above assumed a fixed labeling of the graph. In practice, the online data sequence could contain multiple labels for a single vertex; this is the problem of label noise. Second, for an unbalanced set of labels the bound is vacuous; for example, if u = (1, 1, . . . , 1, −1) ∈ IRⁿ then bal(u) = n²/4. Third, consider the prototypical easy instance for the algorithm of two dense clusters connected by a few edges, for instance, two m-cliques connected by a single edge (a barbell graph, see Figure 2). If each clique is labeled with a distinct label then we have that 4Φ_G(u) D_G bal(u) = 4×1×3×1 = 12, which is independent of m. Now suppose that, say, the first clique contains one vertex which is labeled as the second clique (see Figure 3). Previously Φ_G(u) = 1, but now Φ_G(u) = m and the bound is vacuous. This is the problem of concept noise; in this example, a Θ(1) perturbation of the labeling increases the bound multiplicatively by Θ(m). 1.2 Overview A first aim of this paper is to improve upon the bounds in [8], particularly, to address the three problems of label balance, label noise, and concept noise as discussed above.
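The barbell arithmetic above is easy to reproduce. A small sketch (illustrative only) that computes Φ_G(u) for the clean and the concept-noisy labelings of a barbell graph:

```python
import numpy as np

def cut_size(A, u):
    """Phi_G(u): number of (unweighted) edges joining disagreeing labels."""
    n = len(u)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if A[i, j] > 0 and u[i] != u[j])

def barbell(m):
    """Two m-cliques joined by a single bridge edge between vertices m-1 and m."""
    A = np.zeros((2 * m, 2 * m))
    A[:m, :m] = 1; A[m:, m:] = 1
    np.fill_diagonal(A, 0)
    A[m - 1, m] = A[m, m - 1] = 1
    return A

m = 8
A = barbell(m)
u = np.array([1] * m + [-1] * m)          # clean labeling: only the bridge is cut
assert cut_size(A, u) == 1

u_noisy = u.copy()
u_noisy[0] = -1                           # flip one non-bridge vertex in clique 1
assert cut_size(A, u_noisy) == m          # m-1 clique edges plus the bridge
```

Flipping a single vertex inflates the cut from 1 to m, which is exactly the Θ(1)-perturbation, Θ(m)-blowup problem described in the text.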
For this purpose, we apply the well-known kernel perceptron [9] to the problem of online learning on the graph. We discuss the background material for this problem in section 2, where we also show that the bounds of [1] can be specialized to relative mistake bounds when applied to, for example, prediction on the graph. A second important aim of this paper is to interpret the mistake bounds by an explanation in terms of high-level graph properties. Hence, in section 3, we refine a diameter-based bound of [8, Theorem 4.2] to a sharper bound based on the “resistance distance” [10] on a weighted graph, which we then closely match with a lower bound. In section 4, we introduce a kernel which is a simple augmentation of the pseudoinverse of the graph Laplacian and then prove a theorem on the performance of the perceptron with this kernel which solves the three problems above. We conclude in section 5 with a discussion comparing the mistake bounds for prediction on the graph with the halving algorithm [11] and the k-nearest neighbors algorithm. 2 Preliminaries In this section, we describe our setup for Hilbert spaces on finite sets and its specification to the graph case. We then recall a result of Gentile [1] on prediction with the perceptron and discuss a special case in which relative 0–1 loss (mistake) bounds are obtainable. ¹Later in the paper we extend the definition of cut size to weighted graphs. 2.1 Hilbert spaces of functions defined on a finite set We denote matrices by capital bold letters and vectors by small bold letters. So M denotes the n × n matrix (M_ij)_{i,j=1}^n and w the n-dimensional vector (w_i)_{i=1}^n. The identity matrix is denoted by I. We also let 0 and 1 be the n-dimensional vectors all of whose components are equal to zero and one respectively, and e_i the i-th coordinate vector of IRⁿ. Let IN be the set of natural numbers and IN_ℓ := {1, . . . , ℓ}. We denote a generic Hilbert space by H.
We identify V := IN_n as the indices of a set of n objects, e.g. the vertices of a graph. A vector w ∈ IRⁿ can alternatively be seen as a function f : V → IR such that f(i) = w_i, i ∈ V. However, for simplicity we will use the notation w to denote both a vector in IRⁿ and the above function. A symmetric positive semidefinite matrix M induces a semi-inner product on IRⁿ which is defined as ⟨u, w⟩_M := u⊤Mw, where “⊤” denotes transposition. The reproducing kernel [12] associated with the above semi-inner product is K = M⁺, where “+” denotes pseudoinverse. We also define the coordinate spanning set V_M := {v_i := M⁺e_i : i = 1, . . . , n} (2) and let H(M) := span(V_M). The restriction of the semi-inner product ⟨·, ·⟩_M to H(M) is an inner product on H(M). The set V_M acts as “coordinates” for H(M), that is, if w ∈ H(M) we have w_i = e_i⊤M⁺Mw = v_i⊤Mw = ⟨v_i, w⟩_M, (3) although the vectors {v₁, . . . , v_n} are not necessarily normalized and are linearly independent only if M is positive definite. We note that equation (3) is simply the reproducing kernel property [12] for the kernel M⁺. When V indexes the vertices of an undirected graph G, a natural norm to use is that induced by the graph Laplacian. We explain this in detail now. Let A be the n × n symmetric weight matrix of the graph such that A_ij ≥ 0, and define the edge set E(G) := {(i, j) : 0 < A_ij, i < j}. The distance matrix ∆ associated with G is the per-element inverse of the weight matrix, that is, ∆_ij = 1/A_ij (∆ may have +∞ as a matrix element). The graph Laplacian G is the n × n matrix defined as G := D − A, where D = diag(d₁, . . . , d_n) and d_i is the weighted degree of vertex i, d_i = Σ_{j=1}^n A_ij. The Laplacian is positive semidefinite and induces the semi-norm ∥w∥²_G := w⊤Gw = Σ_{(i,j)∈E(G)} A_ij (w_i − w_j)². (4) When the graph is connected, it follows from equation (4) that the null space of G is spanned by the constant vector 1 only. In this paper, we always assume that the graph G is connected.
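The Laplacian identities in equations (2)–(4) are straightforward to verify numerically. A small sketch (illustrative, not from the paper) checking the quadratic form ∥w∥²_G, the null space of G, and the reproducing property (3) on a random weighted graph:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 6
A = rng.uniform(0.5, 2.0, size=(n, n))
A = np.triu(A, 1); A = A + A.T           # symmetric weights, zero diagonal
G = np.diag(A.sum(axis=1)) - A           # Laplacian G = D - A

w = rng.standard_normal(n)
quad = w @ G @ w                         # w^T G w
edge_sum = sum(A[i, j] * (w[i] - w[j]) ** 2
               for i in range(n) for j in range(i + 1, n))
assert np.allclose(quad, edge_sum)       # equation (4)
assert np.allclose(G @ np.ones(n), 0)    # 1 spans the null space

# reproducing property (3): w_i = <v_i, w>_G for w in H(G) = span(V_G)
Gp = np.linalg.pinv(G)
w0 = w - w.mean()                        # project w onto 1^perp = span(V_G)
v = Gp                                   # columns v_i = G^+ e_i
assert np.allclose([v[:, i] @ G @ w0 for i in range(n)], w0)
```

Note how the projection w0 is needed for (3): the reproducing property only holds on H(G), the orthogonal complement of the constant vector.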
Where it is not ambiguous, we use the notation G to denote both the graph G and the graph Laplacian. 2.2 Online prediction of functions on a finite set with the perceptron Gentile [1] bounded the performance of the perceptron algorithm on nonseparable data with the linear hinge loss. Here, we apply his result to study the problem of prediction on a finite set with the perceptron (see Figure 1). In this case, the inputs are the coordinates in the set V_M ⊂ H(M) defined above. We additionally assume that the matrix M is positive definite (not just positive semidefinite as in the previous subsection). This assumption, along with the fact that the inputs are coordinates, enables us to upper bound the hinge loss, and hence we may give a relative mistake bound in terms of the complete set of base classifiers {−1, 1}ⁿ. Theorem 2.1. Let M be a symmetric positive definite matrix. If {(v_{i_t}, y_t)}_{t=1}^ℓ ⊆ V_M × {−1, 1} is a sequence of examples, M_A denotes the set of trials in which the perceptron algorithm predicted incorrectly and X = max_{t∈M_A} ∥v_{i_t}∥_M, then the cumulative number of mistakes |M_A| of the algorithm is bounded by |M_A| ≤ 2|M_A ∩ M_u| + ∥u∥²_M X²/2 + √( 2|M_A ∩ M_u| ∥u∥²_M X² + ∥u∥⁴_M X⁴/4 ) (5) for all u ∈ {−1, 1}ⁿ, where M_u = {t ∈ IN_ℓ : u_{i_t} ≠ y_t}. In particular, if |M_u| = 0 then |M_A| ≤ ∥u∥²_M X². Proof. This bound follows directly from [1, Theorem 8] with p = 2, γ = 1, and w₁ = 0. Since M is assumed to be symmetric positive definite, it follows that {−1, 1}ⁿ ⊂ H(M). Thus, the hinge loss L_{u,t} := max(0, 1 − y_t⟨u, v_{i_t}⟩_M) of any classifier u ∈ {−1, 1}ⁿ on any example (v_{i_t}, y_t) is either 0 or 2, since |⟨u, v_{i_t}⟩_M| = 1 by equation (3). This allows us to bound the hinge loss term of [1, Theorem 8] directly with mistakes. We emphasize that our hypothesis on M does not imply linear separability, since multiple instances of an input vector in the training sequence may have distinct target labels.
Moreover, we note that, for deterministic prediction, the constant 2 in the first term of the right hand side of equation (5) is optimal for an online algorithm, as a mistake may be forced on every trial. 3 Interpretation of the space H(G) The bound for prediction on a finite set in equation (5) involves two quantities, namely the squared norm of a classifier u ∈ {−1, 1}ⁿ and the maximum of the squared norms of the coordinates v ∈ V_M. In the case of prediction on the graph, recall from equation (4) that ∥u∥²_G := u⊤Gu = Σ_{(i,j)∈E(G)} A_ij (u_i − u_j)². Therefore, we may identify this semi-norm with the weighted cut size Φ_G(u) := (1/4)∥u∥²_G (6) of the labeling induced when u ∈ {−1, 1}ⁿ. In particular, with boolean edge weights (A_ij ∈ {0, 1}) the cut simply counts the number of edges spanning disagreeing labels. The norm ∥v − w∥_G is a metric distance for v, w ∈ span(V_G); surprisingly, however, the square of the norm, ∥v_p − v_q∥²_G, when restricted to graph coordinates v_p, v_q ∈ V_G, is also a metric, known as the resistance distance [10]: r_G(p, q) := (e_p − e_q)⊤G⁺(e_p − e_q) = ∥v_p − v_q∥²_G. (7) It is interesting to note that the resistance distance between vertex p and vertex q is the effective resistance between vertices p and q, where the graph is the circuit and edge (i, j) is a resistor with resistance ∆_ij = A_ij⁻¹. As we shall see, our bounds in section 4 depend on ∥v_p∥²_G = ∥v_p − 0∥²_G. Formally, this is not an effective resistance between vertex p and another vertex “0”. Informally, however, the vector 0 is the center of the graph, as 0 = (1/|V_G|) Σ_{v∈V_G} v, since 1 is in the null space of H(G). In the following, we further characterize ∥v_p∥²_G. First, we observe qualitatively that the more interconnected the graph, the smaller the term ∥v_p∥²_G (Corollary 3.1).
Second, in Theorem 3.2 we quantitatively upper bound ∥v_p∥²_G by the average (over q) of the effective resistance between vertex p and each vertex q in the graph (including q = p), which in turn may be upper bounded by the eccentricity of p. We proceed with the following useful lemma and theorem, as a basis for our later results. Lemma 3.1. Let x ∈ H; then ∥x∥⁻² = min_{w∈H} { ∥w∥² : ⟨w, x⟩ = 1 }. The proof is straightforward and we do not elaborate on the details. Theorem 3.1. If M and M′ are symmetric positive semidefinite matrices with span(V_M) = span(V_{M′}) and, for every w ∈ span(V_M), ∥w∥²_M ≤ ∥w∥²_{M′}, then ∥ Σ_{i=1}^n a_i v_i ∥²_M ≥ ∥ Σ_{i=1}^n a_i v′_i ∥²_{M′}, where v_i ∈ V_M, v′_i ∈ V_{M′} and a ∈ IRⁿ. Proof. Let x = Σ_{i=1}^n a_i v_i and x′ = Σ_{i=1}^n a_i v′_i; then ∥x∥⁻²_M = ∥ x/∥x∥²_M ∥²_M ≤ ∥ x′/∥x′∥²_{M′} ∥²_M ≤ ∥ x′/∥x′∥²_{M′} ∥²_{M′} = ∥x′∥⁻²_{M′}, where the first inequality follows since ⟨ x′/∥x′∥²_{M′}, x ⟩_M = 1, hence x′/∥x′∥²_{M′} is a feasible solution to the minimization problem on the right hand side of Lemma 3.1, while the second follows immediately from the assumption that ∥w∥²_M ≤ ∥w∥²_{M′}. As a corollary to the above theorem we have the following when M is a graph Laplacian. Corollary 3.1. Given connected graphs G and G′ with distance matrices ∆ and ∆′ such that ∆_ij ≤ ∆′_ij, then for all p, q ∈ V we have that ∥v_p∥²_G ≤ ∥v′_p∥²_{G′} and r_G(p, q) ≤ r_{G′}(p, q). The first inequality in the above corollary demonstrates that ∥v_p∥²_G is nonincreasing in a graph that is strictly more connected. The second inequality is the well-known Rayleigh monotonicity law, which states that if any resistance in a circuit is decreased then the effective resistance between any two points cannot increase. We define the geodesic distance between vertices p, q ∈ V to be d_G(p, q) := min |P(p, q)|, where the minimum is taken with respect to all paths P(p, q) from p to q, with the path length defined as |P(p, q)| := Σ_{(i,j)∈E(P(p,q))} ∆_ij.
The eccentricity d_G(p) of a vertex p ∈ V is the geodesic distance on the graph between p and the vertex furthest from p, that is, d_G(p) = max_{q∈V} d_G(p, q) ≤ D_G, where D_G is the (geodesic) diameter of the graph, D_G := max_{p∈V} d_G(p). A graph G is connected when D_G < ∞. A tree is an n-vertex connected graph with n − 1 edges. The following lemma, a well-known result (see e.g. [10]), establishes that the resistance distance can be equated with the geodesic distance when the graph is a tree. Lemma 3.2. If the graph T is a tree with graph Laplacian T then r_T(p, q) = d_T(p, q). The next theorem provides a quantitative relationship between ∥v_p∥²_G and two measures of the connectivity of vertex p, namely its eccentricity and the mean of the effective resistances between vertex p and each vertex of the graph. Theorem 3.2. If G is a connected graph then ∥v_p∥²_G ≤ (1/n) Σ_{q=1}^n r_G(p, q) ≤ d_G(p). (8) Proof. Recall that r_G(p, q) = ∥v_p − v_q∥²_G (see equation (7)) and use Σ_{q=1}^n v_q = 0 to obtain that (1/n) Σ_{q=1}^n ∥v_p − v_q∥²_G = v_p⊤Gv_p + (1/n) Σ_{q=1}^n v_q⊤Gv_q, which implies the left inequality in (8). Next, by Corollary 3.1, if T is the Laplacian of a tree T ⊆ G then r_G(p, q) ≤ r_T(p, q) for p, q ∈ V. Therefore, from Lemma 3.2 we conclude that r_G(p, q) ≤ d_T(p, q). Moreover, since T ⊆ G can be any tree, we have that r_G(p, q) ≤ min_T d_T(p, q), where the minimum is over all trees T ⊆ G. Since the geodesic path from p to q is necessarily contained in some tree T ⊆ G, it follows that min_T d_T(p, q) = d_G(p, q) and, so, r_G(p, q) ≤ d_G(p, q). Now the theorem follows by maximizing d_G(p, q) over q and the definition of d_G(p). We identify the resistance diameter of a graph G as R_G := max_{p,q∈V} r_G(p, q); thus, from the previous theorem, we may also conclude that max_{p∈V} ∥v_p∥²_G ≤ R_G ≤ D_G. (9) We complete this section by showing that there exists a family of graphs for which the above inequality is nearly tight.
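Theorem 3.2 can be checked numerically on any connected graph: ∥v_p∥²_G is the diagonal entry (G⁺)_pp, r_G(p, q) = (G⁺)_pp + (G⁺)_qq − 2(G⁺)_pq, and for an unweighted graph the eccentricity is a BFS hop count. A sketch (illustrative only) on a small cycle with chords:

```python
import numpy as np
from collections import deque

# unweighted connected graph: an 8-cycle with two chords
n = 8
edges = [(i, (i + 1) % n) for i in range(n)] + [(0, 4), (2, 6)]
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
G = np.diag(A.sum(1)) - A
Gp = np.linalg.pinv(G)

def eccentricity(p):
    """Geodesic (hop-count) distance from p to the furthest vertex, via BFS."""
    dist = {p: 0}
    q = deque([p])
    while q:
        i = q.popleft()
        for j in range(n):
            if A[i, j] and j not in dist:
                dist[j] = dist[i] + 1
                q.append(j)
    return max(dist.values())

for p in range(n):
    norm_sq = Gp[p, p]                                       # ||v_p||_G^2
    r = [Gp[p, p] + Gp[q, q] - 2 * Gp[p, q] for q in range(n)]
    assert norm_sq <= np.mean(r) + 1e-9                      # left side of (8)
    assert np.mean(r) <= eccentricity(p) + 1e-9              # right side of (8)
```

Both inequalities of equation (8) hold at every vertex, as the theorem guarantees.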
Specifically, we consider the “flower graph” (see Figure 4), obtained by connecting the first vertex of a chain with p − 1 vertices to the root vertex of an m-ary tree of depth one. We index the vertices of this graph so that vertices 1 to p correspond to “stem vertices” and vertices p + 1 to p + m to “petals”. Clearly, this graph has diameter equal to p, hence our upper bound above establishes that ∥v₁∥²_G ≤ p. We now argue that as m grows this bound is almost tight. From Lemma 3.1 we have that ∥v₁∥⁻²_G = min_{w∈H(G)} { ∥w∥²_G : ⟨w, v₁⟩ = 1 }. We note that by symmetry, the solution ŵ = (ŵ_i : i ∈ IN_{p+m}) to the problem above satisfies ŵ_i = z for i ≥ p + 1, since ŵ must take the same value on the petal vertices. Consequently, it follows that ∥v₁∥⁻²_G = min{ m(z − w_p)² + Σ_{i=1}^{p−1} (w_i − w_{i+1})² : w₁ = 1, Σ_{i=1}^p w_i + mz = 0 }. We upper bound this minimum by choosing w_i = (p − i)/(p − 1) for 1 ≤ i ≤ p. Thus w₁ = 1 as required, w_p = 0, and we compute z from the constraint set of the above minimization problem as z = −p/(2m). A direct computation gives ∥v₁∥⁻²_G ≤ 1/(p − 1) + p²/(4m), from which, using a first order Taylor expansion, it follows that ∥v₁∥²_G ≥ (p − 1) − (p − 1)²p²/(4m). Therefore, as m → ∞ the upper bound on ∥v₁∥²_G (equation (8)) for the flower graph is matched by a lower bound with a gap of 1. 4 Prediction on the graph We define the following symmetric positive definite graph kernel, K_b^c := G⁺ + b11⊤ + cI, (0 < b, 0 ≤ c), (10) where G_b^c = (K_b^c)⁻¹ is the matrix of the associated Hilbert space H(G_b^c). In Lemma 4.1 below we prove the needed properties of H(G_b^c) as a necessary step for the bound in Theorem 4.2. As we shall see, these properties moderate the consequences of label imbalance and concept noise. To prove Lemma 4.1, we use the following theorem, which is a special case of [12, Thm I, §I.6]. Theorem 4.1.
If M₁ and M₂ are n × n symmetric positive semidefinite matrices, and we set M := (M₁⁺ + M₂⁺)⁺, then ∥w∥²_M = inf{ ∥w₁∥²_{M₁} + ∥w₂∥²_{M₂} : w_i ∈ H(M_i), w₁ + w₂ = w } for every w ∈ H(M). Next, we define β_u ∈ [0, 1] as a measure of the balance of a labeling u ∈ {−1, 1}ⁿ, namely β_u := ((1/n) Σ_{i=1}^n u_i)². Note that β_u = 0 for a perfectly balanced labeling, while β_u = 1 for a perfectly unbalanced one. Lemma 4.1. Given a vertex p with associated coordinates v_p ∈ V_G and v′_p ∈ V_{G_b^c}, we have that ∥v′_p∥²_{G_b^c} = ∥v_p∥²_G + b + c. (11) Moreover, if u, u′ ∈ {−1, 1}ⁿ and k := |{i : u′_i ≠ u_i}|, we have that ∥u′∥²_{G_b^c} ≤ ∥u∥²_G + β_u/b + 4k/c. (12) Proof. To prove equation (11) we recall equation (3) and note that ∥v′_p∥²_{G_b^c} = ⟨v′_p, v_p + b1 + ce_p⟩_{G_b^c} = ⟨v′_p, v_p⟩_{G_b^c} + ⟨v′_p, b1 + ce_p⟩_{G_b^c} = ∥v_p∥²_G + b + c. To prove inequality (12) we proceed in two steps. First, we show that ∥u∥²_{G_b^0} = ∥u∥²_G + β_u/b. (13) Indeed, we can uniquely decompose u as the sum of a vector in H(G) and one in H(11⊤/(n²b)) as u = (u − ((1/n)Σ_{i=1}^n u_i)1) + ((1/n)Σ_{i=1}^n u_i)1. Therefore, by Theorem 4.1 we conclude that ∥u∥²_{G_b^0} = ∥u − √β_u 1∥²_G + ∥√β_u 1∥²_{11⊤/(n²b)} = ∥u∥²_G + β_u/b, where ∥u − √β_u 1∥²_G = ∥u∥²_G since 1 ∈ H⊥(G). Second, we show, for any symmetric positive definite matrix M, u, u′ ∈ {−1, 1}ⁿ and c > 0, that ∥u′∥²_{M_c} ≤ ∥u∥²_M + 4k/c, (14) where M_c := (M⁻¹ + cI)⁻¹ and k := |{i : u′_i ≠ u_i}|. To this end, we decompose u′ as a sum of two elements of H(M) and H((1/c)I) as u′ = u + (u′ − u), and observe that ∥u′ − u∥²_{(1/c)I} = 4k/c. By Theorem 4.1 it then follows that ∥u′∥²_{M_c} ≤ ∥u∥²_M + ∥u′ − u∥²_{(1/c)I} = ∥u∥²_M + 4k/c. Now inequality (12) follows by combining equations (13) and (14) with M = G_b^0. We can now state our relative mistake bound for online prediction on the graph. Theorem 4.2. Let G be a connected graph.
If {(v_{i_t}, y_t)}_{t=1}^ℓ ⊆ V_{G_b^c} × {−1, 1} is a sequence of examples and M_A denotes the set of trials in which the perceptron algorithm predicted incorrectly, then the cumulative number of mistakes |M_A| of the algorithm is bounded by |M_A| ≤ 2|M_A ∩ M_u| + Z/2 + √( 2|M_A ∩ M_u| Z + Z²/4 ), (15) for all u, u′ ∈ {−1, 1}ⁿ, where k = |{i : u′_i ≠ u_i}|, β_{u′} = ((1/n) Σ_{i=1}^n u′_i)², M_u = {t ∈ IN_ℓ : u_{i_t} ≠ y_t}, and Z = ( 4Φ_G(u′) + β_{u′}/b + 4k/c ) ( R_G + b + c ). In particular, if b = 1, c = 0, k = 0 and |M_u| = 0 then |M_A| ≤ (4Φ_G(u) + β_u)(R_G + 1). (16) Proof. The proof follows by Theorem 2.1 with M = G_b^c, then bounding ∥u∥²_{G_b^c} and ∥v_{i_t}∥²_{G_b^c} via Lemma 4.1, and then using max_{t∈M_A} ∥v_{i_t}∥²_G ≤ R_G by equation (9). The upper bound of the theorem is more resilient to label imbalance, concept noise, and label noise than the bound in [8, Theorems 3.2, 4.1, and 4.2] (see equation (1)). For example, given the noisy barbell graph in Figure 3 but with k ≪ n noisy vertices, the bound (1) is O(kn) while the bound (15) with b = 1, c = 1, and |M_u| = 0 is O(k). A similar argument may be given for label imbalance. In the bound above, for easy interpretability, one may upper bound the resistance diameter R_G by the geodesic diameter D_G. However, the resistance diameter makes for a sharper bound in a number of natural situations. For example, now consider (a thick barbell) two m-cliques (one labeled “+1”, one “−1”) with ℓ edges (ℓ < m) between the cliques. We observe that between any two vertices there are at least ℓ edge-disjoint paths of length no more than five; therefore the resistance diameter is at most 5/ℓ by the “resistors-in-parallel” rule, while the geodesic diameter is 3. Thus, for “thick barbells”, if we use the geodesic diameter we have a mistake bound of 16ℓ (substituting β_u = 0 and R_G ≤ 3 into (16)), while surprisingly with the resistance diameter the bound (substituting b = 1/(4n), c = 0, |M_u| = 0, β_u = 0, and R_G ≤ 5/ℓ into (15)) is independent of ℓ and is 20.
5 Discussion In this paper, we have provided a bound on the performance of the perceptron on the graph in terms of structural properties of the graph and its labeling which are only indirectly dependent on the number of vertices in the graph; in particular, they depend on the cut size and the diameter. In the following, we compare the perceptron with two other approaches. First, we compare the perceptron with the graph kernel K_1^0 to the conceptually simpler k-nearest neighbors algorithm with either the graph geodesic distance or the resistance distance. In particular, we prove the impossibility of bounding the performance of k-nearest neighbors only in terms of the diameter and the cut size. Specifically, we give a parameterized family of graphs for which the number of mistakes of the perceptron is upper bounded by a fixed constant independent of the graph size, while k-nearest neighbors provably incurs mistakes linear in the graph size. Second, we compare the perceptron with the graph kernel K_1^0 with a simple application of the classical halving algorithm [11]. Here, we conclude that the upper bound for the perceptron is better for graphs with a small diameter while the halving algorithm’s upper bound is better for graphs with a large diameter. In the following, for simplicity we limit our discussion to binary-weighted graphs and noise-free data (see equation (16)), and upper bound the resistance diameter R_G with the geodesic diameter D_G (see equation (9)). 5.1 K-nearest neighbors on the graph We consider the k-nearest neighbors algorithm on the graph with both the resistance distance (see equation (7)) and the graph geodesic distance. The geodesic distance between two vertices is the length of the shortest path between the two vertices (recall the discussion in section 3). In the following, we use the term distance to refer simultaneously to both distances. Now, consider the family O_{ℓ,m,p} of octopus graphs.
An octopus graph (see Figure 5) consists of a “head”, which is an ℓ-clique C(ℓ) with vertices denoted by c₁, . . . , c_ℓ, and a set of m “tentacles” {T_i}_{i=1}^m, where each tentacle is a line graph of length p. The vertices of tentacle i are denoted by {t_{i,0}, t_{i,1}, . . . , t_{i,p}}; the t_{i,0} are all identified as one vertex r which acts as the root of the m tentacles. There is an edge (the body) connecting the root r to the vertex c₁ on the head. Thus, this graph has diameter D = max(p + 2, 2p) and there are ℓ + mp + 1 vertices in total; an octopus O_{m,p} is balanced if ℓ = mp + 1. Note that the distance of every vertex in the head to every other vertex in the graph is no more than p + 2, and every tentacle “tip” t_{i,p} is at distance 2p from the other tips t_{j,p}, j ≠ i. We now argue that k-nearest neighbors may incur mistakes linear in the number of tentacles. To this end, choose p ≥ 3 and suppose we have the following online data sequence: {(c₁, +1), (t_{1,p}, −1), (c₂, +1), (t_{2,p}, −1), . . . , (c_m, +1), (t_{m,p}, −1)}. Note that k-nearest neighbors will make a mistake on every instance (t_{i,p}, −1) and so, even assuming that it predicts correctly on (c₁, +1), it will always make m mistakes. We now contrast this result with the performance of the perceptron with the graph kernel K_1^0 (see equation (10)). By equation (16), the number of mistakes will be upper bounded by 10p + 5, because there is a cut of size 1 and the diameter is 2p. Thus, for balanced octopi O_{m,p} with p ≥ 3, as m grows the number of mistakes of the kernel perceptron will be bounded by a fixed constant, whereas distance k-nearest neighbors will incur mistakes linear in m. 5.2 Halving algorithm We now compare the performance of our algorithm to the classical halving algorithm [11]. The halving algorithm operates by predicting on each trial with the majority of the classifiers in the concept class which have been consistent over the trial sequence.
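The octopus argument is easy to verify by simulation. A sketch (illustrative only; pure-Python BFS, 1-nearest neighbor under the geodesic distance) that replays the adversarial sequence: for p ≥ 3 every tip t_{i,p} is closer to the positively labeled head vertex c₁ (distance p + 1) than to any previously labeled tip (distance 2p), so every tip is mispredicted.

```python
from collections import deque

def octopus(l, m, p):
    """Adjacency lists: head clique ('c', 0..l-1), root 'r', tentacle vertices ('t', i, s)."""
    adj = {('c', i): set() for i in range(l)}
    adj['r'] = set()
    for i in range(l):
        for j in range(i + 1, l):
            adj[('c', i)].add(('c', j)); adj[('c', j)].add(('c', i))
    adj['r'].add(('c', 0)); adj[('c', 0)].add('r')      # the body edge
    for i in range(m):
        prev = 'r'
        for s in range(1, p + 1):
            v = ('t', i, s)
            adj.setdefault(v, set()).add(prev); adj[prev].add(v)
            prev = v
    return adj

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1; q.append(v)
    return dist

l, m, p = 6, 4, 3
adj = octopus(l, m, p)
labeled, tip_mistakes = {}, 0
for i in range(m):                     # the sequence (c_1,+1),(t_{1,p},-1),...
    for v, y in [(('c', i), +1), (('t', i, p), -1)]:
        if labeled:
            d = bfs_dist(adj, v)
            nearest = min(labeled, key=lambda u: d[u])   # 1-NN prediction
            if labeled[nearest] != y and v[0] == 't':
                tip_mistakes += 1
        labeled[v] = y
print(tip_mistakes)                    # m = 4: every tentacle tip is mispredicted
```

Increasing m increases the mistake count of nearest neighbors linearly, while the kernel perceptron's bound stays fixed, exactly the separation claimed in the text.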
Hence, the number of mistakes of the halving algorithm is upper bounded by the logarithm of the cardinality of the concept class. Let K_G^k = {u ∈ {−1, 1}ⁿ : Φ_G(u) = k} be the set of all classifiers with a cut size equal to k on a fixed graph G. The cardinality of K_G^k is upper bounded by (n(n−1) choose k), since any classifier (cut) in K_G^k can be uniquely identified by a choice of k edges and one bit which determines the sign of the vertices in the same partition (we overcount, however, as not every set of edges determines a classifier). The number of mistakes of the halving algorithm is thus upper bounded by O(k log(n/k)). For example, on a line graph with a cut size of 1 the halving algorithm has an upper bound of ⌈log n⌉, while the upper bound for the number of mistakes of the perceptron as given in equation (16) is 5n + 5. Although the halving algorithm has a sharper bound on such large-diameter graphs as the line graph, it unfortunately has a logarithmic dependence on n. This contrasts with the bound of the perceptron, which is essentially independent of n. Thus, the bound for the halving algorithm is roughly sharper on graphs with a diameter ω(log(n/k)), while the perceptron bound is roughly sharper on graphs with a diameter o(log(n/k)). We emphasize that this analysis of upper bounds is quite rough, and sharper bounds for both algorithms could be obtained, for example, by including a term representing the minimal possible cut, that is, the minimum number of edges necessary to disconnect the graph. For the halving algorithm this would enable a better bound on the cardinality of K_G^k (see [13]), while for the perceptron, the larger the connectivity of the graph, the weaker the diameter upper bound in Theorem 3.2 (see for example the discussion of “thick barbells” at the end of section 4). Acknowledgments We wish to thank the anonymous reviewers for their useful comments.
This work was supported by EPSRC Grant GR/T18707/01 and by the IST Programme of the European Community, under the PASCAL Network of Excellence IST-2002-506778.

References

[1] C. Gentile. The robustness of the p-norm algorithms. Machine Learning, 53(3):265–299, 2003.
[2] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In ICML 2002, pages 19–26. Morgan Kaufmann, San Francisco, CA, 2002.
[3] R. I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input spaces. In ICML 2002, pages 315–322. Morgan Kaufmann, San Francisco, CA, 2002.
[4] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML 2003, pages 912–919, 2003.
[5] A. Smola and R. I. Kondor. Kernels and regularization on graphs. In COLT 2003, pages 144–158, 2003.
[6] M. Belkin, I. Matveeva, and P. Niyogi. Regularization and semi-supervised learning on large graphs. In COLT 2004, pages 624–638, Banff, Alberta, 2004. Springer.
[7] T. Zhang and R. Ando. Analysis of spectral kernel design based semi-supervised learning. In Y. Weiss, B. Schölkopf, and J. Platt, editors, NIPS 18, pages 1601–1608. MIT Press, Cambridge, MA, 2006.
[8] M. Herbster, M. Pontil, and L. Wainer. Online learning over graphs. In ICML 2005, pages 305–312, New York, NY, USA, 2005. ACM Press.
[9] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296, 1999.
[10] D. Klein and M. Randić. Resistance distance. Journal of Mathematical Chemistry, 12(1):81–95, 1993.
[11] J. M. Barzdin and R. V. Frievald. On the prediction of general recursive functions. Soviet Math. Doklady, 13:1224–1228, 1972.
[12] N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68:337–404, 1950.
[13] D. Karger and C. Stein. A new approach to the minimum cut problem. JACM, 43(4):601–640, 1996.
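As a concrete aside to the octopus construction of section 5.1, the following sketch (ours, not from the paper) builds a balanced octopus O_{m,p}, computes graph distances by breadth-first search, and runs online nearest-neighbor prediction on the adversarial sequence; as argued above, it errs on every tentacle tip. We take k = 1 for simplicity, and the function names are illustrative.

```python
from collections import deque

def octopus(m, p):
    """Adjacency list of the balanced octopus O_{m,p}: an (m*p+1)-clique head,
    a root r joined to head vertex ('c', 1), and m tentacles of length p."""
    ell = m * p + 1
    adj = {('c', i): set() for i in range(1, ell + 1)}
    for i in range(1, ell + 1):                     # head clique edges
        for j in range(i + 1, ell + 1):
            adj[('c', i)].add(('c', j))
            adj[('c', j)].add(('c', i))
    adj['r'] = {('c', 1)}                           # body edge r -- c_1
    adj[('c', 1)].add('r')
    for i in range(1, m + 1):                       # tentacles; r plays t_{i,0}
        prev = 'r'
        for j in range(1, p + 1):
            v = ('t', i, j)
            adj.setdefault(v, set())
            adj[v].add(prev)
            adj[prev].add(v)
            prev = v
    return adj

def bfs_dist(adj, src):
    """Shortest-path distances from src by breadth-first search."""
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def online_1nn_mistakes(adj, seq):
    """Online 1-NN by graph distance; the empty-memory first trial is
    charitably counted as correct, as in the argument above."""
    seen, mistakes = {}, 0
    for v, y in seq:
        if seen:
            d = bfs_dist(adj, v)
            nearest = min(seen, key=lambda u: d[u])
            if seen[nearest] != y:
                mistakes += 1
        seen[v] = y
    return mistakes

m, p = 4, 3
adj = octopus(m, p)
seq = []
for i in range(1, m + 1):
    seq += [(('c', i), +1), (('t', i, p), -1)]
print(online_1nn_mistakes(adj, seq))  # prints 4: one mistake per tentacle tip
```

With p ≥ 3 each tip t_{i,p} is closer to the already-labeled head vertex c_1 (distance p + 1) than to any previously seen tip (distance 2p), so 1-NN predicts +1 and errs on all m tips.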
2006
An Efficient Method for Gradient-Based Adaptation of Hyperparameters in SVM Models

S. Sathiya Keerthi, Yahoo! Research, 3333 Empire Avenue, Burbank, CA 91504, selvarak@yahoo-inc.com
Vikas Sindhwani, Department of Computer Science, University of Chicago, Chicago, IL 60637, vikass@cs.uchicago.edu
Olivier Chapelle, MPI for Biological Cybernetics, Spemannstraße 38, 72076 Tübingen, olivier.chapelle@tuebingen.mpg.de

Abstract

We consider the task of tuning hyperparameters in SVM models based on minimizing a smooth performance validation function, e.g., smoothed k-fold cross-validation error, using non-linear optimization techniques. The key computation in this approach is that of the gradient of the validation function with respect to hyperparameters. We show that for large-scale problems involving a wide choice of kernel-based models and validation functions, this computation can be done very efficiently, often within just a fraction of the training time. Empirical results show that a near-optimal set of hyperparameters can be identified by our approach with very few training rounds and gradient computations.

1 Introduction

Consider the general SVM classifier model in which, given n training examples {(x_i, y_i)}_{i=1}^n, the primal problem consists of solving

min_{(w,b)} (1/2)‖w‖² + C Σ_{i=1}^n l(o_i, y_i)   (1)

where l denotes a loss function over the labels y_i ∈ {+1, −1} and the outputs o_i on the training set. The machine's output o for any example x is given as o = w · φ(x) − b = Σ_{j=1}^n α_j y_j k(x, x_j) − b, where the α_j are the dual variables, b is the threshold parameter, and, as usual, computations involving φ are handled using the kernel function k(x, z) = φ(x) · φ(z). For example, the Gaussian kernel is given by

k(x, z) = exp(−γ‖x − z‖²)   (2)

The regularization parameter C and kernel parameters such as γ comprise the vector h of hyperparameters in the model.
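As a concrete illustration of the notation above, here is a minimal sketch of the Gaussian kernel (2) and of the output o(x) = Σ_j α_j y_j k(x, x_j) − b. It assumes α, y, and b are already available from some SVM solver; the array and function names are our own, not part of the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma):
    """k(x, z) = exp(-gamma * ||x - z||^2), for all pairs of rows of X and Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def svm_output(X_train, y_train, alpha, b, gamma, X):
    """o(x) = sum_j alpha_j * y_j * k(x, x_j) - b, evaluated for each row x of X."""
    K = gaussian_kernel(X, X_train, gamma)   # shape (n_eval, n_train)
    return K @ (alpha * y_train) - b

# toy check: with one training point, o(x_1) = alpha_1 * y_1 * k(x_1, x_1) - b
X_tr = np.array([[0.0, 0.0]])
o = svm_output(X_tr, np.array([1.0]), np.array([2.0]), 0.5, gamma=1.0, X=X_tr)
print(o)  # [1.5]
```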
h is usually chosen by optimizing a validation measure (such as the k-fold cross-validation error) on a grid of values (e.g., a uniform grid in the (log C, log γ) space). Such a grid search is usually expensive. In particular, when n is large, this search is so time-consuming that one usually resorts to either default hyperparameter values or crude search strategies. The problem becomes more acute when there are more than two hyperparameters. For example, for feature weighting/selection purposes one may wish to use the following ARD-Gaussian kernel:

k(x, z) = exp(−Σ_t γ_t ‖x_t − z_t‖²)   (3)

where γ_t is the weight on the t-th feature. In such cases, a grid-based search is ruled out. In Figure 1 (see section 5) we show contour plots of the performance of an SVM on the (log C, log γ) plane for a real-world binary classification problem. These plots show that learning performance behaves "nicely" as a function of the hyperparameters. Intuitively, as C and γ are varied, one expects the SVM to transition smoothly from underfitting solutions to overfitting solutions. Given that this phenomenon seems to occur routinely on real-world learning tasks, a very appealing and principled alternative to grid search is to consider a differentiable version of the performance validation function and invoke non-linear gradient-based optimization techniques for adapting hyperparameters. Such an approach requires the computation of the gradient of the validation function with respect to h. Chapelle et al. (2002) give a number of possibilities for such an approach. One of their most promising methods is to use a differentiable version of the leave-one-out (LOO) error. A major disadvantage of this method is that it requires the expensive computation and storage of the inverse of a kernel sub-matrix corresponding to the support vectors.
It is worth noting that even if, on some large-scale problems, the support vector set is of a manageable size at the optimal hyperparameters, the corresponding set can be large when the hyperparameter vector is away from the optimum; on many problems, such a far-off region of the hyperparameter space is usually traversed during the adaptation process. We highlight the contributions of this paper. (1) We consider differentiable versions of validation-set-based objective functions for model selection (such as the k-fold error) and give an efficient method for computing the gradient of this function with respect to h. Our method does not require the computation of the inverse of a large kernel sub-matrix. Instead, it only needs a single linear system of equations to be solved, which can be done either by decomposition or by conjugate-gradient techniques. In essence, the cost of computing the gradient with respect to h is about the same as, and usually much less than, the cost of solving (1) for a given h. (2) Our method is applicable to a wide range of validation objective functions and SVM models that may involve many hyperparameters. For example, a variety of loss functions can be used together with multiclass classification, regression, structured output or semi-supervised SVM algorithms. (3) Large-scale empirical results show that with BFGS optimization, trying only about 10–20 hyperparameter points leads to the determination of optimal hyperparameters. Moreover, even compared to a fine grid search, the gradient procedure provides a more precise placement of hyperparameters, leading to better generalization performance. The benefit in terms of efficiency over the grid approach is evident even with just two hyperparameters. We also show the usefulness of our method for tuning more than two hyperparameters when optimizing validation functions such as the F measure and the weighted error rate. This is particularly useful for imbalanced problems.
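A minimal sketch of the gradient-based alternative to grid search, on a synthetic quadratic stand-in for a smoothed validation function over h = (log C, log γ); the real f would require SVM training and the gradient machinery of section 3. The toy surface, step size, and all names are our own assumptions, and the loose termination rule is the one the paper uses in its implementation.

```python
import numpy as np

def f(h):
    """Stand-in for a smooth validation function of h = (log C, log gamma);
    in the paper this would be, e.g., the smoothed k-fold error."""
    return 0.15 + 0.05 * (h[0] - 2.0) ** 2 + 0.08 * (h[1] + 1.0) ** 2

def grad_f(h):
    """Analytic gradient of the toy surface; the paper computes the real
    gradient via the adjoint linear system of section 3."""
    return np.array([0.10 * (h[0] - 2.0), 0.16 * (h[1] + 1.0)])

def tune(h0, step=1.0, max_iter=200):
    h = np.asarray(h0, float)
    fh = f(h)
    for _ in range(max_iter):
        h_new = h - step * grad_f(h)
        f_new = f(h_new)
        # the paper's loose termination rule: |f(h_{k+1}) - f(h_k)| <= 1e-3 |f(h_k)|
        if abs(f_new - fh) <= 1e-3 * abs(fh):
            return h_new
        h, fh = h_new, f_new
    return h

h_star = tune([0.0, 0.0])
print(h_star)  # approaches (2, -1), the minimizer of the toy surface
```

The paper uses BFGS rather than plain gradient descent; the point of the sketch is only that a handful of (f, ∇f) evaluations replaces an exhaustive grid.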
This paper is organized as follows: In section 2, we discuss the general class of SVM models to which our method can be applied. In section 3, we describe our framework and provide the details of the gradient computation for general validation functions. In section 4, we discuss how to develop differentiable versions of several common performance validation functions. Empirical results are presented in section 5. We conclude the paper in section 6. Due to space limitations, several details have been omitted but can be found in the technical report (Keerthi et al. (2006)).

2 SVM Classification Models

In this section, we discuss the assumptions required for our method to be applicable. Consider SVM classification models of the form in (1). We assume that the kernel function k is a continuously differentiable function of h. Three commonly used SVM loss functions are: (1) hinge loss; (2) squared hinge loss; and (3) squared loss. In each of these cases, the solution of (1) is obtained by computing the vector α that solves a dual problem. The solution usually leads to a linear system relating α and b:

P [α; b] = q   (4)

where P and q are, in general, functions of h. We make the following assumption: locally around h (at which we are interested in calculating the gradient of the validation function, to be defined soon), P and q are continuously differentiable functions of h. We write down P and q for the hinge loss function and discuss the validity of this assumption; the details for the other loss functions are similar.

Hinge loss. l(o_i, y_i) = max{0, 1 − y_i o_i}. After the solution of (1), the training set indices get partitioned into three sets: I_0 = {i : α_i = 0}, I_c = {i : α_i = C} and I_u = {i : 0 < α_i < C}. Let α_0, α_c, α_u, y_c, y_u, e_c, e_u, Ω_uc, Ω_uu, etc. be appropriately defined vectors and matrices. Then (4) is given by α_0 = 0, α_c = C e_c, and

[Ω_uu  −y_u; −y_u^T  0] [α_u; b] = [e_u − Ω_uc α_c; y_c^T α_c]   (5)

If the partitions I_0, I_c and I_u do not change locally around a given h, then the assumption above holds.
Generically, this happens for almost all h. The modified Huber loss function can also be used, though the derivation of (4) for it is more complex than for the three loss functions mentioned above. Recently, the weighted hinge loss with asymmetric margins (Grandvalet et al., 2005) has been explored for treating imbalanced problems.

Weighted hinge loss. l(o_i, y_i) = C_i max{0, m_i − y_i o_i}, where C_i = C_+, m_i = m_+ if y_i = 1 and C_i = C_−, m_i = m_− if y_i = −1. Because C_+ and C_− are present, the hyperparameter C in (1) can be omitted. The SVM model with weighted hinge loss has four extra hyperparameters, C_+, C_−, m_+ and m_−, apart from the kernel hyperparameters. The methods in this paper allow the possibility of efficiently tuning all these parameters together with the kernel parameters. The method described in this paper is not special to classification models only. It extends to a wide class of kernel methods for which the optimality conditions for minimizing a training objective function can be expressed as a linear system (4) in a continuously differentiable manner.¹ These include many models for multiclass classification, regression, structured output and semi-supervised learning (see Keerthi et al. (2006)).

3 The gradient of a validation function

Suppose that, for the purpose of hyperparameter tuning, we are given a validation scheme involving a small number of (training set, validation set) partitions, such as: (1) using a single validation set, (2) k-fold cross-validation, or (3) averaging over k randomly chosen (training set, validation set) partitions. Our method applies to any of these three schemes. To keep notation simple, we explain the ideas only for scheme (1) and expand on the other schemes towards the end of this section. Note that throughout the hyperparameter optimization process, the training–validation splits are fixed. Let {x̃_l, ỹ_l}_{l=1}^{ñ} denote the validation set.
Let K̃_{li} = k(x̃_l, x_i) denote a kernel evaluation between an element of the validation set and an element of the training set. The output on the l-th validation example is õ_l = Σ_i α_i y_i K̃_{li} − b which, for convenience, we will rewrite as

õ_l = ψ_l^T β   (6)

where β is a vector containing α and b, and ψ_l is a vector containing y_i K̃_{li}, i = 1, ..., n, with −1 as the last element (corresponding to b). Let us suppose that the model selection problem is formulated as a non-linear optimization problem:

h* = argmin_h f(õ_1, ..., õ_ñ)   (7)

where f is a differentiable validation function of the outputs õ_l, which implicitly depend on h. In the next section, we will outline the construction of such functions for criteria like the error rate, the F measure, etc. We now discuss the computation of ∇_h f. Let θ denote a generic parameter in h, and let us represent the partial derivative of some quantity, say v, with respect to θ as v̇. Before writing down expressions for ḟ, let us discuss how to get β̇. Differentiating (4) with respect to θ gives

P β̇ + Ṗ β = q̇  ⇒  β̇ = P^{−1}(q̇ − Ṗ β)   (8)

Now let us write down ḟ:

ḟ = Σ_{l=1}^{ñ} (∂f/∂õ_l) õ̇_l   (9)

where õ̇_l is obtained by differentiating (6):

õ̇_l = ψ_l^T β̇ + ψ̇_l^T β   (10)

The computation of β̇ in (8) is the most expensive step, mainly because it requires P^{−1}. Note that, for the hinge loss, P^{−1} can be computed in a somewhat cheaper way: only a matrix of the dimension of I_u needs to be inverted. Even then, in large-scale problems the dimension of the matrix to be inverted can become so large that even storing it may be a problem; and even when large storage is possible, the inverse can be very expensive. Most times, the effective rank of P is much smaller than its dimension.

¹ In fact, the main ideas easily extend when the optimality conditions form a non-linear system in (α, b) (e.g., in kernel logistic regression).
Thus, instead of computing P^{−1} in (8), we can instead solve

P β̇ = (q̇ − Ṗ β)   (11)

for β̇ approximately, using decomposition methods or iterative methods such as conjugate gradients. This can improve efficiency as well as take care of memory issues, by storing P only partially and computing the remaining parts of P as and when needed. Since the right-hand-side vector (q̇ − Ṗ β) in (11) changes for each different θ with respect to which we are differentiating, we need to solve (11) for each element of h. If the number of elements of h is not small (say, we want to use (3) with the MNIST dataset, which has more than 700 features) then, even with (11), the computations can still remain very expensive. We now give a simple trick showing that, if the gradient calculations are re-organized, obtaining the solution of just a single linear system suffices for computing the full gradient of f with respect to all elements of h. Let us denote the coefficient of õ̇_l in the expression for ḟ in (9) by δ_l, i.e.,

δ_l = ∂f/∂õ_l   (12)

Using (10) and plugging the expression for β̇ from (8) into (9) gives

ḟ = Σ_l δ_l õ̇_l = Σ_l δ_l (ψ_l^T P^{−1}(q̇ − Ṗ β) + ψ̇_l^T β) = d^T (q̇ − Ṗ β) + (Σ_l δ_l ψ̇_l)^T β   (13)

where d is the solution of

P^T d = Σ_l δ_l ψ_l   (14)

The beauty of the reorganization in (13) is that d is the same for all variables θ in h with respect to which the differentiation is being done. Thus (14) needs to be solved only once. In concurrent work, Seeger (2006) has used a similar idea for kernel logistic regression. As a word of caution, note that P may not be symmetric; see, e.g., the P arising from (5) for the hinge loss case. Also, the parts corresponding to zero components should be omitted from the calculations, and the special structure of P should be utilized; e.g., for the hinge loss, when computing Ṗ β the parts of Ṗ corresponding to α_0 (see (5)) can be ignored. The linear system (14) can be efficiently solved using conjugate-gradient techniques.
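The reorganization in (13)–(14) can be checked numerically: for the first term of (13), solving the single adjoint system P^T d = Σ_l δ_l ψ_l reproduces what one would get by solving (11) separately for each hyperparameter. The sketch below uses random stand-in data and our own names; the second term (Σ_l δ_l ψ̇_l)^T β is omitted since it is identical in both routes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nval, nhyp = 8, 5, 3
P = rng.normal(size=(n, n)) + 5 * np.eye(n)   # well-conditioned, not symmetric
psi = rng.normal(size=(nval, n))              # rows are the psi_l vectors
delta = rng.normal(size=nval)                 # delta_l = df/do_l, equation (12)
rhs = rng.normal(size=(nhyp, n))              # stand-in (q_dot - P_dot beta) per theta

# naive route: one linear solve per hyperparameter, as in equation (11)
naive = np.array([delta @ (psi @ np.linalg.solve(P, r)) for r in rhs])

# adjoint route: one solve of P^T d = sum_l delta_l psi_l, equation (14)
d = np.linalg.solve(P.T, psi.T @ delta)
adjoint = rhs @ d                             # d^T (q_dot - P_dot beta), all theta at once

print(np.allclose(naive, adjoint))  # True
```

The identity holds because δ^T ψ P^{−1} r = (P^{−T} ψ^T δ)^T r for every right-hand side r, which is exactly why d can be shared across all hyperparameters.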
The sequence of steps for the computation of the full gradient of f with respect to h is as follows. First compute δ_l from (12); for various choices of the validation function, we outline this computation in the next section. Then solve (14) for d. Finally, for each θ, use (13) to get all the derivatives of f. The computation of Ṗ β has to be performed for each hyperparameter separately. In problems with many hyperparameters, this is the most expensive part of the gradient computation. Note that in some cases, e.g., θ = C, Ṗ β is obtained immediately. For θ = γ or γ_t, when using (2), (3), one can cache pairwise distance computations while computing the kernel matrix. We have found (see section 5) the cost of computing the gradient of f with respect to h to be usually much less than the cost of solving (1) and then obtaining f. We can also employ the above ideas in a validation scheme where one uses k training–validation splits (e.g., in k-fold cross-validation). In this case, for each partition one obtains the linear system (4), the corresponding validation outputs (6) and the linear system (14). The gradient is simply computed by summing over the k partitions, i.e., ḟ = Σ_{j=1}^k ḟ^{(j)}, where ḟ^{(j)} is given by (13) using the quantities P, q, d, etc. associated with the j-th partition. The model selection problem (7) may now be solved using, e.g., quasi-Newton methods such as BFGS, which only require the function value and gradient at a hyperparameter setting. In particular, reaching the minimizer of f too closely is not important. In our implementation we terminate the optimization iterations when the following loose criterion is met: |f(h_{k+1}) − f(h_k)| ≤ 10^{−3} |f(h_k)|, where h_{k+1} and h_k are consecutive iterates in the optimization process. A general concern with descent methods is the presence of local minima.
In section 5, we make some encouraging empirical observations in this regard; e.g., local minima problems did not occur for the (C, γ) tuning task, and for several other tasks, starting points that work surprisingly well could be easily obtained.

4 Smooth validation functions

We consider validation functions that are general functions of the confusion matrix, of the form f(tp, fp), where tp is the number of true positives and fp is the number of false positives. Let u(z) denote the unit step function, which is 0 when z < 0 and 1 otherwise. Denote ũ_l = u(ỹ_l õ_l), which evaluates to 1 if the l-th example is correctly classified and 0 otherwise. Then tp and fp can be written as

tp = Σ_{l : ỹ_l = +1} ũ_l,   fp = Σ_{l : ỹ_l = −1} (1 − ũ_l).

Let ñ_+ and ñ_− be the number of validation examples in the positive and negative classes. The most commonly used validation function is the error rate. Error rate (er) is simply the fraction of incorrect predictions, i.e., er = (ñ_+ − tp + fp)/ñ. For classification problems with imbalanced classes, it is usual to consider either a weighted error rate or a function of precision and recall, such as the F measure. Weighted error rate (wer) is given by wer = (ñ_+ − tp + η fp)/(ñ_+ + η ñ_−), where η is the ratio of the cost of misclassifications of the negative class to that of the positive class. F measure (F) is the harmonic mean of precision and recall: F = 2 tp/(ñ_+ + tp + fp). Alternatively, one may want to maximize precision under a recall constraint, maximize the area under the ROC curve, or maximize the precision–recall break-even point. See Keerthi et al. (2006) for a discussion of how to treat these cases.
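The count-based measures above can be sketched as follows, together with the sigmoidal smoothing of the step function that the paper develops next (we assume σ_0 = 0 and a fixed σ_1; the code and its names are illustrative, not the paper's implementation):

```python
import numpy as np

def counts(y, o, smooth=False, sigma1=10.0):
    """tp and fp from labels y in {+1, -1} and real-valued outputs o.
    smooth=True replaces the step u(y*o) by a sigmoid (sigma_0 = 0 assumed)."""
    if smooth:
        u = 1.0 / (1.0 + np.exp(-sigma1 * y * o))
    else:
        u = (y * o >= 0).astype(float)       # u(z) = 1 for z >= 0, else 0
    tp = u[y == +1].sum()                    # correctly classified positives
    fp = (1.0 - u[y == -1]).sum()            # misclassified negatives
    return tp, fp

def metrics(y, o, eta=0.01, **kw):
    """er, wer and F as defined in the text, from (tp, fp)."""
    tp, fp = counts(y, o, **kw)
    n_pos, n_neg = (y == +1).sum(), (y == -1).sum()
    er = (n_pos - tp + fp) / y.size
    wer = (n_pos - tp + eta * fp) / (n_pos + eta * n_neg)
    F = 2 * tp / (n_pos + tp + fp)
    return er, wer, F

y = np.array([+1, +1, +1, -1, -1])
o = np.array([2.0, -1.0, 3.0, -2.0, 1.5])    # one false negative, one false positive
print(metrics(y, o))                         # er = 0.4 with tp = 2, fp = 1
```

With a large σ_1 the smooth counts closely track the discrete ones, which is what makes gradient-based tuning of h possible.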
It is common practice to evaluate measures like precision, recall and the F measure while varying the threshold on the real-valued classifier output; i.e., at any given threshold σ_0, tp and fp can be redefined in terms of

ũ_l = u(ỹ_l (õ_l − σ_0))   (15)

For imbalanced problems one may wish to maximize a score such as the F measure over all values of σ_0. In such cases, it is appropriate to incorporate σ_0 as an additional hyperparameter that needs to be tuned. Such bias-shifting is also particularly useful as a compensation mechanism for the mismatch between the training objective function and the validation function; often one uses an SVM as the underlying classifier even though it is not explicitly trained to minimize the validation function that the practitioner truly cares about. In section 5, we make some empirical observations related to this point. The validation functions discussed above are based on discrete counts. In order to use gradient-based methods, smooth functions of h are needed. To develop smooth versions of the validation functions, we define s̃_l, a sigmoidal approximation to ũ_l in (15), of the following form:

s̃_l = 1/[1 + exp(−σ_1 ỹ_l (õ_l − σ_0))]   (16)

where σ_1 > 0 is a sigmoidal scale factor. In general, σ_0, σ_1 may be functions of the validation outputs. (As discussed above, one may alternatively wish to treat σ_0 as an additional hyperparameter.) The scale factor σ_1 influences how closely s̃_l approximates the step function ũ_l and hence controls the degree of smoothness in building the sigmoidal approximation. As the hyperparameter space is probed, the magnitude of the outputs can vary quite a bit; σ_1 takes the scale of the outputs into account. Below we discuss various methods to set σ_0, σ_1. We build a differentiable version of such a function by simply replacing ũ_l by s̃_l. Thus, we have f = f(s̃_1, ..., s̃_ñ). The value of δ_l in (12) is given by

δ_l = (∂f/∂s̃_l)(∂s̃_l/∂õ_l) + (Σ_r (∂f/∂s̃_r)(∂s̃_r/∂σ_0)) (∂σ_0/∂õ_l) + (Σ_r (∂f/∂s̃_r)(∂s̃_r/∂σ_1)) (∂σ_1/∂õ_l)   (17)

where the partial derivatives of s̃_l with respect to õ_l, σ_0, σ_1 can be easily derived from (16), and ∂f/∂s̃_l = (∂f/∂tp)(∂tp/∂s̃_l) + (∂f/∂fp)(∂fp/∂s̃_l).

[Figure 1: Performance contours for IJCNN with 2000 training points, over the (log C, log γ) plane; left panel: smooth validation error rate (er), right panel: test error rate. The sequence of points generated by Grad is shown by squares (best in red); the point chosen by Grid is shown by a red circle.]

We now discuss three methods to compute the sigmoidal parameters σ_0, σ_1. For each of these methods, the partial derivatives of σ_0, σ_1 with respect to õ_l can be obtained (Keerthi et al. (2006)) and used for computing (17).

Direct method. Here we simply set σ_0 = 0, σ_1 = t/ρ, where ρ denotes the standard deviation of the outputs {õ_l} and t is a constant heuristically set to some fixed value in order to approximate the step function well. In our implementation we use t = 10.

Hyperparameter bias method. Here we treat σ_0 as a hyperparameter and set σ_1 as above.

Minimization method. In this method, we obtain σ_0, σ_1 by sigmoidal fitting based on the unconstrained minimization of some smooth criterion N, i.e., (σ_0, σ_1) = argmin_{ℝ²} N. A natural choice of N is based on Platt's method (Platt (1999)), where s̃_l is interpreted as the posterior probability that the class of the l-th validation example is ỹ_l, and we take N to be the negative log-likelihood: N = N_nll = −Σ_l log(s̃_l). Sigmoidal fitting based on N_nll has also been proposed previously in Chapelle et al. (2002). The probabilistic error rate per = Σ_l (1 − s̃_l)/ñ and f = N_nll are suitable validation functions which go well with the choice N = N_nll.

5 Empirical Results

We demonstrate the effectiveness of our method on several binary classification problems. The SVM model with hinge loss was used. SVM training was done using the SMO algorithm. Five-fold cross-validation was used to form the validation functions.
Four datasets were used: Adult, IJCNN, Vehicle and Splice. The first three were taken from http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ and Splice was taken from http://ida.first.fraunhofer.de/~raetsch/. The numbers of examples/features in these datasets are: Adult: 32561/123; IJCNN: 141691/22; Vehicle: 98528/100; and Splice: 3175/60. For each dataset, training sets of different sizes were chosen in a class-wise stratified fashion; the remaining examples formed the test set. The Gaussian kernel (2) and the ARD-Gaussian kernel (3) were used. For (C, γ) tuning with the Gaussian kernel, we also tried the popular grid method over a 15 × 15 grid of values. For (C, γ) tuning with the gradient method, the starting point C = γ = 1 was used.

Comparison of validation functions. Figure 1 shows the contours of the smoothed validation error rate and the actual test error rate for the IJCNN dataset with 2000 training examples on the (log C, log γ) plane. Grid and Grad respectively denote the grid and the gradient methods applied to the (C, γ) tuning task. We used f = er smoothed with the direct method for Grad. It can be seen that the contours are quite similar. We also generated corresponding contours (omitted) for f = per and f = N_nll (see the end of section 4) and found that the validation er smoothed with the direct method better represents the test error rate. Figure 1 also shows that the gradient method very quickly plunges into the high-performance region of the (C, γ) space.

Comparison of Grid and Grad methods. For various training set sizes of IJCNN, we compare in Table 1 the speed and generalization performance of Grid and Grad. Clearly, Grad is much more efficient than Grid. The good speed improvement is seen even at small training set sizes.
Although the efficiency of Grid can be improved in certain ways (say, by performing a crude search followed by a refined search, or by avoiding unnecessary exploration of difficult regions in the hyperparameter space), Grad determines the optimal hyperparameters more precisely. Table 2 compares Grid and Grad on the Adult and Vehicle datasets for various training sizes. Though the generalization performances of the two methods are close, Grid is much slower.

Table 1: Comparison of Grid, Grad & Grad-ARD on IJCNN & Splice. nf = number of hyperparameter vectors tried (for Grid, nf = 225); cpu = cpu time in minutes; erate = % test error rate.

                  Grid              Grad               Grad-ARD
   ntrg       cpu    erate     nf     cpu   erate    nf     cpu   erate
 IJCNN
   2000     10.03     2.95     11    4.58    2.87    28    5.63    2.65
   4000     38.77     2.42     12   11.40    2.42    13    8.40    2.14
   8000    218.92     1.76     14   68.58    1.77    17   38.58    1.50
  16000   1130.37     1.24     12  127.03    1.26    20  154.03    1.08
  32000   5331.15     0.91      9  382.20    0.91     7  269.16    0.82
 Splice
   2000     11.42     9.19     13    7.57    8.17    37   35.04    3.49

Table 2: Comparison of the Grad & Grid methods on Adult & Vehicle. Definitions of nf, cpu & erate are as in Table 1. For Vehicle and ntrg = 16000, Grid was discontinued after 5 days of computation.

                      Adult                                Vehicle
              Grad             Grid              Grad              Grid
   ntrg    nf    cpu   erate     cpu   erate    nf     cpu   erate      cpu   erate
   2000     9   3.62   16.21    8.66   16.14     7    2.50   13.58    15.25   13.84
   4000    16  15.98   15.64   37.53   15.95     5    8.60   13.29   135.28   13.30
   8000    10  52.17   15.69  306.25   15.59     9   83.10   12.84  1458.12   12.82
  16000     6 256.40   15.40 3667.90   15.37     6  360.88   12.58       –       –

Feature weighting experiments. To study the effectiveness of our gradient-based approach when many hyperparameters are present, we use the ARD-Gaussian kernel (3) and tune C together with all the γ_t's. As before, we used f = er smoothed with the direct method. The solution for the Gaussian kernel was seeded as the starting point of the optimization. Results are reported in Table 1 as Grad-ARD, where cpu denotes the extra time for this optimization.
We see that Grad-ARD achieves significant improvements in generalization performance over Grad without increasing the computational cost by much, even though a large number of hyperparameters are being tuned.

Maximizing the F measure by threshold adjustment. In section 4 we mentioned the possible value of threshold adjustment when the validation/test function of interest differs from the error rate. We now illustrate this on the Adult dataset with the F measure. The size of the training set is 2000, and the Gaussian kernel (2) was used. We implemented two methods: (1) we set σ_0 = 0 and tuned only C and γ; (2) we tuned the three hyperparameters C, γ and σ_0. We ran the methods on ten different random training set/test set splits. Without σ_0, the mean (standard deviation) of the F measure values on 5-fold cross-validation and on the test set were 0.6385 (0.0062) and 0.6363 (0.0081). With σ_0, the corresponding values improved to 0.6635 (0.0095) and 0.6641 (0.0044). Clearly, the use of σ_0 yields a very significant improvement in the F measure. The ability to easily include the threshold as an extra hyperparameter is a very useful advantage of our method.

Optimizing the weighted error rate in imbalanced problems. In imbalanced problems, where the proportion of examples in the positive class is small, one usually minimizes the weighted error rate wer (see section 4) with a small value of η. One can think of four possible methods in which, apart from the kernel parameter γ and the threshold σ_0 (we used the hyperparameter bias method for smoothing), we include other parameters by considering sub-cases of the weighted hinge loss model (see section 2): (1) usual SVM: set m_+ = m_− = 1, C_+ = C, C_− = C and tune C; (2) set m_+ = m_− = 1, C_+ = C, C_− = ηC and tune C; (3) set m_+ = m_− = 1 and tune C_+ and C_−, treating them as independent parameters; (4) use the full weighted hinge loss model and tune C_+, C_−, m_+ and m_−.
To compare the performance of these methods, we took the IJCNN dataset, randomly choosing 2000 training examples and keeping the remaining examples as the test set. Ten such random splits were tried, with η = 0.01. The top half of Table 3 reports the weighted error rates associated with validation and test. The weighted hinge loss model performs best.

Table 3: Mean (standard deviation) of weighted (η = 0.01) error rate values on the IJCNN dataset.

                   C+ = C, C− = C     C+ = C, C− = ηC    C+, C− tuned       Full weighted hinge
 With σ0
   Validation      0.0571 (0.0183)    0.0419 (0.0060)    0.0490 (0.0104)    0.0357 (0.0063)
   Test            0.0638 (0.0160)    0.0549 (0.0098)    0.0571 (0.0136)    0.0461 (0.0078)
 Without σ0
   Validation      0.1953 (0.0557)    0.1051 (0.0164)    0.1008 (0.0607)    0.0364 (0.0061)
   Test            0.1861 (0.0540)    0.0897 (0.0154)    0.0969 (0.0502)    0.0469 (0.0076)

The presence of the threshold parameter σ_0 is important for the first three methods. The bottom half of Table 3 gives the performance statistics of the methods when the threshold is not tuned. Interestingly, for the weighted hinge loss method, tuning the threshold has little effect; Grandvalet et al. (2005) also make the observation that this method appropriately sets the threshold on its own.

Cost break-up. In the gradient-based solution process, each step of the optimization requires the evaluation of f and ∇_h f. In doing this, three steps take up the bulk of the computational cost: (1) training using the SMO algorithm; (2) the solution of the linear system in (14); and (3) the remaining computations associated with the gradient, of which the computation of Ṗ β in (13) is the major part. We studied the relative break-up of these costs on the IJCNN dataset (training set sizes ranging from 2000 to 32000) for solution by the Grad and Grad-ARD methods. On average, the cost of solution by SMO forms 85 to 95% of the total computational time; thus, the gradient computation is very cheap.
We also found that the Ṗ β cost of Grad-ARD does not become large, in spite of the fact that 23 hyperparameters are tuned there. This is mainly due to the efficient reuse of terms in the ARD-Gaussian calculations mentioned earlier.

6 Conclusion

The main contribution of this paper is a fast method of computing the gradient of a validation function with respect to hyperparameters for a range of SVM models; together with a non-linear optimization technique, it can be used to efficiently determine the optimal values of many hyperparameters. Even in models with just two hyperparameters, our approach is faster and offers a more precise hyperparameter placement than the grid approach. Our approach is of particularly great value for large-scale problems. The ability to tune many hyperparameters easily should be used with care. On a text classification problem involving many thousands of features, we placed an independent weight on each feature and optimized all these weights (together with C), only to find severe overfitting taking place. So, for a given problem, it is important to choose the set of hyperparameters carefully, in accordance with the richness of the training set.

References

S. S. Keerthi, V. Sindhwani and O. Chapelle. An efficient method for gradient-based adaptation of hyperparameters in SVM models. Technical report, 2006.
O. Chapelle, V. Vapnik, O. Bousquet and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46:131–159, 2002.
Y. Grandvalet, J. Mariéthoz and S. Bengio. A probabilistic interpretation of SVMs with an application to unbalanced classification. NIPS, 2005.
J. Platt. Probabilities for support vector machines. In Advances in Large Margin Classifiers. MIT Press, Cambridge, Massachusetts, 1999.
M. Seeger. Cross validation optimization for structured Hessian kernel methods. Technical report, MPI for Biological Cybernetics, Tübingen, Germany, May 2006.
Analysis of Empirical Bayesian Methods for Neuroelectromagnetic Source Localization David Wipf1, Rey Ramírez2, Jason Palmer1,2, Scott Makeig2, & Bhaskar Rao1 ∗ 1Signal Processing and Intelligent Systems Lab 2Swartz Center for Computational Neuroscience University of California, San Diego 92093 {dwipf,japalmer,brao}@ucsd.edu, {rey,scott}@sccn.ucsd.edu Abstract The ill-posed nature of the MEG/EEG source localization problem requires the incorporation of prior assumptions when choosing an appropriate solution out of an infinite set of candidates. Bayesian methods are useful in this capacity because they allow these assumptions to be explicitly quantified. Recently, a number of empirical Bayesian approaches have been proposed that attempt a form of model selection by using the data to guide the search for an appropriate prior. While seemingly quite different in many respects, we apply a unifying framework based on automatic relevance determination (ARD) that elucidates various attributes of these methods and suggests directions for improvement. We also derive theoretical properties of this methodology related to convergence, local minima, and localization bias and explore connections with established algorithms. 1 Introduction Magnetoencephalography (MEG) and electroencephalography (EEG) use an array of sensors to take EM field measurements from on or near the scalp surface with excellent temporal resolution. In both cases, the observed field is generated by the same synchronous, compact current sources located within the brain. Because the mapping from source activity configuration to sensor measurement is many to one, accurately determining the spatial locations of these unknown sources is extremely difficult. The relevant localization problem can be posed as follows: The measured EM signal is B ∈ ℜ^{db×n}, where db equals the number of sensors and n is the number of time points at which measurements are made. 
The unknown sources S ∈ ℜ^{ds×n} are the (discretized) current values at ds candidate locations distributed throughout the cortical surface. These candidate locations are obtained by segmenting a structural MR scan of a human subject and tessellating the gray matter surface with a set of vertices. B and S are related by the generative model B = LS + E, (1) where L is the so-called lead-field matrix, the i-th column of which represents the signal vector that would be observed at the scalp given a unit current source at the i-th vertex with a fixed orientation (flexible orientations can be incorporated by including three columns per location, one for each directional component). Multiple methods based on the physical properties of the brain and Maxwell’s equations are available for this computation. Finally, E is a noise term with columns drawn independently from N(0, Σϵ). To obtain reasonable spatial resolution, the number of candidate source locations will necessarily be much larger than the number of sensors (ds ≫ db). The salient inverse problem then becomes the ill-posed estimation of these activity or source regions, which are reflected by the nonzero rows of the source estimate matrix Ŝ. Because the inverse model is underdetermined, all efforts at source reconstruction are heavily dependent on prior assumptions, which in a Bayesian framework are embedded in the distribution p(S). Such a prior is often considered to be fixed and known, as in the case of minimum ℓ2-norm approaches, minimum current estimation (MCE) [6, 18], FOCUSS [2, 5], and sLORETA [10]. Alternatively, a number of empirical Bayesian approaches have been proposed that attempt a form of model selection by using the data to guide the search for an appropriate prior. Examples include variational Bayesian methods [14, 15], hierarchical covariance component models [4, 8, 11], and automatic relevance determination (ARD) [7, 9, 12, 13, 17]. (∗This work was supported by NSF grants DGE-0333451 and IIS-0613595.) 
While seemingly quite different in some respects, we present a generalized framework that encompasses many of these methods and points to connections between algorithms. We also analyze several theoretical properties of this framework related to computational/convergence issues, local minima, and localization bias. Overall, we envision that by providing a unifying perspective on these approaches, neuroelectromagnetic imaging practitioners will be better able to assess the relative strengths with respect to a particular application. This process also points to several promising directions for future research. 2 A Generalized Bayesian Framework for Source Localization In this section, we present a general-purpose Bayesian framework for source localization. In doing so, we focus on the common ground between many of the methods discussed above. While derived using different assumptions and methodology, they can be related via the notion of automatic relevance determination [9] and evidence maximization [7]. To begin, we invoke the noise model from (1), which fully defines the assumed likelihood p(B|S). While the unknown noise covariance can also be parameterized and estimated from the data, for simplicity we assume that Σϵ is known and fixed. Next we adopt the following source prior for S: p(S; Σs) = N(0, Σs), Σs = Σ_{i=1}^{dγ} γiCi, (2) where the distribution is understood to apply independently to each column of S. Here γ = [γ1, . . . , γdγ]^T is a vector of dγ nonnegative hyperparameters that control the relative contribution of each covariance basis matrix Ci, all of which we assume are fixed and known. The unknown hyperparameters can be estimated from the data by first integrating out the unknown sources S, giving p(B; Σb) = ∫ p(B|S) p(S; Σs) dS = N(0, Σb), (3) where Σb = Σϵ + LΣsL^T. A hyperprior p(γ) can also be included if desired. 
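The covariance model in (2)–(3) is simple to state concretely; a minimal NumPy sketch with toy dimensions and random data of our own choosing (all variable names are ours, not the paper's):

```python
import numpy as np

def sensor_covariance(L, gammas, Cs, Sigma_e):
    # Sigma_b = Sigma_e + L (sum_i gamma_i C_i) L^T, as in (3)
    Sigma_s = sum(g * C for g, C in zip(gammas, Cs))
    return Sigma_e + L @ Sigma_s @ L.T

def neg_log_evidence(B, Sigma_b):
    # -log p(B; Sigma_b) up to constants: n log|Sigma_b| + trace(B^T Sigma_b^{-1} B)
    n = B.shape[1]
    _, logdet = np.linalg.slogdet(Sigma_b)
    return n * logdet + np.trace(B.T @ np.linalg.solve(Sigma_b, B))

rng = np.random.default_rng(0)
db, ds, n = 4, 6, 10
L = rng.standard_normal((db, ds))
Cs = [np.outer(e, e) for e in np.eye(ds)]   # one dipolar component per location
Sigma_b = sensor_covariance(L, np.ones(ds), Cs, 0.1 * np.eye(db))
B = rng.standard_normal((db, n))
cost = neg_log_evidence(B, Sigma_b)
```

Evidence maximization then amounts to minimizing this negative log marginal likelihood over the nonnegative γi, which is exactly the cost function (7) introduced in Section 2.1.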
This expression is then maximized with respect to the unknown hyperparameters, a process referred to as type-II maximum likelihood or evidence maximization [7, 9] or restricted maximum likelihood [4]. Thus the optimization problem shifts from finding the maximum a posteriori sources given a fixed prior to finding the optimal hyperparameters of a parameterized prior. Once these estimates are obtained (computational issues will be discussed in Section 2.1), a tractable posterior distribution p(S|B; Σ̂s) exists in closed form, where Σ̂s = Σ_i γ̂iCi. To the extent that the ‘learned’ prior p(S; Σ̂s) is realistic, this posterior quantifies regions of significant current density, and point estimates for the unknown sources can be obtained by evaluating the posterior mean Ŝ ≜ E[S|B; Σ̂s] = Σ̂sL^T(Σϵ + LΣ̂sL^T)^{-1}B. (4) The specific choice of the Ci’s is crucial and can be used to reflect any assumptions about the possible distribution of current sources. It is this selection, rather than the adoption of a covariance component model per se, that primarily differentiates the many different empirical Bayesian approaches and points to novel algorithms for future study. The optimization strategy adopted for computing γ̂, as well as the particular choice of hyperprior p(γ), if any, can also be distinguishing factors. In the simplest case, use of the single component Σs = γ1C1 = γ1I leads to a regularized minimum ℓ2-norm solution. More interesting covariance component terms have been used to effect spatial smoothness, depth bias compensation, and candidate locations of likely activity [8, 11]. With regard to the latter, it has been suggested that prior information about a source location can be codified by including a Ci term with all zeros except a patch of 1’s along the diagonal signifying a location of probable source activity, perhaps based on fMRI data [11]. 
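The posterior-mean computation in (4) amounts to a single linear solve; a minimal sketch, again with illustrative dimensions of our own choosing:

```python
import numpy as np

def posterior_mean(B, L, Sigma_s_hat, Sigma_e):
    # S_hat = Sigma_s_hat L^T (Sigma_e + L Sigma_s_hat L^T)^{-1} B, eq. (4)
    Sigma_b = Sigma_e + L @ Sigma_s_hat @ L.T
    return Sigma_s_hat @ L.T @ np.linalg.solve(Sigma_b, B)

rng = np.random.default_rng(1)
db, ds, n = 4, 8, 5
L = rng.standard_normal((db, ds))
B = rng.standard_normal((db, n))
S_hat = posterior_mean(B, L, np.eye(ds), 0.1 * np.eye(db))
```

Note that if a hyperparameter γi in Σ̂s is zero, the corresponding row of Σ̂sL^T, and hence of Ŝ, is exactly zero — this is how components pruned during evidence maximization drop out of the source estimate.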
An associated hyperparameter γi is then estimated to determine the appropriate contribution of this component to the overall prior covariance. The limitation of this approach is that we generally do not know, a priori, the regions where activity is occurring with both high spatial and temporal resolution. Therefore, we cannot reliably know how to choose an appropriate location-prior term in many situations. The empirical Bayesian solution to this dilemma, which amounts to a form of model selection, is to try out many different (or even all possible) combinations of location priors, and determine which one has the highest Bayesian evidence, i.e., maximizes p(B; Σb) [7]. For example, if we assume the underlying currents are formed from a collection of dipolar point sources located at each vertex of the lead-field grid, then we may choose Σs = Σ_{i=1}^{ds} γi ei ei^T, where each ei is a standard indexing vector of zeros with a ‘1’ for the i-th element (and so Ci = ei ei^T encodes a prior preference for a single dipolar source at location i).1 This specification for the prior involves the counterintuitive addition of an unknown hyperparameter for every candidate source location which, on casual analysis, may seem prone to severe overfitting (in contrast to [11], which uses only one or two fixed location priors). However, the process of marginalization, or the integrating out of the unknown sources S, provides an extremely powerful regularizing effect, driving most of the unknown γi to zero during the evidence maximization stage (more on this in Section 3). This ameliorates the overfitting problem and effectively reduces the space of possible active source locations by choosing a small relevant subset of location priors that optimizes the Bayesian evidence (hence ARD). With this ‘learned’ prior in place, a once ill-posed inverse problem is no longer untenable, with the posterior mean providing a good estimate of source activity. 
Such a procedure has been empirically successful in the context of neural networks [9], kernel machines [17], and multiple dipole fitting for MEG [12], a significant benefit to the latter being that the optimal number of dipoles need not be known a priori. In contrast, to model sources with some spatial extent, we can choose Ci = ψiψi^T, where each ψi represents, for example, a ds × 1 geodesic neural basis vector that specifies an a priori weight location and activity extent [13]. In this scenario, the number of hyperparameters satisfies dγ = vds, where v is the number of scales we wish to examine in a multi-resolution decomposition, and can be quite large (dγ ≈ 10^6). As mentioned above, the ARD framework tests many priors corresponding to many hypotheses or beliefs regarding the locations and scales of the nonzero current activity within the brain, ultimately choosing the one with the highest evidence. The net result of this formulation is a source prior composed of a mixture of Gaussian kernels of varying scales. The number of mixture components, or the number of nonzero γi’s, is learned from the data and is naturally forced to be small (sparse). In general, the methodology is quite flexible and other prior specifications can be included as well, such as temporal and spectral constraints. But the essential ingredient of ARD, that marginalization and subsequent evidence maximization leads to a pruning of unsupported hypotheses, remains unchanged. We turn now to empirical Bayesian procedures that incorporate variational methods. In [15], a plausible hierarchical prior is adopted that, unfortunately, leads to intractable integrations when computing the desired source posterior. This motivates the inclusion of a variational approximation that models the true posterior as a factored distribution over parameters at two levels of the prior hierarchy. 
While seemingly quite different, drawing on results from [1], we can show that the resulting cost function is exactly equivalent to standard ARD assuming Σs is parameterized as Σs = Σ_{i=1}^{ds} γi ei ei^T + Σ_{j=1}^{ds} γ_{ds+j} ψjψj^T, (5) and so dγ = 2ds. When fMRI data is available, it is incorporated into a particular inverse Gamma hyperprior on γ, as is also commonly done with ARD methods [1]. Optimization is then performed using simple EM update rules. In summary then, the general methods of [4, 8, 11] and [12, 13, 17] as well as the variational method of [15] are all identical with respect to their ARD-based cost functions; they differ only in which covariance components (and possibly hyperpriors) are used and in how optimization is performed, as will be discussed below. In contrast, the variational model from [14] introduces an additional hierarchy to the ARD framework to explicitly model temporal correlations between sources which may be spatially separated.2 Here it is assumed that S can be decomposed with respect to dz pre-sources via S = WZ, p(W; Σw) = N(0, Σw), p(Z) = N(0, I), (6) where Z ∈ ℜ^{dz×n} represents the pre-source matrix and Σw is analogous to Σs. As stated in [14], direct application of ARD would involve integration over W and Z to find the hyperparameters γ that maximize p(B; Σb). While such a procedure is not analytically tractable, it remains insightful to explore the characteristics of this method were we able to perform the necessary computation. This allows us to relate the full model of [14] to standard ARD. (1 Here we assume dipoles with orientations constrained to be orthogonal to the cortical surface; however, the method is easily extended to handle unconstrained dipoles. 2 Although standard ARD does not explicitly model correlated sources that are spatially separated, it still works well in this situation (see Section 3) and can reflect such correlations via the inferred posterior mean.) 
Interestingly, it can be shown that the first and second order statistics of the full prior (6) and the standard ARD prior (2) are equivalent (up to a constant factor), although higher-order moments will be different. However, as the number of pre-sources dz becomes large, multivariate central-limit-theorem arguments can be used to explicitly show that the distribution of S converges to an identical Gaussian prior as ARD. So exact evaluation of the full model, which is espoused as the ideal objective were it feasible, approaches regular ARD when the number of pre-sources grows large. In practice, because the full model is intractable, a variational approximation is adopted similar to that proposed in [15]. In fact, if we assume the appropriate hyperprior on γ, then this correlated source method is essentially the same as the procedure from [15] but with an additional level in the approximate posterior factorization for handling the decomposition (6). This produces approximate posteriors on W and Z but the result cannot be integrated to form the posterior on S. However, the posterior mean of W, Ŵ, is used as an estimate of the source correlation matrix (using ŴŴ^T) to substantially improve beamforming results that were errantly based on uncorrelated source models. Note however that this procedure implicitly uses the somewhat peculiar criterion of combining the posterior mean of W with the prior on Z to form an estimate of the distribution of S. 2.1 Computational Issues The primary objective of ARD is to maximize the evidence p(B; Σb) with respect to γ or, equivalently, to minimize L(γ) ≜ −log p(B; Σb) ≡ n log|Σb| + trace(B^T Σb^{-1} B). (7) In [4], a restricted maximum likelihood (ReML) approach is proposed for this optimization, which utilizes what amounts to EM-based updates. This method typically requires a nonlinear search for each M-step and does not guarantee that the estimated covariance is positive definite. 
While shown to be successful in estimating a handful of hyperparameters in [8, 11], this could potentially be problematic when very large numbers of hyperparameters are present. For example, in several toy problems (with dγ large) we have found that a fraction of the hyperparameters obtained can be negative-valued, inconsistent with our initial premise. As such, we present three alternative optimization procedures that extend the methods from [7, 12, 15, 17] to the arbitrary covariance model discussed above and guarantee that γi ≥ 0 for all i. Because of the flexibility this allows in constructing Σs, and therefore Σb, some additional notation is required to proceed. A new decomposition of Σb is defined as Σb = Σϵ + L(Σ_{i=1}^{dγ} γiCi)L^T = Σϵ + Σ_{i=1}^{dγ} γi L̃iL̃i^T, (8) where L̃iL̃i^T ≜ LCiL^T with ri ≜ rank(L̃iL̃i^T) ≤ db. Also, using commutative properties of the trace operator, L(γ) only depends on the data B through the db × db sample correlation matrix BB^T. Therefore, to reduce the computational burden, we replace B with a matrix B̃ ∈ ℜ^{db×rank(B)} such that B̃B̃^T = BB^T. This removes any per-iteration dependency on n, which can potentially be large, without altering the actual cost function. By treating the unknown sources as hidden data, an update can be derived for the (k+1)-th iteration: γi^{(k+1)} = (1/(n·ri)) ‖γi^{(k)} L̃i^T (Σb^{(k)})^{-1} B̃‖_F^2 + (1/ri) trace[γi^{(k)}I − γi^{(k)} L̃i^T (Σb^{(k)})^{-1} L̃i γi^{(k)}], (9) which reduces to the algorithm from [15] given the appropriate simplifying assumptions on the form of Σs and some additional algebraic manipulations. It is also equivalent to ReML with a different effective computation for the M-step. By casting the update rules in this way and noting that off-diagonal elements of the second term need not be computed, the per-iteration cost is at most O(db^2 Σ_{i=1}^{dγ} ri) ≤ O(db^3 dγ). This expense can be significantly reduced still further in cases where different pseudo lead-field components, e.g., some L̃i and L̃j, contain one or more columns in common. This situation occurs if we desire to use the geodesic basis functions with flexible orientation constraints, as opposed to the fixed orientations assumed above. In general, the linear dependence on dγ is one of the attractive aspects of this method, effectively allowing for extremely large numbers of hyperparameters and covariance components. The problem then with (9) is not the per-iteration complexity but the convergence rate, which we have observed to be prohibitively slow in practical situations with high-resolution lead-field matrices and large numbers of hyperparameters. The only reported localization results using this type of EM algorithm are from [15], where a relatively low-resolution lead-field matrix is used in conjunction with a simplifying heuristic that constrains some of the hyperparameter values. However, to avoid these types of constraints, which can potentially degrade the quality of source estimates, a faster update rule is needed. To this end, we modified the procedure of [7], which involves taking the gradient of L(γ) with respect to γ, rearranging terms, and forming the fixed-point update γi^{(k+1)} = (γi^{(k)}/n) ‖L̃i^T (Σb^{(k)})^{-1} B̃‖_F^2 [trace(L̃i^T (Σb^{(k)})^{-1} L̃i)]^{-1}. (10) The complexity of each iteration is the same as before, only now the convergence rate can be orders of magnitude faster. For example, given db = 275 sensors, n = 1000 observation vectors, and a pseudo lead-field with 120,000 unique columns and an equal number of hyperparameters, complete convergence requires approximately 5-10 minutes of runtime using Matlab code on a PC. The EM update does not converge after 24 hours. Example localization results using (10) demonstrate the ability to recover very complex source configurations with variable spatial extent [13]. Unlike the EM method, one criticism of (10) is that there currently exists no proof that it represents a descent function, although we have never observed it to increase (7) in practice. While we can show that (10) is equivalent to iteratively solving a particular min-max problem in search of a saddle point, provable convergence is still suspect. However, a similar update rule can be derived that is both significantly faster than EM and is proven to produce γ vectors such that L(γ^{(k+1)}) ≤ L(γ^{(k)}) for every iteration k. Using a dual-form representation of L(γ) that leads to a more tractable auxiliary cost function, this update is given by γi^{(k+1)} = (γi^{(k)}/√n) ‖L̃i^T (Σb^{(k)})^{-1} B̃‖_F [trace(L̃i^T (Σb^{(k)})^{-1} L̃i)]^{-1/2}. (11) Details of the derivation can be found in [20]. Finally, the correlated source method from [14] can be incorporated into the general ARD framework as well using update rules related to the above; however, because all off-diagonal terms are required by this method, the iterations now scale as (Σ_i ri)^2 in the general case. This quadratic dependence can be prohibitive in applications with large numbers of covariance components. 2.2 Relationship with Other Bayesian Methods As a point of comparison, we now describe how ARD can be related to alternative Bayesian-inspired approaches such as the sLORETA paradigm [10] and the iterative FOCUSS source localization algorithm [5]. The connection is most transparent when we substitute the prior covariance Σs = Σ_{i=1}^{ds} γi ei ei^T = diag[γ] into (10), giving the modified update γi^{(k+1)} = ‖γi^{(k)} ℓi^T (Σϵ + LΓ^{(k)}L^T)^{-1} B‖_2^2 (n Rii^{(k)})^{-1}, R^{(k)} ≜ Γ^{(k)}L^T (Σϵ + LΓ^{(k)}L^T)^{-1} L, (12) where Γ ≜ diag[γ], ℓi is the i-th column of L, and R^{(k)} is the effective resolution matrix given the hyperparameters at the current iteration. The j-th column of R (called a point-spread function) equals the source estimate obtained using (4) when the true source is a unit dipole at location j [16]. Continuing, if we assume that initialization of ARD occurs with γ^{(0)} = 1 (as is customary), then the hyperparameters produced after a single iteration of ARD are equivalent to computing the sLORETA estimate for standardized current density power [10] (this assumes fixed orientation constraints). In this context, the inclusion of R as a normalization factor helps to compensate for depth bias, which is the propensity for deep current sources within the brain to be underrepresented at the scalp surface [10, 12]. So ARD can be interpreted as a recursive refinement of what amounts to the non-adaptive, linear sLORETA estimate. As a further avenue for comparison, if we assume that R = I for all iterations, then the update (12) is nearly the same as the FOCUSS iterations modified to simultaneously handle multiple observation vectors [2]. The only difference is the factor of n in the denominator in the case of ARD, but this can be offset by an appropriate rescaling of the FOCUSS λ trade-off parameter (analogous to Σϵ). Therefore, ARD can be viewed in some sense as taking the recursive FOCUSS update rules and including the sLORETA normalization that, among other things, allows for depth bias compensation. Thus far, we have focused on similarities in update rules between the ARD formulation (restricted to the case where Σs = Γ) and sLORETA and FOCUSS. We now switch gears and examine how the general ARD cost function relates to that of FOCUSS and MCE and suggests a useful generalization of both approaches. 
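As a concrete illustration of the fixed-point update (10) in the special case Σs = diag[γ] (so each component is Ci = ei ei^T and L̃i is just the i-th lead-field column), the following toy NumPy sketch shows most hyperparameters being driven toward zero. Dimensions, noise level, and iteration count are our own choices for illustration, not values from the paper:

```python
import numpy as np

def ard_fixed_point(B, L, Sigma_e, n_iter=200):
    # Update (10) with Sigma_s = diag(gamma): L~_i is the i-th column of L,
    # so the trace term is the scalar l_i^T Sigma_b^{-1} l_i.
    n = B.shape[1]
    gamma = np.ones(L.shape[1])                  # customary gamma^(0) = 1
    for _ in range(n_iter):
        Sigma_b = Sigma_e + (L * gamma) @ L.T    # Sigma_e + L diag(gamma) L^T
        Sb_inv_B = np.linalg.solve(Sigma_b, B)
        Sb_inv_L = np.linalg.solve(Sigma_b, L)
        num = np.sum((L.T @ Sb_inv_B) ** 2, axis=1)  # ||l_i^T Sigma_b^{-1} B||^2
        den = np.sum(L * Sb_inv_L, axis=0)           # l_i^T Sigma_b^{-1} l_i
        gamma = gamma * num / (n * den)
    return gamma

# Toy demo: two active sources among 20 candidates; most gammas get pruned.
rng = np.random.default_rng(2)
db, ds, n = 8, 20, 50
L = rng.standard_normal((db, ds))
S_true = np.zeros((ds, n))
S_true[[3, 11]] = rng.standard_normal((2, n))
B = L @ S_true + 0.01 * rng.standard_normal((db, n))
gamma = ard_fixed_point(B, L, 1e-4 * np.eye(db))
```

In this toy run the hyperparameters for the inactive candidate locations shrink toward zero over the iterations, mirroring the pruning behavior the text attributes to evidence maximization.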
Recall that the evidence maximization procedure upon which ARD is based involves integrating out the unknown sources before optimizing the hyperparameters γ. However, if some p(γ) is assumed for γ, then we could just as easily do the opposite: namely, we can integrate out the hyperparameters and then maximize S directly, thus solving the MAP estimation problem max_S ∫ p(B|S) p(S; Σs) p(γ) dγ ≡ min_{S: S = Σ_i Ai S̃i} ‖B − LS‖²_{Σϵ^{-1}} + Σ_{i=1}^{dγ} g(‖S̃i‖_F), (13) where each Ai is derived from the i-th covariance component such that Ci = AiAi^T, and g(·) is a function dependent on p(γ). For example, when p(γ) is a noninformative Jeffreys prior, then g(x) = log x and (13) becomes a generalized form of the FOCUSS cost function (and reduces to the exact FOCUSS cost when Ai = ei for all i). Likewise, when an exponential prior is chosen, then g(x) = x and we obtain a generalized version of MCE. In both cases, multiple simultaneous constraints (e.g., flexible dipole orientations, spatial smoothing, etc.) can be naturally handled and, if desired, the noise covariance Σϵ can be seamlessly estimated as well (see [3] for a special case of the latter in the context of kernel regression). This addresses many of the concerns raised in [8] pertaining to existing MAP methods. Additionally, as with ARD, source components that are not sufficiently important in representing the observed data are pruned; however, the undesirable discontinuities in standard FOCUSS or MCE source estimates across time, which previously have required smoothing using heuristic measures [6], do not occur when using (13). This is because sparsity is only encouraged between components due to the concavity of g(·), but not within components, where the Frobenius norm operator promotes smooth solutions [2]. All of these issues, as well as efficient ARD-like update rules for optimizing (13), are discussed in [20]. 
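A hedged sketch of an IRLS-style scheme for the MAP problem (13), specialized to Ai = ei (one component per source row): the reweighting below is the standard one associated with g(x) = log x (FOCUSS-like) or g(x) = x (MCE-like), not necessarily the specific update rules of [20]; dimensions and parameter values are our own.

```python
import numpy as np

def reweighted_map(B, L, lam=1e-4, focuss=True, n_iter=50):
    # Iteratively reweighted solve: rows of S are reweighted by d_i each pass,
    # so rows not needed to explain B shrink toward zero (row-wise sparsity).
    db = L.shape[0]
    S = np.ones((L.shape[1], B.shape[1]))     # uninformative start
    for _ in range(n_iter):
        r = np.linalg.norm(S, axis=1)         # row norms ||S_i.||_2
        d = r ** 2 if focuss else r           # weights for g = log x vs g = x
        Ld = L * d                            # L diag(d)
        S = d[:, None] * (L.T @ np.linalg.solve(Ld @ L.T + lam * np.eye(db), B))
    return S

# Toy demo: recover a 2-row-sparse source matrix from noiseless data.
rng = np.random.default_rng(3)
db, ds, n = 8, 20, 5
L = rng.standard_normal((db, ds))
S_true = np.zeros((ds, n))
S_true[[2, 9]] = rng.standard_normal((2, n))
S_est = reweighted_map(L @ S_true, L)
```

Consistent with the discussion of (13), sparsity here acts between rows (whole rows are pruned) while each surviving row stays smooth across the n time points.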
3 General Properties of ARD Methods ARD methods maintain several attributes that make them desirable candidates for source localization. For example, unlike most MAP procedures, the ARD cost function is often invariant to lead-field column normalizations, which only affect the implicit initialization that is used or potentially the selection of the Ci’s. In contrast, MCE produces a different globally minimizing solution for every normalization scheme. As such, ARD is considerably more robust to the particular heuristic used for this task and can readily handle deep current sources. Previously, we have claimed that the ARD process naturally forces excessive/irrelevant hyperparameters to converge to zero, thereby reducing model complexity. While this observation has been verified empirically by ourselves and others in various application settings, there has been relatively little corroborating theoretical evidence, largely because of the difficulty in analyzing the potentially multimodal, non-convex ARD cost function. As such, we provide the following result: Result 1. Every local minimum of the generalized ARD cost function (7) is achieved at a solution with at most rank(B)·db ≤ db^2 nonzero hyperparameters. The proof follows from a result in [19] and the fact that the ARD cost only depends on the rank(B) matrix BB^T. Result 1 comprises a worst-case bound that is only tight in very nuanced situations; in practice, for any reasonable value of Σϵ, the number of nonzero hyperparameters is typically much smaller than db. The bound holds for all Σϵ, including Σϵ = 0, indicating that some measure of hyperparameter pruning, and therefore covariance component pruning, is built into the ARD framework irrespective of the noise-based regularization. Moreover, the number of nonzero hyperparameters decreases monotonically to zero as Σϵ is increased. And so there is always some Σϵ = Σϵ′ sufficiently large such that all hyperparameters converge to exactly zero. 
Therefore, we can be reasonably confident that the pruning mechanism of ARD is not merely an empirical phenomenon. Nor is it dependent on a particular sparse hyperprior, since the ARD cost from (7) implicitly assumes a flat (uniform) hyperprior. The number of observation vectors n also plays an important role in shaping ARD solutions. Increasing n has two primary benefits: (i) it facilitates convergence to the global minimum (as opposed to getting stuck in a suboptimal extremum) and (ii) it improves the quality of this minimum by mitigating the effects of noise [20]. With perfectly correlated (spatially separated) sources, primarily only the latter benefit is in effect. For example, with low noise and perfectly correlated sources, the estimation problem reduces to an equivalent problem with n = 1, so the local minima profile of the cost function does not improve with increasing n. Of course standard ARD can still be very effective in this scenario [13]. In contrast, geometric arguments can be made to show that uncorrelated sources with large n offer the best opportunity for local minima avoidance. However, when strong correlations are present as well as high noise levels, the method of [14] (which explicitly attempts to model correlations) could offer a worthwhile alternative, albeit at a high computational cost. Further theoretical support for ARD is possible in the context of localization bias assuming simple source configurations. For example, substantial effort has been devoted to quantifying localization bias when estimating a single dipolar source. Recently it has been shown, both empirically [10] and theoretically [16], that sLORETA has zero location bias under this condition at high SNR. Viewed then as an iterative enhancement of sLORETA as described in Section 2.2, the question naturally arises whether ARD methods retain this desirable property. In fact, it can be shown that this is indeed the case in two general situations. 
We assume that the lead-field matrix L represents a sufficiently high sampling of the source space such that any active dipole aligns with some lead-field column. Unbiasedness can also be shown in the continuous case for both sLORETA and ARD, but the discrete scenario is more straightforward and of course more relevant to any practical task. Result 2. Assume that Σs includes (among others) ds covariance components of the form Ci = ei ei^T. Then in the absence of noise (high SNR), ARD has provably zero localization bias when estimating a single dipolar source, regardless of the value of n. If we are willing to tolerate some additional assumptions, then this result can be significantly expanded. For example, multiple dipolar sources can be localized with zero bias if they are perfectly uncorrelated (orthogonal) across time and assuming some mild technical conditions [20]. This result also formalizes the notion, mentioned above, that ARD performs best with uncorrelated sources. Turning to the more realistic scenario where noise is present gives the following: Result 3. Let Σs be constructed as above and assume the noise covariance matrix Σϵ is known up to a scale factor. Then given a single dipolar source, in the limit as n becomes large the ARD cost function is unimodal, and a source estimate with zero localization bias achieves the global minimum. For most reasonable lead-fields and covariance components, this global minimum will be unique, and so the unbiased solution will be found as in the noiseless case. As for proofs, all the theoretical results pertaining to localization bias in this section follow from local minima properties of ML covariance component estimates. While details have been deferred to [20], the basic idea is that if the outer product BB^T can be expressed as some non-negative linear combination of the available covariance components, then the ARD cost function is unimodal and Σb = n^{-1}BB^T at any minimizing solution. 
This Σb in turn produces unbiased source estimates in a variety of situations. While theoretical results of this kind are admittedly limited, other iterative Bayesian schemes in fact fail to exhibit similar performance. For example, all of the MAP-based focal algorithms we are aware of, including FOCUSS and MCE methods, provably maintain a localization bias in the general setting, although in particular cases they may not exhibit one. (Also, because of the additional complexity involved, it is still unclear whether the correlated source method of [14] satisfies a similar result.) When we move to more complex source configurations with possible correlations and noise, theoretical results are not available; however, empirical tests provide a useful means of comparison. For example, given a 275 × 40,000 lead-field matrix constructed from an MR scan and assuming fixed orientation constraints and a spherical head model, ARD using Σs = diag[γ] and n = 1 (equivalent to having perfectly correlated sources) consistently maintains zero empirical localization bias when estimating up to 15-20 dipoles, while sLORETA starts to show a bias with only a few. 4 Discussion The efficacy of modern empirical Bayesian techniques and variational approximations makes them attractive candidates for source localization. However, it is not always transparent how these methods relate nor which should be expected to perform best in various situations. By developing a general framework around the notion of ARD, deriving several theoretical properties, and showing connections between algorithms, we hope to bring an insightful perspective to these techniques. References [1] C. M. Bishop and M. E. Tipping, “Variational relevance vector machines,” Proc. 16th Conf. Uncertainty in Artificial Intelligence, 2000. [2] S.F. Cotter, B.D. Rao, K. Engan, and K. Kreutz-Delgado, “Sparse solutions to linear inverse problems with multiple measurement vectors,” IEEE Trans. Sig. Proc., vol. 53, no. 7, 2005. 
[3] M.A.T. Figueiredo, "Adaptive sparseness using Jeffreys prior," Advances in Neural Information Processing Systems 14, MIT Press, 2002.
[4] K. Friston, W. Penny, C. Phillips, S. Kiebel, G. Hinton, and J. Ashburner, "Classical and Bayesian inference in neuroimaging: Theory," NeuroImage, vol. 16, 2002.
[5] I.F. Gorodnitsky, J.S. George, and B.D. Rao, "Neuromagnetic source imaging with FOCUSS: A recursive weighted minimum norm algorithm," J. Electroencephalography and Clinical Neurophysiology, vol. 95, no. 4, 1995.
[6] M. Huang, A. Dale, T. Song, E. Halgren, D. Harrington, I. Podgorny, J. Canive, S. Lewis, and R. Lee, "Vector-based spatial-temporal minimum ℓ1-norm solution for MEG," NeuroImage, vol. 31, 2006.
[7] D.J.C. MacKay, "Bayesian interpolation," Neural Computation, vol. 4, no. 3, 1992.
[8] J. Mattout, C. Phillips, W.D. Penny, M.D. Rugg, and K.J. Friston, "MEG source localization under multiple constraints: An extended Bayesian framework," NeuroImage, vol. 30, 2006.
[9] R.M. Neal, Bayesian Learning for Neural Networks, Springer-Verlag, New York, 1996.
[10] R.D. Pascual-Marqui, "Standardized low resolution brain electromagnetic tomography (sLORETA): Technical details," Methods and Findings in Experimental and Clinical Pharmacology, vol. 24, Suppl. D, 2002.
[11] C. Phillips, J. Mattout, M.D. Rugg, P. Maquet, and K.J. Friston, "An empirical Bayesian solution to the source reconstruction problem in EEG," NeuroImage, vol. 24, 2005.
[12] R.R. Ramírez, Neuromagnetic Source Imaging of Spontaneous and Evoked Human Brain Dynamics, PhD thesis, New York University, 2005.
[13] R.R. Ramírez and S. Makeig, "Neuroelectromagnetic source imaging using multiscale geodesic neural bases and sparse Bayesian learning," 12th Conf. Human Brain Mapping, 2006.
[14] M. Sahani and S.S. Nagarajan, "Reconstructing MEG sources with unknown correlations," Advances in Neural Information Processing Systems 16, MIT Press, 2004.
[15] M. Sato, T. Yoshioka, S. Kajihara, K. Toyama, N. Goda, K. Doya, and M. Kawato, "Hierarchical Bayesian estimation for MEG inverse problem," NeuroImage, vol. 23, 2004.
[16] K. Sekihara, M. Sahani, and S.S. Nagarajan, "Localization bias and spatial resolution of adaptive and non-adaptive spatial filters for MEG source reconstruction," NeuroImage, vol. 25, 2005.
[17] M.E. Tipping, "Sparse Bayesian learning and the relevance vector machine," J. Machine Learning Research, vol. 1, 2001.
[18] K. Uutela, M. Hämäläinen, and E. Somersalo, "Visualization of magnetoencephalographic data using minimum current estimates," NeuroImage, vol. 10, 1999.
[19] D.P. Wipf and B.D. Rao, "Sparse Bayesian learning for basis selection," IEEE Trans. Sig. Proc., vol. 52, no. 8, 2004.
[20] D.P. Wipf, R.R. Ramírez, J.A. Palmer, S. Makeig, and B.D. Rao, "Automatic Relevance Determination for Source Localization with MEG and EEG Data," Technical Report, University of California, San Diego, 2006.
2006
Efficient Methods for Privacy Preserving Face Detection Shai Avidan Mitsubishi Electric Research Labs 201 Broadway Cambridge, MA 02139 avidan@merl.com Moshe Butman Department of Computer Science Bar Ilan University Ramat-Gan, Israel butmanm@cs.biu.edu

Abstract

Bob offers a face-detection web service where clients can submit their images for analysis. Alice would very much like to use the service, but is reluctant to reveal the content of her images to Bob. Bob, for his part, is reluctant to release his face detector, as he spent a lot of time, energy and money constructing it. Secure Multi-Party Computation uses cryptographic tools to solve this problem without leaking any information. Unfortunately, these methods are slow to compute, so we introduce a couple of machine learning techniques that allow the parties to solve the problem while leaking a controlled amount of information. The first method is an information-bottleneck variant of AdaBoost that lets Bob find a subset of features that are enough for classifying an image patch, but not enough to actually reconstruct it. The second machine learning technique is active learning, which allows Alice to construct an online classifier based on a small number of calls to Bob's face detector. She can then use her online classifier as a fast rejector before using a cryptographically secure classifier on the remaining image patches.

1 Introduction

The Internet has created many opportunities for cooperative computing in which buyers and sellers can meet to buy and sell goods, information or knowledge. Placing classifiers on the Internet allows buyers to enjoy the power of a classifier without having to train it themselves. This benefit is hindered by the fact that the seller, who owns the classifier, learns a great deal about the buyers' data, needs or goals. This has raised the need for privacy in Internet transactions.
While it is now common to assume that the buyer and the seller can secure their data exchange from the rest of the world, we are interested in a stronger level of security that allows the buyer to hide his data from the seller as well. Of course, the same can be said about the seller, who would like to maintain the privacy of his hard-earned classifier. Secure Multi-Party Computation (SMC) is based on cryptographic tools that let two parties, Alice and Bob, engage in a protocol that allows them to achieve a common goal without revealing the content of their inputs. For example, Alice might be interested in classifying her data using Bob's classifier without revealing anything to Bob, not even the classification result, and without learning anything about Bob's classifier other than a binary answer to her query. Recently, Avidan & Butman introduced Blind Vision [1], a method for securely evaluating a Viola-Jones type face detector [12]. Blind Vision uses standard cryptographic tools and is painfully slow to compute, taking a couple of hours to scan a single image. The purpose of this work is to explore machine learning techniques that can speed up the process at the cost of a controlled leakage of information. In our hypothetical scenario Bob has a face-detection web service where clients can submit their images to be analyzed. Alice would very much like to use the service, but is reluctant to reveal the content of the images to Bob. Bob, for his part, is reluctant to release his face detector, as he spent a lot of time, energy and money constructing it. In our face detection protocol Alice raster scans the image and sends every image patch to Bob to be classified. We would like to replace cryptographically-based SMC methods with machine learning algorithms that might leak some information but are much faster to execute. The challenge is to design protocols that can explicitly control the amount of information leaked.
To this end we propose two well-known machine learning techniques, one based on the information bottleneck and the other on active learning. The first method is a privacy-preserving feature selection, a variant of the information-bottleneck principle that finds features useful for classification but not for signal reconstruction. In this case, Bob can use his training data to construct different classifiers that offer different trade-offs of information leakage versus classification accuracy. Alice can then choose the trade-off that suits her best and send only those features to Bob for classification. This method can be used, for example, as a filtering step that rejects a large number of image patches as containing no face, followed by an SMC method that securely classifies the remaining image patches using the full classifier that is known only to Bob. The second method is active learning, which helps Alice choose which image patches to send to Bob for classification. It can be used either with the previous method or directly with an SMC protocol. The idea is that instead of sending all image patches to Bob for classification, Alice might try to learn as much as she can from the interaction and use her online-trained classifier to reject some of the image patches herself. This can minimize the amount of information revealed to Bob, if the parties use the privacy-preserving features, or the computational load, if the parties use cryptographically-based SMC methods.

2 Background

Secure multi-party computation originated from the work of Yao [14], who gave a solution to the millionaire problem: two parties want to find which one has a larger number, without revealing anything else about the numbers themselves. Later, Goldreich et al. [5] showed that any function can be computed in such a secure manner. However, the theoretical construct was still too demanding to be of practical use.
An easy introduction to cryptography is given in [9] and a more advanced and theoretical treatment is given in [4]. Since then many secure protocols have been reported for various data mining applications [7, 13, 1]. A common assumption in SMC is that the parties are honest but curious, meaning that they will follow the agreed-upon protocol but will try to learn as much as possible from the data flow between the two parties. We follow this assumption here. The information bottleneck principle [10] shows how to compress a signal while preserving its information with respect to a target signal. We offer a variant of the self-consistent equations used to solve this problem and a greedy feature selection algorithm that satisfies privacy constraints, represented as a percentage of the power spectrum of the original signal. Active learning methods assume that the student (Alice, in our case) has access to an oracle (Bob) for labeling. The usual motivation in active learning is that the oracle is assumed to be a human operator, and having him label data is a time-consuming task that should be avoided. Our motivation is similar: Alice would like to avoid using Bob because of the high computational cost involved in the case of cryptographically secure protocols, or for fear of leaking information in case non-cryptographic methods are used. Typical active learning applications assume that the class sizes are similar [2, 11]. A notable exception is the work of [8], which proposes an active learning method for anomaly detection. Our case is similar, as image patches that contain faces are rare in an image.

3 Privacy-preserving Feature Selection

Feature selection aims at finding a subset of the features that optimizes some objective function, typically a classification task [6]. However, feature selection does not concern itself with the correlation of the feature subset with the original signal.
This is handled with the information bottleneck method [10], which takes a joint distribution p(x, y) and finds a compressed representation T of X that is as informative as possible about Y. This is achieved by minimizing the following functional:

min_{p(t|x)} L,  L ≡ I(X; T) − β I(T; Y)   (1)

where β is a trade-off parameter that controls the trade-off between compressing X and maintaining information about Y. The functional L admits a set of self-consistent equations that allows one to find a suitable solution. We map the information bottleneck idea to a feature selection algorithm to obtain a Privacy-preserving Feature Selection (PPFS) and describe how Bob can construct such a feature set.

Let Bob have a training set of image patches, their associated labels, and a weight associated with every feature (pixel), denoted {x_n, y_n, s_n}_{n=1}^N. Bob's goal is to find a feature subset I ≡ {i_1, ..., i_k} such that a classifier F(x(I)) minimizes the classification error, where x(I) denotes a sample x that uses only the features in the set I. Formally, Bob needs to solve:

min_F Σ_{n=1}^N (F(x_n(I)) − y_n)^2   (2)
subject to Σ_{i∈I} s_i < Λ

where Λ is a user-defined threshold that bounds the amount of information that can be leaked. We found it useful to use the PCA spectrum to measure the amount of information. Specifically, Bob computes the PCA space of all the face images in his database and maps all the data to that space, without reducing dimensionality. The weights {s_n}_{n=1}^N are then set to the eigenvalues associated with each dimension of the PCA space. This avoids the need to compute the mutual information between pixels, by assuming that features do not carry mutual information with other features beyond second-order statistics.

Algorithm 1 Privacy-Preserving Feature Selection
Input: {x_n, y_n, s_n}_{n=1}^N; threshold Λ; number of iterations T
Output: A privacy-preserving strong classifier F(x)
• Start with weights w_n = 1/N, n = 1, 2, ..., N; F(x) = 0; I = ∅
• Repeat for t = 1, 2, ..., T:
  – Set the working index set J = I ∪ {j | s_j + Σ_{i∈I} s_i < Λ}
  – Repeat for j ∈ J:
    * Fit a regression stump g_j(x(j)) ≡ a_j (x(j) > θ_j) + b_j to the j-th feature, x(j)
    * Compute the error e_j = [Σ_{n=1}^N w_n (y_n − (a_j (x_n(j) > θ_j) + b_j))^2] / Σ_{n=1}^N w_n
  – Set f_t = g_i where e_i ≤ e_j ∀j ∈ J
  – Update:
    F(x) ← F(x) + f_t(x)   (3)
    w_n ← w_n e^{−y_n f_t(x_n)}   (4)
    I ← I ∪ {i}   (5)

Boosting was used for feature selection before [12] and Bob takes a similar approach here. He uses a variant of the gentleBoost algorithm [3] to find a greedy solution to (2). Specifically, Bob uses gentleBoost with "stumps" as the weak classifiers, where each "stump" works on only one feature. The only difference from gentleBoost is in the choice of the features to be selected. In the original algorithm all the features are evaluated in every iteration; here Bob can only use a subset of the features. In each iteration Bob can use features that were already selected, or features whose addition will not increase the total weight of selected features beyond the threshold Λ. Once Bob has computed the privacy-preserving feature subset, the amount of information it leaks, and its classification accuracy, he publishes this information on the web. Alice then needs to map her image patches to this low-dimensional privacy-preserving feature space and send the data to Bob for classification.

4 Privacy-Preserving Active Learning

In our face detection example Alice needs to submit many image patches to Bob for classification. This is computationally expensive if SMC methods are used, and reveals information if the privacy-preserving feature selection method discussed earlier is used. Hence, it would be beneficial if Alice could minimize the number of image patches she needs to send Bob for classification. This is where she might use active learning.
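Algorithm 1 can be sketched in a few lines of NumPy. This is a hedged illustration rather than the authors' implementation: the stump is fit on a coarse percentile grid of thresholds instead of an exact search, and the function and variable names (`ppfs`, `Lam`, etc.) are ours.

```python
import numpy as np

def ppfs(X, y, s, Lam, T):
    """Greedy privacy-preserving feature selection (sketch of Algorithm 1).

    X: (N, d) samples in PCA space, y: (N,) labels in {-1, +1},
    s: (d,) per-feature PCA eigenvalue weights, Lam: leakage budget,
    T: boosting rounds. Returns the selected index set and the stumps.
    """
    N, d = X.shape
    w = np.full(N, 1.0 / N)
    selected, stumps = set(), []
    for _ in range(T):
        used = sum(s[i] for i in selected)
        # Features already selected, plus those that still fit the budget.
        J = [j for j in range(d) if j in selected or used + s[j] < Lam]
        best = None
        for j in J:
            for theta in np.percentile(X[:, j], [25, 50, 75]):  # coarse grid
                ind = (X[:, j] > theta).astype(float)
                W = w.sum()
                m1 = w @ ind
                # Weighted least-squares fit of g(x) = a*(x > theta) + b:
                # b is the weighted mean of y on x <= theta, a+b on x > theta.
                ab = (w @ (y * ind)) / max(m1, 1e-12)
                b = (w @ (y * (1 - ind))) / max(W - m1, 1e-12)
                a = ab - b
                err = (w @ (y - (a * ind + b)) ** 2) / W
                if best is None or err < best[0]:
                    best = (err, j, a, b, theta)
        _, j, a, b, theta = best
        stumps.append((j, a, b, theta))
        f = a * (X[:, j] > theta) + b
        w = w * np.exp(-y * f)          # gentleBoost-style reweighting (eq. 4)
        selected.add(j)
    return selected, stumps
```

On synthetic data where one feature carries the signal, the budgeted greedy search picks that feature first, mirroring how the leakage constraint Λ only restricts which stumps are eligible, not how they are fit.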
Instead of raster scanning the image and submitting every image patch for classification, she sends a small number of randomly selected image patches and, based on their labels, determines the next group of image patches to be sent for classification. We found that substantial gains can be made this way. Specifically, Alice maintains an RBF network that is trained on-line, based on the list of labeled prototypes. Let {c_j, y_j}_{j=1}^M be the list of M prototypes that have been labeled so far. Alice constructs a kernel matrix K with K_ij = k(c_i, c_j) and solves the least-squares equation Ku = y, where y = [y_1, ..., y_M]^T. The kernel Alice uses is a Gaussian kernel whose width is set to the range of the prototype coordinates in each dimension. The score of each image patch x is given by h(x) = [k(x, c_1), ..., k(x, c_M)] u. For the next round of classification Alice chooses the image patches with the highest h(x) score. This is in line with [2, 11, 8], which consider choosing the examples about which one has the least amount of information. In our case, Alice is interested in finding image patches that contain faces (which we assume are labeled +1), but most of the prototypes will be labeled −1, because faces are a rare event in an image. As long as Alice does not sample a face image patch, she will keep exploring the space of image patches in her image by sampling patches that are farthest away from the current set of prototypes. If an image patch that contains a face is sampled, her online classifier h(x) will give similar image patches a high score, guiding the search towards other image patches that might contain a face. To avoid large overlap between patches, we force a minimum distance, in the image plane, between selected patches. The algorithm is given in Algorithm 2.
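This selection loop can be sketched as follows. It is a minimal illustration with hypothetical names: `oracle` stands in for Bob's privacy-preserving classifier, and the minimum image-plane distance between selected patches is omitted for brevity.

```python
import numpy as np

def active_select(X, oracle, M=10, s=5, seed=0):
    """Sketch of the active-learning loop (Algorithm 2).

    X: (N, d) image-patch features; oracle: callable returning +/-1 labels
    (a stand-in for Bob's privacy-preserving classifier). Returns the
    final online scores h(x) for every patch.
    """
    rng = np.random.default_rng(seed)
    # Gaussian kernel width per dimension: the coordinate range, as in the text.
    width = X.max(axis=0) - X.min(axis=0) + 1e-12

    def k(A, B):
        d2 = (((A[:, None, :] - B[None, :, :]) / width) ** 2).sum(-1)
        return np.exp(-d2)

    idx = list(rng.choice(len(X), size=s, replace=False))
    y = [oracle(X[i]) for i in idx]
    for _ in range(M):
        C = X[idx]
        # Solve K u = y in the least-squares sense.
        u = np.linalg.lstsq(k(C, C), np.array(y, float), rcond=None)[0]
        h = k(X, C) @ u
        # Pick the s unlabeled patches with the highest score h(x).
        order = [i for i in np.argsort(-h) if i not in idx][:s]
        idx += order
        y += [oracle(X[i]) for i in order]
    C = X[idx]
    u = np.linalg.lstsq(k(C, C), np.array(y, float), rcond=None)[0]
    return k(X, C) @ u
```

With all early labels −1, the highest-score rule picks patches far from every prototype (where h is near zero), which is exactly the exploration behavior described above; once a +1 label appears, the scores concentrate around it.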
Algorithm 2 Privacy-Preserving Active Learning
Input: {x_i}_{i=1}^N unlabeled samples; number M of classification calls allowed; number s of samples to classify in each iteration
Output: Online classifier h(x)
• Choose s random samples {x_i}_{i=1}^s, set C = [x_1, ..., x_s], and obtain their labels y = [y_1, ..., y_s] from Bob.
• Repeat for m = 1, 2, ..., M times:
  – Construct the kernel matrix K_ij = k(c_i, c_j) and solve for the weight vector u through least squares: Ku = y.
  – Evaluate h(x_i) = [k(x_i, c_1), ..., k(x_i, c_m)] u for all i = 1, ..., N.
  – Choose the top s samples with the highest h(x) score, send them to Bob for classification, and add them and their labels to C and y, respectively.

5 Experiments

We conducted a couple of experiments to validate both methods.

Figure 1: Privacy-preserving feature selection. We show the ROC curves of four strong classifiers, each trained with 100 weak "stump" classifiers, but with different levels of information leakage. The information leakage is defined as the amount of PCA spectrum captured by the features used in each classifier. The number in parentheses shows how much of the eigenspectrum is captured by the features used in each classifier.

The first experiment evaluates the privacy-preserving feature selection method. The training set consisted of 9666 image patches of size 24 × 24 pixels each, split evenly between face and no-face images. The test set was of similar size. We then ran Algorithm 1 with different levels of the threshold Λ and created a strong classifier with 100 weak, "stump"-based classifiers. The ROC curves of several such classifiers are shown in Figure 1. We found that, for this particular dataset, setting Λ = 0.1 gives results identical to a full classifier without any privacy constraints. Reducing Λ to 0.01 hurt the classification performance somewhat. The second experiment tests the active learning approach.
We assume that Alice and Bob use the classifier with Λ = 0.05 from the previous experiment, and measure how effective the on-line classifier that Alice constructs is at rejecting no-face image patches. Recall that there are three classifiers at play: the full classifier that Bob owns, the privacy-preserving classifier that Bob owns, and the on-line classifier that Alice constructs. Alice uses the labels of Bob's privacy-preserving classifier to construct her on-line classifier, and the question is: how many image patches can she reject without rejecting image patches that would be classified as faces by the full classifier (about which she knows nothing)? Before performing the experiment, we conducted the following pre-processing operation: for each image, we find the scale at which the largest number of faces is detected using Bob's full classifier, and use only the image at that scale. The experiment proceeds as follows. Alice chooses 5 image patches in each round, maps them to the reduced PCA space, and sends them to Bob for classification using his privacy-preserving classifier. Based on his labels, Alice then picks the next 5 image patches according to Algorithm 2, and so on. Alice repeats the process 10 times, resulting in 50 patches sent to Bob for classification; the first 5 patches are chosen at random. Figure 2 shows the 50 patches selected by Alice, the online classifier h, and the corresponding rejection/recall curve for several test images. The rejection/recall curve shows how many image patches Alice can safely reject, based on h, without rejecting a face that would be detected by Bob's full classifier. For example, in the top row of Figure 2 we see that rejecting the bottom 40% of image patches based on the on-line classifier h does not reject any face that can be detected with the full classifier.
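The rejection/recall curve just described can be computed directly from the online scores h and the full classifier's labels; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def rejection_recall(h, is_face):
    """Rejection/recall curve: for each fraction of lowest-scoring patches
    rejected, the fraction of true faces (per the full classifier) kept.
    h: (N,) online-classifier scores; is_face: (N,) booleans.
    """
    order = np.argsort(h)                  # reject lowest scores first
    faces_sorted = np.asarray(is_face, bool)[order]
    total = faces_sorted.sum()
    # Faces kept after rejecting the first r patches, for r = 0..N.
    kept = total - np.concatenate(([0], np.cumsum(faces_sorted)))
    rejected_frac = np.arange(len(h) + 1) / len(h)
    recall = kept / max(total, 1)
    return rejected_frac, recall
```

The curve stays at recall 1 for as long as every rejected patch is a non-face, which is the regime the experiments above measure.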
Thus 50 image patches that can be quickly labeled while leaking very little information can help Alice reject thousands of image patches. Next, we conducted the same experiment on a larger set of images, consisting of 65 of the CMU+MIT database images (the 65 images in the newtest directory of the CMU+MIT dataset). Figure 3 shows the results.

Figure 2: Examples of privacy-preserving feature selection and active learning. (a) The input images and the image patches (marked with white rectangles) selected by the active learning algorithm. (b) The response image computed by the online classifier (the black spots correspond to the positions of the selected image patches); brighter means a higher score. (c) The rejection/recall curve showing how many image patches can be safely rejected. For example, panel (c-1) shows that Alice can reject almost 50% of the image patches, based on her online classifier (i.e., the response image), and not miss a face that can be detected by the full classifier (which is known to Bob and not to Alice).

Figure 3: Privacy-preserving active learning. Results on a dataset of 65 images. The figure shows how many image patches can be rejected, based on the online classifier that Alice owns, without rejecting a face. The horizontal axis shows how many image patches are rejected, based on the on-line classifier, and the vertical axis shows how many faces are maintained. For example, the figure shows (dashed line) that rejecting 20% of all image patches, based on the on-line classifier, will maintain 80% of all faces. The solid line shows that rejecting 40% of all image patches, based on the on-line classifier, will not miss a face in at least half (i.e., the median) of the images in the dataset.

We found that, on average (dashed line),
using only 50 labeled image patches Alice can reject up to about 20% of the image patches in an image while keeping 80% of the faces in that image (i.e., Alice will reject 20% of the image patches that Bob's full classifier would classify as a face). If we look at the median (solid line), we see that for at least half the images in the dataset, Alice can reject a little more than 40% of the image patches without erroneously rejecting a face. We found that increasing the number of labeled examples from 50 to a few hundred does not greatly improve results unless many thousands of samples are labeled, at which point too much information might be leaked.

6 Conclusions

We described two machine learning methods to accelerate cryptographically secure classification protocols. The methods greatly accelerate the performance of the system while leaking a controlled amount of information. The two methods are a privacy-preserving feature selection that is similar to the information bottleneck, and an active learning technique that was found to be useful in learning a rejector from an extremely small number of labeled examples. We plan to keep investigating these methods, apply them to classification tasks in other domains, and develop new methods to make secure classification faster to use.

References

[1] S. Avidan and M. Butman. Blind vision. In Proc. of European Conference on Computer Vision, 2006.
[2] Y. Baram, R. El-Yaniv, and K. Luz. Online choice of active learning algorithms. Journal of Machine Learning Research, 5:255–291, March 2004.
[3] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting, 1998.
[4] O. Goldreich. Foundations of Cryptography: Volume 1, Basic Tools. Cambridge University Press, New York, 2001.
[5] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game or a completeness theorem for protocols with honest majority. In ACM Symposium on Theory of Computing, pages 218–229, 1987.
[6] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research, 3:1157–1182, 2003.
[7] Y. Lindell and B. Pinkas. Privacy preserving data mining. In Proceedings of CRYPTO, 2000.
[8] D. Pelleg and A. Moore. Active learning for anomaly and rare-category detection. In Advances in Neural Information Processing Systems 18, 2004.
[9] B. Schneier. Applied Cryptography. John Wiley & Sons, New York, 1996.
[10] N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. In Proc. of the 37th Allerton Conference on Communication and Computation, 1999.
[11] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research, 2:45–66, 2001.
[12] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Conference on Computer Vision and Pattern Recognition (CVPR), 2001.
[13] R. N. Wright and Z. Yang. Privacy-preserving Bayesian network structure computation on distributed heterogeneous data. In KDD '04: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 22–25, 2004.
[14] A. C. Yao. Protocols for secure computations. In Proc. 23rd IEEE Symp. on Foundations of Computer Science, pages 160–164, Chicago, 1982.
2006
Balanced Graph Matching Timothee Cour, Praveen Srinivasan and Jianbo Shi Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104 {timothee,psrin,jshi}@seas.upenn.edu

Abstract

Graph matching is a fundamental problem in Computer Vision and Machine Learning. We present two contributions. First, we give a new spectral relaxation technique for approximate solutions to matching problems that naturally incorporates one-to-one or one-to-many constraints within the relaxation scheme. The second is a normalization procedure for existing graph matching scoring functions that can dramatically improve the matching accuracy. It is based on a reinterpretation of the graph matching compatibility matrix as a bipartite graph on edges for which we seek a bistochastic normalization. We evaluate our two contributions on a comprehensive test set of random graph matching problems, as well as on an image correspondence problem. Our normalization procedure can be used to improve the performance of many existing graph matching algorithms, including spectral matching, graduated assignment and semidefinite programming.

1 Introduction

Many problems of interest in Computer Vision and Machine Learning can be formulated as a problem of correspondence: finding a mapping between one set of points and another. Because these point sets can have important internal structure, they are often considered not simply as point sets, but as two separate graphs. As a result, the correspondence problem is commonly referred to as graph matching. In this setting, graph nodes represent feature points extracted from each instance (e.g. a test image and a template image) and graph edges represent relationships between feature points. The problem of graph matching is to find a mapping between the two node sets that preserves as much as possible the relationships between nodes.
Because of its combinatorial nature, graph matching is either solved exactly in a very restricted setting (bipartite matching, for example with the Hungarian method) or approximately. Most of the recent literature on graph matching has followed this second path, developing approximate relaxations to the graph matching problem. In this paper, we make two contributions. The first is a spectral relaxation for the graph matching problem that incorporates one-to-one or one-to-many mapping constraints, represented as affine constraints. A new mathematical tool, Affinely Constrained Rayleigh Quotients, is developed for this purpose. Our method achieves performance comparable to state-of-the-art algorithms, while offering much better scalability. Our second contribution relates to the graph matching scoring function itself, which, we argue, is prone to systematic confusion errors. We show how a proper bistochastic normalization of the graph matching compatibility matrix is able to considerably reduce those errors and improve the overall matching performance. This improvement is demonstrated both for our spectral relaxation algorithm and for three state-of-the-art graph matching algorithms: spectral matching, graduated assignment and semidefinite programming.

2 Problem formulation

Attributed Graph. We define an attributed graph [1] as a graph G = (V, E, A) where each edge e = ij ∈ E is assigned an attribute A_e, which could be a real number or a vector in the case of multi-attributes. We represent vertex attributes as special edge attributes, i.e. A_ii for a vertex i. For example, the nodes could represent feature points with attributes for spatial location/orientation and image feature descriptors, while edge attributes could represent spatial relationships between two nodes, such as relative position/orientation.

Graph Matching Cost. Let G = (V, E, A), G′ = (V′, E′, A′) be two attributed graphs.
We want to find a mapping between V and V′ that best preserves the attributes between edges e = ij ∈ E and e′ = i′j′ ∈ E′. Equivalently, we seek a set of correspondences, or matches, M = {ii′} so as to maximize the graph matching score, defined as:

ε_GM(M) = Σ_{ii′∈M, jj′∈M} f(A_ij, A′_i′j′) = Σ_{e∼e′} f(A_e, A′_e′),   (1)

with the shorthand notation e ∼ e′ iff ii′ ∈ M and jj′ ∈ M. The function f(·,·) measures the similarity between edge attributes. As a special case, f(A_ii, A′_i′i′) is simply the score associated with the match ii′. In the rest of the paper, we let n = |V|, m = |E|, and likewise for n′, m′.

Formulation as Integer Quadratic Program. We explain here how to rewrite (1) in a more manageable form. Let us represent M as a binary vector x ∈ {0,1}^{nn′}: x_ii′ = 1 iff ii′ ∈ M. For most problems, one requires the matching to have a special structure, such as one-to-one or one-to-many: this is the mapping constraint. For one-to-one matching, the constraint is Σ_i′ x_ii′ = 1 and Σ_i x_ii′ = 1 (with x binary), and M is a permutation matrix. In general, this is an affine inequality constraint of the form Cx ≤ b. With these notations, (1) takes the form of an Integer Quadratic Program (IQP):

max ε(x) = x^T W x   s.t. Cx ≤ b, x ∈ {0,1}^{nn′}   (2)

W is an nn′ × nn′ compatibility matrix with W_{ii′,jj′} = f(A_ij, A′_i′j′). In general such an IQP is NP-hard, and approximate solutions are needed.

Graph Matching Relaxations. Continuous relaxations of the IQP (2) are among the most successful methods for non-bipartite graph matching, so we focus on them. We review three state-of-the-art matching algorithms: semidefinite programming (SDP) [2, 3], graduated assignment (GA) [4], and spectral matching (SM) [5]. We also introduce a new method, Spectral Matching with Affine Constraints (SMAC), that provides a tighter relaxation than SM (and more accurate results in our experiments) while still retaining the speed and scalability benefits of spectral methods, which we also quantify in our evaluations.
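As a concrete illustration of the IQP formulation, the following sketch builds the compatibility matrix W from two attribute matrices and evaluates the score x^T W x for a candidate matching. It is a naive O((nn′)^2) construction for small examples, with function names of our choosing.

```python
import numpy as np

def build_W(A, Ap, f):
    """Compatibility matrix for two attributed graphs (a sketch).

    A: (n, n) edge attributes of G, Ap: (n', n') attributes of G',
    f: similarity on attribute pairs. W[ii', jj'] = f(A[i, j], Ap[i', j']),
    with the pair ii' flattened to index i * n' + i'.
    """
    n, npr = A.shape[0], Ap.shape[0]
    W = np.zeros((n * npr, n * npr))
    for i in range(n):
        for ip in range(npr):
            for j in range(n):
                for jp in range(npr):
                    W[i * npr + ip, j * npr + jp] = f(A[i, j], Ap[ip, jp])
    return W

def matching_score(W, matches, npr):
    """IQP objective x^T W x for a list of matches [(i, i'), ...]."""
    x = np.zeros(W.shape[0])
    for i, ip in matches:
        x[i * npr + ip] = 1.0
    return x @ W @ x
```

Matching a graph against an identical copy with the identity correspondence recovers the maximal score (every edge pair contributes f(a, a)), while a wrong permutation scores no higher.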
All of these methods relax the original IQP into a continuous program (removing the x ∈ {0,1} constraint), so we omit this step in the derivations below.

SDP Relaxation. In [2], the authors rewrite the objective as a matrix inner product: x^T W x = ⟨X, W_eq⟩, where X = [1; x][1; x]^T is an (nn′+1) × (nn′+1) rank-one matrix and

W_eq = [ 0, d^T/2 ; d/2, W − D ],

where d = diag(W) and D is the diagonal matrix with diagonal d. The non-convex rank-one constraint is further relaxed by only requiring X to be positive semidefinite. Finally the relaxation is:

max ⟨X, W_eq⟩  s.t. ⟨X, C_eq^(i)⟩ ≤ b_eq^(i), X ⪰ 0,

for suitable C_eq, b_eq. The relaxation squares the problem size which, as we will see, prevents SDP from scaling to large problems.

Graduated Assignment. GA [4] relaxes the IQP into a non-convex quadratic program (QP) by removing the constraint x ∈ {0,1}. It then solves a sequence of convex approximations, each time by maximizing a Taylor expansion of the QP around the previous approximate solution. The accuracy of the approximation is controlled by a continuation parameter, annealed after each iteration.

Spectral Matching (SM). In [5], the authors drop the constraint Cx ≤ b during relaxation and only incorporate it during the discretization step. The resulting program, max x^T W x s.t. ||x|| = 1, which is the same as max x^T W x / x^T x, can be solved by computing the leading eigenvector x of W. It satisfies x ≥ 0 when W is nonnegative, by the Perron-Frobenius theorem.

3 Spectral Matching with Affine Constraints (SMAC)

We present here our first contribution, SMAC. Our method is closely related to the spectral matching formulation of [5], but we are able to impose affine constraints Cx = b on the relaxed solution. We demonstrate later that the ability to maintain this constraint, coupled with the scalability and speed of spectral methods, results in a very effective solution to graph matching. We solve the following:

max x^T W x / x^T x  s.t.
Cx = b (3) Note that for one-to-one matching the objective coincides with the IQP for binary x, since xᵀx = n. Computational Solution We can formulate (3) as maximization of a Rayleigh quotient under an affine constraint. While the case of linear constraints has been addressed previously [6], imposing affine constraints is novel. We fully address this class of problem in the supplementary material1 and give a brief summary here. The solution to (3) is given by the leading eigenpair of P_C W P_C x = λx, (4) where x is scaled so that Cx = b exactly. We introduced P_C = I_{nn′} − C_eqᵀ(C_eq C_eqᵀ)⁻¹ C_eq and C_eq = [I_{k−1}, 0] (C − (1/b_k) b C_k), where C_k, b_k denote the last row of C, b and k = # constraints. Discretization We show here how to tighten our approximation during the discretization step in the case of one-to-one matching (we can fall back to this case by introducing dummy nodes). Let us assume for a moment that n = n′. It is a well-known result that for any n × n matrix X, X is a permutation matrix iff X1 = Xᵀ1 = 1, X is orthogonal, and X ≥ 0 elementwise. We show that we can obtain a tighter relaxation by incorporating the first two (out of three) constraints as a post-processing step before the final discretization. We carry out the following steps even when n ≠ n′: 1) reshape the solution x of (3) into a n × n′ matrix X; 2) compute the best orthogonal approximation X_orth of X, which can be obtained from the SVD decomposition X = UΣVᵀ, similarly to [7]: X_orth = arg min {||X − Q|| : Q ∈ O(n, n′)} = UVᵀ, where O(n, n′) denotes the orthogonal matrices of R^{n×n′}; and 3) discretize X_orth like the other methods, as explained in the results section. The following proposition shows X_orth is orthogonal and satisfies the affine constraint, as promised. Proposition 3.1 (X_orth satisfies the affine constraint) If u is a left and right eigenvector of a matrix Y, then u is a left and right eigenvector of Y_orth. Corollary: when n = n′, X_orth 1 = X_orthᵀ 1 = 1. Proof: see supplementary materials.
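Step 2 of the discretization, the orthogonal approximation X_orth = UVᵀ, is a one-liner given an SVD routine. A sketch:

```python
import numpy as np

def best_orthogonal(X):
    # Nearest orthogonal matrix to X in Frobenius norm:
    # X = U S V^T  =>  arg min ||X - Q|| over Q in O(n, n') is U V^T.
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt
```

Consistent with Proposition 3.1, if 1 is a left and right eigenvector of X (e.g., X is bistochastic), then X_orth again has unit row and column sums.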
Note that in general, X and X_orth do not have the same eigenvectors; here we are fortunate because of the particular constraint induced by C, b. Computational Cost The cost of this algorithm is dominated by the computation of the leading eigenvector of (4), which is a function of two terms: 1) the number of matrix-vector operations required in an eigensolver (which we can fix, as convergence is fast in practice), and 2) the cost per matrix-vector operation. P_C is a full matrix, even when C is sparse, but we show that the operation y := P_C x can be computed in O(nn′) using the Sherman-Morrison formula (for one-to-one matching). Finally, the total complexity is proportional to the number of non-zero elements in W. If we assume a full matching, this is O(mm′), which is linear in the problem description length. 4 How Robust is the Matching? We ran extensive graph matching experiments on both real image graphs and synthetic graphs with the algorithms presented above. We noticed a clear trend: the algorithms get confused when there is ambiguity in the compatibility matrix. Figure 1 shows a typical example of what happens. We extracted a set of feature points (indexed by i and i′) in two airplane images, and for each edge e = ij in the first graph, we plotted the most similar edges e′ = i′j′ in the second graph. As we can see, the first edge plotted has many correspondences everywhere in the image and is therefore uninformative. The second edge, on the other hand, has correspondences with roughly only 5 locations; it is informative, and yet its contribution is outweighed by the first edge. The compatibility matrix is unbalanced. We illustrate next what happens with a synthetic example. 1http://www.seas.upenn.edu/~timothee/ Figure 1: Representative cliques for graph matching. Blue arrows indicate edges with high similarity, showing 2 groups: cliques of type 1 (pairing roughly horizontal edges in the 2 images) are uninformative; cliques of type 2 (pairing vertical edges) are distinctive.
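The eigensolver at the heart of SM and SMAC is just repeated matrix-vector products. A minimal power-iteration sketch (a Lanczos solver, as used in the paper, converges faster but has the same cost per matrix-vector step):

```python
import numpy as np

def leading_eigenvector(W, n_iter=500):
    # Power iteration: x_{t+1} = W x_t / ||W x_t||.  For elementwise
    # nonnegative W the limit is nonnegative (Perron-Frobenius).
    x = np.full(W.shape[0], 1.0 / np.sqrt(W.shape[0]))
    for _ in range(n_iter):
        x = W @ x
        x /= np.linalg.norm(x)
    return x
```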
Figure 2: Left: edges 12 and 13 are uninformative and make spurious connections of strength σ to all edges in the second graph. Edge 23 is informative and makes a single connection to the second graph, 2′3′. Middle: corresponding compatibility matrices W (top: before normalization, bottom: after normalization). Right: margin as a function of σ (difference between correct matching score and best runner-up score). Synthetic noise model example Let us look at a synthetic example to illustrate this concept, on which the IQP can be solved by brute force. Figure 2 shows two isomorphic graphs with 3 nodes. In our simple noise model, edges 12 and 13 are uninformative and make connections to every edge in the second graph, with strength σ (our noise parameter). The informative edge 23, on the other hand, only connects to 2′3′. We display W_{ii′,jj′} to visualize the connections. When the noise is small enough, the optimal matching is the desired permutation p∗ = {11′, 22′, 33′}, with an initial score of 8 for σ = 0. We computed the score of the second-best permutation as a function of σ (see the plot of the margin), showing that for σ greater than σ0 ≈ 1.6, p∗ is no longer optimal. W is unbalanced, with some edges making spurious connections, overwhelming the influence of other edges with few connections. This problem is not incidental: in fact, we argue this is the main source of confusion for graph matching. The next section introduces a normalization algorithm to address this problem. Figure 3: Left: matching compatibility matrix W and edge similarity matrix S. The shaded areas in each matrix correspond to the same entries. Right: graphical representation of S, W as a clique potential on i, i′, j, j′. 5 How to Balance the Compatibility Matrix As we saw in the previous section, a main source of confusion for graph matching algorithms is the imbalance in the compatibility matrix. This confusion occurs when an edge e ∈ E has many good potential matches e′ ∈ E′.
Such an edge is not discriminative and its influence should be decreased. On the other hand, an edge with a small number of good matches will help disambiguate the optimal matching; its influence should be increased. The following presents our second contribution, bistochastic normalization. 5.1 Dual Representation: Matching Compatibility Matrix W vs. Edge Similarity Matrix S The similarity function f(·, ·) can be interpreted in two ways: either as a similarity between edges ij ∈ E and i′j′ ∈ E′, or as a compatibility between match hypotheses ii′ ∈ M and jj′ ∈ M. We define the similarity matrix S of size m × m′ as S_{ij,i′j′} = f(A_{ij}, A′_{i′j′}), and (as before) the compatibility matrix W of size nn′ × nn′ as W_{ii′,jj′} = f(A_{ij}, A′_{i′j′}); see Figure 3. Each vertex i in the first graph should ideally match to a small number of vertices i′ in the second graph. Similarly, each edge e = ij ∈ E should also match to a small number of edges e′ = i′j′ ∈ E′. Although this constraint would be very hard to enforce, we approach this behavior by normalizing the influence of each edge. This corresponds to having each row and column in S (not W!) sum to one; in other words, S should be bistochastic. 5.2 Bistochastic Normalization of Edge Similarity Matrix S Recall that we are given a compatibility matrix W. Can we enforce its dual representation S to be bistochastic? One problem is that, even though W is square (of size nn′ × nn′), S could be rectangular (of size m × m′), in which case its rows and columns cannot both sum to 1. We define a m × m′ matrix B to be rectangular bistochastic if it satisfies B1_{m′} = 1_m and Bᵀ1_m = (m/m′)1_{m′}. We can formulate the normalization as solving the following balancing problem: Find (D, D′) diagonal matrices of order m, m′ s.t. DSD′ is rectangular bistochastic (5) We propose the following algorithm to solve (5), and then show its correctness. 1. Input: compatibility matrix W, of size nn′ × nn′ 2. Convert W to S: S_{ij,i′j′} = W_{ii′,jj′} 3.
repeat until convergence: (a) normalize the rows of S: S^{t+1}_{ij,i′j′} := S^t_{ij,i′j′} / Σ_{k′l′} S^t_{ij,k′l′} (b) normalize the columns of S: S^{t+2}_{ij,i′j′} := S^{t+1}_{ij,i′j′} / Σ_{kl} S^{t+1}_{kl,i′j′} 4. Convert S back to W; output W Proposition 5.1 (Existence and Uniqueness of (D, D′)) Under the condition S > 0 elementwise, Problem (5) has a unique solution (D, D′), up to a scale factor. D and D′ can be found by iteratively normalizing the rows and columns of S. Proof Let S̄ = S ⊗ 1_{m′×m}, which is square. Since S̄ > 0 elementwise, we can apply an existing version of (5.1) for square matrices [8]. We conclude the proof by noticing that normalizing rows and columns of S̄ preserves the Kronecker structure: D̄ S̄ D̄′ = (D ⊗ 1_{m′×m′})(S ⊗ 1_{m′×m})(D′ ⊗ 1_{m×m}) = mm′ DSD′ ⊗ 1_{m′×m}, and so (m²D, m′D′) is a solution for S iff (D̄, D̄′) is a solution for S̄. □ We illustrate in Figure 2 the improvement due to normalization on our earlier synthetic noise model example. Spurious correspondences are suppressed and informative correspondences such as W_{23,2′3′} are enhanced, which makes the correct correspondence clearer. The plot on the right shows that normalization makes the matching robust to arbitrarily large noise in this model, while unnormalized correspondences will eventually result in incorrect matchings. 6 Experimental Results Discretization and Implementation Details Because all of the methods described are continuous relaxations, a post-processing step is needed to discretize the continuous solution while satisfying the desired constraints. Given an initial solution estimate, GA finds a near-discrete local minimum of the IQP by solving a series of Taylor approximations. We can therefore use GA as follows: 1) initialize GA with the relaxed solution of each algorithm, and 2) discretize the output of GA with a simple greedy procedure described in [5]. Software: For SDP, we used the popular SeDuMi [9] optimization package.
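Steps 3(a)-(b) of the normalization algorithm are exactly Sinkhorn-style alternating normalization and fit in a few lines. A sketch for square S (the W↔S index reshuffling and the m/m′ column scaling of the rectangular case in (5) are omitted):

```python
import numpy as np

def bistochastic_normalize(S, n_iter=100):
    # Alternately normalize rows and columns of S (steps 3a-3b).
    # For S > 0 elementwise this converges to the unique bistochastic
    # rescaling D S D' of Proposition 5.1.
    S = S.astype(float).copy()
    for _ in range(n_iter):
        S /= S.sum(axis=1, keepdims=True)  # rows sum to 1
        S /= S.sum(axis=0, keepdims=True)  # columns sum to 1
    return S
```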
Spectral matching and SMAC were implemented using the standard Lanczos eigensolver available with MATLAB, and we implemented an optimized version of GA in C++. 6.1 One-to-one Attributed Graph Matching on Random Graphs Following [4], we performed a comprehensive evaluation of the 4 algorithms on random one-to-one graph matching problems. For each matching problem, we constructed a graph G with n = 20 nodes and m random edges (m = 10% of n² in a first series of experiments). Each edge ij ∈ E was assigned a random attribute A_{ij} distributed uniformly in [0, 1]. We then created a perturbed graph G′ by adding noise to the edge attributes: A′_{i′j′} = A_{p(i)p(j)} + noise, where p is a random permutation; the noise was distributed uniformly in [0, σ], with σ varying from 0 to 6. The compatibility matrix W was computed from the graphs G, G′ as follows: W_{ii′,jj′} = exp(−|A_{ij} − A′_{i′j′}|²), ∀ij ∈ E, i′j′ ∈ E′. For each noise level we generated 100 different matching problems and computed the average error rate by comparing the discretized matching to the ground-truth permutation. Effect of Normalization on each method We computed the average error rates with and without normalization of the compatibility matrix W for each method: SDP, GA, SM and SMAC; see Figure 4. We can see a dramatic improvement due to normalization, regardless of the relaxation method used. At higher noise levels, all methods had a 2- to 3-fold decrease in error rate. Comparison Across Methods We plotted the performance of all 4 methods using normalized compatibility matrices in Figure 5 (left), again with 100 trials per noise level. We can see that SDP and SMAC give comparable performance, while GA and especially SM do worse. These results validate SMAC with normalization as a state-of-the-art relaxation method for graph matching. Influence of edge density and graph size We experimented with varying edge density: noise σ = 2, n = 20, edge density varying from 10% to 100% in increments of 10%, with 20 trials per increment.
For SMAC, the normalization resulted in an average absolute error reduction of 60%, and for all density levels the reduction was at least 40%. For SDP, the respective figures were 31% and 20%. We also ran the same experiments with fixed edge density and varying graph sizes, from 10 to 100 nodes. For SMAC, normalization resulted in an average absolute error reduction of 52%; for all graph sizes the reduction was at least 40%. Figure 4: Comparison of matching performance with normalized and unnormalized compatibility matrices. Axes are error rate vs. noise level. In all cases (GA, SDP, SM, SMAC), the error rate decreases substantially. Scalability and Speed In addition to accuracy, scalability and speed of the methods are also important considerations. Matching problems arising from images and other sensor data (e.g., range scans) may have hundreds of nodes in each graph. As mentioned previously, the SDP relaxation squares the problem size (in addition to requiring expensive solvers), greatly impacting its speed and scalability. Figure 5 (middle and right) demonstrates this. For a set of random one-to-one matching problems of varying size n (horizontal axis), we averaged the time for computing the relaxed solution of all four methods (10 trials for each n). We can see that SDP scales quite poorly (almost 30 minutes for n = 30). In addition, on a machine with 2GB of RAM, SDP typically ran out of memory for n = 60. By contrast, SMAC and SM scale easily to much larger problems (n = 200).
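The random one-to-one benchmark of Section 6.1 is straightforward to reproduce. A vectorized sketch using full graphs (the 10% edge sub-sampling is omitted for brevity), with the stated W:

```python
import numpy as np

def random_problem(n=20, sigma=1.0, seed=0):
    # A_ij ~ U[0,1]; A'_{p(i)p(j)} = A_ij + U[0, sigma] for a random
    # permutation p; W_{ii',jj'} = exp(-|A_ij - A'_{i'j'}|^2).
    rng = np.random.default_rng(seed)
    A = rng.random((n, n))
    p = rng.permutation(n)
    A2 = np.empty((n, n))
    A2[np.ix_(p, p)] = A + rng.uniform(0.0, sigma, (n, n))
    # Difference tensor D[i, i', j, j'] = A[i, j] - A2[i', j'].
    D = A[:, None, :, None] - A2[None, :, None, :]
    return np.exp(-D ** 2).reshape(n * n, n * n), p
```

With σ = 0 the ground-truth permutation scores exactly n², since every edge pair contributes exp(0) = 1.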
6.2 Image correspondence We also tested the effect of normalization on a simple but instructive image correspondence task. In each of the two images to match, we formed a multiple-attribute graph by sub-sampling n = 100 Canny edge points as graph nodes. Each pair of feature points e = ij within 30 pixels was assigned two attributes: the angle ∠e of the vector from i to j, and its length d(e) = ||ij||. S was computed as follows: S(e, e′) = 1 iff cos(∠e′ − ∠e) > cos(π/8) and |d(e) − d(e′)| / min(d(e), d(e′)) < 0.5. By using simple geometric attributes, we emphasized the effect of normalization on the energy function, rather than on feature design. Figure 6 shows an image correspondence example between the two airplane images of Figure 1. We display the result of SMAC with and without normalization. Correspondence is represented by similarly colored dots. Clearly, normalization improved the correspondence result. Without normalization, large systematic errors are made, such as mapping the bottom of one plane to the top of the other. With normalization these errors are largely eliminated. Let us return to Figure 1 to see the effect of normalization on S(e, e′). As we saw, there are roughly 2 types of connections: 1) horizontal edges (uninformative) and 2) vertical edges (discriminative). Normalization exploits this disparity to enhance the latter: before normalization, each connection contributed up to 1.0 to the overall matching score. After normalization, connections of type 2 contributed up to 0.64 to the overall matching score, versus 0.08 for connections of type 1, i.e., 8 times more. We can view normalization as imposing an upper bound on the contribution of each connection: the upper bound is smaller for spurious matches and higher for discriminative matches.
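The geometric similarity of Section 6.2 follows directly from its definition (the helper name and the point-tuple interface are ours):

```python
import numpy as np

def edge_similarity(pi, pj, pi2, pj2):
    # S(e, e') = 1 iff the angle between e = ij and e' = i'j' is
    # within pi/8 and the relative length difference is below 0.5.
    v = np.asarray(pj, float) - np.asarray(pi, float)
    v2 = np.asarray(pj2, float) - np.asarray(pi2, float)
    d, d2 = np.linalg.norm(v), np.linalg.norm(v2)
    cos_angle = np.dot(v, v2) / (d * d2)  # cos(angle(e') - angle(e))
    return float(cos_angle > np.cos(np.pi / 8)
                 and abs(d - d2) / min(d, d2) < 0.5)
```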
7 Conclusion While recent literature mostly focuses on improving relaxation methods for graph matching problems, we contribute both an improved relaxation algorithm, SMAC, and a method for improving the energy function itself: graph balancing with bistochastic normalization. In our experiments, SMAC outperformed GA and SM with accuracy similar to SDP, and it also scaled much better than SDP. We motivate the normalization with an intuitive example, showing that it improves noise tolerance by enhancing informative matches and suppressing uninformative ones. The experiments we performed on random one-to-one matchings show that normalization dramatically improves both our relaxation method SMAC and the three other algorithms mentioned. We also demonstrated the value of normalization for establishing one-to-one correspondences between image pairs. Normalization imposes an upper bound on the score contribution of each edge in proportion to its saliency. Figure 5: Left: comparison of the different methods with normalized compatibility matrices. Axes: vertical is error rate averaged over 100 trials; horizontal is noise level. SMAC achieves comparable performance to SDP. Middle, right: computation times of the graph matching methods (log scale and linear scale, respectively). Figure 6: Image correspondence via SMAC with and without normalization; like colors indicate matches.
References
[1] M. Pelillo. A unifying framework for relational structure matching. In International Conference on Pattern Recognition, 1998.
[2] C. Schellewald and C. Schnörr. Probabilistic subgraph matching based on convex relaxation. In Energy Minimization Methods in Computer Vision and Pattern Recognition, 2005.
[3] P.H.S. Torr. Solving Markov random fields using semidefinite programming. In Artificial Intelligence and Statistics, 2003.
[4] S. Gold and A. Rangarajan. A graduated assignment algorithm for graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18, 1996.
[5] M. Leordeanu and M. Hebert. A spectral technique for correspondence problems using pairwise constraints. In International Conference on Computer Vision, October 2005.
[6] S.X. Yu and J. Shi. Grouping with bias. In Advances in Neural Information Processing Systems, 2001.
[7] G.L. Scott and H.C. Longuet-Higgins. An algorithm for associating the features of two images. Proceedings of the Royal Society of London B, 1991.
[8] P. Knopp and R. Sinkhorn. Concerning nonnegative matrices and doubly stochastic matrices. Pacific Journal of Mathematics, 21:343-348, 1967.
[9] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11-12:625-653, 1999. Special issue on Interior Point Methods.
Combining causal and similarity-based reasoning Charles Kemp, Patrick Shafto, Allison Berke & Joshua B. Tenenbaum Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139 {ckemp,shafto,berke,jbt}@mit.edu Abstract Everyday inductive reasoning draws on many kinds of knowledge, including knowledge about relationships between properties and knowledge about relationships between objects. Previous accounts of inductive reasoning generally focus on just one kind of knowledge: models of causal reasoning often focus on relationships between properties, and models of similarity-based reasoning often focus on similarity relationships between objects. We present a Bayesian model of inductive reasoning that incorporates both kinds of knowledge, and show that it accounts well for human inferences about the properties of biological species. 1 Introduction Will that berry taste good? Is that table strong enough to sit on? Predicting whether an object has an unobserved property is among the most basic of all inductive problems. Many kinds of knowledge appear to be relevant: different researchers emphasize the role of causal knowledge, similarity, category judgments, associations, analogical mappings, scripts, and intuitive theories, and each of these approaches accounts for an important subset of everyday inferences. Taken in isolation, however, each of these approaches is fundamentally limited. Humans draw on multiple kinds of knowledge and integrate them flexibly when required, and eventually our models should attempt to match this ability [1]. As an initial step towards this goal, we present a model of inductive reasoning that is sensitive both to causal relationships between properties and to similarity relationships between objects. The inductive problem we consider can be formalized as the problem of filling in missing entries in an object-property matrix (Figure 1). Previous accounts of inductive reasoning generally address some version of this problem. 
Models of causal reasoning [2] usually focus on relationships between properties (Figure 1a): if animal A has wings, for instance, it is likely that animal A can fly. Similarity-based models [3, 4, 5] usually focus on relationships between objects (Figure 1b): if a duck carries gene X, a goose is probably more likely than a pig to carry the same gene. Previous models, however, cannot account for inferences that rely on both similarity and causality: if a duck carries gene X and gene X causes enzyme Y to be expressed, it is likely that a goose expresses enzyme Y (Figure 1c). We develop a unifying model that handles inferences like this, and that subsumes previous probabilistic approaches to causal reasoning [2] and similarity-based reasoning [5, 6]. Our formal framework overcomes some serious limitations of the two approaches it subsumes. Approaches that rely on causal graphical models typically assume that the feature vectors of any two objects (any two rows of the matrix in Figure 1a) are conditionally independent given a causal network over the features. Suppose, for example, that the rows of the matrix correspond to people and the causal network states that smoking leads to lung cancer with probability 0.3. Suppose that Tim, Tom and Zach are smokers, that Tim and Tom are identical twins, and that Tim has lung cancer. The assumption of conditional independence implies that Tom and Zach are equally likely to suffer from lung cancer, a conclusion that seems unsatisfactory. The assumption is false because of variables that are unknown but causally relevant: variables capturing unknown biological and environmental factors that mediate the relationship between smoking and disease. Figure 1: (a) Models of causal reasoning generally assume that the rows of an object-feature matrix are conditionally independent given a causal structure over the features. These models are often used to make predictions about unobserved features of novel objects. (b) Models of similarity-based reasoning generally assume that columns of the matrix are conditionally independent given a similarity structure over the objects. These models are often used to make predictions about novel features. (c) We develop a generative model for object-feature matrices that incorporates causal relationships between features and similarity relationships between objects. The model uses both kinds of information to make predictions about matrices with missing entries. Dealing with these unknown variables is difficult, but we suggest that knowledge about similarity between objects can help. Since Tim is more similar to Tom than to Zach, our model correctly predicts that Tom is more likely to have lung cancer than Zach. Previous models of similarity-based reasoning [5, 6] also suffer from a restrictive assumption of conditional independence. This time the assumption states that features (columns of the matrix in Figure 1b) are conditionally independent given information about the similarity between objects. Empirical tests of similarity-based models often attempt to satisfy this assumption by using blank properties: subjects, for example, might be told that coyotes have property P, and asked to judge the probability that foxes have property P [3]. To a first approximation, inferences in tasks like this conform to judgments of similarity: subjects conclude, for example, that foxes are more likely to have property P than mice, since coyotes are more similar to foxes than to mice. People, however, find it natural to reason about properties that are linked to familiar properties, and that therefore violate the assumption of conditional independence.
Suppose, for instance, you learn that desert foxes have skin that is resistant to sunburn. It now seems that desert rats are more likely to share this property than arctic foxes, even though desert foxes are more similar in general to arctic foxes than to desert rats. Our model captures inferences like this by incorporating causal relationships between properties: in this case, having sunburn-resistant skin is linked to the property of living in the desert. Limiting assumptions of conditional independence can be avoided by specifying a joint distribution on an entire object-property matrix. Our model uses a distribution that is sensitive both to causal relationships between properties and to similarity relationships between objects. We know of no previous models that attempt to combine causality and similarity, and of one set of experiments that has been taken to suggest that people find it difficult to combine these sources of information [7]. After introducing our model, we present two experiments designed to test it. The results suggest that people are able to combine causality with similarity, and that our model accounts well for this capacity. 2 A generative model for object-feature matrices Consider first a probabilistic approach to similarity-based reasoning. Assume that S_o is an object structure: a graphical model that captures relationships between a known set of objects (Figure 1b). Suppose, for instance, that the objects include a mouse, a rat, a squirrel and a sheep (o1 through o4). S_o can be viewed as a graphical model that captures phylogenetic relationships, or as a formalization of the intuitive similarity between these animals. Given some feature of interest, the feature values for all objects can be collected into an object vector v_o, and S_o specifies a distribution P(v_o) on these vectors.
We work with the case where (S_o, λ) is a tree-structured graphical model of the sort previously used by methods for Bayesian phylogenetics [8] and cognitive models of property induction [5, 6]. The objects lie at the leaves of the tree, and we assume that object vectors are binary vectors generated by a mutation process over the tree. This process has a parameter, λ, that represents the base rate of a novel feature: the expected proportion of objects with that feature. For instance, if λ is low, the model (S_o, λ) will predict that a novel feature will probably not be found in any of the animals, but if the feature does occur in exactly two of the animals, the mouse and the rat are a more likely pair than the mouse and the sheep. The mutation process can be formalized as a continuous-time Markov process with two states (off and on) and with infinitesimal matrix Q = [−λ, λ; 1 − λ, −(1 − λ)] (rows: off, on). We can generate object vectors from this model by imagining a binary feature spreading out over the tree from root to leaves. The feature is on at the root with probability λ, and the feature may switch states at any point along any branch. The parameter λ determines how easy it is to move between the on state and the off state. If λ is high, it will be easy for the Markov process to enter the on state, and difficult for it to leave once it is there. Consider now a probabilistic approach to causal reasoning. Assume that S_f is a feature structure: a graphical model that captures relationships between a known set of features (Figure 1a). The features, for instance, may correspond to enzymes, and S_f may capture the causal relationships between these enzymes. One possible structure states that enzyme f1 is involved in the production of enzyme f2, which is in turn involved in the production of enzyme f3. The feature values for any given object can be collected into a feature vector v_f, and S_f specifies a distribution P(v_f) on these vectors.
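The two-state mutation process has a closed-form transition matrix: from the rate matrix above one can check that Q² = −Q, so the matrix exponential collapses to expm(Qt) = I + (1 − e^{−t})Q. A sketch:

```python
import numpy as np

def transition_matrix(lam, t):
    # P(t) = expm(Q t) for Q = [[-lam, lam], [1-lam, -(1-lam)]]
    # (states: off, on).  Since Q^2 = -Q, expm(Qt) = I + (1 - e^{-t}) Q.
    Q = np.array([[-lam, lam], [1.0 - lam, -(1.0 - lam)]])
    return np.eye(2) + (1.0 - np.exp(-t)) * Q
```

Rows are proper distributions, and on long branches every state forgets its origin: each row tends to (1 − λ, λ), so a feature is ultimately present with probability λ, matching the base-rate interpretation of λ.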
Suppose now that we are interested in a model that combines the knowledge represented by S_f and S_o (Figure 1c). Given that the mouse expresses enzyme f1, for instance, a combined model should predict that rats are more likely than squirrels to express enzyme f2. Formally, we seek a distribution P(M), where M is an object-feature matrix and P(M) is sensitive to both the relationships between features and the relationships between animals. Given this distribution, Bayesian inference can be used to make predictions about the missing entries in a partially observed matrix. If the features in S_f happen to be independent (Figure 1b), we can assume that column i of the matrix is generated by (S_o, λ_i), where λ_i is the base rate of f_i. Consider then the case where S_f captures causal relationships between the features (Figure 1c). These causal relationships will typically depend on several hidden variables. Causal relationships between enzymes, for instance, are likely to depend on other biological variables, and the causal link between smoking and lung cancer is mediated by many genetic and environmental variables. Often little is known about these hidden variables, but to a first approximation we can assume that they respect the similarity structure S_o. In Figure 1c, for example, the unknown variables that mediate the relationship between f1 and f2 are more likely to take the same value in o1 and o2 than in o1 and o4. We formalize these intuitions by converting a probabilistic model S_f (Figure 2a) into an equivalent model S^D_f (Figure 2b) that uses a deterministic combination of independent random events. These random events include hidden but causally relevant variables. In Figure 2b, for example, the model S^D_f indicates that the effect e is deterministically present if the cause c is present and the transmission mechanism t is active, or if there is a background cause b that activates e.
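To see how a deterministic model plus independent root variables reproduces a probabilistic CPD, one can enumerate the roots of Figure 2b: e = (c ∧ t) ∨ b, with P(t = 1) = 0.8 and P(b = 1) = 0.1 (the root priors shown in the figure). A sketch:

```python
from itertools import product

def induced_cpt(p_t=0.8, p_b=0.1):
    # P(e = 1 | c) for the deterministic model e = (c and t) or b,
    # with independent roots t ~ Bern(p_t) and b ~ Bern(p_b).
    cpt = {}
    for c in (0, 1):
        p = 0.0
        for t, b in product((0, 1), repeat=2):
            if (c and t) or b:
                p += (p_t if t else 1 - p_t) * (p_b if b else 1 - p_b)
        cpt[c] = p
    return cpt
```

With these priors the enumeration gives P(e = 1 | c = 0) = p_b = 0.1 and P(e = 1 | c = 1) = 1 − (1 − p_b)(1 − p_t) = 0.82: a noisy-OR style CPD built from purely deterministic mechanics.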
The model S^D_f is equivalent to S_f in the sense that both models induce the same distribution over the variables that appear in S_f. In general there will be many models S^D_f that meet this condition, and there are algorithms which convert S_f into one of these models [9]. For some applications it might be desirable to integrate over all of these models, but here we attempt to choose the simplest: the model S^D_f with the fewest variables. Figure 2: (a) A graphical model S_f that captures a probabilistic relationship between a cause c and an effect e. (b) A deterministic model S^D_f that induces the same joint distribution over c and e. t indicates whether the mechanism of causal transmission between c and e is active, and b indicates whether e is true owing to a background cause independent of c. All of the root variables (c, t and b) are independent, and the remaining variables (e) are deterministically specified once the root variables are fixed. (c) A graphical model created by combining S^D_f with a tree-structured representation of the similarity between three objects. The root variables in S^D_f (c, t, and b) are independently generated over the tree. Note that the arrows on the edges of the combined model have been suppressed. Given a commitment to a specific deterministic model, we assume that the root variables in S^D_f are independently generated over S_o. More precisely, suppose that the base rate of the ith variable in S^D_f is λ_i. The distribution P(M) we seek must meet two conditions (note that each candidate matrix M now has a column for each variable in S^D_f). First, the marginal distribution on each row must match the distribution specified by S^D_f.
Second, if fi is a root variable in SD f , the marginal distribution on column i must match the distribution specified by (So, λi). There is precisely one distribution P(M) that satisfies these conditions, and we can represent it using a graphical model that we call the combined model. Suppose that there are n objects in So. To create the combined model, we first introduce n copies of SD f . For each root variable i in SD f , we now connect all copies of variable i according to the structure of So (Figure 2c). The resulting graph provides the topology of the combined model, and the conditional probability distributions (CPDs) are inherited from So and SD f . Each node that belongs to the ith copy of So inherits a CPD from (So, λi), and all remaining nodes inherit a (deterministic) CPD from Sf. Now that the distribution P(M) is represented as a graphical model, standard inference techniques can be used to compute the missing entries in a partially-observed matrix M. All results in this paper were computed using the implementation of the junction tree algorithm included in the Bayes Net toolbox [10].

3 Experiments

When making inductive inferences, a rational agent should exploit all of the information available, including causal relationships between features and similarity relationships between objects. Whether humans are able to meet this normative standard is not clear, and almost certainly varies from task to task. On one hand, there are motivating examples like the case of the three smokers where it seems natural to think about causal relationships and similarity relationships at the same time. On the other hand, Rehder [7] argues that causal information tends to overwhelm similarity information, and supports this conclusion with data from several tasks involving artificial categories.
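The generative behavior of the combined model described in Section 2 can be sketched by forward sampling. In the sketch below (our own approximation: So is replaced by a chain of four objects with an assumed per-edge flip probability, which plays the role of path length), each root variable is generated once over the object structure and the effect is then computed deterministically in each copy of SD f. Conditioned on o1 showing the effect, the nearby object o2 should show it more often than the distant object o4:

```python
import random

random.seed(0)
EDGE_FLIP = 0.1                         # assumed per-edge mutation probability
BASE = {"c": 0.5, "t": 0.8, "b": 0.1}   # assumed base rates for the roots

def sample_column(base_rate, n_objects=4):
    """Generate one root variable over a chain o1-o2-o3-o4 (a degenerate tree)."""
    vals = [1 if random.random() < base_rate else 0]
    for _ in range(n_objects - 1):
        prev = vals[-1]
        vals.append(1 - prev if random.random() < EDGE_FLIP else prev)
    return vals

def sample_effects():
    c = sample_column(BASE["c"])
    t = sample_column(BASE["t"])
    b = sample_column(BASE["b"])
    # Effects are deterministic given the roots: e_i = (c_i AND t_i) OR b_i.
    return [(ci and ti) or bi for ci, ti, bi in zip(c, t, b)]

# Monte Carlo check of the qualitative prediction.
near = far = total = 0
for _ in range(50000):
    e = sample_effects()
    if e[0]:
        total += 1
        near += e[1]
        far += e[3]
print(near / total, far / total)  # nearby object inherits the effect more often
```

The gap between the two conditional frequencies shrinks as EDGE_FLIP grows, mirroring the path-length parameter discussed for the experiments below.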
To help resolve these competing views, we designed several tasks where subjects were required to simultaneously reason about causal relationships between enzymes and similarity relationships between animals.

3.1 Experiment 1

Materials and Methods. 16 adults participated in this experiment. Subjects were asked to reason about the presence of enzymes in a set of four animals: a mouse, a rat, a sheep, and a squirrel. Each subject was trained on two causal structures, each of which involved three enzymes. Pseudobiological names like “dexotase” were used in the experiment, but here we will call the enzymes f1, f2 and f3. In the chain condition, subjects were told that f3 is known to be produced by several pathways, and that the most common pathway begins with f1, which stimulates production of f2, which in turn leads to the production of f3. In the common-effect condition, subjects were told that f3 is known to be produced by several pathways, and that one of the most common pathways involves f1 and the other involves f2. To reinforce each causal structure, subjects were shown 20 cards representing animals from twenty different mammal species (names of the species were not supplied).

Figure 3: Experiment 1: Behavioral data (column 1) and predictions for three models. (a) Results for the chain condition. Known test results are marked with arrows: in task 1, subjects were told only that the mouse had tested positive for f1, and in task 2 they were told in addition that the rat had tested negative for f2. Error bars represent the standard error of the mean. (b) Results for the common-effect condition. [Per-panel correlation values omitted.]
The card for each animal represented whether that animal had tested positive for each of the three enzymes. The cards were chosen to be representative of the distribution captured by a causal network with known structure (chain or common-effect) and known parameterization. In the chain condition, for example, the network was a noisy-or network with the form of a chain, where leak probabilities were set to 0.4 (f1) or 0.3 (f2 and f3), and the probability that each causal link was active was set to 0.7. After subjects had studied the cards for as long as they liked, the cards were removed and subjects were asked several questions about the enzymes (e.g. “you learn about a new mammal—how likely is it that the mammal produces f3?”) The questions in this training phase were intended to encourage subjects to reflect on the causal relationships between the enzymes. In both conditions, subjects were told that they would be testing the four animals (mouse, rat, sheep and squirrel) for each of the three enzymes. Each condition included two tasks. In the chain condition, subjects were told that the mouse had tested positive for f1, and asked to predict the outcome of each remaining test (Figure 1c). Subjects were then told in addition that the rat had tested negative for f2, and again asked to predict the outcome of each remaining test. Note that this second task requires subjects to integrate causal reasoning with similarity-based reasoning: causal reasoning predicts that the mouse has f2, and similarity-based reasoning predicts that it does not. In the common-effect condition, subjects were told that the mouse had tested positive for f3, then told in addition that the rat had tested negative for f2. Ratings were provided on a scale from 0 (very likely to test negative) to 100 (very likely to test positive). Results. 
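The card-generating distribution for the chain condition can be enumerated exactly. A minimal sketch encoding the noisy-OR chain with the parameters quoted above (leaks 0.4, 0.3 and 0.3; link strength 0.7), computing each enzyme's marginal base rate, the quantity probed by training questions such as how likely a new mammal is to produce f3:

```python
from itertools import product

LEAK = {"f1": 0.4, "f2": 0.3, "f3": 0.3}
LINK = 0.7   # probability that each causal link f1 -> f2, f2 -> f3 is active

def p_chain(f1, f2, f3):
    """Joint probability under the noisy-OR chain f1 -> f2 -> f3."""
    p1 = LEAK["f1"] if f1 else 1 - LEAK["f1"]
    on2 = 1 - (1 - LEAK["f2"]) * (1 - LINK) ** f1   # P(f2 = 1 | f1)
    p2 = on2 if f2 else 1 - on2
    on3 = 1 - (1 - LEAK["f3"]) * (1 - LINK) ** f2   # P(f3 = 1 | f2)
    p3 = on3 if f3 else 1 - on3
    return p1 * p2 * p3

marg = {k: 0.0 for k in ("f1", "f2", "f3")}
for f1, f2, f3 in product([0, 1], repeat=3):
    pr = p_chain(f1, f2, f3)
    marg["f1"] += pr * f1
    marg["f2"] += pr * f2
    marg["f3"] += pr * f3
print(marg)  # f1: 0.4; f2: 0.4*0.79 + 0.6*0.3 = 0.496; f3: 0.54304
```

Sampling cards from this joint distribution (rather than enumerating it) is what produced the training decks shown to subjects.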
Subjects used the 100-point scale very differently: in task 1 of each condition, some subjects chose numbers between 80 and 100, and others chose numbers between 0 and 100. We therefore converted each set of ratings to z-scores. Average z-scores are shown in the first column of Figure 3, and the remaining columns show predictions for several models. In each case, model predictions have been converted from probabilities to z-scores to allow a direct comparison with the human data. Our combined model uses a tree over the four animals and a causal network over the features. We used the tree shown in Figure 1b, where objects o1 through o4 correspond to the mouse, the rat, the squirrel and the sheep. The tree component of our model has one free parameter—the total path length of the tree. The smaller the path length, the more likely it is that all four animals have the same feature values, and the greater the path length, the more likely it is that distant animals in the tree (e.g. the mouse and the sheep) will have different feature values. All results reported here use the same value of this parameter—the value that maximizes the average correlation achieved by our model across Experiments 1 and 2. The causal component of our model includes no free parameters, since we used the parameters of the network that generated the cards shown to subjects during the training phase. Comparing the first two columns of Figure 3, we see that our combined model accounts well for the human data. Columns 3 and 4 of Figure 3 show model predictions when we remove the similarity component (column 3) or the causal component (column 4) from our combined model. The model that uses the causal network alone is described by [2], among others, and the model that uses the tree alone is described by [6]. Both of these models miss qualitative trends evident in the human data.
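The per-subject z-score conversion used above is simple to reproduce with the standard library. The ratings below are made-up numbers for illustration:

```python
from statistics import mean, stdev

def zscore(ratings):
    """Convert one subject's ratings (0-100 scale) to z-scores."""
    m, s = mean(ratings), stdev(ratings)
    return [(r - m) / s for r in ratings]

# Two hypothetical subjects using the scale very differently,
# but with the same rank ordering of their judgments.
subject_a = [80, 90, 85, 100]   # compresses ratings into 80-100
subject_b = [10, 60, 35, 100]   # uses the full 0-100 range
print(zscore(subject_a))
print(zscore(subject_b))
```

After the conversion, both subjects' ratings have mean 0 and standard deviation 1, so they can be averaged and compared against model predictions on a common scale.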
In task 1 of each condition, the causal model makes identical predictions about the rat, the squirrel and the sheep: in task 1 of the chain condition, for example, it cannot use the similarity between the mouse and the rat to predict that the rat is also likely to test positive for f1. In task 1 of each condition the similarity model predicts that the unobserved features (f2 and f3 for the chain condition, and f1 and f2 for the common-effect condition) are distributed identically across the four animals. In task 1 of the chain condition, for example, the similarity model does not predict that the mouse is more likely than the sheep to test positive for f2 and f3. The limitations of the causal and similarity models suggest that some combination of causality and similarity is necessary to account for our data. There are likely to be approaches other than our combined model that account well for our data, but we suggest that accurate predictions will only be achieved when the causal network and the similarity information are tightly integrated. Simply averaging the predictions for the causal model and the similarity model will not suffice: in task 1 of the chain condition, for example, both of these models predict that the rat and the sheep are equally likely to test positive for f2, and computing an average across these models will result in the same prediction.

3.2 Experiment 2

Our working hypothesis is that similarity and causality should be combined in most contexts. An alternative hypothesis—the root-variables hypothesis—was suggested to us by Bob Rehder, and states that similarity relationships are used only if some of the root variables in a causal structure Sf are unobserved. For instance, similarity might have influenced inferences in the chain condition of Experiment 1 only because the root variable f1 was never observed for all four animals.
The root-variables hypothesis should be correct in cases where all root variables in the true causal structure are known. In Figure 2c, for instance, similarity no longer plays a role once the root variables are observed, since the remaining variables are deterministically specified. We are interested, however, in cases where Sf may not contain all of the causally relevant variables, and where similarity can help to make predictions about the effects of unobserved variables. Consider, for example, the case of the three smokers, where Sf states that smoking causes lung cancer. Even though the root variable is observed for Tim, Tom and Zach (all three are smokers), we still believe, having discovered that Tim has lung cancer, that Tom is more likely than Zach to suffer from lung cancer. The case of the three smokers therefore provides intuitive evidence against the root-variables hypothesis, and we designed a related experiment to explore this hypothesis empirically.

Materials and Methods. Experiment 2 was similar to Experiment 1, except that the common-effect condition was replaced by a common-cause condition. In the first task for each condition, subjects were told only that the mouse had tested positive for f1. In the second task, subjects were told in addition that the rat, the squirrel and the sheep had tested positive for f1, and that the mouse had tested negative for f2. Note that in the second task, values for the root variable (f1) were provided for all animals.

Figure 4: Experiment 2: Behavioral data and predictions for three models. In task 2 of each condition, the root variable in the causal network (f1) is observed for all four animals. [Per-panel correlation values omitted.]
18 adults participated in this experiment.

Results. Figure 4 shows mean z-scores for the subjects and for the three models described previously. The judgments for the first task in each condition replicate the finding from Experiment 1 that subjects combine causality and similarity when just one of the 12 animal-feature pairs is observed. The results for the second task rule out the root-variables hypothesis. In the chain condition, for example, the causal model predicts that the rat and the sheep are equally likely to test positive for f2. Subjects predict that the rat is less likely than the sheep to test positive for f2, and our combined model accounts for this prediction.

4 Discussion

We developed a model of inductive reasoning that is sensitive to causal relationships between features and similarity relationships between objects, and demonstrated in two experiments that it provides a good account of human reasoning. Our model makes three contributions. First, it provides an integrated view of two inductive problems—causal reasoning and similarity-based reasoning—that are usually considered separately. Second, unlike previous accounts of causal reasoning, it acknowledges the importance of unknown but causally relevant variables, and uses similarity to constrain inferences about the effects of these variables. Third, unlike previous models of similarity-based reasoning, our model can handle novel properties that are causally linked to known properties. For expository convenience we have emphasized the distinction between causality and similarity, but the notion of similarity needed by our approach will often have a causal interpretation. A tree-structured taxonomy, for example, is a simple representation of the causal process that generated biological species—the process of evolution. Our combined model can therefore be seen as a causal model that takes both relationships between features and evolutionary relationships between species into account.
More generally, our framework can be seen as a method for building sophisticated causal models, and our experiments suggest that these kinds of models will be needed to account for the complexity and subtlety of human causal reasoning. Other researchers have proposed strategies for combining probabilistic models [11], and some of these methods may account well for our data. In particular, the product of experts approach [12] should lead to predictions that are qualitatively similar to the predictions of our combined model. Unlike our approach, a product of experts model is not a directed graphical model, and does not support predictions about interventions. Neither of our experiments explored inferences about interventions, but an adequate causal model should be able to handle inferences of this sort. Causal knowledge and similarity are just two of the many varieties of knowledge that support inductive reasoning. Any single form of knowledge is a worthy topic of study, but everyday inferences often draw upon multiple kinds of knowledge. We have not provided a recipe for combining arbitrary forms of knowledge, but our work illustrates two general themes that may apply quite broadly. First, different generative models may capture different aspects of human knowledge, but all of these models use a common language: the language of probability. Probabilistic models are modular, and can be composed in many different ways to build integrated models of inductive reasoning. Second, the stochastic component of most generative models is in part an expression of ignorance. Using one model (e.g. a similarity model) to constrain the stochastic component of another model (e.g. a causal network) may be a relatively general method for combining probabilistic knowledge representations. Although we have focused on human reasoning, integrated models of induction are needed in many scientific fields. 
Our combined model may find applications in computational biology: predicting whether an organism expresses a certain gene, for example, should rely on phylogenetic relationships between organisms and causal relationships between genes. Related models have already been explored: Engelhardt et al. [13] develop an approach to protein function prediction that combines phylogenetic relationships between proteins with relationships between protein functions, and several authors have explored models that combine phylogenies with hidden Markov models. Combining two models is only a small step towards a fully integrated approach, but probability theory provides a lingua franca for combining many different representations of the world.

Acknowledgments

We thank Bob Rehder and Brian Milch for valuable discussions. This work was supported in part by AFOSR MURI contract FA9550-05-1-0321, the William Asbjornsen Albert memorial fellowship (CK) and the Paul E. Newton Chair (JBT).

References

[1] A. Newell. Unified theories of cognition. Harvard University Press, Cambridge, MA, 1989.
[2] B. Rehder and R. Burnett. Feature inference and the causal structure of categories. Cognitive Science, 50:264–314, 2005.
[3] D. N. Osherson, E. E. Smith, O. Wilkie, A. Lopez, and E. Shafir. Category-based induction. Psychological Review, 97(2):185–200, 1990.
[4] S. A. Sloman. Feature-based induction. Cognitive Psychology, 25:231–280, 1993.
[5] C. Kemp and J. B. Tenenbaum. Theory-based induction. In Proceedings of the Twenty-Fifth Annual Conference of the Cognitive Science Society, pages 658–663. Lawrence Erlbaum Associates, 2003.
[6] C. Kemp, T. L. Griffiths, S. Stromsten, and J. B. Tenenbaum. Semi-supervised learning with trees. In Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[7] B. Rehder. When similarity and causality compete in category-based property generalization. Memory and Cognition, 34(1):3–16, 2006.
[8] J. P. Huelsenbeck and F. Ronquist. MRBAYES: Bayesian inference of phylogenetic trees. Bioinformatics, 17(8):754–755, 2001.
[9] D. Poole. Probabilistic Horn abduction and Bayesian networks. Artificial Intelligence, 64(1):81–129, 1993.
[10] K. Murphy. The Bayes Net Toolbox for MATLAB. Computing Science and Statistics, 33:1786–1789, 2001.
[11] C. Genest and J. V. Zidek. Combining probability distributions: a critique and an annotated bibliography. Statistical Science, 1(2):114–135, 1986.
[12] G. E. Hinton. Modelling high-dimensional data by combining simple experts. In Proceedings of the 17th National Conference on Artificial Intelligence. AAAI Press, 2000.
[13] B. E. Engelhardt, M. I. Jordan, and S. E. Brenner. A graphical model for predicting protein molecular function. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
Fast Discriminative Visual Codebooks using Randomized Clustering Forests

Frank Moosmann∗, Bill Triggs and Frederic Jurie
GRAVIR-CNRS-INRIA, 655 avenue de l’Europe, Montbonnot 38330, France
firstname.lastname@inrialpes.fr

Abstract

Some of the most effective recent methods for content-based image classification work by extracting dense or sparse local image descriptors, quantizing them according to a coding rule such as k-means vector quantization, accumulating histograms of the resulting “visual word” codes over the image, and classifying these with a conventional classifier such as an SVM. Large numbers of descriptors and large codebooks are needed for good results and this becomes slow using k-means. We introduce Extremely Randomized Clustering Forests – ensembles of randomly created clustering trees – and show that these provide more accurate results, much faster training and testing and good resistance to background clutter in several state-of-the-art image classification tasks.

1 Introduction

Many of the most popular current methods for image classification represent images as collections of independent patches characterized by local visual descriptors. Patches can be sampled densely [18, 24], randomly [15], or at selected salient points [14]. Various local descriptors exist with different degrees of geometric and photometric invariance, but all encode the local patch appearance as a numerical vector and the more discriminant ones tend to be high-dimensional. The usual way to handle the resulting set of descriptor vectors is to vector quantize them to produce so-called textons [12] or visual words [5, 22]. The introduction of such visual codebooks has allowed significant advances in image classification, especially when combined with bag-of-words models inspired by text analysis [5, 7, 22, 24, 25]. There are various methods for creating visual codebooks.
K-means clustering is currently the most common [5, 22] but mean-shift [9] and hierarchical k-means [17] clusterers have some advantages. These methods are generative but some recent approaches focus on building more discriminative codebooks [20, 24]. The above methods give impressive results but they are computationally expensive owing to the cost of assigning visual descriptors to visual words during training and use. Tree based coders [11, 17, 23] are quicker but (so far) somewhat less discriminative. It seems to be difficult to achieve both speed and good discrimination. This paper contributes two main ideas. One is that (small) ensembles of trees eliminate many of the disadvantages of single tree based coders without losing the speed advantages of trees. The second is that classification trees contain a lot of valuable information about locality in descriptor space that is not apparent in the final class labels. One can exploit this by training them for classification then ignoring the class labels and using them as “clustering trees” – simple spatial partitioners that assign a distinct region label (visual word) to each leaf. Combining these ideas, we introduce Extremely Randomized Clustering Forests (ERC-Forests) – ensembles of randomly created clustering trees. We show that these have good resistance to background clutter and that they provide much faster training and testing and more accurate results than conventional k-means in several state-of-the-art image classification tasks. In the rest of the paper, we first explain how decision trees can provide good visual vocabularies, then we describe our approach and present experimental results and conclusions.

∗Current address: Institute of Measurement and Control, University of Karlsruhe, Germany. Contact: moosmann@mrt.uka.de

Figure 1: Using ERC-Forests as visual codebooks in bag-of-feature image classification.
2 Tree Structured Visual Dictionaries

Our overall goal is to classify images according to the object classes that they contain (see figure 1). We will do this by selecting or sampling patches from the image, characterizing them by vectors of local visual descriptors and coding (quantizing) the vectors using a learned visual dictionary, i.e. a process that assigns discrete labels to descriptors, with similar descriptors having a high probability of being assigned the same label. As in text categorization, the occurrences of each label (“visual word”) are then counted to build a global histogram (“bag of words”) summarizing the image (“document”) contents. The histogram is fed to a classifier to estimate the image’s category label. Unlike text, visual ‘words’ are not intrinsic entities and different quantization methods can lead to very different performances. Computational efficiency is important because a typical image yields 10^3–10^4 local descriptors and data sets often contain thousands of images. Also, many of the descriptors generally lie on the background not the object being classified, so the coding method needs to be able to learn a discriminative labelling despite considerable background ‘noise’.

K-means and tree structured codes. Visual coding based on K-means vector quantization is effective but slow because it relies on nearest neighbor search, which remains hard to accelerate in high dimensional descriptor spaces despite years of research on spatial data structures (e.g. [21]). Nearest neighbour assignments can also be somewhat unstable: in high dimensions, concentration of measure [2] tends to ensure that there are many centres with similar distances to any given point.
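The pipeline described at the start of this section reduces to a few lines of bookkeeping. In the sketch below, `coder` is a hypothetical stand-in for whatever quantizer is used (k-means assignment, a clustering tree, etc.), and the toy 1-D "descriptors" are our own illustration:

```python
from collections import Counter

def bag_of_words(descriptors, coder, vocab_size):
    """Quantize each local descriptor and accumulate a normalized histogram."""
    counts = Counter(coder(d) for d in descriptors)
    total = sum(counts.values())
    return [counts[w] / total for w in range(vocab_size)]

def coder(d):
    # Hypothetical quantizer: buckets a scalar "descriptor" into 4 visual words.
    return min(int(d * 4), 3)

hist = bag_of_words([0.05, 0.1, 0.6, 0.9, 0.95], coder, vocab_size=4)
print(hist)  # [0.4, 0.0, 0.2, 0.4] -- the image signature fed to the classifier
```

Everything that distinguishes the coding methods compared in this paper happens inside `coder`; the histogram and classifier stages are unchanged.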
Component-wise decision trees offer logarithmic-time coding, but individual trees can rarely compete with a good K-means coding: each path through the tree typically accesses only a few of the feature dimensions, so there is little scope for producing a consensus over many different dimensions. Nistér et al. [17] introduced a tree coding based on hierarchical K-means. This uses all components and gives a good compromise between speed and loss of accuracy.

Random forests. Despite their popularity, we believe that K-means codes are not the best compromise. No single data structure can capture the diversity and richness of high dimensional descriptors. To do this an ensemble approach is needed. The theoretical and practical performance of ensemble classifiers is well documented [1]. Ensembles of random trees [4] seem particularly suitable for visual dictionaries owing to their simplicity, speed and performance [11]. Sufficiently diverse trees can be constructed using randomized data splits or samples [4]. Extremely Randomized Trees (see below) take this further by randomizing both attribute choices and quantization thresholds, obtaining even better results [8]. Compared to standard approaches such as C4.5, ER tree construction is rapid, depends only weakly on the dimensionality and requires relatively little memory.

Clustering forests. Methods such as [11, 8] classify descriptors by majority voting over the tree-assigned class labels. There are typically many leaves that assign a given class label. Our method works differently after the trees are built. It uses the trees as spatial partitioning methods not classifiers, assigning each leaf of each tree a distinct region label (visual word). For the overall image classification tasks studied here, histograms of these leaf labels are then accumulated over the whole image and a global SVM classifier is applied.
Our approach is thus related to clustering trees – decision trees whose leaves define a spatial partitioning or grouping [3, 13]. Such trees are able to find natural clusters in high dimensional spaces. They can be built without external class labels, but if labels are available they can be used to guide the tree construction. Ensemble methods and particularly forests of extremely randomized trees again offer considerable performance advantages here. The next section shows how such Extremely Randomized Clustering Forests can be used to produce efficient visual vocabularies for image classification tasks.

3 Extremely Randomized Clustering Forests (ERC-Forests)

Our goal is to build a discriminative coding method. Our method starts by building randomized decision trees that predict class labels y from visual descriptor vectors d = (f1, . . . , fD), where fi, i = 1, . . . , D are elementary scalar features. For notational simplicity we assume that all of the descriptors from a given image share the same label y. We train the trees using a labeled (for now) training set L = {(dn, yn), n = 1, . . . , N}. However we use the trees only for spatial coding, not classification per se. During a query, for each descriptor tested, each tree is traversed from the root down to a leaf and the returned label is the unique leaf index, not the (set of) descriptor label(s) y associated with the leaf.

ERC-Trees. The trees are built recursively top down. At each node t corresponding to descriptor space region Rt, two children l, r are created by choosing a boolean test Tt that divides Rt into two disjoint regions, Rt = Rl ∪ Rr with Rl ∩ Rr = ∅. Recursion continues until further subdivision is impossible: either all surviving training examples belong to the same class or all have identical values for all attributes. We use thresholds on elementary features as tests, Tt = {fi(t) ≤ θt} for some feature index i(t) and threshold θt. The tests are selected randomly as follows.
A feature index i(t) is chosen randomly, a threshold θt is sampled randomly from a uniform distribution, and the resulting node is scored over the surviving points using Shannon entropy [8]: Sc(C, T) = 2 I(C, T) / (HC + HT), where HC denotes the entropy of the class label distribution, HT the entropy of the partition induced by the test, and I(C, T) their mutual information. High scores indicate that the split separates the classes well. This procedure is repeated until the score is higher than a fixed threshold Smin or until a fixed maximum number Tmax of trials have been made. The test Tt that achieved the highest score is adopted and the recursion continues. The parameters (Smin, Tmax) control the strength and randomness of the generated trees. High values (e.g. (1, D) for normal ID3 decision tree learning) produce highly discriminant trees with little diversity, while Smin = 0 or Tmax = 1 produce completely random trees.

ERC-Forests. Compared to standard decision tree learning, the trees built using random decisions are larger and have higher variance. Class label variance can be reduced by voting over the ensemble of trees (e.g. [15]), but here, instead of voting we treat each leaf in each tree as a separate visual word and stack the leaf indices from each tree into an extended code vector for each input descriptor, leaving the integration of votes to the final classifier. The resulting process is reminiscent of spatial search algorithms based on random line projections (e.g. [10]), with each tree being responsible for distributing the data across its own set of clusters. Classifier forests are characterized by Breiman’s bound on the asymptotic generalization error [4], PE* ≤ ρ(1 − s²)/s², where s measures the strength of the individual trees and ρ measures the correlation between them in terms of the raw margin. It would be interesting to optimize Smin and Tmax to minimize the bound but we have not yet tried this.
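The randomized test selection can be written compactly. The sketch below is our own stdlib-only reading of the procedure: draw up to Tmax random (feature, threshold) pairs, score each with Sc(C, T) = 2 I(C, T) / (HC + HT), stop early once Smin is exceeded, and otherwise keep the best test found. The toy data points are an assumed illustration:

```python
import math
import random
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values() if c)

def score(data, labels, feat, thresh):
    """Sc(C, T) = 2 I(C, T) / (H_C + H_T) for the boolean test f_feat <= thresh."""
    left = [y for d, y in zip(data, labels) if d[feat] <= thresh]
    right = [y for d, y in zip(data, labels) if d[feat] > thresh]
    if not left or not right:
        return 0.0
    h_c = entropy(labels)                                  # class entropy
    h_t = entropy([0] * len(left) + [1] * len(right))      # partition entropy
    cond = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    mutual = h_c - cond                                    # I(C, T)
    return 2 * mutual / (h_c + h_t) if h_c + h_t else 0.0

def pick_test(data, labels, t_max=50, s_min=0.5):
    """Extremely randomized test selection for one node."""
    best_s, best_test = -1.0, None
    for _ in range(t_max):
        feat = random.randrange(len(data[0]))
        lo, hi = min(d[feat] for d in data), max(d[feat] for d in data)
        thresh = random.uniform(lo, hi)
        s = score(data, labels, feat, thresh)
        if s > best_s:
            best_s, best_test = s, (feat, thresh)
        if s >= s_min:
            break
    return best_s, best_test

random.seed(1)
data = [[0.1, 0.9], [0.2, 0.8], [0.8, 0.1], [0.9, 0.2]]
labels = [0, 0, 1, 1]
s, (feat, thresh) = pick_test(data, labels)
print(s, feat, thresh)  # a clean split of this separable toy data scores 1.0
```

Growing a full ERC-tree is just this selection applied recursively to the two sides of the chosen test until the stopping conditions in the text are met.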
Experimentally, the trees appear to be rather diverse while still remaining relatively strong, which should lead to good error bounds.

Application to visual vocabularies. In the experiments below, local features are extracted from the training images by sampling sub-windows at random positions and scales1 and coding them using a visual descriptor function. An ERC-Forest is then built using the given class labels. To control the codebook size, we grow the trees fully then prune them back bottom up, recursively removing the node with the lowest gain until either a specified threshold on the gain or a specified number of leaves is reached. One can also prune during construction, which is faster but does not allow the number of leaf nodes to be controlled directly. In use, the trees transform each descriptor into a set of leaf node indices with one element from each tree. Votes for each index are accumulated into a global histogram and used for classification as in any other bag of features approach. Independently of the codebook, the denser the sampling the better the results, so typically we sample images more densely during testing than during codebook training, cf. [18].

1For image classification, dense enough random sampling eventually outperforms keypoint based sampling [18].

Figure 2: Left: example images from GRAZ-02. The rows are respectively Bikes (B), Cars (C) and background (N). Right: some test patches that were assigned to a particular ‘car’ leaf (left) and a particular ‘bike’ one (right).

Computational complexity. The worst-case complexity for building a tree is O(Tmax N k), where N is the number of patches and k is the number of clusters/leaf nodes before pruning. With adversarial data the method cannot guarantee balanced trees so it cannot do better than this, but in our experiments on real data we always obtained well balanced trees at a practical complexity of around O(Tmax N log k).
The dependence on data dimensionality D is hidden in the constant Tmax, which needs to be set large enough to filter out irrelevant feature dimensions, thus providing better coding and more balanced trees. A value of Tmax ∼ O(√D) has been suggested [8], leading to a total complexity of O(√D N log k). In contrast, k-means has a complexity of O(DNk) which is more than 10^4 times larger for our 768-D wavelet descriptor with N = 20000 image patches and k = 5000 clusters, not counting the number of iterations that k-means has to perform. Our method is also faster in use – a useful property given that reliable image classification requires large numbers of subwindows to be labelled [18, 24]. Labeling a descriptor with a balanced tree requires O(log k) operations whereas k-means costs O(kD).

4 Experiments

We present detailed results on the GRAZ-02 test set, http://www.emt.tugraz.at/~pinz/data/. Similar conclusions hold for two other sets that we tested, so we comment only briefly on these. GRAZ-02 (figure 2-left) contains three object categories – bicycles (B), cars (C), persons (P) – and negatives (N, meaning that none of B,C,P are present). It is challenging in the sense that the illumination is highly variable and the objects appear at a wide range of different perspectives and scales and are sometimes partially hidden. It is also neutral with respect to background, so it is not possible to detect objects reliably based on context alone. We tested various visual descriptors. The best choice turns out to depend on the database. Our color descriptor uses raw HSL color pixels to produce a 768-D feature vector (16×16 pixels × 3 colors). Our color wavelet descriptor transforms this into another 768-D vector using a 16×16 Haar wavelet transform. Finally, we tested the popular grayscale SIFT descriptor [14], which returns 128-D vectors (4×4 histograms of 8 orientations). We measure performance with ROC curves and classification rates at equal error rate (EER).
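The 10^4 figure can be reproduced by plugging the quoted sizes into the two complexity expressions, treating them loosely as raw operation counts (with the assumptions Tmax ∼ √D and base-2 logarithms):

```python
import math

D, N, k = 768, 20000, 5000   # descriptor dim., training patches, clusters (from the text)

kmeans_build = D * N * k                        # O(DNk), per k-means iteration
erc_build = math.sqrt(D) * N * math.log2(k)     # O(sqrt(D) N log k)
print(kmeans_build / erc_build)                 # ratio exceeds 10^4

kmeans_label = k * D                            # O(kD) to label one descriptor
tree_label = math.log2(k)                       # O(log k) with a balanced tree
print(kmeans_label / tree_label)                # test-time speedup per descriptor
```

These are order-of-magnitude comparisons only; constant factors and the number of k-means iterations would shift the numbers further in the forests' favor.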
The method is randomized, so we report means and variances over 10 learning runs. We use Smin = 0.5, but the exact value is not critical. In contrast, Tmax has a significant influence on performance, so it is chosen using a validation set. For the 768-D color wavelet descriptor on the GRAZ-02 dataset, Tmax ≈ 50.

Figure 3: ‘Bike’ visual words for 4 different images. The brightness denotes the posterior probability for the visual word at the given image position to be labelled ‘bike’.

Our algorithm’s ability to produce meaningful visual words is illustrated in figure 3 (c.f. [16]). Each white dot corresponds to the center of an image sub-window that reached an unmixed leaf node for the given object category (i.e. all of the training vectors belonging to the leaf are labeled with that category). Note that even though the visual vocabulary has been learned on entire images without object segmentation, it is discriminative enough to detect local structures in the test images that correspond well with representative object fragments, as illustrated in figure 2 (right). The tests here were for individual object categories versus negatives (N). We took 300 images from each category, using the even-numbered images for training and the odd-numbered ones for testing. For Setting 1 tests we trained on whole images as in [19], while for Setting 2 we used the segmentation masks provided with the images to train on the objects alone, without background. For the GRAZ-02 database the wavelet descriptors gave the best performance. We report results for these on the two hardest categories, bikes and cars. For B vs. N we achieve an 84.4% average EER classification rate for Setting 1 and 84.1% for Setting 2, compared to 76.5% from Opelt et al. [19]. For C vs. N the respective figures are 79.9%, 79.8% and 70.7%. Remarkably, using segmentation masks during training does not improve image classification performance.
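The equal-error-rate measure behind the classification rates quoted above can be sketched as follows; the function and toy data are our own illustration, not the paper's evaluation code. The EER is the ROC operating point where the false-positive rate equals the false-negative rate, and the "classification rate at EER" is one minus that value.

```python
# Sketch of an equal-error-rate (EER) estimate from raw classifier
# scores: sweep thresholds and pick the one where FPR and FNR are
# closest, reporting their average there.
def equal_error_rate(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(scores)):
        fpr = sum(s >= t for s in neg) / len(neg)  # negatives accepted
        fnr = sum(s < t for s in pos) / len(pos)   # positives rejected
        if abs(fpr - fnr) < best_gap:
            best_gap, eer = abs(fpr - fnr), (fpr + fnr) / 2
    return eer

# Toy data: 4 positives, 4 negatives, one mis-ranked example on each side.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
print(equal_error_rate(scores, labels))  # 0.25 -> 75% classification rate at EER
```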
This suggests that the method is able to pick out the relevant information from a significant amount of clutter.

Comparing ERC-Forests with k-means and kd-clustering trees. Unless otherwise stated, 20 000 features (67 per image) were used to learn 1000 spatial bins per tree for 5 trees, and 8000 patches were sampled per image to build the resulting 5000-D histograms. The histograms are binarized by trivial thresholding at count 1 before being fed to the global linear SVM image classifier. We also tested histograms normalized to total sum 1, and thresholds chosen to maximize the mutual information of each dimension, but neither yielded better results for ERC-Forests. Fig. 4 gives some quantitative results on the bikes category (B vs. N). Fig. 4(a) shows the clear difference between our method and classical k-means for vocabulary construction. Note that we were not able to extend the k-means curve beyond 20 000 windows per image owing to prohibitive execution times. The figure also shows results for ‘unsupervised trees’ – ERC-Forests built without using the class labels during tree construction. The algorithm remains the same, but the node scoring function is defined as the ratio between the splits, so as to encourage balanced trees similar to randomized KD-trees. If only a few patches are sampled, this is as good as k-means and much faster. However, the spatial partition is so poor that with additional test windows the binarized histogram vectors become almost entirely filled with ones, so discrimination suffers. As the dotted line shows, using binarization thresholds that maximize the mutual information can fix this problem, but the results are still far below those of ERC-Forests. This comparison clearly shows the advantage of using supervision during clustering. Fig. 4(b) shows that codebooks with around 5000 entries (1000 per tree) suffice for good results. Fig.
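The coding pipeline just described (each descriptor dropped through every tree, leaf votes accumulated into one global histogram, then trivial binarization at count 1) can be sketched as follows. This is our own illustration, not the authors' code: the `code_image` helper and the hash-based stand-in "trees" are hypothetical.

```python
# Sketch of bag-of-features coding with a forest of quantization trees.
# Each tree maps a descriptor to a leaf index; the concatenated leaf
# histograms form the image vector fed to the linear SVM.
def code_image(descriptors, trees, leaves_per_tree=1000):
    hist = [0] * (len(trees) * leaves_per_tree)    # e.g. 5 trees -> 5000-D
    for d in descriptors:
        for t, tree in enumerate(trees):
            hist[t * leaves_per_tree + tree(d)] += 1
    return [1 if c >= 1 else 0 for c in hist]      # trivial thresholding at 1

# Toy stand-in "trees": hash the descriptor into one of 1000 leaves.
trees = [lambda d, i=i: hash((i, d)) % 1000 for i in range(5)]
binary_hist = code_image(["patchA", "patchB"], trees)
print(len(binary_hist), sum(binary_hist))
```

With real trees, denser sampling at test time simply adds more votes to the same fixed-length histogram, which is why test-time sampling density can exceed training density.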
4(c) shows that when the number of features used to build the codebooks is increased, the optimal codebook size also increases slightly. Also, if the trees are pruned too heavily they lose discriminative power: it is better to grow them fully and do without pruning. Fig. 4(d) shows that increasing the number of trees from 1 to 5 reduces the variance and increases the accuracy, with little improvement beyond this. Here, the number of leaves per tree was kept constant at 1000, so doubling the number of trees effectively doubles the vocabulary size.

Figure 4: Evaluation of the parameters for B vs. N in Setting 2: classification rate at the EER, averaged over trials. The error bars indicate standard deviations. See the text for further explanations.

We also tested our method on the 2005 Pascal Challenge dataset, http://www.pascalnetwork.org/challenges/VOC/voc2005. This contains four categories: motorbikes, bicycles, people and cars. The goal is to distinguish each category from the others. Just 73 patches per image (50 000 in total over the 648 training images) were used to build the codebook. The maximum patch size was 30% of the image size. SIFT descriptors gave the best results for coding. The chosen forest contained four 7500-leaf trees, producing a 30 000-D histogram. The results (M: 95.8%, B: 90.1%, P: 94%, C: 96%) were either similar to or up to 2% better than those of the frontrunners in the 2005 Pascal Challenge [6], but used less information and had much faster processing times. A 2.8 GHz P4 took around 20 minutes to build the codebook. Building the histograms for the 684 training and 689 test images with 10 000 patches per image took only a few hours. All times include both feature extraction and coding. We also compared our results with those of Marée et al. [15]. They use the same kind of tree structures to classify images directly, without introducing the vocabulary layer that we propose. Our EER error rates are consistently 5–10% better than theirs.
Finally, we tested the horse database from http://pascal.inrialpes.fr/data/horses. The task is difficult because the images were taken randomly from the internet and are highly variable in subject size, pose and visibility. Using SIFT descriptors we obtain an EER classification rate of 85.3%, which is significantly better than any other method we are aware of. 100 patches per image were used to build a codebook with 4 trees, and 10 000 patches per image were used for testing.

5 Conclusions

Bag-of-local-descriptors image classifiers give state-of-the-art results but require the quantization of large numbers of high-dimensional image descriptors into many label classes. Extremely Randomized Clustering Forests provide a rapid and highly discriminative approach to this that outperforms k-means based coding in training time and memory, testing time, and classification accuracy. The method can use unlabelled data, but it benefits significantly from labels when they are available. It is also resistant to background clutter, giving relatively clean segmentation and “pop-out” of foreground classes even when trained on images that contain significantly more background features than foreground ones. Although trained as classifiers, the trees are used as descriptor-space quantization rules, with the final classification being handled by a separate SVM trained on the leaf indices. This seems to be a promising approach for visual recognition, and may also prove beneficial in other areas such as object detection and segmentation.

References

[1] E. Bauer and R. Kohavi. An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning Journal, 36(1-2):105–139, 1999.
[2] K. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft. When is nearest neighbors meaningful? In Int. Conf. on Database Theory, pages 217–235, 1999.
[3] H. Blockeel, L. De Raedt, and J. Ramon. Top-down induction of clustering trees. In ICML, pages 55–63, 1998.
[4] L. Breiman. Random forests. Machine Learning Journal, 45(1):5–32, 2001.
[5] G. Csurka, C. Dance, L. Fan, J. Williamowski, and C. Bray. Visual categorization with bags of keypoints. In ECCV'04 Workshop on Statistical Learning in CV, pages 59–74, 2004.
[6] M. Everingham et al. (33 authors). The 2005 PASCAL visual object classes challenge. In F. d'Alche Buc, I. Dagan, and J. Quinonero, editors, Proc. 1st PASCAL Challenges Workshop. Springer LNAI, 2006.
[7] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from Google's image search. In ICCV, pages II: 1816–1823, 2005.
[8] P. Geurts, D. Ernst, and L. Wehenkel. Extremely randomized trees. Machine Learning Journal, 63(1), 2006.
[9] F. Jurie and B. Triggs. Creating efficient codebooks for visual recognition. In ICCV, 2005.
[10] H. Lejsek, F. H. Ásmundsson, B. Thór-Jónsson, and L. Amsaleg. Scalability of local image descriptors: A comparative study. In ACM Int. Conf. on Multimedia, Santa Barbara, 2006.
[11] V. Lepetit, P. Lagger, and P. Fua. Randomized trees for real-time keypoint recognition. In CVPR, volume 2, pages 775–781, 2005.
[12] T. Leung and J. Malik. Representing and recognizing the visual appearance of materials using three-dimensional textons. IJCV, 43(1):29–44, June 2001.
[13] B. Liu, Y. Xia, and P. S. Yu. Clustering through decision tree construction. In CIKM, pages 20–29, 2000.
[14] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2), 2004.
[15] R. Marée, P. Geurts, J. Piater, and L. Wehenkel. Random subwindows for robust image classification. In CVPR, volume 1, pages 34–40, 2005.
[16] F. Moosmann, D. Larlus, and F. Jurie. Learning saliency maps for object categorization. In ECCV'06 Workshop on the Representation and Use of Prior Knowledge in Vision, 2006.
[17] D. Nistér and H. Stewénius. Scalable recognition with a vocabulary tree. In CVPR, 2006.
[18] E. Nowak, F. Jurie, and B. Triggs. Sampling strategies for bag-of-features image classification. In ECCV, 2006.
[19] A. Opelt and A. Pinz. Object localization with boosting and weak supervision for generic object recognition. In SCIA, 2005.
[20] F. Perronnin, C. Dance, G. Csurka, and M. Bressan. Adapted vocabularies for generic visual categorization. In ECCV, 2006.
[21] U. Shaft, J. Goldstein, and K. Beyer. Nearest neighbor query performance for unstable distributions. Technical Report TR 1388, Dept. of Computer Science, Univ. of Wisconsin, 1998.
[22] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In ICCV, volume 2, pages 1470–1477, October 2003.
[23] J. Winn and A. Criminisi. Object class recognition at a glance. In CVPR video track, 2006.
[24] J. Winn, A. Criminisi, and T. Minka. Object categorization by learned universal visual dictionary. In ICCV, pages II: 1800–1807, 2005.
[25] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: A comprehensive study. Int. J. Computer Vision, to appear, 2006.