Learning Motion Style Synthesis from Perceptual Observations

Lorenzo Torresani, Riya, Inc., lorenzo@riya.com
Peggy Hackney, Integrated Movement Studies, pjhackney@aol.com
Christoph Bregler, New York University, chris.bregler@nyu.edu

Abstract

This paper presents an algorithm for synthesis of human motion in specified styles. We use a theory of movement observation (Laban Movement Analysis) to describe movement styles as points in a multi-dimensional perceptual space. We cast the task of learning to synthesize desired movement styles as a regression problem: sequences generated via space-time interpolation of motion capture data are used to learn a nonlinear mapping between animation parameters and movement styles in perceptual space. We demonstrate that the learned model can apply a variety of motion styles to pre-recorded motion sequences and that it can extrapolate styles not originally included in the training data.

1 Introduction

Human motion perception can generally be thought of as the result of the interaction of two factors, traditionally termed content and style. Content refers to the nature of the action in the movement (e.g. walking, reaching, etc.), while style denotes the particular way that action is performed. In computer animation, the separation of the underlying content of a movement from its stylistic characteristics is particularly important. For example, a system that can synthesize stylistic variations of a given action would be a useful tool for animators. In this work we address this problem by proposing a system that applies user-specified styles to motion sequences. Specifically, given as input a target motion style and an arbitrary animation or pre-recorded motion, we want to synthesize a novel sequence that preserves the content of the original input motion but exhibits a style similar to the user-specified target.
Our approach is inspired by two classes of methods that have successfully emerged within the genre of data-driven animation: sample-based concatenation methods, and techniques based on learned parametric models. Concatenative synthesis techniques [15, 1, 11] are based on the simple idea of generating novel movements by concatenating motion capture snippets. Since motion is produced by cutting and pasting pre-recorded examples, the resulting animations achieve realism similar to that of pure motion-capture playback. Snippet concatenation can produce novel content by generating arbitrarily complex new movements. However, this approach is restricted to synthesizing only the subset of styles originally contained in the input database. Sample-based concatenation techniques are unable to produce novel stylistic variations and cannot generalize style differences from the existing examples. In recent years, several machine learning animation systems [2, 12, 9] have been proposed that attempt to overcome some of these limitations. Unfortunately, most of these methods learn simple parametric motion models that are unable to fully capture the subtleties and complexities of human movement. As a consequence, animations resulting from these systems are often of low quality and limited realism. The technique introduced in this paper is a compromise between pure concatenative approaches and methods based on learned parametric models. The aim is to maintain the precision of motion capture data, while introducing the flexibility of style changes achievable by learned parametric models. Our system builds on the observation that stylistically novel, yet highly realistic animations can be generated via space-time interpolation of pairs of motion sequences.
We propose to learn not a parametric function of the motion, but rather a parametric function of how the interpolation or extrapolation weights applied to data snippets relate to the styles of the output sequences. This allows us to create motions with arbitrary styles without compromising animation quality. Several researchers have previously proposed the use of motion interpolation for synthesis of novel movement [18, 6, 10]. These approaches are based on the naïve assumption that motion interpolation produces styles corresponding precisely to the interpolation of the styles of the original sequences. In this paper we experimentally demonstrate that styles generated through motion interpolation are a rather complex function of the styles and contents of the original snippets. We propose to explicitly learn the mapping between motion blending parameters and the resulting animation styles. This enables our animation system not only to generate arbitrary stylistic variations of a given action, but, more importantly, to synthesize sequences matching user-specified stylistic characteristics. Our approach bears similarities with the Verbs and Adverbs work of Rose et al. [16], in which interpolation models parameterized by style attributes are learned for several actions, such as walking or reaching. Unlike this previously proposed algorithm, our solution can automatically identify sequences having similar content, and therefore does not require manual categorization of motions into classes of actions. This feature allows our algorithm to be used for style editing of sequences without content specification by the user. Additionally, while the Verbs and Adverbs system characterizes motion styles in terms of difficult-to-measure emotional attributes, such as sad or clueless, our approach relies on a theory of movement observation, Laban Movement Analysis, which describes styles by means of a set of rigorously defined perceptual attributes.
2 The LMA Framework

In the computer animation literature motion style is a vaguely defined concept. In our work, we describe motion styles according to a movement notation system called Laban Movement Analysis, or LMA [7]. We focus on a subset of Laban Movement Analysis: the "LMA-Effort" dimensions. This system does not attempt to describe the coarse aspects of a motion, e.g. whether someone is walking or swinging his/her arm. Instead, it targets the subtle differences in motion style, e.g. is the movement "bound" or "free"? Each LMA-Effort factor varies in intensity between opposing poles, and takes values in a continuous range. The factors are briefly described as follows:

1. The "LMA-Effort Factor of Flow" defines the continuity of the movement. The two opposing poles are "Free" (fluid, released) and "Bound" (controlled, contained, restrained).
2. The "LMA-Effort Factor of Weight" concerns the relationship of the movement to gravity. The two opposing extremes are "Light" (gentle, delicate, fine touch) and "Strong" (powerful, forceful, firm touch).
3. The "LMA-Effort Factor of Time" has to do with the person's inner attitude toward the time available, not with how long it takes to perform the movement. The two opposing poles are "Sudden" (urgent, quick) and "Sustained" (stretching the time, indulging).
4. The "LMA-Effort Factor of Space" describes the directness of the movement. Generally, additional features not present in motion capture data, such as eye gaze, are necessary to detect this factor.

We use only the LMA-Effort factors of Flow, Weight, and Time. We model styles as points in a three-dimensional perceptual space derived by translating the LMA-Effort notations for each of these factors into numerical values in the interval [−3, 3].

3 Overview of the system

The key idea of our work is to learn motion style synthesis from a training set of computer-generated animations.
The training animations are observed by a human expert who assigns LMA labels to each sequence. This set of supervised data is used to learn a mapping between the space of motion styles and the animation system parameters. We next provide a high-level description of our system, while the following sections give specific details of each component.

3.1 Training: Learning the Style of Motion Interpolation

In order to train our system to synthesize motion styles, we employ a corpus of human motion sequences recorded with a motion capture system. We represent the motion as a time-varying vector of joint angles. In the training stage each motion sequence is manually segmented by an LMA human expert into fragments corresponding to fundamental actions or units of motion. Let X_i denote the joint angle data of the i-th fragment in the database.

Step 1: Matching motion content. We apply a motion matching algorithm to identify fragment pairs (X_i, X_j) containing similar actions. Our motion matching algorithm is based on dynamic time warping. This allows us to compare kinematic contents while factoring out differences in timing or acceleration, which are more often associated with variations in style.

Step 2: Space-time interpolation. We use these motion matches to augment the database with new synthetically-generated styles: given matching motion fragments X_i, X_j, and an interpolation parameter α, space-time interpolation smoothly blends the kinematics and dynamics of the two fragments to produce a new motion X^α_{i,j} with novel, distinct style and timing.

Step 3: Style interpolation learning. Both the synthesized animations X^α_{i,j} and the "seed" motion capture data X_i are labeled with LMA-Effort values by an LMA expert. Let e_i and e^α_{i,j} denote the three-dimensional vectors encoding the LMA-Effort qualities of X_i and X^α_{i,j}, respectively. A non-linear regression model [5] is fitted to the LMA labels and the parameters α of the space-time interpolation algorithm.
This regression defines a function f predicting the LMA-Effort factors e^α_{i,j} from the style attributes and joint angle data of fragments i and j:

e^α_{i,j} = f(X_i, X_j, e_i, e_j, α)    (1)

This function-fitting stage allows us to learn how the knobs of our animation system relate to the perceptual space of movement styles.

3.2 Testing: Style Transfer

At the testing stage we are given a motion sequence Y and a user-specified motion style ē. The goal is to apply style ē to the input sequence Y, without modifying the content of the motion. First, we use dynamic time warping to segment the input sequence into snippets Y_i, such that each snippet matches the content of a set of analogous motions {X_{i_1}, ..., X_{i_K}} in the database. Among all possible pairwise blends X^α_{i_k,i_l} of examples in the set {X_{i_1}, ..., X_{i_K}}, we determine the one that provides the best approximation to the target style ē. This objective can be formulated as

(α*, k*, l*) ← argmin_{α,k,l} ||ē − f(X_{i_k}, X_{i_l}, e_{i_k}, e_{i_l}, α)||    (2)

The animation resulting from space-time interpolation of fragments X_{i_{k*}} and X_{i_{l*}} with parameter α* will exhibit content similar to that of snippet Y_i and style approximating the target ē. Concatenating these artificially-generated snippets produces the desired output.

4 Matching motion content

The objective of the matching algorithm is to identify pairs of sequences having similar motion content, i.e. consisting of analogous activities. The method should ignore variations in the style with which movements are performed. Previous work [2, 12] has shown that differences in movement style can be found by examining the parameters of timing and movement acceleration. By contrast, an action is primarily characterized by changes of body configuration in space rather than over time. Thus we compare the content of two motions by identifying similar spatial body poses while allowing for potentially large differences in timing.
Specifically, we define the content similarity between motion snippets X_i and X_j as the minimum sum of their squared joint angle differences SSD(X_i, X_j) under a dynamic time warping path. Let d(p, q) = ||X_i(p) − X_j(q)||² be our local measure of the distance between spatial body configurations X_i at frame p and X_j at frame q. Let T_i be the number of frames in sequence i and L the variable length of a time path w(n) = (p(n), q(n)) aligning the two snippets. We can then formally define SSD(X_i, X_j) as:

SSD(X_i, X_j) = min_w Σ_n d(w(n))    (3)

subject to the constraints:

p(1) = 1, q(1) = 1, p(L) = T_i, q(L) = T_j    (4)

if w(n) = (p, q) then w(n−1) ∈ {(p−1, q), (p−1, q−1), (p, q−1)}    (5)

We say that two motions i and j have similar content if SSD(X_i, X_j) is below a certain threshold.

5 Space-time interpolation

A time warping strategy is also employed to synthesize novel animations from the pairs of content-matching examples found by the algorithm outlined in the previous section. Given matching snippets X_i and X_j, the objective is to generate a stylistically novel sequence that maintains the content of the two original motions. The idea is to induce changes in style by acting primarily on the timings of the motions. Let w* = (p*, q*) be the path minimizing Equation 3. This path defines a time alignment between the two sequences. We can interpret the frame correspondences (p*(n), q*(n)) for n = 1, ..., L as discrete samples from a continuous 2D curve parameterized by n. Resampling X_i and X_j along this curve will produce synchronized versions of the two animations, but with new timings. Suppose parameter values n^0_1, ..., n^0_{T_i} are chosen such that p*(n^0_k) = k. Then X_i(p*(n^0_k)) will be replayed with its original timing. However, if we use these same parameter values on sequence X_j (i.e. we estimate joint angles X_j at time steps q*(n^0_k)), then the resampled motion will correspond to playing sequence j with the timing of sequence i.
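As a concrete illustration, the content-matching criterion of Equations (3)-(5) in Section 4 can be sketched in a few lines of Python. The function name and array layout are our own assumptions, not part of the paper's implementation, and the quadratic-time loop is kept simple rather than fast.

```python
import numpy as np

def content_ssd(Xi, Xj):
    """DTW-based content similarity between two motion snippets.

    Xi, Xj: arrays of shape (Ti, D) and (Tj, D) holding joint angles per
    frame. Returns the minimum summed squared distance over all monotone
    warping paths satisfying the constraints of Equations (4)-(5).
    """
    Ti, Tj = len(Xi), len(Xj)
    # local distances d(p, q) = ||Xi(p) - Xj(q)||^2
    d = ((Xi[:, None, :] - Xj[None, :, :]) ** 2).sum(axis=2)
    D = np.full((Ti, Tj), np.inf)   # accumulated cost table
    D[0, 0] = d[0, 0]               # endpoint constraint: start at (1, 1)
    for p in range(Ti):
        for q in range(Tj):
            if p == 0 and q == 0:
                continue
            # allowed predecessors: (p-1, q), (p-1, q-1), (p, q-1)
            prev = min(
                D[p - 1, q] if p > 0 else np.inf,
                D[p - 1, q - 1] if p > 0 and q > 0 else np.inf,
                D[p, q - 1] if q > 0 else np.inf,
            )
            D[p, q] = d[p, q] + prev
    return D[-1, -1]                # endpoint constraint: end at (Ti, Tj)
```

A sequence compared with a time-warped copy of itself (e.g. every frame duplicated) should score zero, since the warp path can absorb the timing change.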
Similarly, n^1_1, ..., n^1_{T_j} can be chosen such that q*(n^1_k) = k, and these parameter values can be used to synthesize motion i with the timing of motion j. It is also possible to smoothly interpolate between these two scenarios according to an interpolation parameter α ∈ [0, 1] to produce intermediate time warps. This will result in a time path of length T^α_{ij} = (1−α)T_i + αT_j. Let us indicate with n^α_1, ..., n^α_{T^α_{ij}} the path parameter values obtained from this time interpolation. New stylistic versions of motions i and j can be produced by estimating the joint angles X_i and X_j at p*(n^α_k) and q*(n^α_k), respectively. The two resulting sequences will move in synchrony according to the new intermediate timing. From these two synchronized sequences, a novel motion X^α_{i,j} can be generated by averaging the joint angles according to the mixing coefficients (1−α) and α:

X^α_{i,j}(k) = (1−α) X_i(p*(n^α_k)) + α X_j(q*(n^α_k))

The synthesized motion X^α_{i,j} will display content similar to that of X_i and X_j, but it will have a distinct style. We call this procedure "space-time interpolation", as it modifies both the spatial body configurations and the timings of the sequences.

6 Learning style interpolation

Given a pair of content-matching snippets X_i and X_j, our goal is to determine the parameter α that needs to be applied in space-time interpolation in order to produce a motion X^α_{i,j} exhibiting the target style ē. We propose to solve this task by learning to predict the LMA-Effort qualities of animations synthesized by space-time interpolation. The training data for this supervised learning task consists of the seed motion sequences {X_i} in the database, a set of interpolated motions {X^α_{i,j}}, and the corresponding LMA-Effort qualities {e_i}, {e^α_{i,j}} observed by an LMA human expert. In order to maintain a consistent data size, we stretch or shrink the time trajectories of the joint angles {X_i} to a set length.
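The space-time interpolation procedure of Section 5 might be sketched as follows, assuming the optimal warp path has already been computed (e.g. by dynamic time warping). The helper names and the linear resampling of fractional frame indices are our simplifications; the paper does not specify how fractional frames are interpolated.

```python
import numpy as np

def space_time_interpolate(Xi, Xj, path, alpha):
    """Blend two time-aligned motion snippets (Section 5 sketch).

    path: list of 0-indexed frame correspondences (p, q) from the
    optimal DTW alignment; alpha mixes both timing and pose.
    """
    path = np.asarray(path, dtype=float)          # shape (L, 2)
    Ti, Tj = path[-1, 0] + 1, path[-1, 1] + 1
    # new timing: T_alpha = (1 - alpha) * Ti + alpha * Tj
    T_alpha = int(round((1 - alpha) * Ti + alpha * Tj))
    # sample the warp curve at T_alpha evenly spaced parameter values n
    n = np.linspace(0, len(path) - 1, T_alpha)
    p = np.interp(n, np.arange(len(path)), path[:, 0])
    q = np.interp(n, np.arange(len(path)), path[:, 1])

    def sample(X, t):
        # linear interpolation of joint angles at fractional frame t
        lo = np.floor(t).astype(int)
        hi = np.minimum(lo + 1, len(X) - 1)
        w = (t - lo)[:, None]
        return (1 - w) * X[lo] + w * X[hi]

    # average the synchronized poses with weights (1 - alpha, alpha)
    return (1 - alpha) * sample(Xi, p) + alpha * sample(Xj, q)
```

With alpha = 0 and a diagonal warp path, the function simply replays X_i; with alpha = 1 it replays X_j with its own timing, matching the two limiting cases described above.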
In order to avoid overfitting, we further compress the motion data by projecting it onto a low-dimensional linear subspace computed using Principal Component Analysis (PCA). In many of the test cases, we found it sufficient to retain only the first two or three principal components to obtain a discriminative representation of the motion contents. Let c_i denote the vector containing the PCA coefficients computed from X_i, and let z^α_{i,j} = [c_i^T, c_j^T, e_i^T, e_j^T, α]^T. We pose the task of predicting LMA-Effort qualities as a function approximation problem: the goal is to learn the optimal parameters θ of a parameterized function f(z^α_{i,j}, θ) that models the dependencies between z^α_{i,j} and the observed LMA-Effort values e^α_{i,j}. The parameters θ are chosen so as to minimize the objective function:

E(θ) = U Σ_{i,j,α} L(f(z^α_{i,j}, θ) − e^α_{i,j}) + ||θ||²    (6)

where L is a general loss function and U is a regularization constant aimed at avoiding overfitting and improving generalization. We experimented with several function parameterizations and loss functions for our problem. The simplest of the adopted approaches is linear ridge regression [4], which corresponds to choosing the loss function L to be quadratic (i.e. L(·) = (·)²) and f to be linear in input space:

f(z, θ) = z^T θ    (7)

We also applied kernel ridge regression, which results from mapping the input vectors z into features of a higher-dimensional space via a nonlinear function Φ: z → Φ(z). In order to avoid the explicit computation of the vectors Φ(z_j) in the high-dimensional feature space, we apply the kernel trick and choose mappings Φ such that the inner product Φ(z_i)^T Φ(z_j) can be computed via a kernel function k(z_i, z_j) of the inputs. We compared the performance of kernel ridge regression with that of support vector regression [5].
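A minimal sketch of kernel ridge regression with a Gaussian RBF kernel, the non-linear regressor used here for style prediction. The closed-form dual solution is standard; the hyperparameter names (gamma, lam) and the function signatures are ours, not the authors' code.

```python
import numpy as np

def kernel_ridge_fit(Z, E, gamma=1.0, lam=0.1):
    """Fit kernel ridge regression in dual form.

    Z: (N, D) training inputs z^alpha_{i,j}; E: (N, 3) LMA-Effort
    targets. gamma is the RBF width, lam the regularization strength.
    Returns the (N, 3) dual coefficient matrix.
    """
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                           # Gram matrix
    return np.linalg.solve(K + lam * np.eye(len(Z)), E)

def kernel_ridge_predict(Z_train, A, z, gamma=1.0):
    """Predict the 3-D style vector for a single input z."""
    k = np.exp(-gamma * ((Z_train - z) ** 2).sum(-1))
    return k @ A
```

Note that prediction requires keeping all N training inputs, which is exactly the storage cost that support vector regression avoids, as discussed next.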
While kernel ridge regression requires us to store all training examples in order to evaluate the function f at a given input, support vector regression overcomes this limitation by using an ε-insensitive loss function [17]. The resulting f can be evaluated using only a subset of the training data, the set of support vectors.

7 Testing: Style Transfer

We can restate our initial objective as follows: given an input motion sequence Y in an unknown style, and a target motion style ē specified by LMA-Effort values, we want to synthesize a sequence having style ē and content analogous to that of motion Y. A naïve approach to this problem is to seek in the motion database a pair of sequences having content similar to Y and whose interpolation can approximate style ē. The learned function f can be used to determine the pair of motions and the interpolation parameter α that produce the best approximation to ē. However, such an approach is destined to fail, as Y can be an arbitrarily long and complex sequence, possibly consisting of several movements performed one after the other. As a consequence, we might not have in the database examples that match sequence Y in its entirety.

7.1 Input segmentation and matching

The solution that we propose is inspired by concatenative methods. The idea is to determine the concatenation of database motion examples [X_1, ..., X_N] that best matches the content of the input sequence Y. Our approach relies again on dynamic programming and can be interpreted as a generalization of the dynamic time warping technique presented in Section 4 to the case when a time alignment is sought between a given sequence and a concatenation of a variable number of examples chosen from a set. Let d(p, q, i) be the sum of squared differences between the joint angles of sequence Y at frame p and those of example X_i at frame q.
The goal is to recover the time warping path w(n) = (p(n), q(n), i(n)) that minimizes the global error

min_w Σ_n d(w(n))    (8)

subject to basic segment transition and endpoint constraints. Transition constraints are enforced to guarantee that time order is preserved and that no time frames are omitted. Endpoint constraints require that the time path starts at the beginning frames and finishes at the ending frames of the sequences. These conditions can be formalized as follows:

if w(n) = (p, 1, i), then w(n−1) ∈ {(p−1, 1, i)} ∪ {(p−1, T_j, j) for j = 1, ..., J}    (9)

if w(n) = (p, q, i) and q > 1, then w(n−1) ∈ {(p−1, q, i), (p−1, q−1, i), (p, q−1, i)}    (10)

p(1) = 1, q(1) = 1, p(L) = T, q(L) = T_{i(L)}    (11)

where J denotes the number of fragments in the database, L the length of the time warping path, T the number of frames of the input sequence, and T_j the length of the j-th fragment in the database.

Table 1: Mean squared error on LMA-Effort prediction for different function approximation methods

                Linear          Linear Ridge    Kernel Ridge    Support Vector
                Interpolation   Regression      Regression      Regression
Flow MSE        0.65            1.03            0.50            0.48
Weight MSE      0.97            1.04            0.39            0.48
Time MSE        1.01            1.01            0.60            0.61

The global minimum of the objective in Equation (8), subject to constraints (9), (10), and (11), can be found using a dynamic programming method originally developed by Ney [14] for the problem of connected word recognition in speech data. Note that this approach induces a segmentation of the input sequence Y into snippets [Y_1, ..., Y_N] matching the examples in the optimal concatenation [X_1, ..., X_N].

7.2 Piecewise style synthesis

The final step of our algorithm uses the concatenation of examples [X_1, ..., X_N] determined by the method outlined in the previous section to synthesize a version of motion Y in style ē. For each X_i in [X_1, ..., X_N], we identify the K most similar database examples according to the criterion defined in Equation 3.
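Returning to the segmentation step of Section 7.1, the one-stage dynamic program of Equations (8)-(11) admits a compact, if unoptimized, sketch. This is our simplified reading of Ney's algorithm, not the paper's implementation: it returns only the minimal cost, whereas recovering the segmentation additionally requires backpointers.

```python
import numpy as np

def one_stage_dp(Y, fragments):
    """One-stage DP matching Y against a fragment concatenation.

    Y: (T, D) input sequence; fragments: list of (Tj, D) arrays.
    Returns the minimal warping cost of Equation (8) under the
    transition constraints (9)-(10) and endpoint constraints (11).
    """
    T = len(Y)
    # D[i][p, q]: best cost of a path ending at frame p of Y, frame q
    # of fragment i
    D = [np.full((T, len(X)), np.inf) for X in fragments]
    for p in range(T):
        # cost of having finished some fragment at the previous frame;
        # entering any fragment is free at p == 0 (start of the path)
        enter = 0.0 if p == 0 else min(Di[p - 1, -1] for Di in D)
        for i, X in enumerate(fragments):
            d = ((Y[p] - X) ** 2).sum(axis=1)     # local distances
            for q in range(len(X)):
                if q == 0:
                    # constraint (9): stay at the first frame, or jump
                    # in from the last frame of any fragment
                    prev = enter if p == 0 else min(D[i][p - 1, 0], enter)
                else:
                    # constraint (10): the usual three DTW moves
                    prev = min(D[i][p, q - 1],
                               D[i][p - 1, q] if p > 0 else np.inf,
                               D[i][p - 1, q - 1] if p > 0 else np.inf)
                D[i][p, q] = d[q] + prev
    # constraint (11): the path must end at the last frame of Y and of
    # whichever fragment is active
    return min(Di[-1, -1] for Di in D)
```

When Y is exactly a concatenation of database fragments, the minimal cost is zero.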
Let {X_{i_1}, ..., X_{i_K}} denote the K content-neighbors of X_i and {e_{i_1}, ..., e_{i_K}} their LMA-Effort values. {X_{i_1}, ..., X_{i_K}} defines a cluster of examples having content similar to that of snippet Y_i. The final goal then is to replace each snippet Y_i with a pairwise blend of examples in its cluster so as to produce a motion exhibiting style ē. Formally, this is achieved by determining the pair of examples (i_{k*}, i_{l*}) in Y_i's cluster, and the interpolation weight α*, that provide the best approximation to the target style ē according to the learned style-prediction function f:

(α*, k*, l*) ← argmin_{α,k,l} ||ē − f(z^α_{i_k,i_l})||    (12)

Minimization of this objective is achieved by first finding the optimal α for each possible pair (i_k, i_l) of candidate motion fragments. We then select the pair (i_{k*}, i_{l*}) providing the minimum deviation from the target style ē. In order to estimate the optimal value of α for pair (i_k, i_l), we evaluate f(z^α_{i_k,i_l}) for M values of α uniformly sampled in the interval [−0.25, 1.25], and choose the value with the closest fit to the target style. We found that f tends to vary smoothly as a function of α, and thus a good estimate of the global minimum in the specified interval can be obtained even with a modest number M of samples. The approximation is further refined using a golden section search [8] around the initial estimate. Note that, by allowing values of α to be chosen in the range [−0.25, 1.25] rather than [0, 1], we give the algorithm the ability to extrapolate from existing motion styles. Given the optimal parameters (α*, k*, l*), space-time interpolation of fragments X_{i_{k*}} and X_{i_{l*}} with parameter value α* produces an animation with content similar to that of Y_i and style approximating the desired target ē. This procedure is repeated for all snippets of Y. The final animation is obtained by concatenating all of the fragments generated via interpolation with the optimal parameters.
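The coarse-to-fine search over α described in Section 7.2 (uniform sampling followed by golden-section refinement) can be sketched as below. Here `style_fn` stands in for the learned f with a fixed fragment pair, and all names and default values are ours.

```python
import numpy as np

def best_alpha(style_fn, e_target, M=11, lo=-0.25, hi=1.25, tol=1e-4):
    """Coarse-to-fine 1-D search for the interpolation weight.

    style_fn(alpha) -> predicted 3-D LMA-Effort vector; e_target is the
    desired style. Grid-sample M values in [lo, hi], then refine with a
    golden-section search around the best grid point.
    """
    err = lambda a: np.linalg.norm(e_target - style_fn(a))
    # coarse stage: uniform sampling
    grid = np.linspace(lo, hi, M)
    a0 = grid[np.argmin([err(a) for a in grid])]
    step = (hi - lo) / (M - 1)
    # fine stage: golden-section search on the bracketing interval
    a, b = max(lo, a0 - step), min(hi, a0 + step)
    phi = (np.sqrt(5) - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if err(c) < err(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)
```

This exploits the observed smoothness of f in α: the grid locates the basin of the global minimum and golden-section search converges within it.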
8 Experiments

The system was tested using a motion database consisting of 12 sequences performed by different professional dancers. The subjects were asked to perform a specific movement phrase in their own natural style. Each of the 12 sequences was segmented by an LMA expert into 5 fragments corresponding to the main actions in the phrase. All fragments were then automatically clustered into 5 content groups using the SSD criterion outlined in Section 4. The motions were recorded using a marker-based motion capture system. In order to derive joint angles, the 3D trajectories of the markers were fitted to a kinematic chain with 17 joints. The joint angles were represented with exponential maps [13], which have the property of being locally linear and are thus particularly suitable for motion interpolation. From these 60 motion fragments, 105 novel motions were synthesized with space-time interpolation using random values of α in the range [−0.25, 1.25]. All motions, both those recorded and those artificially generated, were annotated with LMA-Effort qualities by an LMA expert.

Figure 1: Sample LMA-Effort attributes estimated by kernel ridge regression on three different pairs of motions (X_i, X_j) and for α varying in [−0.25, 1.25]. The Flow attribute appears to be almost linearly dependent on α. By contrast, Weight and Time exhibit non-linear relations with the interpolation parameter.

From this set of motions, 85 training examples were randomly selected to train the style regression models. The remaining 20 examples were used for testing. Table 1 summarizes the LMA-Effort prediction performance in terms of mean squared error for the different function approximation models discussed in the paper.
Results are reported by averaging over 500 runs of random splitting of the examples into training and testing sets. We include in our analysis the linear style interpolation model commonly used in previous work. This model assumes that the style of a sequence generated via motion interpolation is equal to the interpolation of the styles of the two seed motions: e^α_{i,j} = (1−α)e_i + αe_j. In all experiments involving kernel-based approximation methods, we used a Gaussian RBF kernel. The hyperparameters (i.e. the kernel and the regularization parameters) were tuned using tenfold cross-validation. Since the size of the training data is not overly large, it was possible to run kernel ridge regression without problems despite the lack of sparsity of this solution. The simple linear interpolation model performed reasonably well only on the Flow dimension. Overall, the non-linear regression models proved to be clearly superior to the linear interpolation function, indicating that the style of sequences generated via space-time interpolation is a complex function of the original styles and motions. Figure 1 shows the LMA-Effort qualities predicted by kernel ridge regression while varying α for three different sample values of the inputs (X_i, X_j, e_i, e_j). Note that the shapes of the sample curves learned by kernel ridge regression for the Flow attribute suggest an almost linear dependence of Flow on α. By contrast, the sample functions for the Weight and Time dimensions exhibit non-linear behavior. These results are consistent with the differences in prediction performance between the non-linear function models and the linear approximations, as outlined in Table 1. Several additional motion examples, performed by dancers not included in the training data, were used to evaluate the complete pipeline of the motion synthesis algorithm.
The input sequences were always correctly segmented by the dynamic programming algorithm into the five fragments associated with the actions in the phrase. Kernel ridge regression was used to estimate the values of α*, k*, l* so as to minimize Equation 12 for different user-specified LMA-Effort vectors ē. The recovered parameter values were used to synthesize animations with the specified desired styles. Videos of these automatically generated motions, as well as additional results, can be viewed at http://movement.nyu.edu/learning-motion-styles/ . In order to test the generalization ability of our system, the target styles in this experiment were chosen to be considerably different from those in the training set. All of the synthesized sequences were visually inspected by LMA experts and, for the great majority, they were found to be consistent with the target style labels.

9 Discussions and Future Work

We have presented a novel technique that learns motion style synthesis from artificially-generated examples. Animations produced by our system have quality similar to pure motion capture playback. Furthermore, we have shown that, even with a small database, it is possible to use pairwise interpolation or extrapolation to generate new styles. In previous LMA-based animation systems [3], heuristic, hand-designed rules have been adopted to implement the style changes associated with LMA-Effort variations. To the best of our knowledge, our work represents the first attempt at automatically learning the mapping between LMA attributes and animation parameters. Although our algorithm has been shown to produce good results with small training data, we expect that larger databases, with a wider variety of motion contents and styles, will be needed in order to build an effective animation system. Multi-way, as opposed to pairwise, interpolation might lead to synthesis of more varied motion styles.
Our approach could be easily generalized to other languages and notations, and to additional domains, such as facial animation. Our future work will focus on the recognition of LMA categories in motion capture data. Research in this area might point to methods for learning person-specific styles and to techniques for transferring individual movement signatures to arbitrary motion sequences.

Acknowledgments

This work was carried out while LT was at Stanford University and visiting New York University. Thanks to Alyssa Lees for her help on this project and paper. We are grateful to Edward Warburton, Kevin Feeley, and Robb Bifano for assistance with the experimental setup and to Jared Silver for the Maya animations. Special thanks to Jan Burkhardt, Begonia Caparros, Ed Groff, Ellen Goldman and Pamela Schick for LMA observations and notations. This work has been supported by the National Science Foundation.

References

[1] O. Arikan and D. A. Forsyth. Synthesizing constrained motions from examples. ACM Transactions on Graphics, 21(3):483–490, July 2002.
[2] M. Brand and A. Hertzmann. Style machines. In Proceedings of ACM SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, pages 183–192, July 2000.
[3] D. Chi, M. Costa, L. Zhao, and N. Badler. The EMOTE model for effort and shape. In Proceedings of ACM SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, July 2000.
[4] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines (and Other Kernel-Based Learning Methods). Cambridge University Press, 2000.
[5] H. Drucker, C. J. C. Burges, L. Kaufman, A. Smola, and V. Vapnik. Support vector regression machines. In Proc. NIPS 9, 1997.
[6] M. A. Giese and T. Poggio. Morphable models for the analysis and synthesis of complex motion patterns. International Journal of Computer Vision, 38(1):59–73, 2000.
[7] P. Hackney. Making Connections: Total Body Integration Through Bartenieff Fundamentals. Routledge, 2000.
[8] M. T. Heath. Scientific Computing: An Introductory Survey, Second edition. McGraw-Hill, 2002.
[9] E. Hsu, K. Pulli, and J. Popović. Style translation for human motion. ACM Transactions on Graphics, 24(3):1082–1089, 2005.
[10] L. Kovar and M. Gleicher. Automated extraction and parameterization of motions in large data sets. ACM Transactions on Graphics, 23(3):559–568, Aug. 2004.
[11] J. Lee, J. Chai, P. S. A. Reitsma, J. K. Hodgins, and N. S. Pollard. Interactive control of avatars animated with human motion data. ACM Transactions on Graphics, 21(3):491–500, July 2002.
[12] Y. Li, T. Wang, and H.-Y. Shum. Motion texture: A two-level statistical model for character motion synthesis. ACM Transactions on Graphics, 21(3):465–472, July 2002.
[13] R. Murray, Z. Li, and S. Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, 1994.
[14] H. Ney. The use of a one-stage dynamic programming algorithm for connected word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(3):263–271, 1984.
[15] K. Pullen and C. Bregler. Motion capture assisted animation: Texturing and synthesis. ACM Transactions on Graphics, 21(3):501–508, July 2002.
[16] C. Rose, M. Cohen, and B. Bodenheimer. Verbs and adverbs: Multidimensional motion interpolation. IEEE Computer Graphics and Applications, 18(5):32–40, 1998.
[17] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[18] D. J. Wiley and J. K. Hahn. Interpolation synthesis of articulated figure motion. IEEE Computer Graphics and Applications, 17(6):39–45, 1997.
2006
Linearly-solvable Markov decision problems

Emanuel Todorov
Department of Cognitive Science, University of California San Diego
todorov@cogsci.ucsd.edu

Abstract

We introduce a class of MDPs which greatly simplify Reinforcement Learning. They have discrete state spaces and continuous control spaces. The controls have the effect of rescaling the transition probabilities of an underlying Markov chain. A control cost penalizing KL divergence between controlled and uncontrolled transition probabilities makes the minimization problem convex, and allows analytical computation of the optimal controls given the optimal value function. An exponential transformation of the optimal value function makes the minimized Bellman equation linear. Apart from their theoretical significance, the new MDPs enable efficient approximations to traditional MDPs. Shortest path problems are approximated to arbitrary precision with largest eigenvalue problems, yielding an $O(n)$ algorithm. Accurate approximations to generic MDPs are obtained via continuous embedding reminiscent of LP relaxation in integer programming. Off-policy learning of the optimal value function is possible without need for state-action values; the new algorithm (Z-learning) outperforms Q-learning. This work was supported by NSF grant ECS–0524761.

1 Introduction

In recent years many hard problems have been transformed into easier problems that can be solved efficiently via linear methods [1] or convex optimization [2]. One area where these trends have not yet had a significant impact is Reinforcement Learning. Indeed the discrete and unstructured nature of traditional MDPs seems incompatible with simplifying features such as linearity and convexity. This motivates the search for more tractable problem formulations. Here we construct the first MDP family where the minimization over the control space is convex and analytically tractable, and where the Bellman equation can be exactly transformed into a linear equation.
The new formalism enables efficient numerical methods which could not previously be applied in Reinforcement Learning. It also yields accurate approximations to traditional MDPs. Before introducing our new family of MDPs, we recall the standard formalism. Throughout the paper $S$ is a finite set of states, $U(i)$ is a set of admissible controls at state $i \in S$, $\ell(i,u) \ge 0$ is a cost for being in state $i$ and choosing control $u \in U(i)$, and $P(u)$ is a stochastic matrix whose element $p_{ij}(u)$ is the transition probability from state $i$ to state $j$ under control $u$. We focus on problems where a non-empty subset $A \subseteq S$ of states are absorbing and incur zero cost: $p_{ij}(u) = \delta_{ij}$ and $\ell(i,u) = 0$ whenever $i \in A$. Results for other formulations will be summarized later. If $A$ can be reached with non-zero probability in a finite number of steps from any state, then the undiscounted infinite-horizon optimal value function is finite and is the unique solution [3] to the Bellman equation

$$v(i) = \min_{u \in U(i)} \Big\{ \ell(i,u) + \sum_j p_{ij}(u)\, v(j) \Big\} \quad (1)$$

For generic MDPs this equation is about as far as one can get analytically.

2 A class of more tractable MDPs

In our new class of MDPs the control $u \in \mathbb{R}^{|S|}$ is a real-valued vector with dimensionality equal to the number of discrete states. The elements $u_j$ of $u$ have the effect of directly modifying the transition probabilities of an uncontrolled Markov chain. In particular, given an uncontrolled transition probability matrix $\bar{P}$ with elements $\bar{p}_{ij}$, we define the controlled transition probabilities as

$$p_{ij}(u) = \bar{p}_{ij} \exp(u_j) \quad (2)$$

Note that $P(0) = \bar{P}$. In some sense this is the most general notion of "control" one can imagine: we are allowing the controller to rescale the underlying transition probabilities in any way it wishes. However there are two constraints implicit in (2). First, $\bar{p}_{ij} = 0$ implies $p_{ij}(u) = 0$. In this case $u_j$ has no effect and so we set it to 0 for concreteness. Second, $P(u)$ must have row-sums equal to 1.
Thus the admissible controls are

$$U(i) = \Big\{ u \in \mathbb{R}^{|S|} :\ \sum_j \bar{p}_{ij} \exp(u_j) = 1;\ \ \bar{p}_{ij} = 0 \Rightarrow u_j = 0 \Big\} \quad (3)$$

Real-valued controls make it possible to define a natural control cost. Since the control vector acts directly on the transition probabilities, it makes sense to measure its magnitude in terms of the difference between the controlled and uncontrolled transition probabilities. Differences between probability distributions are most naturally measured using KL divergence, suggesting the following definition. Let $p_i(u)$ denote the $i$-th row-vector of the matrix $P(u)$, that is, the vector of transition probabilities from state $i$ to all other states under control $u$. The control cost is defined as

$$r(i,u) = \mathrm{KL}\big(p_i(u)\,\|\,p_i(0)\big) = \sum_{j:\,\bar{p}_{ij} \neq 0} p_{ij}(u) \log \frac{p_{ij}(u)}{p_{ij}(0)} \quad (4)$$

From the properties of KL divergence it follows that $r(i,u) \ge 0$, and $r(i,u) = 0$ iff $u = 0$. Substituting (2) in (4) and simplifying, the control cost becomes

$$r(i,u) = \sum_j p_{ij}(u)\, u_j \quad (5)$$

This has an interesting interpretation. The Markov chain likes to behave according to $\bar{P}$ but can be paid to behave according to $P(u)$. Before each transition the controller specifies the price $u_j$ it is willing to pay (or collect, if $u_j < 0$) for every possible next state $j$. When the actual transition occurs, say to state $k$, the controller pays the price $u_k$ it promised. Then $r(i,u)$ is the price the controller expects to pay before observing the transition. Coming back to the MDP construction, we allow an arbitrary state cost $q(i) \ge 0$ in addition to the above control cost:

$$\ell(i,u) = q(i) + r(i,u) \quad (6)$$

We require $q(i) = 0$ for absorbing states $i \in A$ so that the process can continue indefinitely without incurring extra costs. Substituting (5, 6) in (1), the Bellman equation for our MDP is

$$v(i) = \min_{u \in U(i)} \Big\{ q(i) + \sum_j \bar{p}_{ij} \exp(u_j) \big(u_j + v(j)\big) \Big\} \quad (7)$$

We can now exploit the benefits of this unusual construction.
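As a quick numerical sanity check (my own minimal sketch, with a made-up probability row and prices, not from the paper), the simplified form (5) can be verified against the KL definition (4) for an admissible control:

```python
import numpy as np

# Uncontrolled transition probabilities from one state (a made-up example row).
p_bar = np.array([0.5, 0.3, 0.2])

# An admissible control must keep the row normalized: sum_j p_bar_j exp(u_j) = 1.
# Start from arbitrary prices and subtract a constant to restore normalization.
u = np.array([0.4, -0.2, 0.1])
u = u - np.log(np.sum(p_bar * np.exp(u)))

p_u = p_bar * np.exp(u)                  # controlled probabilities, eq. (2)

kl = np.sum(p_u * np.log(p_u / p_bar))   # control cost via KL divergence, eq. (4)
expected_price = np.sum(p_u * u)         # simplified form, eq. (5)

assert np.allclose(kl, expected_price)   # the two expressions agree
```

The agreement is immediate because $\log(p_{ij}(u)/\bar{p}_{ij}) = u_j$ under (2); the subtraction of a constant from $u$ is just one convenient way to land inside the constraint set (3).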
The minimization in (7) subject to the constraint (3) can be performed in closed form using Lagrange multipliers, as follows. For each $i$ define the Lagrangian

$$L(u; i) = \sum_j \bar{p}_{ij} \exp(u_j) \big(u_j + v(j)\big) + \lambda_i \Big( \sum_j \bar{p}_{ij} \exp(u_j) - 1 \Big) \quad (8)$$

The necessary condition for an extremum with respect to $u_j$ is

$$0 = \frac{\partial L}{\partial u_j} = \bar{p}_{ij} \exp(u_j) \big(u_j + v(j) + \lambda_i + 1\big) \quad (9)$$

When $\bar{p}_{ij} \neq 0$ the only solution is

$$u_j^*(i) = -v(j) - \lambda_i - 1 \quad (10)$$

Taking another derivative yields

$$\frac{\partial^2 L}{\partial u_j^2}\Big|_{u_j = u_j^*(i)} = \bar{p}_{ij} \exp\big(u_j^*(i)\big) > 0 \quad (11)$$

and therefore (10) is a minimum. The Lagrange multiplier $\lambda_i$ can be found by applying the constraint (3) to the optimal control (10). The result is

$$\lambda_i = \log \Big( \sum_j \bar{p}_{ij} \exp(-v(j)) \Big) - 1 \quad (12)$$

and therefore the optimal control law is

$$u_j^*(i) = -v(j) - \log \Big( \sum_k \bar{p}_{ik} \exp(-v(k)) \Big) \quad (13)$$

Thus we have expressed the optimal control law in closed form given the optimal value function. Note that the only influence of the current state $i$ is through the second term, which serves to normalize the transition probability distribution $p_i(u^*)$ and is identical for all next states $j$. Thus the optimal controller is a high-level controller: it tells the Markov chain to go to good states without specifying how to get there. The details of the trajectory emerge from the interaction of this controller and the uncontrolled stochastic dynamics. In particular, the optimally-controlled transition probabilities are

$$p_{ij}(u^*(i)) = \frac{\bar{p}_{ij} \exp(-v(j))}{\sum_k \bar{p}_{ik} \exp(-v(k))} \quad (14)$$

These probabilities are proportional to the product of two terms: the uncontrolled transition probabilities $\bar{p}_{ij}$ which do not depend on the costs or values, and the exponentiated negative next-state values $\exp(-v(j))$ which do not depend on the current state. In the special case $\bar{p}_{ij} = \mathrm{const}_i$ the transition probabilities (14) correspond to a Gibbs distribution where the optimal value function plays the role of an energy function.
Substituting the optimal control (13) in the Bellman equation (7) and dropping the min operator,

$$v(i) = q(i) + \sum_j p_{ij}(u^*(i)) \big(u_j^*(i) + v(j)\big) \quad (15)$$
$$= q(i) + \sum_j p_{ij}(u^*(i)) \big({-\lambda_i - 1}\big) = q(i) - \lambda_i - 1 = q(i) - \log \Big( \sum_j \bar{p}_{ij} \exp(-v(j)) \Big)$$

Rearranging terms and exponentiating both sides of (15) yields

$$\exp(-v(i)) = \exp(-q(i)) \sum_j \bar{p}_{ij} \exp(-v(j)) \quad (16)$$

We now introduce the exponential transformation

$$z(i) = \exp(-v(i)) \quad (17)$$

which makes the minimized Bellman equation linear:

$$z(i) = \exp(-q(i)) \sum_j \bar{p}_{ij}\, z(j) \quad (18)$$

Defining the vector $z$ with elements $z(i)$, and the diagonal matrix $G$ with elements $\exp(-q(i))$ along its main diagonal, (18) becomes

$$z = G\bar{P}z \quad (19)$$

Thus our class of optimal control problems has been reduced to a linear eigenvalue problem.

2.1 Iterative solution and convergence analysis

From (19) it follows that $z$ is an eigenvector of $G\bar{P}$ with eigenvalue 1. Furthermore $z(i) > 0$ for all $i \in S$ and $z(i) = 1$ for $i \in A$. Is there a vector $z$ with these properties and is it unique? The answer to both questions is affirmative, because the Bellman equation has a unique solution and $v$ is a solution to the Bellman equation iff $z = \exp(-v)$ is an admissible solution to (19). The only remaining question then is how to find the unique solution $z$. The obvious iterative method is

$$z_{k+1} = G\bar{P}z_k, \quad z_0 = \mathbf{1} \quad (20)$$

This iteration always converges to the unique solution, for the following reasons. A stochastic matrix $\bar{P}$ has spectral radius 1. Multiplication by $G$ scales down some of the rows of $\bar{P}$, therefore $G\bar{P}$ has spectral radius at most 1. But we are guaranteed that an eigenvector $z$ with eigenvalue 1 exists, therefore $G\bar{P}$ has spectral radius 1 and $z$ is a largest eigenvector. Iteration (20) is equivalent to the power method (without the rescaling, which is unnecessary here), so it converges to a largest eigenvector. The additional constraints on $z$ are clearly satisfied at all stages of the iteration.
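The reduction to (19) is easy to exercise numerically. Below is a minimal sketch (my own illustration, not the authors' code; the chain, costs, and tolerance are made up) that solves a small absorbing-state problem by iteration (20), recovers $v = -\log z$, and forms the optimally controlled transitions (14):

```python
import numpy as np

# Uncontrolled chain on 4 states; state 3 is absorbing (identity row, zero cost).
P_bar = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.3, 0.0, 0.4, 0.3],
    [0.3, 0.4, 0.0, 0.3],
    [0.0, 0.0, 0.0, 1.0],
])
q = np.array([1.0, 1.0, 1.0, 0.0])    # state costs, zero at the absorbing state
G = np.diag(np.exp(-q))

# Power-style iteration (20): z_{k+1} = G P_bar z_k, starting from z_0 = 1.
# The absorbing-state entry stays exactly 1 throughout, since its row is e_3.
z = np.ones(4)
for _ in range(10000):
    z_new = G @ (P_bar @ z)
    if np.max(np.abs(z_new - z)) < 1e-13:
        z = z_new
        break
    z = z_new

v = -np.log(z)                        # optimal value function, eq. (17)

# Optimally controlled transitions, eq. (14): rescale columns by exp(-v(j)) = z(j).
P_opt = P_bar * z
P_opt /= P_opt.sum(axis=1, keepdims=True)
```

Note that no explicit rescaling step is needed in the power iteration, exactly as argued above: the spectral radius of $G\bar{P}$ is 1, so the iterates neither blow up nor vanish.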
In particular, for $i \in A$ the $i$-th row of $G\bar{P}$ has elements $\delta_{ij}$, and so the $i$-th element of $z_k$ remains equal to 1 for all $k$. We now analyze the rate of convergence. Let $m = |A|$ and $n = |S|$. The states can be permuted so that $G\bar{P}$ is in canonical form:

$$G\bar{P} = \begin{bmatrix} T_1 & T_2 \\ 0 & I \end{bmatrix} \quad (21)$$

where the absorbing states are last, $T_1$ is $(n-m)$ by $(n-m)$, and $T_2$ is $(n-m)$ by $m$. The reason we have the identity matrix in the lower-right corner, despite multiplication by $G$, is that $q(i) = 0$ for $i \in A$. From (21) we have

$$\big(G\bar{P}\big)^k = \begin{bmatrix} T_1^k & \big(T_1^{k-1} + \cdots + T_1 + I\big) T_2 \\ 0 & I \end{bmatrix} = \begin{bmatrix} T_1^k & \big(I - T_1^k\big)\big(I - T_1\big)^{-1} T_2 \\ 0 & I \end{bmatrix} \quad (22)$$

A stochastic matrix $\bar{P}$ with $m$ absorbing states has $m$ eigenvalues 1, and all other eigenvalues are smaller than 1 in absolute value. Since the diagonal elements of $G$ are no greater than 1, all eigenvalues of $T_1$ are smaller than 1 and so $\lim_{k\to\infty} T_1^k = 0$. Therefore iteration (20) converges exponentially as $\beta^k$ where $\beta < 1$ is the largest eigenvalue of $T_1$. Faster convergence is obtained for smaller $\beta$. The factors that can make $\beta$ small are: (i) large state costs $q(i)$ resulting in small terms $\exp(-q(i))$ along the diagonal of $G$; (ii) small transition probabilities among non-absorbing states (and large transition probabilities from non-absorbing to absorbing states). Convergence is independent of problem size because $\beta$ has no reason to increase as the dimensionality of $T_1$ increases. Indeed numerical simulations on randomly generated MDPs have shown that problem size does not systematically affect the number of iterations needed to reach a given convergence criterion. Thus the average running time scales linearly with the number of non-zero elements in $\bar{P}$.

2.2 Alternative problem formulations

While the focus of this paper is on infinite-horizon total-cost problems with absorbing states, we have obtained similar results for all other problem formulations commonly used in Reinforcement Learning. Here we summarize these results.
In finite-horizon problems equation (19) becomes

$$z(t) = G(t)\,\bar{P}(t)\,z(t+1) \quad (23)$$

where $z(t_{\mathrm{final}})$ is initialized from a given final cost function. In infinite-horizon average-cost-per-stage problems equation (19) becomes

$$\mu z = G\bar{P}z \quad (24)$$

where $\mu$ is the largest eigenvalue of $G\bar{P}$, $z$ is a differential value function, and the average cost-per-stage turns out to be $-\log(\mu)$. In infinite-horizon discounted-cost problems equation (19) becomes

$$z = G\bar{P}z^{\gamma} \quad (25)$$

where $\gamma < 1$ is the discount factor and $z^{\gamma}$ is defined element-wise. Even though the latter equation is nonlinear, we have observed that the analog of iteration (20) still converges rapidly.

[Fig 1A and 1B: grid worlds showing the numerical solution of (30), rounded down to integers, for the two cost settings discussed in Section 3.]

3 Shortest paths as an eigenvalue problem

Suppose the state space $S$ of our MDP corresponds to the vertex set of a directed graph, and let $D$ be the graph adjacency matrix whose element $d_{ij}$ indicates the presence ($d_{ij} = 1$) or absence ($d_{ij} = 0$) of a directed edge from vertex $i$ to vertex $j$. Let $A \subseteq S$ be a non-empty set of destination vertices. Our goal is to find the length $s(i)$ of the shortest path from every $i \in S$ to some vertex in $A$. For $i \in A$ we have $s(i) = 0$ and $d_{ij} = \delta_{ij}$. We now show how the shortest path lengths $s(i)$ can be obtained from our MDP. Define the elements of the stochastic matrix $\bar{P}$ as

$$\bar{p}_{ij} = \frac{d_{ij}}{\sum_k d_{ik}} \quad (26)$$

corresponding to a random walk on the graph. Next choose $\rho > 0$ and define the state costs

$$q(i) = \rho \ \text{ when } i \notin A, \qquad q(i) = 0 \ \text{ when } i \in A \quad (27)$$

This cost model means that we pay a price $\rho$ whenever the current state is not in $A$. Let $v_\rho(i)$ denote the optimal value function for the MDP defined by (26, 27). If the control costs were 0 then the shortest paths would simply be $s(i) = \frac{1}{\rho} v_\rho(i)$. Here the control costs are not 0, however they are bounded.
This can be shown using

$$p_{ij}(u) = \bar{p}_{ij} \exp(u_j) \le 1 \quad (28)$$

which implies that for $\bar{p}_{ij} \neq 0$ we have $u_j \le -\log(\bar{p}_{ij})$. Since $r(i,u)$ is a convex combination of the elements of $u$, the following bound holds:

$$r(i,u) \le \max_j (u_j) \le -\log \Big( \min_{j:\,\bar{p}_{ij} \neq 0} \bar{p}_{ij} \Big) \quad (29)$$

The control costs are bounded and we are free to choose $\rho$ arbitrarily large, so we can make the state costs dominate the optimal value function. This yields the following result:

$$s(i) = \lim_{\rho \to \infty} \frac{v_\rho(i)}{\rho} \quad (30)$$

Thus we have reduced the shortest path problem to an eigenvalue problem. In spectral graph theory many problems have previously been related to eigenvalues of the graph Laplacian [4], but the shortest path problem was not among them until now. Currently the most widely used algorithm is Dijkstra's algorithm. In sparse graphs its running time is $O(n \log(n))$. In contrast, algorithms for finding largest eigenpairs have running time $O(n)$ for sparse matrices. Of course (30) involves a limit and so we cannot obtain the exact shortest paths by solving a single eigenvalue problem. However we can obtain a good approximation by setting $\rho$ large enough, but not too large, because $\exp(-\rho)$ may become numerically indistinguishable from 0. Fig 1 illustrates the solution obtained from (30) and rounded down to the nearest integer, for $\rho = 1$ in 1A and $\rho = 50$ in 1B. Transitions are allowed to all neighbors. The result in 1B matches the exact shortest paths. Although the solution for $\rho = 1$ is numerically larger, it is basically a scaled-up version of the correct solution. Indeed the $R^2$ between the two solutions before rounding was 0.997.

4 Approximating discrete MDPs via continuous embedding

In the previous section we replaced the shortest path problem with a continuous MDP and obtained an excellent approximation. Here we obtain approximations of similar quality in more general settings, using an approach reminiscent of LP-relaxation in integer programming.
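To make the shortest-path construction concrete, here is a minimal sketch (my own illustration, not the paper's code; the graph and the value of $\rho$ are made up) that combines (26), (27), iteration (20), and the limit (30) on a 4-vertex path graph:

```python
import numpy as np

# Path graph 0 - 1 - 2 - 3; vertex 3 is the destination (self-loop: d_ij = delta_ij).
# True shortest-path lengths to vertex 3 are [3, 2, 1, 0].
D = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
], dtype=float)

P_bar = D / D.sum(axis=1, keepdims=True)   # random walk on the graph, eq. (26)

rho = 50.0
q = np.where(np.arange(4) == 3, 0.0, rho)  # state costs, eq. (27)
G = np.diag(np.exp(-q))

z = np.ones(4)
for _ in range(500):
    z = G @ (P_bar @ z)                    # iteration (20)

s_approx = -np.log(z) / rho                # eq. (30): v_rho(i) / rho
print(np.floor(s_approx + 1e-9))           # prints [3. 2. 1. 0.]
```

With $\rho = 50$ the bounded control costs contribute at most $\log 2 / 50 \approx 0.014$ per step here, so rounding down recovers the exact path lengths, matching the behavior reported for Fig 1B.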
As in LP-relaxation, theoretical results are hard to derive but empirically the method works well. We construct an embedding which associates the controls in the discrete MDP with specific control vectors of a continuous MDP, making sure that for these control vectors the continuous MDP has the same costs and transition probabilities as the discrete MDP. This turns out to be possible under mild and reasonable assumptions, as follows. Consider a discrete MDP with transition probabilities and costs denoted $\tilde{p}$ and $\tilde{\ell}$. Define the matrix $B(i)$ of all controlled transition probabilities from state $i$. This matrix has elements

$$b_{aj}(i) = \tilde{p}_{ij}(a), \quad a \in U(i) \quad (31)$$

We need two assumptions to guarantee the existence of an exact embedding: for all $i \in S$ the matrix $B(i)$ must have full row-rank, and if any element of $B(i)$ is 0 then the entire column must be 0. If the latter assumption does not hold, we can replace the problematic 0 elements of $B(i)$ with a small $\epsilon$ and renormalize. Let $N(i)$ denote the set of possible next states, i.e. states $j$ for which $\tilde{p}_{ij}(a) > 0$ for any/all $a \in U(i)$. Remove the zero-columns of $B(i)$ and restrict $j \in N(i)$. The first step in the construction is to compute the real-valued control vectors $u^a$ corresponding to the discrete actions $a$. This is accomplished by matching the transition probabilities of the discrete and continuous MDPs:

$$\bar{p}_{ij} \exp(u_j^a) = \tilde{p}_{ij}(a), \quad \forall\, i \in S,\ j \in N(i),\ a \in U(i) \quad (32)$$

These constraints are satisfied iff the elements of the vector $u^a$ are

$$u_j^a = \log(\tilde{p}_{ij}(a)) - \log(\bar{p}_{ij}) \quad (33)$$

The second step is to compute the uncontrolled transition probabilities $\bar{p}_{ij}$ and state costs $q(i)$ in the continuous MDP so as to match the costs in the discrete MDP.
This yields the set of constraints

$$q(i) + r(i, u^a) = \tilde{\ell}(i,a), \quad \forall\, i \in S,\ a \in U(i) \quad (34)$$

For the control vector given by (33) the KL-divergence cost is

$$r(i, u^a) = \sum_j \bar{p}_{ij} \exp(u_j^a)\, u_j^a = -h(i,a) - \sum_j \tilde{p}_{ij}(a) \log(\bar{p}_{ij}) \quad (35)$$

where $h(i,a)$ is the entropy of the transition probability distribution in the discrete MDP:

$$h(i,a) = -\sum_j \tilde{p}_{ij}(a) \log(\tilde{p}_{ij}(a)) \quad (36)$$

The constraints (34) are then equivalent to

$$q(i) - \sum_j b_{aj}(i) \log(\bar{p}_{ij}) = \tilde{\ell}(i,a) + h(i,a) \quad (37)$$

Define the vector $y(i)$ with elements $\tilde{\ell}(i,a) + h(i,a)$, and the vector $x(i)$ with elements $\log(\bar{p}_{ij})$. The dimensionality of $y(i)$ is $|U(i)|$ while the dimensionality of $x(i)$ is $|N(i)| \ge |U(i)|$. The latter inequality follows from the assumption that $B(i)$ has full row-rank. Suppressing the dependence on the current state $i$, the constraints (34) can be written in matrix notation as

$$q\mathbf{1} - Bx = y \quad (38)$$

Since the probabilities $\bar{p}_{ij}$ must sum up to 1, the vector $x$ must satisfy the additional constraint

$$\sum_j \exp(x_j) = 1 \quad (39)$$

We are given $B, y$ and need to compute $q, x$ satisfying (38, 39). Let $\hat{x}$ be any vector such that $B\hat{x} = -y$, for example $\hat{x} = -B^{\dagger}y$ where $\dagger$ denotes the Moore-Penrose pseudoinverse. Since $B$ is a stochastic matrix we have $B\mathbf{1} = \mathbf{1}$, and so

$$q\mathbf{1} - B(\hat{x} + q\mathbf{1}) = -B\hat{x} = y \quad (40)$$

Therefore $x = \hat{x} + q\mathbf{1}$ satisfies (38) for all $q$, and we can adjust $q$ to also satisfy (39), namely

$$q = -\log \Big( \sum_j \exp(\hat{x}_j) \Big) \quad (41)$$

This completes the embedding. If the above $q$ turns out to be negative, we can either choose another $\hat{x}$ by adding an element from the null-space of $B$, or scale all costs $\tilde{\ell}(i,a)$ by a positive constant. Such scaling does not affect the optimal control law for the discrete MDP, but it makes the elements of $-B^{\dagger}y$ more negative and thus $q$ becomes more positive.

[Fig 2A, 2B: optimal value functions shown in grayscale; Fig 2C: scatterplot of the optimal values in the discrete and continuous MDPs, $R^2 = 0.986$.]

We now illustrate this construction with the example in Fig 2. The grid world has a number of obstacles (black squares) and two absorbing states (white stars).
The possible next states are the immediate neighbors including the current state. Thus $|N(i)|$ is at most 9. The discrete MDP has $|N(i)| - 1$ actions corresponding to stochastic transitions to each of the neighbors. For each action, the transition probability to the "desired" state is 0.8 and the remaining 0.2 is equally distributed among the other states. The costs $\tilde{\ell}(i,a)$ are random numbers between 1 and 10, which is why the optimal value function shown in grayscale appears irregular. Fig 2A shows the optimal value function for the discrete MDP. Fig 2B shows the optimal value function for the corresponding continuous MDP. The scatterplot in Fig 2C shows the optimal values in the discrete and continuous MDP (each dot is a state). The values in the continuous MDP are numerically smaller, which is to be expected since the control space is larger. Nevertheless, the correlation between the optimal values in the discrete and continuous MDPs is excellent. We have observed similar performance in a number of randomly-generated problems.

5 Z-learning

So far we assumed that a model of the continuous MDP is available. We now turn to stochastic approximations of the optimal value function which can be used when a model is not available. All we have access to are samples $(i_k, j_k, q_k)$ where $i_k$ is the current state, $j_k$ is the next state, $q_k$ is the state cost incurred at $i_k$, and $k$ is the sample number. Equation (18) can be rewritten as

$$z(i) = \exp(-q(i)) \sum_j \bar{p}_{ij}\, z(j) = \exp(-q(i))\, E_{\bar{P}}[z(j)] \quad (42)$$

This suggests an obvious stochastic approximation $\hat{z}$ to the function $z$, namely

$$\hat{z}(i_k) \leftarrow (1 - \eta_k)\, \hat{z}(i_k) + \eta_k \exp(-q_k)\, \hat{z}(j_k) \quad (43)$$

where the sequence of learning rates $\eta_k$ is appropriately decreased as $k$ increases. The approximation to $v(i)$ is simply $-\log(\hat{z}(i))$. We will call this algorithm Z-learning. Let us now compare (43) to the Q-learning algorithm applicable to discrete MDPs. Here we have samples $(i_k, j_k, \ell_k, u_k)$.
The difference is that $\ell_k$ is now a total cost rather than a state cost, and we have a control $u_k$ generated by some control policy. The update equation for Q-learning is

$$\hat{Q}(i_k, u_k) \leftarrow (1 - \eta_k)\, \hat{Q}(i_k, u_k) + \eta_k \Big( \ell_k + \min_{u' \in U(j_k)} \hat{Q}(j_k, u') \Big) \quad (44)$$

[Fig 3A and 3B: approximation error (45) of Z-learning and Q-learning versus number of state transitions, for a small and a larger grid world.]

To compare the two algorithms, we first constructed continuous MDPs with $q(i) = 1$ and transitions to the immediate neighbors in the grid worlds shown in Fig 3. For each state we found the optimal transition probabilities (14). We then constructed a discrete MDP which had one action (per state) that caused the same transition probabilities, and the corresponding cost was the same as in the continuous MDP. We then added $|N(i)| - 1$ other actions by permuting the transition probabilities. Thus the discrete and continuous MDPs were guaranteed to have identical optimal value functions. Note that the goal here is no longer to approximate discrete with continuous MDPs, but to construct pairs of problems with identical solutions allowing fair comparison of Z-learning and Q-learning. We ran both algorithms with the same random policy. The learning rates decayed as $\eta_k = c/(c + t(k))$ where the constant $c$ was optimized separately for each algorithm and $t(k)$ is the run to which sample $k$ belongs. When the MDP reaches an absorbing state a new run is started from a random initial state. The approximation error plotted in Fig 3 is defined as

$$\frac{\max_i |v(i) - \hat{v}(i)|}{\max_i v(i)} \quad (45)$$

and is computed at the end of each run. For small problems (Fig 3A) the two algorithms had identical convergence, however for larger problems (Fig 3B) the new Z-learning algorithm was clearly faster. This is not surprising: even though Z-learning is as model-free as Q-learning, it benefits from the analytical developments in this paper and in particular it does not need a maximization operator or state-action values.
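The Z-learning update (43) is only a few lines of code. Here is a minimal sketch (my own illustration; the chain, number of runs, and learning-rate schedule are made up and differ from the paper's experimental setup) that learns from transitions sampled from the passive dynamics and checks the result against the model-based fixed point of (19):

```python
import numpy as np

rng = np.random.default_rng(0)

# Small absorbing chain; state 3 absorbs with zero cost.
P_bar = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.3, 0.0, 0.4, 0.3],
    [0.3, 0.4, 0.0, 0.3],
    [0.0, 0.0, 0.0, 1.0],
])
q = np.array([1.0, 1.0, 1.0, 0.0])

# Model-based reference: iterate z = G P_bar z to convergence, eq. (19)-(20).
z_ref = np.ones(4)
for _ in range(5000):
    z_ref = np.exp(-q) * (P_bar @ z_ref)

# Z-learning, eq. (43): model-free, uses only observed samples (i_k, j_k, q_k).
z_hat = np.ones(4)
for run in range(1, 2001):
    eta = 50.0 / (50.0 + run)          # decaying learning rate (one common choice)
    i = rng.integers(0, 3)             # random non-absorbing start state
    while i != 3:
        j = rng.choice(4, p=P_bar[i])  # transition sampled from the passive chain
        z_hat[i] = (1 - eta) * z_hat[i] + eta * np.exp(-q[i]) * z_hat[j]
        i = j

v_ref, v_hat = -np.log(z_ref), -np.log(z_hat)
print(np.max(np.abs(v_ref - v_hat)))   # small approximation error
```

Note that the update never touches the absorbing state, whose value stays exactly 0, and that it involves no maximization over actions, which is exactly the computational advantage over (44).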
The performance of Q-learning can be improved by using a non-random (say greedy) policy. If we combine Z-learning with importance sampling, the performance of Z-learning can also be improved by using such a policy.

6 Summary

We introduced a new class of MDPs which have a number of remarkable properties, can be solved efficiently, and yield accurate approximations to traditional MDPs. In general, no single approach is likely to be a magic wand which simplifies all optimal control problems. Nevertheless the results so far are very encouraging. While the limitations remain to be clarified, our approach appears to have great potential and should be thoroughly investigated.

References
[1] B. Scholkopf and A. Smola, Learning with Kernels. MIT Press (2002)
[2] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press (2004)
[3] D. Bertsekas, Dynamic Programming and Optimal Control (2nd ed). Athena Scientific (2000)
[4] F. Chung, Spectral Graph Theory. CBMS Regional Conference Series in Mathematics (1997)
Learning on Graph with Laplacian Regularization

Rie Kubota Ando, IBM T.J. Watson Research Center, Hawthorne, NY 10532, U.S.A. rie1@us.ibm.com
Tong Zhang, Yahoo! Inc., New York City, NY 10011, U.S.A. tzhang@yahoo-inc.com

Abstract

We consider a general form of transductive learning on graphs with Laplacian regularization, and derive margin-based generalization bounds using appropriate geometric properties of the graph. We use this analysis to obtain a better understanding of the role of normalization of the graph Laplacian matrix as well as the effect of dimension reduction. The results suggest a limitation of the standard degree-based normalization. We propose a remedy from our analysis and demonstrate empirically that the remedy leads to improved classification performance.

1 Introduction

In graph-based methods, one often constructs similarity graphs by linking similar data points that are close in the feature space. It was proposed in [3] that one may first project these data points into the eigenspace corresponding to the largest eigenvalues of a normalized adjacency matrix of the graph and then use the standard k-means method for clustering. In the ideal case, points in the same class will be mapped into a single point in the reduced eigenspace, while points in different classes will be mapped to different points. One may also consider similar ideas in semi-supervised learning using a discriminative kernel method. If the underlying kernel is induced from the graph, one may formulate semi-supervised learning directly on the graph (e.g., [1, 5, 7, 8]). In these studies, the kernel is induced from the adjacency matrix $W$ whose $(i,j)$-entry is the weight of edge $(i,j)$. $W$ is sometimes normalized by $D^{-1/2} W D^{-1/2}$ [2, 4, 3, 7], where $D$ is a diagonal matrix whose $(j,j)$-entry is the degree of the $j$-th node, but sometimes not [1, 8]. Although such normalization may significantly affect the performance, the issue has not been studied from the learning theory perspective.
The relationship of kernel design and graph learning was investigated in [6], which argued that quadratic regularization-based graph learning can be regarded as kernel design. However, normalization of $W$ was not considered there. The goal of this paper is to provide some learning theoretical insight into the role of normalization of the graph Laplacian matrix $(D - W)$. We first present a model for transductive learning on graphs and develop a margin analysis for multi-class graph learning. Based on this, we analyze the performance of Laplacian regularization-based graph learning in relation to graph properties. We use this analysis to obtain a better understanding of the role of normalization of the graph Laplacian matrix as well as dimension reduction in graph learning. The results indicate a limitation of the commonly practiced degree-based normalization mentioned above. We propose a learning theoretical remedy based on our analysis and use experiments to demonstrate that the remedy leads to improved classification performance.

2 Transductive Learning Model

We consider the following multi-category transductive learning model defined on a graph. Let $V = \{v_1, \dots, v_m\}$ be a set of $m$ nodes, and let $\mathcal{Y}$ be a set of $K$ possible output values. Assume that each node $v_j$ is associated with an output value $y_j \in \mathcal{Y}$, which we are interested in predicting. We randomly draw a set of $n$ indices $Z_n = \{j_i : 1 \le i \le n\}$ from $\{1, \dots, m\}$ uniformly and without replacement. We then manually label the $n$ nodes $v_{j_i}$ with labels $y_{j_i} \in \mathcal{Y}$, and then automatically label the remaining $m - n$ nodes. The goal is to estimate the labels on the remaining $m - n$ nodes as accurately as possible. We encode the label $y_j$ into a vector in $\mathbb{R}^K$, so that the problem becomes that of generating an estimation vector $f_{j,\cdot} = [f_{j,1}, \dots, f_{j,K}] \in \mathbb{R}^K$, which can then be used to recover the label $y_j$. In multi-category classification with $K$ classes $\mathcal{Y} = \{1, \dots, K\}$, we encode each $y_j = k \in \mathcal{Y}$ as $e_k \in \mathbb{R}^K$, where $e_k$ is a vector of zero entries except for the $k$-th entry being one. Given $f_{j,\cdot} = [f_{j,1}, \dots, f_{j,K}] \in \mathbb{R}^K$ (which is intended to approximate $e_{y_j}$), we decode the corresponding label estimation $\hat{y}_j$ as: $\hat{y}_j = \arg\max_k \{f_{j,k} : k = 1, \dots, K\}$. If the true label is $y_j$, then the classification error is $\mathrm{err}(f_{j,\cdot}, y_j) = I(\hat{y}_j \neq y_j)$, where we use $I(\cdot)$ to denote the set indicator function. In order to estimate $f = [f_{j,k}] \in \mathbb{R}^{mK}$ from only a subset of labeled nodes, we consider, for a given kernel matrix $\mathbf{K} \in \mathbb{R}^{m \times m}$, the quadratic regularization $f^T Q_{\mathbf{K}} f = \sum_{k=1}^K f_{\cdot,k}^T \mathbf{K}^{-1} f_{\cdot,k}$, where $f_{\cdot,k} = [f_{1,k}, \dots, f_{m,k}] \in \mathbb{R}^m$. We assume that $\mathbf{K}$ is full-rank. We will consider the kernel matrix induced by the graph Laplacian, to be introduced later in the paper. Note that the bold symbol $\mathbf{K}$ denotes the kernel matrix, and regular $K$ denotes the number of classes. Given a vector $f \in \mathbb{R}^{mK}$, the accuracy of its component $f_{j,\cdot} = [f_{j,1}, \dots, f_{j,K}] \in \mathbb{R}^K$ is measured by a loss function $\phi(f_{j,\cdot}, y_j)$. Our learning method attempts to minimize the empirical risk on the set $Z_n$ of $n$ labeled training nodes, subject to $f^T Q_{\mathbf{K}} f$ being small:

$$\hat{f}(Z_n) = \arg\min_{f \in \mathbb{R}^{mK}} \Big[ \frac{1}{n} \sum_{j \in Z_n} \phi(f_{j,\cdot}, y_j) + \lambda f^T Q_{\mathbf{K}} f \Big], \quad (1)$$

where $\lambda > 0$ is an appropriately chosen regularization parameter. In this paper, we focus on a special class of loss functions of the form $\phi(f_{j,\cdot}, y_j) = \sum_{k=1}^K \phi_0(f_{j,k}, \delta_{k,y_j})$, where $\delta_{a,b}$ is the delta function defined as: $\delta_{a,b} = 1$ when $a = b$ and $\delta_{a,b} = 0$ otherwise. We are interested in the generalization behavior of (1) compared to a properly defined optimal regularized risk, often referred to as "oracle inequalities" in the learning theory literature.

Theorem 1 Let $\phi(f_{j,\cdot}, y_j) = \sum_{k=1}^K \phi_0(f_{j,k}, \delta_{k,y_j})$ in (1). Assume that there exist positive constants $a$, $b$, and $c$ such that: (i) $\phi_0(x,y)$ is non-negative and convex in $x$, (ii) $\phi_0(x,y)$ is Lipschitz with constant $b$ when $\phi_0(x,y) \le a$, and (iii) $c = \inf\{x : \phi_0(x,1) \le a\} - \sup\{x : \phi_0(x,0) \le a\}$.
Then $\forall p > 0$, the expected generalization error of the learning method (1) over the random training samples $Z_n$ can be bounded by:

$$E_{Z_n} \frac{1}{m-n} \sum_{j \in \bar{Z}_n} \mathrm{err}(\hat{f}_{j,\cdot}(Z_n), y_j) \le \frac{1}{a} \inf_{f \in \mathbb{R}^{mK}} \Big[ \frac{1}{m} \sum_{j=1}^m \phi(f_{j,\cdot}, y_j) + \lambda f^T Q_{\mathbf{K}} f \Big] + \Big( \frac{b\, \mathrm{tr}_p(\mathbf{K})}{\lambda n c} \Big)^p,$$

where $\bar{Z}_n = \{1, \dots, m\} - Z_n$, $\mathrm{tr}_p(\mathbf{K}) = \big( \frac{1}{m} \sum_{j=1}^m \mathbf{K}_{j,j}^p \big)^{1/p}$, and $\mathbf{K}_{j,j}$ is the $(j,j)$-entry of $\mathbf{K}$.

Proof. The proof is similar to the proof of a related bound for binary classification in [6]. We shall introduce the following notation: let $i_{n+1} \neq i_1, \dots, i_n$ be an integer randomly drawn from $\bar{Z}_n$, and let $Z_{n+1} = Z_n \cup \{i_{n+1}\}$. Let $\hat{f}(Z_{n+1})$ be the semi-supervised learning method (1) using training data in $Z_{n+1}$: $\hat{f}(Z_{n+1}) = \arg\inf_{f \in \mathbb{R}^{mK}} \big[ \frac{1}{n} \sum_{j \in Z_{n+1}} \phi(f_{j,\cdot}, Y_j) + \lambda f^T Q_{\mathbf{K}} f \big]$. Adapted from a related lemma used in [6] for proving a similar result, we have the following inequality for each $k = 1, \dots, K$:

$$|\hat{f}_{i_{n+1},k}(Z_{n+1}) - \hat{f}_{i_{n+1},k}(Z_n)| \le |\nabla_{1,k}\phi(\hat{f}_{i_{n+1},\cdot}(Z_{n+1}), Y_{i_{n+1}})|\, \mathbf{K}_{i_{n+1},i_{n+1}}/(2\lambda n), \quad (2)$$

where $\nabla_{1,k}\phi(f_{i,\cdot}, y)$ denotes a sub-gradient of $\phi(f_{i,\cdot}, y)$ with respect to $f_{i,k}$, where $f_{i,\cdot} = [f_{i,1}, \dots, f_{i,K}]$. Next we prove

$$\mathrm{err}(\hat{f}_{i_{n+1},\cdot}(Z_n), y_{i_{n+1}}) \le \sup_{k = k_0,\, y_{i_{n+1}}} \frac{1}{a} \phi_0\big(\hat{f}_{i_{n+1},k}(Z_{n+1}), \delta_{k, y_{i_{n+1}}}\big) + \Big( \frac{b\,\mathbf{K}_{i_{n+1},i_{n+1}}}{c\lambda n} \Big)^p. \quad (3)$$

In fact, if $\hat{f}(Z_n)$ does not make an error on the $i_{n+1}$-th example, then the inequality automatically holds. Otherwise, assume that $\hat{f}(Z_n)$ makes an error on the $i_{n+1}$-th example; then there exists $k_0 \neq y_{i_{n+1}}$ such that $\hat{f}_{i_{n+1},y_{i_{n+1}}}(Z_n) \le \hat{f}_{i_{n+1},k_0}(Z_n)$. If we let $d = (\inf\{x : \phi_0(x,1) \le a\} + \sup\{x : \phi_0(x,0) \le a\})/2$, then either $\hat{f}_{i_{n+1},y_{i_{n+1}}}(Z_n) \le d$ or $\hat{f}_{i_{n+1},k_0}(Z_n) \ge d$. By the definition of $c$ and $d$, it follows that there exists $k = k_0$ or $k = y_{i_{n+1}}$ such that either $\phi_0(\hat{f}_{i_{n+1},k}(Z_{n+1}), \delta_{k,y_{i_{n+1}}}) \ge a$ or $|\hat{f}_{i_{n+1},k}(Z_{n+1}) - \hat{f}_{i_{n+1},k}(Z_n)| \ge c/2$. Using (2), we have either $\phi_0(\hat{f}_{i_{n+1},k}(Z_{n+1}), \delta_{k,y_{i_{n+1}}}) \ge a$ or $b\,\mathbf{K}_{i_{n+1},i_{n+1}}/(2\lambda n) \ge c/2$, implying that

$$\frac{1}{a}\phi_0\big(\hat{f}_{i_{n+1},k}(Z_{n+1}), \delta_{k,y_{i_{n+1}}}\big) + \Big( \frac{b\,\mathbf{K}_{i_{n+1},i_{n+1}}}{c\lambda n} \Big)^p \ge 1 = \mathrm{err}(\hat{f}_{i_{n+1},\cdot}(Z_n), y_{i_{n+1}}).$$

This proves (3). We are now ready to prove Theorem 1 using (3).
For every $j \in Z_{n+1}$, denote by $Z_{n+1}^{(j)}$ the subset of $n$ samples in $Z_{n+1}$ with the $j$-th data point left out. We have

$$\mathrm{err}(\hat{f}_{j,\cdot}(Z_{n+1}^{(j)}), y_j) \le \frac{1}{a}\phi(\hat{f}_{j,\cdot}(Z_{n+1}), y_j) + \Big( \frac{b\,\mathbf{K}_{j,j}}{c\lambda n} \Big)^p.$$

We thus obtain for all $f \in \mathbb{R}^{mK}$:

$$E_{Z_n} \frac{1}{m-n} \sum_{j \in \bar{Z}_n} \mathrm{err}(\hat{f}_{j,\cdot}(Z_n), y_j) \le \frac{1}{n+1} E_{Z_{n+1}} \sum_{j \in Z_{n+1}} \mathrm{err}(\hat{f}_{j,\cdot}(Z_{n+1}^{(j)}), y_j)$$
$$\le \frac{1}{n+1} E_{Z_{n+1}} \Big[ \frac{1}{a} \sum_{j \in Z_{n+1}} \phi(\hat{f}_{j,\cdot}(Z_{n+1}), y_j) + \sum_{j \in Z_{n+1}} \Big( \frac{b\,\mathbf{K}_{j,j}}{c\lambda n} \Big)^p \Big]$$
$$\le \frac{n}{a(n+1)} E_{Z_{n+1}} \Big[ \frac{1}{n} \sum_{j \in Z_{n+1}} \phi(f_{j,\cdot}, y_j) + \lambda f^T Q_{\mathbf{K}} f \Big] + \frac{1}{n+1} E_{Z_{n+1}} \sum_{j \in Z_{n+1}} \Big( \frac{b\,\mathbf{K}_{j,j}}{c\lambda n} \Big)^p. \ \ \Box$$

The formulation used here corresponds to the one-versus-all method for multi-category classification. For the SVM loss $\phi_0(x,y) = \max(0, 1 - (2x-1)(2y-1))$, we may take $a = 0.5$, $b = 2$, and $c = 0.5$. In the experiments reported here, we shall employ the least squares function $\phi_0(x,y) = (x-y)^2$, which is widely used for graph learning. With this formulation, we may choose $a = 1/16$, $b = 0.5$, $c = 0.5$ in Theorem 1.

3 Laplacian regularization

Consider an undirected graph $G = (V,E)$ defined on the nodes $V = \{v_j : j = 1, \dots, m\}$, with edges $E \subset \{1, \dots, m\} \times \{1, \dots, m\}$, and weights $w_{j,j'} \ge 0$ associated with edges $(j,j') \in E$. For simplicity, we assume that $(j,j) \notin E$ and $w_{j,j'} = 0$ when $(j,j') \notin E$. Let $\deg_j(G) = \sum_{j'=1}^m w_{j,j'}$ be the degree of node $j$ of graph $G$. We consider the following definition of normalized Laplacian.

Definition 1 Consider a graph $G = (V,E)$ of $m$ nodes with weights $w_{j,j'}$ ($j, j' = 1, \dots, m$). The unnormalized Laplacian matrix $L(G) \in \mathbb{R}^{m \times m}$ is defined as: $L_{j,j'}(G) = -w_{j,j'}$ if $j \neq j'$; $\deg_j(G)$ otherwise. Given $m$ scaling factors $S_j$ ($j = 1, \dots, m$), let $S = \mathrm{diag}(\{S_j\})$. The $S$-normalized Laplacian matrix is defined as: $L_S(G) = S^{-1/2} L(G) S^{-1/2}$. The corresponding regularization is based on:

$$f_{\cdot,k}^T L_S(G) f_{\cdot,k} = \frac{1}{2} \sum_{j,j'=1}^m w_{j,j'} \Big( \frac{f_{j,k}}{\sqrt{S_j}} - \frac{f_{j',k}}{\sqrt{S_{j'}}} \Big)^2.$$

A common choice of $S$ is $S = I$, corresponding to regularizing with the unnormalized Laplacian $L$. The idea is natural: we assume that the predictive values $f_{j,k}$ and $f_{j',k}$ should be close when $(j,j') \in E$ with a strong link.
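With the least squares loss, each column $f_{\cdot,k}$ of the minimizer of (1) solves a linear system, so the estimator is easy to compute exactly. Below is a minimal sketch (my own illustration, not the authors' code; the two-cluster graph and the values of $\alpha$ and $\lambda$ are made up, and $S = I$ so the Laplacian is unnormalized):

```python
import numpy as np

# Two 4-node clusters joined by one weak edge (a made-up similarity graph).
m, K = 8, 2
W = np.zeros((m, m))
for a in range(4):
    for b in range(a + 1, 4):
        W[a, b] = W[b, a] = 1.0                   # dense within cluster {0..3}
        W[a + 4, b + 4] = W[b + 4, a + 4] = 1.0   # dense within cluster {4..7}
W[3, 4] = W[4, 3] = 0.1                           # weak between-cluster link

D = np.diag(W.sum(axis=1))
L = D - W                          # unnormalized Laplacian (S = I in Definition 1)

alpha, lam = 1e-2, 1e-3
M = alpha * np.eye(m) + L          # K^{-1} = alpha * S^{-1} + L_S(G), with S = I

# One labeled node per class; targets are the one-hot encodings e_k.
labeled = np.array([0, 4])
n = len(labeled)
P = np.zeros((m, m)); P[labeled, labeled] = 1.0   # selects labeled rows
Y = np.zeros((m, K)); Y[0, 0] = 1.0; Y[4, 1] = 1.0

# Normal equations of (1) per class: (P/n + lam*M) f = P Y / n.
F = np.linalg.solve(P / n + lam * M, Y / n)

pred = F.argmax(axis=1)            # decode labels by argmax over classes
```

The Laplacian term couples unlabeled nodes to their neighbors, so each cluster inherits the label of its single labeled node while the weak 0.1 edge keeps the clusters nearly independent.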
Another common choice is to normalize by $S_j = \deg_j(G)$ (i.e. $S = D$) so that the diagonals of $L_S$ all become one [3, 4, 7, 2].

Definition 2. Given labels $y = \{y_j\}_{j=1,\ldots,m}$ on $V$, we define the cut for $L_S$ in Definition 1 as:
$$
\mathrm{cut}(L_S, y) = \sum_{j,j': y_j \ne y_{j'}} \frac{w_{j,j'}}{2} \left( \frac{1}{S_j} + \frac{1}{S_{j'}} \right) + \sum_{j,j': y_j = y_{j'}} \frac{w_{j,j'}}{2} \left( \frac{1}{\sqrt{S_j}} - \frac{1}{\sqrt{S_{j'}}} \right)^2.
$$
Unlike typical graph-theoretical definitions of graph-cut, this learning-theoretical definition penalizes not only between-class edge weights but also within-class edge weights when such an edge connects two nodes with different scaling factors. This penalization is intuitive if we look at the regularizer in Definition 1, which encourages $f_{j,k}/\sqrt{S_j}$ to be similar to $f_{j',k}/\sqrt{S_{j'}}$ when $w_{j,j'}$ is large. If $j$ and $j'$ belong to the same class, we want $f_{j,k}$ to be similar to $f_{j',k}$; therefore, for such an in-class pair $(j, j')$, we want $S_j \approx S_{j'}$. This penalization has important consequences, which we will investigate later in the paper. For the unnormalized Laplacian (i.e. $S_j = 1$), the second term on the right-hand side of Definition 2 vanishes, and our learning-theoretical definition becomes identical to the standard graph-theoretical definition: $\mathrm{cut}(L, y) = \sum_{j,j': y_j \ne y_{j'}} w_{j,j'}$. We consider $K$ in (1) defined as follows: $K = (\alpha S^{-1} + L_S(G))^{-1}$, where $\alpha > 0$ is a tuning parameter that makes $K$ strictly positive definite. This parameter is important. For simplicity, we state the generalization bound based on Theorem 1 with optimal $\lambda$. Note that in applications $\lambda$ is usually tuned through cross validation; assuming the optimal $\lambda$ simplifies the bound so that we can focus on the more essential characteristics of generalization performance.

Theorem 2. Let the conditions in Theorem 1 hold with the regularization condition $K = (\alpha S^{-1} + L_S(G))^{-1}$.
Assume that $\phi_0(0,0) = \phi_0(1,1) = 0$. Then $\forall p > 0$, there exists a sample-independent regularization parameter $\lambda$ in (1) such that the expected generalization error is bounded by:
$$
E_{Z_n} \frac{1}{m-n} \sum_{j \in \bar{Z}_n} \mathrm{err}(\hat{f}_{j,\cdot}(Z_n), y_j) \le \frac{C_p(a,b,c)}{n^{p/(p+1)}} \big( \alpha s + \mathrm{cut}(L_S, y) \big)^{p/(p+1)} \mathrm{tr}_p(K)^{p/(p+1)},
$$
where $C_p(a,b,c) = (b/ac)^{p/(p+1)} \big( p^{1/(p+1)} + p^{-p/(p+1)} \big)$ and $s = \sum_{j=1}^m S_j^{-1}$.

Proof. Let $f_{j,k} = \delta_{y_j,k}$. It can easily be verified that $\sum_{j=1}^m \phi(f_{j,\cdot}, y_j)/m + \lambda f^T Q_K f = \lambda(\alpha s + \mathrm{cut}(L_S, y))$. Now, we simply use this expression in Theorem 1 and then optimize over $\lambda$. $\Box$

This theorem relates graph-cut to generalization performance. The conditions on the loss function in Theorem 2 hold for least squares with $b/ac = 16$; the theorem also applies to other standard loss functions such as the SVM loss. With $p$ fixed, the generalization error decreases at the rate $O(n^{-p/(p+1)})$ as $n$ increases. This rate of convergence is faster when $p$ increases; however, in general $\mathrm{tr}_p(K)$ is an increasing function of $p$, so we have a trade-off between the two terms. The bound also suggests that if we normalize the diagonal entries of $K$ such that $K_{j,j}$ is constant, then $\mathrm{tr}_p(K)$ is independent of $p$, and thus a larger $p$ can be used in the bound. This motivates the idea of normalizing the diagonals of $K$. Our goal is to better understand how the quantity $(\alpha s + \mathrm{cut}(L_S, y))^{\frac{p}{p+1}} \mathrm{tr}_p(K)^{\frac{p}{p+1}}$ is related to properties of the graph, which gives a better understanding of graph-based learning.

Definition 3. A subgraph $G_0 = (V_0, E_0)$ of $G = (V, E)$ is a pure component if $G_0$ is connected, $E_0$ is induced by restricting $E$ to $V_0$, and the labels $y$ have identical values on $V_0$. A pure subgraph $G' = \cup_{\ell=1}^q G_\ell$ of $G$ divides $V$ into $q$ disjoint sets $V = \cup_{\ell=1}^q V_\ell$ such that each subgraph $G_\ell = (V_\ell, E_\ell)$ is a pure component. Denote by $\lambda_i(G_\ell) = \lambda_i(L(G_\ell))$ the $i$-th smallest eigenvalue of $L(G_\ell)$.

If we remove all edges of $G$ that connect nodes with different labels, then the resulting subgraph is a pure subgraph (but not the only one).
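The Laplacian constructions of Definition 1 are easy to verify numerically. A minimal sketch (NumPy; the small example graph and function names are our own, not the paper's):

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W for a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def s_normalized_laplacian(W, S):
    """S-normalized Laplacian L_S = S^{-1/2} L S^{-1/2} for scaling factors S (vector)."""
    s_inv_sqrt = 1.0 / np.sqrt(S)
    return laplacian(W) * np.outer(s_inv_sqrt, s_inv_sqrt)

# Small example graph on 4 nodes.
W = np.array([[0, 2, 1, 0],
              [2, 0, 0, 1],
              [1, 0, 0, 3],
              [0, 1, 3, 0]], dtype=float)
S = W.sum(axis=1)          # degree-based scaling (S = D)
LS = s_normalized_laplacian(W, S)

# Check the quadratic-form identity from Definition 1:
# f^T L_S f = (1/2) sum_{j,j'} w_{j,j'} (f_j/sqrt(S_j) - f_{j'}/sqrt(S_{j'}))^2
f = np.array([1.0, -0.5, 2.0, 0.3])
g = f / np.sqrt(S)
rhs = 0.5 * sum(W[j, k] * (g[j] - g[k]) ** 2 for j in range(4) for k in range(4))
assert np.isclose(f @ LS @ f, rhs)
```

With degree scaling, the diagonals of `LS` are all one, which is exactly the L-scaling normalization discussed above.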
For each pure component $G_\ell$, its first eigenvalue $\lambda_1(G_\ell)$ is always zero. The second eigenvalue $\lambda_2(G_\ell) > 0$, and it measures how well-connected $G_\ell$ is [2].

Theorem 3. Let the assumptions of Theorem 2 hold, and let $G' = \cup_{\ell=1}^q G_\ell$ ($G_\ell = (V_\ell, E_\ell)$) be a pure subgraph of $G$. For all $p \ge 1$, there exist sample-independent $\lambda$ and $\alpha$ such that the generalization performance of (1), $E_{Z_n} \sum_{j \in \bar{Z}_n} \mathrm{err}(\hat{f}_{j,\cdot}, y_j)/(m-n)$, is bounded by
$$
\frac{C_p(a,b,c)}{n^{p/(p+1)}} \left[ s^{1/2} \left( \sum_{\ell=1}^q \frac{s_\ell(p)/m}{m_\ell^p} \right)^{1/2p} + \mathrm{cut}(L_S, y)^{1/2} \left( \sum_{\ell=1}^q \frac{s_\ell(p)/m}{\lambda_2(G_\ell)^p} \right)^{1/2p} \right]^{2p/(p+1)},
$$
where $m_\ell = |V_\ell|$, $s = \sum_{j=1}^m S_j^{-1}$, and $s_\ell(p) = \sum_{j \in V_\ell} S_j^p$.

Proof sketch. We simply upper bound $\mathrm{tr}_p(K)$ in terms of $\lambda_2(G_\ell)$ and $s_\ell$, where $K = (\alpha S^{-1} + L_S)^{-1}$. Substituting this estimate into Theorem 2 and optimizing over $\alpha$ gives the result. $\Box$

To put this into perspective, suppose that we use the unnormalized Laplacian regularizer on a zero-cut graph. Then $S = I$ and $\mathrm{cut}(L_S, y) = 0$, and by letting $p = 1$ and $p \to \infty$ in Theorem 3, we have:
$$
E_{Z_n} \sum_{j \in \bar{Z}_n} \frac{\mathrm{err}(\hat{f}_{j,\cdot}, y_j)}{m-n} \le 2\sqrt{\frac{b}{ac} \cdot \frac{q}{n}}
\qquad \text{and} \qquad
E_{Z_n} \sum_{j \in \bar{Z}_n} \frac{\mathrm{err}(\hat{f}_{j,\cdot}, y_j)}{m-n} \le \frac{b}{ac} \cdot \frac{m}{n \min_\ell m_\ell}.
$$
That is, in the zero-cut case the generalization performance can be bounded as $O(\sqrt{q/n})$. We can also achieve a faster convergence rate of $O(1/n)$, but that bound also depends on $m/(\min_\ell m_\ell) \ge q$. This implies that we achieve better convergence at the $O(1/n)$ level if the sizes of the components are balanced, while the convergence may behave like $O(\sqrt{q/n})$ otherwise.

3.1 Near zero-cut optimum scaling factors

The above observation motivates a scaling matrix $S$ that compensates for unbalanced pure-component sizes. From Definition 2 and Theorem 2 we know that good scaling factors should be approximately constant within each class. Here we focus on the case where the scaling factors are constant within each pure component ($S_j = \bar{s}_\ell$ when $j \in V_\ell$) in order to derive optimum scaling factors. Let us define
$$
\mathrm{cut}(G', y) = \sum_{j,j': y_j \ne y_{j'}} w_{j,j'} + \sum_{\ell \ne \ell'} \sum_{j \in V_\ell, j' \in V_{\ell'}} \frac{w_{j,j'}}{2}.
$$
In Theorem 3, when we use $\mathrm{cut}(L_S, y) \le \mathrm{cut}(G', y)/\min_\ell \bar{s}_\ell$, let $p \to \infty$, and assume that $\mathrm{cut}(G', y)$ is sufficiently small, the dominant term of the bound becomes $\frac{\max_\ell (\bar{s}_\ell/m_\ell)}{n} \sum_{\ell=1}^q \frac{m_\ell}{\bar{s}_\ell}$, which is optimized by the choice $\bar{s}_\ell = m_\ell$; the resulting bound becomes:
$$
\frac{1}{m-n} \sum_{j \in \bar{Z}_n} \mathrm{err}(\hat{f}_{j,\cdot}, y_j) \le \frac{b}{ac} \cdot \frac{1}{n} \left( \sqrt{q} + \sqrt{\frac{\mathrm{cut}(G', y)}{u(G') \min_\ell m_\ell}} \right)^2,
$$
where $u(G') = \min_\ell (\lambda_2(G_\ell)/m_\ell)$. Hence, if $\mathrm{cut}(G', y)$ is small, then we should choose $\bar{s}_\ell \propto m_\ell$ for each pure component $\ell$, so that the generalization performance is approximately $(ac)^{-1} b \cdot q/n$. The analysis provided here not only formally shows the importance of normalization in the learning-theoretical framework, but also suggests that a good normalization factor for each node $j$ is approximately the size of the well-connected pure component that contains node $j$ (assuming that nodes belonging to different pure components are only weakly connected). The commonly practiced degree-based normalization method $S_j = \deg_j(G)$ provides such good normalization factors under a simplified "box model" used in early studies, e.g. [4]. In this model, each node connects to itself and all other nodes of the same pure component with edge weight $w_{j,j'} = 1$. The degree is thus $\deg_j(G_\ell) = |V_\ell| = m_\ell$, which gives the optimal scaling in our analysis. However, in general the box model may not be a good approximation for practical problems. A more realistic approximation, which we call the core-satellite model, will be introduced in the experimental section. For such a model, degree-based normalization can fail because $\deg_j(G_\ell)$ is not approximately constant within each pure component $G_\ell$ (thus raising $\mathrm{cut}(L_S, y)$), and it may not be proportional to $m_\ell$. Our remedy is as follows. Let $\bar{K} = (\alpha I + L)^{-1}$ be the kernel matrix corresponding to the unnormalized Laplacian, and let $v_\ell \in R^m$ be the vector whose $j$-th entry is 1 if $j \in V_\ell$ and 0 otherwise.
Then it is easy to verify that for small $\alpha$ and near-zero $\mathrm{cut}(G', y)$, we have $\alpha \bar{K} = \sum_{\ell=1}^q v_\ell v_\ell^T/m_\ell + O(1)$, and thus $\bar{K}_{j,j} \propto m_\ell^{-1}$ for each $j \in V_\ell$. Therefore the scaling factor $S_j = 1/\bar{K}_{j,j}$ is nearly optimal for all $j$. We call this normalization method ($S_j = 1/\bar{K}_{j,j}$, $K = (\alpha S^{-1} + L_S)^{-1}$) K-scaling in this paper, as it scales the kernel matrix $K$ so that each $K_{j,j} = 1$. By contrast, we call the standard degree-based normalization ($S_j = \deg_j(G)$, $K = (\alpha I + L_S)^{-1}$) L-scaling, as it scales the diagonals of $L_S$ to 1. Although K-scaling coincides with a common practice in standard kernel learning, it is important to note that showing this method behaves well in the graph learning setting is non-trivial and novel. In fact, this normalization method had not been proposed in the graph learning setting before this work. Without the learning-theoretical results developed here, it is not obvious whether this method should work better than the commonly practiced degree-based normalization.

4 Dimension Reduction

Normalization and dimension reduction have been commonly used in spectral clustering, e.g. [3, 4]. For semi-supervised learning, dimension reduction (without normalization) is known to improve performance [1, 6], while normalization (without dimension reduction) has also been explored [7]. An appropriate combination of normalization and dimension reduction can further improve performance. We shall first introduce dimension reduction with the normalized Laplacian $L_S(G)$. Denote by $P^r_S(G)$ the projection operator onto the eigenspace of $\alpha S^{-1} + L_S(G)$ corresponding to the $r$ smallest eigenvalues. Now, we may define the following regularizer on the reduced subspace:
$$
f_{\cdot,k}^T K^{-1} f_{\cdot,k} = \begin{cases} f_{\cdot,k}^T K_0^{-1} f_{\cdot,k} & \text{if } P^r_S(G) f_{\cdot,k} = f_{\cdot,k}, \\ +\infty & \text{otherwise.} \end{cases} \qquad (4)
$$
Note that we will focus on bounding the generalization complexity using the reduced dimensionality $r$. In this context, the choice of $K_0$ is not important; for example, we may simply choose $K_0 = I$.
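The two normalizations can be sketched side by side as follows (NumPy; the two-clique toy graph and all function names are our own choices, not the paper's). On a graph with well-separated components, the K-scaling factor $1/\bar{K}_{j,j}$ tracks the component size, as the analysis above predicts:

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def l_scaling_kernel(W, alpha):
    """L-scaling: S_j = deg_j, K = (alpha*I + L_S)^{-1}; diagonals of L_S are 1."""
    L = laplacian(W)
    s = 1.0 / np.sqrt(W.sum(axis=1))
    LS = L * np.outer(s, s)
    return np.linalg.inv(alpha * np.eye(len(W)) + LS)

def k_scaling_kernel(W, alpha):
    """K-scaling: S_j = 1/Kbar_jj with Kbar = (alpha*I + L)^{-1},
    then K = (alpha*S^{-1} + L_S)^{-1}, so that each K_jj is close to 1."""
    L = laplacian(W)
    Kbar = np.linalg.inv(alpha * np.eye(len(W)) + L)
    S = 1.0 / np.diag(Kbar)
    s = 1.0 / np.sqrt(S)
    LS = L * np.outer(s, s)
    return np.linalg.inv(alpha * np.diag(1.0 / S) + LS)

# Two disconnected cliques of sizes 3 and 6 (a zero-cut graph, q = 2).
W = np.zeros((9, 9))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)
alpha = 1e-4
Kk = k_scaling_kernel(W, alpha)
```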
The benefit of dimension reduction in graph learning has been investigated in [6] under the spectral kernel design framework. Note that the normalization issue, which changes the eigenvectors and their ordering, was not investigated there. The following theorem shows that the target vectors can be well approximated by their projection onto $P^r_S(G)$. We skip the proof due to space limitations.

Theorem 4. Let $G' = \cup_{\ell=1}^q G_\ell$ ($G_\ell = (V_\ell, E_\ell)$) be a pure subgraph of $G$. Consider $r \ge q$, so that
$$
\lambda_{r+1}(L_S(G)) \ge \lambda_{r+1}(L_S(G')) \ge \min_\ell \lambda_2(L_S(G_\ell)).
$$
For each $k$, let $\bar{f}_{j,k} = \delta_{y_j,k}$ be the target (encoding of the true labels) for class $k$ ($j = 1, \ldots, m$). Then $\|P^r_S(G)\bar{f}_{\cdot,k} - \bar{f}_{\cdot,k}\|_2^2 \le \delta_r(S) \|\bar{f}_{\cdot,k}\|_2^2$, where
$$
\delta_r(S) = \frac{\|L_S(G) - L_S(G')\|_2 + d(S)}{\lambda_{r+1}(L_S(G))}, \qquad
d(S) = \max_\ell \frac{1}{2|V_\ell|} \sum_{j,j' \in V_\ell} \big( S_j^{-1/2} - S_{j'}^{-1/2} \big)^2.
$$

We can prove a generalization bound using Theorem 4. For simplicity, we only consider the least squares loss $\phi(f_{j,\cdot}, y_j) = \sum_{k=1}^K (f_{j,k} - \delta_{k,y_j})^2$ in (1), using regularization (4) and $K_0 = I$. With $p = 1$, we have $\frac{1}{m} \sum_{j=1}^m \phi(\bar{f}_{j,\cdot}, y_j) \le \delta_r(S)^2 + \lambda m$. It is also equivalent to take $K_0 = P^r_S(G)$ due to the dimension reduction, so that we can use $\mathrm{tr}(K) = r$. Now, from Theorem 1 with $a = 1/16$, $b = 0.5$, $c = 0.5$, we have
$$
E_{Z_n} \frac{1}{m-n} \sum_{j \in \bar{Z}_n} \mathrm{err}(\hat{f}_{j,\cdot}, y_j) \le 16\big( \delta_r(S)^2 + \lambda m \big) + \frac{r}{\lambda n m}.
$$
By optimizing over $\lambda$, we obtain
$$
E_{Z_n} \sum_{j \in \bar{Z}_n} \frac{\mathrm{err}(\hat{f}_{j,\cdot}, y_j)}{m-n} \le 16\,\delta_r(S)^2 + 32\sqrt{r/n}. \qquad (5)
$$
The analysis of optimum scaling factors is analogous to Section 3.1, and the conclusions there hold. Compared to Theorem 3, the advantage of dimension reduction in (5) is that the quantity $\mathrm{cut}(L_S, y)$ is replaced by $\|L_S(G) - L_S(G')\|_2$, which is typically much smaller. Instead of a rigorous analysis, we shall just give a brief intuition. For simplicity we take $S = I$, so that we can ignore the variations caused by $S$. The 2-norm of the symmetric error matrix $L_S(G) - L_S(G')$ is its largest eigenvalue, which is no more than the largest 1-norm of one of its row vectors.
In contrast, $\mathrm{cut}(L_S, y)$ behaves similarly to the absolute sum of the entries of the error matrix, which is $m$ times the averaged 1-norm of its row vectors. Therefore, if the error is relatively uniform across rows, $\mathrm{cut}(L_S, y)$ can be on the order of $m$ times larger than $\|L_S(G) - L_S(G')\|_2$.

5 Experiments

We test the three types of kernel matrix $K$ (unnormalized, or normalized by K-scaling or L-scaling) with two regularization methods: the first uses $K$ without dimension reduction, and the second reduces the dimension of $K^{-1}$ to the eigenvectors corresponding to the smallest $r$ eigenvalues and regularizes with $f^T K^{-1} f$ if $P^r_S(G)f = f$ and $+\infty$ otherwise. We are particularly interested in how well K-scaling performs. From $m$ data points, $n$ labeled training examples are randomly chosen while ensuring that at least one training example is chosen from each class. The remaining $m - n$ data points serve as test data. The regularization parameter $\lambda$ is chosen by cross validation on the $n$ labeled training examples. We will show performance either when the remaining parameters ($\alpha$ and dimensionality $r$) are also chosen by cross validation, or when they are set to the optimum (oracle performance). The dimensionality $r$ is chosen from $K, K+5, K+10, \ldots, 100$, where $K$ here denotes the number of classes, unless otherwise specified. Our focus is on small $n$ close to the number of classes. Throughout this section, we conduct 10 runs with random training/test splits and report the average accuracy. We use the one-versus-all strategy with the least squares loss $\phi_k(a, b) = (a - \delta_{k,b})^2$.

Controlled data experiments. The purpose of the controlled data experiments is to observe how the effectiveness of the normalization methods correlates with graph properties. The graphs we generate contain 2000 nodes, each of which is assigned one of 10 classes.
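The second regularization method above restricts $f$ to the span of the eigenvectors with the $r$ smallest eigenvalues, i.e. an eigenspace projection. A minimal sketch of that projection (our own toy graph: two well-separated clusters joined by one weak erroneous edge):

```python
import numpy as np

def projection_onto_smallest(M, r):
    """Projector P_S^r onto the eigenspace of M for the r smallest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(M)   # eigh returns ascending eigenvalues
    U = eigvecs[:, :r]
    return U @ U.T

# Two near-pure clusters (q = 2) of 4 nodes each, one weak between-class edge.
m = 8
W = np.zeros((m, m))
W[:4, :4] = 1.0
W[4:, 4:] = 1.0
np.fill_diagonal(W, 0.0)
W[3, 4] = W[4, 3] = 0.05

L = np.diag(W.sum(axis=1)) - W
alpha = 1e-3
P = projection_onto_smallest(alpha * np.eye(m) + L, r=2)

# The class-1 target vector is nearly preserved by the rank-2 projection,
# illustrating Theorem 4 for a small ||L_S(G) - L_S(G')||.
f_bar = np.array([1.0] * 4 + [0.0] * 4)
rel_err = np.linalg.norm(P @ f_bar - f_bar) ** 2 / np.linalg.norm(f_bar) ** 2
assert rel_err < 0.01
```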
We show the results when dimension reduction is applied to the three types of matrix $K$.

Figure 1: Classification accuracy (%). (a) Graphs with near-constant within-class degrees (graph1-graph3). (b) Core-satellite graphs (graph6-graph10). Bars compare Unnormalized, L-scaling, and K-scaling. $n = 40$, $m = 2000$. With dimension reduction (dim $\le 20$; chosen by cross validation).

The performance is averaged over 10 random splits, with error bars representing one standard deviation. Figure 1 (a) shows classification accuracy on three graphs that were generated so that the node degrees (of either correct edges or erroneous edges) are close to constant within each class but vary across classes. On these graphs, both K-scaling and L-scaling significantly improve classification accuracy over the unnormalized baseline, and there is not much difference between the two. Observe that K-scaling and L-scaling perform differently on the graphs used in Figure 1 (b). These five graphs have the following properties. Each class consists of core nodes and satellite nodes. Core nodes of the same class are tightly connected with each other and have no erroneous edges. Satellite nodes are relatively weakly connected to core nodes of the same class; they are also connected to some other classes' satellite nodes (i.e., introducing errors). This core-satellite model is intended to simulate real-world data in which some data points are close to the class boundaries (the satellite nodes). For graphs generated in this manner, degrees vary within the same class, since the satellite nodes have smaller degrees than the core nodes. Our analysis suggests that L-scaling will do poorly here. Figure 1 (b) shows that on the five core-satellite graphs, K-scaling indeed produces higher performance than L-scaling.
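A toy version of the core-satellite construction just described can be sketched as follows (all sizes and weights are our own choices, intended only to show the within-class degree variation that the text predicts will hurt degree-based L-scaling):

```python
import numpy as np

def core_satellite_class(n_core, n_sat, core_w=1.0, sat_w=0.1):
    """One class: a clique of core nodes plus satellites, each weakly attached
    to every core node. Returns the within-class weight matrix."""
    n = n_core + n_sat
    W = np.zeros((n, n))
    W[:n_core, :n_core] = core_w      # tight core clique
    W[:n_core, n_core:] = sat_w       # weak core-satellite links
    W[n_core:, :n_core] = sat_w
    np.fill_diagonal(W, 0.0)
    return W

W = core_satellite_class(n_core=5, n_sat=3)
deg = W.sum(axis=1)
# Degrees vary within the class: satellites have much smaller degree than cores,
# exactly the regime where degree-based normalization is a poor fit.
assert deg[:5].min() > 2 * deg[5:].max()
```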
In particular, K-scaling does well even where L-scaling underperforms the unnormalized baseline.

Real-world data experiments. Our real-world data experiments use an image data set (MNIST) and a text data set (RCV1). The MNIST data set, downloadable from http://yann.lecun.com/exdb/mnist/, consists of hand-written digit images (representing 10 classes, digits "0" through "9"). For our experiments, we randomly chose 2000 images (i.e., $m = 2000$). Reuters Corpus Version 1 (RCV1) consists of news articles labeled with topics. For our experiments, we chose 10 topics (ranging from sports to labor issues; representing 10 classes) that have relatively large populations, and randomly chose 2000 articles that are labeled with exactly one of those 10 topics. To generate graphs from the image data, as is commonly done, we first generate the vectors of the gray-scale values of the pixels and produce the edge weight between the $i$-th and $j$-th data points $X_i$ and $X_j$ by $w_{i,j} = \exp(-\|X_i - X_j\|^2/t)$, where $t > 0$ is a parameter (RBF kernels). To generate graphs from the text data, we first create the bag-of-words vectors and then set $w_{i,j}$ based on the RBF kernel as above. As our baseline, we test the supervised configuration by letting $W + \beta I$ be the kernel matrix and using the same least squares loss function, where we use the oracle $\beta$ that is optimal. Figures 2 (a-1,2) show performance in relation to the number of labeled examples ($n$) on the MNIST data set. The comparison of the three bold lines (representing the methods with dimension reduction) in Figure 2 (a-1) shows that when the dimensionality and $\alpha$ are determined by cross validation,
Figure 2: Classification accuracy (%) versus sample size $n$ ($m = 2000$). (a-1) MNIST, dim and $\alpha$ determined by cross validation. (a-2) MNIST, dim and $\alpha$ set to the optimum. (b-1) RCV1, dim and $\alpha$ determined by cross validation. (b-2) RCV1, dim and $\alpha$ set to the optimum. Lines compare the supervised baseline with Unnormalized, L-scaling, and K-scaling, each with and without dimension reduction.

K-scaling outperforms L-scaling, and L-scaling outperforms the unnormalized Laplacian. The performance differences among these three are statistically significant ($p \le 0.01$) based on the paired t-test. The performance of the unnormalized Laplacian (with dimension reduction) is roughly consistent with the performance at similar $(m, n)$ with heuristic dimension selection in [1]. Without dimension reduction, L-scaling and K-scaling still improve performance over the unnormalized Laplacian. The best performance is always obtained by K-scaling with dimension reduction. In Figure 2 (a-1), the unnormalized Laplacian with dimension reduction underperforms the unnormalized Laplacian without dimension reduction, indicating that dimension reduction degrades performance in this setting. By comparing Figures 2 (a-1) and (a-2), we observe that this seemingly counterintuitive trend is caused by the difficulty of choosing the right dimensionality by cross validation. Figure 2 (a-2) shows the performance at the oracle-optimal dimensionality and $\alpha$. As observed, if the optimal dimensionality is known (as in (a-2)), dimension reduction improves performance with or without normalization by K-scaling or L-scaling, and all transductive configurations outperform the supervised baseline. We also note that the comparison of Figures 2 (a-1) and (a-2) shows that choosing a good dimensionality by cross validation is much harder than choosing $\alpha$ by cross validation, especially when the number of labeled examples is small.
On the RCV1 data set, the performance trend is similar to that of MNIST. Figures 2 (b-1,2) show the performance on RCV1 using the RBF kernel ($t = 0.25$, 100NN). In the setting of Figure 2 (b-1), where the dimensionality and $\alpha$ were determined by cross validation, K-scaling with dimension reduction generally performs the best. By setting the dimensionality and $\alpha$ to the optimum, the benefit of K-scaling with dimension reduction is even clearer (Figure 2 (b-2)). Its performance differences from the second and third best, 'L-scaling (w/ dim redu.)' and 'Unnormalized (w/ dim redu.)', are statistically significant ($p \le 0.01$) in both Figure 2 (b-1) and (b-2). In our experiments, K-scaling with dimension reduction consistently outperformed the other configurations. Without dimension reduction, K-scaling and L-scaling are not always effective. This is consistent with our analysis: on real data, the cut is not near zero, so the effect of normalization is unclear (Section 3.1); however, when the dimension is reduced, $\|L_S(G) - L_S(G')\|_2$ (corresponding to the cut) can be much smaller (Section 4), which suggests that K-scaling should improve performance.

6 Conclusion

We derived generalization bounds for learning on graphs with Laplacian regularization, using properties of the graph. In particular, we explained the importance of Laplacian normalization and dimension reduction for graph learning. We argued that the standard L-scaling normalization method has the undesirable property that the normalization factors can vary significantly within a pure component. An alternative normalization method, which we call K-scaling, is proposed to remedy the problem. Experiments confirm the superiority of this normalization scheme.

References

[1] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, Special Issue on Clustering, 209-239, 2004.
[2] F. R. Chung. Spectral Graph Theory. Regional Conference Series in Mathematics. American Mathematical Society, Rhode Island, 1998.
[3] A. Y.
Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849-856, 2001.
[4] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 22:888-905, 2000.
[5] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random walks. In NIPS, 2002.
[6] T. Zhang and R. K. Ando. Analysis of spectral kernel design based semi-supervised learning. In NIPS, 2006.
[7] D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, pages 321-328, 2004.
[8] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
2006
An Oracle Inequality for Clipped Regularized Risk Minimizers

Ingo Steinwart, Don Hush, and Clint Scovel
Modelling, Algorithms and Informatics Group, CCS-3
Los Alamos National Laboratory, Los Alamos, NM 87545
{ingo,dhush,jcs}@lanl.gov

Abstract

We establish a general oracle inequality for clipped approximate minimizers of regularized empirical risks and apply this inequality to support vector machine (SVM) type algorithms. We then show that for SVMs using Gaussian RBF kernels for classification this oracle inequality leads to learning rates that are faster than the ones established in [9]. Finally, we use our oracle inequality to show that a simple parameter selection approach based on a validation set can yield the same fast learning rates without knowing the noise exponents which were required to be known a-priori in [9].

1 Introduction

The theoretical understanding of support vector machines (SVMs) and related kernel-based methods has been substantially improved in recent years. For example, using Talagrand's concentration inequality and local Rademacher averages, it has recently been shown that SVMs for classification can learn with rates up to $n^{-1}$ under somewhat realistic assumptions on the data-generating distribution (see [9, 11] and the related work [2]). However, the so-called "shrinking technique" of [9, 11] for establishing such rates requires the free parameters to be chosen a-priori, and in addition, the optimal values of these parameters depend on features of the data-generating distribution which are typically unknown. Consequently, [9, 11] do not provide a practical method for learning with fast rates. On the other hand, the oracle inequality in [2] only holds for distributions having Tsybakov noise exponent $\infty$, and hence it describes a situation which is rarely met in practice. The goal of this work is to overcome these shortcomings by establishing a general oracle inequality (see Theorem 3.1) for regularized empirical risk minimizers.
The key ingredient of this oracle inequality is the observation that for most commonly used loss functions it is possible to "clip" the decision function of the algorithm before beginning the theoretical analysis. In addition, a careful choice of the weighted empirical process to which Talagrand's inequality is applied makes the "shrinking technique" superfluous. Finally, by explicitly dealing with $\epsilon$-approximate minimizers of the regularized risk, our results also apply to actual SVM algorithms. With the help of the general oracle inequality we then establish an oracle inequality for SVM-type algorithms (see Theorem 2.1) as well as a simple oracle inequality for model selection (see Theorem 4.2). For the former, we show that it leads to improved rates for e.g. binary classification under the assumptions considered in [9] and a-priori known noise exponents. Using the model selection theorem, we then show how our new oracle inequality for SVMs can be used to analyze a simple parameter selection procedure based on a validation set that achieves the same learning rates without prior knowledge of the noise exponents. The rest of this work is organized as follows: in Section 2 we present our oracle inequality for SVM-type algorithms; we then discuss its implications and analyze the simple parameter selection procedure when using Gaussian RBF kernels. In Section 3 we present and prove the general oracle inequality. The proof of Theorem 2.1, as well as the oracle inequality for model selection, can be found in Section 4.

2 Main Results

Throughout this work we assume that $X$ is a compact metric space, $Y \subset [-1, 1]$ is compact, $P$ is a Borel probability measure on $X \times Y$, and $F$ is a set of functions over $X$ such that $0 \in F$. Often $F$ is a reproducing kernel Hilbert space (RKHS) $H$ of continuous functions over $X$ with closed unit ball $B_H$. It is well known that $H$ can then be continuously embedded into the space of continuous functions $C(X)$ equipped with the usual maximum norm $\|\cdot\|_\infty$.
In order to avoid constants, we always assume that this embedding has norm 1, i.e. $\|\cdot\|_\infty \le \|\cdot\|_H$. Furthermore, $L : Y \times R \to [0, \infty)$ always denotes a continuous function which is convex in its second variable and satisfies $L(y, 0) \le 1$. The functions $L$ will serve as loss functions, and consequently let us recall that the associated $L$-risk of a measurable function $f : X \to R$ is defined by $R_{L,P}(f) = E_{(x,y)\sim P}\, L(y, f(x))$. Note that the assumption $L(y, 0) \le 1$ immediately gives $R_{L,P}(0) \le 1$. Furthermore, the minimal $L$-risk is denoted by $R^*_{L,P}$, i.e. $R^*_{L,P} = \inf\{R_{L,P}(f) \mid f : X \to R \text{ measurable}\}$, and a function attaining this infimum is denoted by $f^*_{L,P}$. We always assume that such an $f^*_{L,P}$ exists. The learning schemes we are mainly interested in are based on an optimization problem of the form
$$
f_{P,\lambda} := \arg\min_{f \in H} \big( \lambda \|f\|_H^2 + R_{L,P}(f) \big), \qquad (1)
$$
where $\lambda > 0$. Note that if we identify a training set $T = ((x_1, y_1), \ldots, (x_n, y_n)) \in (X \times Y)^n$ with its empirical measure, then $f_{T,\lambda}$ denotes the empirical estimator of the above learning scheme. Obviously, support vector machines (see e.g. [5]) and regularization networks (see e.g. [7]) are both learning algorithms which fall into this category. One way to describe the approximation error of these learning schemes is the approximation error function
$$
a(\lambda) := \lambda \|f_{P,\lambda}\|_H^2 + R_{L,P}(f_{P,\lambda}) - R^*_{L,P}, \qquad \lambda > 0,
$$
which has been discussed in some detail in [10]. Furthermore, in order to deal with the complexity of the RKHSs used, recall that for a subset $A \subset E$ of a Banach space $E$ the covering numbers are defined by
$$
N(A, \varepsilon, E) := \min\Big\{ n \ge 1 : \exists x_1, \ldots, x_n \in E \text{ with } A \subset \bigcup_{i=1}^n (x_i + \varepsilon B_E) \Big\}, \qquad \varepsilon > 0,
$$
where $B_E$ denotes the closed unit ball of $E$. Given a finite sequence $T = ((x_1, y_1), \ldots, (x_n, y_n)) \in (X \times Y)^n$ we write $T_X := (x_1, \ldots, x_n)$.
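To make (1) concrete, here is a minimal empirical sketch for the least squares loss, where the minimizer has a closed form via the representer theorem; the clipping step anticipates the clipped decision functions discussed above. All function names and the toy data are our own illustration, not the paper's method:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian RBF kernel k(x, x') = exp(-sigma^2 * ||x - x'||^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sigma ** 2 * d2)

def fit(X, y, lam, sigma=1.0):
    """Empirical minimizer of lam * ||f||_H^2 + R_{L,T}(f) for least squares:
    with f = sum_i alpha_i k(x_i, .), optimality gives (lam*n*I + K) alpha = y."""
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    return np.linalg.solve(lam * n * np.eye(n) + K, y)

def clipped_predict(X_train, alpha, X_new, sigma=1.0):
    """Evaluate f_{T,lambda} and clip it to [-1, 1]."""
    return np.clip(rbf_kernel(X_new, X_train, sigma) @ alpha, -1.0, 1.0)

# Toy binary data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
alpha = fit(X, y, lam=1e-3)
raw = rbf_kernel(X, X) @ alpha
pred = clipped_predict(X, alpha, X)

# Since Y is a subset of [-1, 1], clipping never increases the least squares risk.
assert np.mean((y - pred) ** 2) <= np.mean((y - raw) ** 2) + 1e-12
```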
For our main results we are particularly interested in covering numbers in the Hilbert space $L_2(T_X)$, which consists of all equivalence classes of functions $f : X \to R$ and is equipped with the norm
$$
\|f\|_{L_2(T_X)} := \Big( \frac{1}{n} \sum_{i=1}^n f(x_i)^2 \Big)^{1/2}. \qquad (2)
$$
In other words, $L_2(T_X)$ is an $L_2$-space with respect to the empirical measure of $(x_1, \ldots, x_n)$. Learning schemes of the form (1) typically produce functions $f_{P,\lambda}$ with $\lim_{\lambda \to 0} \|f_{P,\lambda}\|_\infty = \infty$ (see e.g. [10] for a precise statement). Unfortunately, this behaviour has a serious negative impact on the learning rates when one directly employs standard tools such as Hoeffding's, Bernstein's, or Talagrand's inequality. On the other hand, when dealing with e.g. the hinge loss, it is obvious that clipping the function $f_{P,\lambda}$ at $-1$ and $1$ does not worsen the corresponding risks. Following this simple observation, we will consider loss functions $L$ that satisfy the clipping condition
$$
L(y, t) \ge \begin{cases} L(y, 1) & \text{if } t \ge 1 \\ L(y, -1) & \text{if } t \le -1 \end{cases} \qquad (3)
$$
for all $y \in Y$. Recall that this type of loss function was already considered in [4, 11], but the clipping idea actually goes back to [1]. Moreover, it is elementary to check that most commonly used loss functions, including the hinge loss and the least squares loss, satisfy (3). Given a function $f : X \to R$, we now define its clipped version $\hat{f} : X \to [-1, 1]$ by
$$
\hat{f}(x) := \begin{cases} 1 & \text{if } f(x) > 1 \\ f(x) & \text{if } f(x) \in [-1, 1] \\ -1 & \text{if } f(x) < -1. \end{cases}
$$
It is clear from (3) that we always have $L(y, \hat{f}(x)) \le L(y, f(x))$, and consequently we obtain $R_{L,P}(\hat{f}) \le R_{L,P}(f)$ for all distributions $P$. Finally, we also need the following Lipschitz condition:
$$
|L|_1 := \sup_{y \in Y,\, -1 \le t_1, t_2 \le 1} \frac{|L(y, t_1) - L(y, t_2)|}{|t_1 - t_2|} \le 2. \qquad (4)
$$
With the help of these definitions we can now state our main result, which establishes an oracle inequality for clipped versions of $f_{T,\lambda}$:

Theorem 2.1. Let $P$ be a distribution on $X \times Y$ and let $L$ be a loss function which satisfies (3) and (4). Let $H$ be a RKHS of continuous functions on $X$.
For a fixed element $f_0 \in H$ we define
$$
a(f_0) := \lambda \|f_0\|_H^2 + R_{L,P}(f_0) - R^*_{L,P}, \qquad B(f_0) := \sup_{x \in X, y \in Y} L(y, f_0(x)). \qquad (5)
$$
In addition, we assume that we have a variance bound of the form
$$
E_P \big( L \circ \hat{f} - L \circ f^*_{L,P} \big)^2 \le v \, \big( E_P (L \circ \hat{f} - L \circ f^*_{L,P}) \big)^\vartheta \qquad (6)
$$
for constants $v \ge 1$, $\vartheta \in [0, 1]$ and all measurable $f : X \to R$. Moreover, suppose that $H$ satisfies
$$
\sup_{T \in (X \times Y)^n} \log N\big( B_H, \varepsilon, L_2(T_X) \big) \le a \varepsilon^{-2p}, \qquad \varepsilon > 0, \qquad (7)
$$
for some constants $p \in (0, 1)$ and $a \ge 1$. For fixed $\lambda > 0$, let $f_{T,\lambda} \in H$ be a function that minimizes $f \mapsto \lambda \|f\|_H^2 + R_{L,T}(f)$ up to some $\epsilon > 0$. Then there exists a constant $K_{p,v}$ depending only on $p$ and $v$ such that for all $\tau \ge 1$ we have with probability not less than $1 - 3e^{-\tau}$ that
$$
R_{L,P}(\hat{f}_{T,\lambda}) - R^*_{L,P} \le \Big( \frac{K_{p,v}\, a}{\lambda^p n} \Big)^{\frac{1}{2 - \vartheta + p(\vartheta - 1)}} + \frac{K_{p,v}\, a}{\lambda^p n} + 5\Big( \frac{32 v \tau}{n} \Big)^{\frac{1}{2-\vartheta}} + \frac{140\tau}{n} + \frac{14 B(f_0) \tau}{3n} + 8 a(f_0) + 4\epsilon. \qquad (8)
$$
The above oracle inequality has some interesting consequences, as the following examples illustrate. We begin with an example that deals with a fixed kernel:

Example 2.2 (Learning rates for a single kernel). Assume that in Theorem 2.1 we have a Lipschitz continuous loss function such as the hinge loss. In addition, assume that the approximation error function satisfies $a(\lambda) \le c\lambda^\beta$, $\lambda > 0$, for some constants $c > 0$ and $\beta \in (0, 1]$. Setting $f_0 := f_{P,\lambda}$ and optimizing (8) with respect to $\lambda$ then shows that the corresponding SVM learns with rate $n^{-\gamma}$, where
$$
\gamma := \min\Big\{ \frac{\beta}{\beta\big(2 - \vartheta + p(\vartheta - 1)\big) + p},\ \frac{2\beta}{\beta + 1} \Big\}.
$$
Recall that this learning rate has already been obtained in [11]. The next example investigates SVMs that use a Gaussian RBF kernel whose width may vary with the sample size:

Example 2.3 (Classification with several Gaussian kernels). Let $X$ be the unit ball in $R^d$ and $Y := \{-1, 1\}$. Furthermore, assume that we are interested in binary classification using the hinge loss and the Gaussian RKHSs $H_\sigma$ that belong to the RBF kernels $k_\sigma(x_1, x_2) := e^{-\sigma^2 \|x_1 - x_2\|^2}$ with width $\sigma > 0$.
If $P$ has geometric noise exponent $\alpha \in (0, \infty)$ in the sense of [9], then it was shown in [9] that there exists a function $f_0 \in H_\sigma$ with $\|f_0\|_\infty \le 1$ and
$$
a_\sigma(f_0) \le c\big( \sigma^d \lambda + \sigma^{-\alpha d} \big), \qquad \sigma > 0,\ \lambda > 0,
$$
where $c > 0$ is a constant independent of $\lambda$ and $\sigma$. Moreover, [9, Thm. 2.1] shows that $H_\sigma$ satisfies (7) for all $p \in (0, 1)$ with $a := c_{p,d,\delta}\, \sigma^{(1-p)(1+\delta)d}$, where $\delta > 0$ can be chosen arbitrarily and $c_{p,d,\delta}$ is a suitable constant. Now assume that $P$ has Tsybakov noise exponent $q \in [0, \infty]$ in the sense of [9]. It was then shown in [9] that (6) is satisfied for $\vartheta := \frac{q}{q+1}$. Minimizing (8) with respect to $\sigma$ and $\lambda$ and choosing $p$ and $\delta$ sufficiently small then yields that the corresponding SVM can learn with rate $n^{-\gamma + \varepsilon}$, where
$$
\gamma := \frac{\alpha(q+1)}{\alpha(q+2) + q + 1},
$$
and $\varepsilon > 0$ can be chosen arbitrarily small. Note that these rates are superior to those obtained in [9, Theorem 2.8]. In the above examples the optimal parameters $\lambda$ and $\sigma$ depend on the sample size $n$ but not on the training samples $T$. However, these optimal parameters require us to know certain characteristics of the distribution, such as the approximation exponent $\beta$ or the noise exponents $\alpha$ and $q$. The following example shows that the oracle inequality of Theorem 2.1 can be used to find these optimal parameters in a data-dependent fashion which does not require any a-priori knowledge:

Example 2.4. In this example we assume that our training set $T$ consists of $2n$ samples. We write $T_0$ for the first $n$ samples and $T_1$ for the last $n$ samples. Let $f_{T_0,\sigma,\lambda}$ be the SVM solution using a Gaussian kernel with width $\sigma$. Moreover, let $\Sigma \subset [1, n^{1/d})$ and $\Lambda \subset (0, 1]$ be finite sets with cardinality $m_\Sigma$ and $m_\Lambda$, respectively. Under the assumptions of Example 2.3, the oracle inequality (8) then shows that with probability not less than $1 - 3 m_\Sigma m_\Lambda e^{-\tau}$ we have
$$
R_{L,P}(\hat{f}_{T_0,\sigma,\lambda}) - R^*_{L,P} \le K_{d,q,\alpha,\varepsilon} \left( \Big( \frac{\sigma^d}{\lambda^\varepsilon n} \Big)^{\frac{q+1}{q+2} - \varepsilon} + \Big( \frac{\tau}{n} \Big)^{\frac{q+1}{q+2}} + \sigma^d \lambda + \sigma^{-\alpha d} \right)
$$
simultaneously for all $\sigma \in \Sigma$ and $\lambda \in \Lambda$, where $\varepsilon \in (0, 1]$ is arbitrary but fixed and $K_{d,q,\alpha,\varepsilon}$ is a suitable constant.
Now, using a simple model selection approach (see e.g. Theorem 4.2) for the second half $T_1$ of our training set, we find that with probability not less than $1 - e^{-\tau}$ we have
$$R_{L,P}(\hat{f}_{T_0,\sigma^*_{T_1},\lambda^*_{T_1}}) - R_{L,P}^* \;\le\; C\Bigl(\frac{\tau + \log(m_\Sigma m_\Lambda)}{n}\Bigr)^{\frac{q+1}{q+2}} + C \min_{\sigma \in \Sigma,\, \lambda \in \Lambda}\Bigl( \Bigl(\frac{\sigma^d}{\lambda^{\varepsilon} n}\Bigr)^{\frac{q+1}{q+2-\varepsilon}} + \sigma^d \lambda + \sigma^{-\alpha d}\Bigr),$$
where $C$ is a constant depending only on $d$, $q$, $\alpha$, and $\varepsilon$, and $(\sigma^*_{T_1}, \lambda^*_{T_1}) \in \Sigma \times \Lambda$ is a pair that minimizes the empirical risk $R_{L,T_1}(\cdot)$ over $\Sigma \times \Lambda$. Now assume that $\Sigma_n$ and $\Lambda_n$ are $1/n$- and $1/n^2$-nets of $[1, n^{1/d})$ and $(0,1]$, respectively. Obviously, we can choose $\Sigma_n$ and $\Lambda_n$ such that $m_{\Sigma_n} \le n^2$ and $m_{\Lambda_n} \le n^2$, respectively. With such parameter sets it is then easy to check that we obtain exactly the rates found in Example 2.3, but without knowing the noise exponents $\alpha$ and $q$ a-priori.

3 An oracle inequality for clipped penalized ERM

Theorem 2.1 is a consequence of a far more general oracle inequality for clipped penalized empirical risk minimizers. Since this result is of interest in its own right, we now present it together with its proof in detail. To this end, recall that a subroot is a nondecreasing function $\phi : [0, \infty) \to [0, \infty)$ such that $\phi(r)/\sqrt{r}$ is nonincreasing in $r$. Moreover, for a Rademacher sequence $\sigma := (\sigma_1, \ldots, \sigma_n)$ with respect to the measure $\nu$ and a function $h : Z \to \mathbb{R}$ we define $R_\sigma h : Z^n \to \mathbb{R}$ by
$$R_\sigma h := n^{-1}\bigl(\sigma_1 h(z_1) + \cdots + \sigma_n h(z_n)\bigr).$$
Now the general oracle inequality reads as follows:

Theorem 3.1 Let $\mathcal{P} \ne \emptyset$ be a set of (hyper-)parameters, $\mathcal{F}$ be a set of measurable functions $f : X \to \mathbb{R}$ with $0 \in \mathcal{F}$, and $\Omega : \mathcal{P} \times \mathcal{F} \to [0, \infty]$ be a function. Let $P$ be a distribution on $X \times Y$ and $L$ be a loss function which satisfies (3) and (4). For a fixed pair $(p_0, f_0) \in \mathcal{P} \times \mathcal{F}$ we define
$$a_\Omega(p_0, f_0) := \Omega(p_0, f_0) + R_{L,P}(f_0) - R_{L,P}^*.$$
Moreover, let us assume that the quantity $B(f_0)$ defined in (5) is finite. In addition, we assume that we have a variance bound of the form (6) for constants $v \ge 1$, $\vartheta \in [0,1]$ and all measurable $f : X \to \mathbb{R}$.
Furthermore, suppose that there exists a subroot $\phi_n$ with
$$E_{T \sim P^n}\,E_{\sigma \sim \nu} \sup_{\substack{(p,f) \in \mathcal{P} \times \mathcal{F} \\ \Omega(p,f) + E_P(L \circ \hat{f} - L \circ f_{L,P}^*) \le r}} R_\sigma\bigl(L \circ \hat{f} - L \circ f_{L,P}^*\bigr) \;\le\; \phi_n(r), \qquad r > 0. \qquad (9)$$
Finally, let $(p_{T,\Omega}, f_{T,\Omega})$ be an $\epsilon$-approximate minimizer of $(p, f) \mapsto \Omega(p, f) + R_{L,T}(f)$. Then for all $\tau \ge 1$ and all $r$ satisfying
$$r \;\ge\; \max\Bigl\{\, 120\,\phi_n(r),\ \Bigl(\frac{32 v \tau}{n}\Bigr)^{\frac{1}{2-\vartheta}},\ \frac{28\tau}{n} \,\Bigr\} \qquad (10)$$
we have with probability not less than $1 - 3e^{-\tau}$ that
$$\Omega(p_{T,\Omega}, f_{T,\Omega}) + R_{L,P}(\hat{f}_{T,\Omega}) - R_{L,P}^* \;\le\; 5r + \frac{14 B(f_0)\tau}{3n} + 8\,a_\Omega(p_0, f_0) + 4\epsilon.$$

Proof: We write $B$ for $B(f_0)$. For $T \in (X \times Y)^n$ we observe
$$\Omega(p_{T,\Omega}, f_{T,\Omega}) + R_{L,T}(\hat{f}_{T,\Omega}) - \Omega(p_0, f_0) - R_{L,T}(f_0) \;\le\; \epsilon$$
by the definition of $(p_{T,\Omega}, f_{T,\Omega})$, and hence we find
$$\begin{aligned}
\Omega(p_{T,\Omega}, f_{T,\Omega}) + R_{L,P}(\hat{f}_{T,\Omega}) - R_{L,P}^*
&\le R_{L,P}(\hat{f}_{T,\Omega}) - R_{L,T}(\hat{f}_{T,\Omega}) + R_{L,T}(f_0) - R_{L,P}(f_0) + a_\Omega(p_0, f_0) + \epsilon \\
&= R_{L,P}(\hat{f}_{T,\Omega}) - R_{L,P}(f_{L,P}^*) - R_{L,T}(\hat{f}_{T,\Omega}) + R_{L,T}(f_{L,P}^*) &(11)\\
&\quad + R_{L,T}(f_0) - R_{L,T}(\hat{f}_0) - R_{L,P}(f_0) + R_{L,P}(\hat{f}_0) &(12)\\
&\quad + R_{L,T}(\hat{f}_0) - R_{L,T}(f_{L,P}^*) - R_{L,P}(\hat{f}_0) + R_{L,P}(f_{L,P}^*) &(13)\\
&\quad + a_\Omega(p_0, f_0) + \epsilon.
\end{aligned}$$
Let us first estimate the term in line (12). To this end we write $h_1 := L \circ f_0 - L \circ \hat{f}_0$. Then our assumption on $L$ guarantees $h_1 \ge 0$, and since we also have $\|h_1\|_\infty \le B$, we find $\|h_1 - E_P h_1\|_\infty \le B$. In addition, we obviously have $E_P(h_1 - E_P h_1)^2 \le E_P h_1^2 \le B\,E_P h_1$. Consequently, Bernstein's inequality [6, Thm. 8.2] shows that with probability not less than $1 - e^{-\tau}$ we have
$$E_T h_1 - E_P h_1 \;<\; \sqrt{\frac{2\tau B\, E_P h_1}{n}} + \frac{2B\tau}{3n}.$$
Now using $\sqrt{ab} \le \frac{a}{2} + \frac{b}{2}$ we find $\sqrt{2\tau B\, E_P h_1}\cdot n^{-1/2} \le E_P h_1 + \frac{B\tau}{2n}$, and consequently we have
$$P^n\Bigl( T \in Z^n : R_{L,T}(f_0) - R_{L,T}(\hat{f}_0) - R_{L,P}(f_0) + R_{L,P}(\hat{f}_0) \;<\; E_P h_1 + \frac{7B\tau}{6n} \Bigr) \;\ge\; 1 - e^{-\tau}. \qquad (14)$$
Let us now estimate the term in line (13). To this end we write $h_2 := L \circ \hat{f}_0 - L \circ f_{L,P}^*$. Then we have $\|h_2\|_\infty \le 3$ and $\|h_2 - E_P h_2\|_\infty \le 6$. In addition, our variance bound gives $E_P(h_2 - E_P h_2)^2 \le E_P h_2^2 \le v(E_P h_2)^\vartheta$, and consequently Bernstein's inequality shows that with probability not less than $1 - e^{-\tau}$ we have
$$E_T h_2 - E_P h_2 \;<\; \sqrt{\frac{2\tau v\,(E_P h_2)^\vartheta}{n}} + \frac{4\tau}{n}.$$
Now, for $q^{-1} + (q')^{-1} = 1$ the elementary inequality $ab \le \frac{a^q}{q} + \frac{b^{q'}}{q'}$ holds, and hence for $q := \frac{2}{2-\vartheta}$, $q' := \frac{2}{\vartheta}$, $a := \bigl(2^{1-\vartheta}\vartheta^\vartheta \tau v\bigr)^{1/2}\, n^{-1/2}$, and $b := \bigl(\frac{2 E_P h_2}{\vartheta}\bigr)^{\vartheta/2}$ we obtain
$$\sqrt{\frac{2\tau v\,(E_P h_2)^\vartheta}{n}} \;\le\; \Bigl(1 - \frac{\vartheta}{2}\Bigr)\Bigl(\frac{2^{1-\vartheta}\vartheta^\vartheta v \tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + E_P h_2.$$
Since elementary calculations show that $\bigl(2^{-\vartheta}\vartheta^\vartheta\bigr)^{\frac{1}{2-\vartheta}} \le 1$, we obtain
$$\sqrt{\frac{2\tau v\,(E_P h_2)^\vartheta}{n}} \;\le\; \Bigl(1 - \frac{\vartheta}{2}\Bigr)\Bigl(\frac{2 v \tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + E_P h_2.$$
Therefore we have with probability not less than $1 - e^{-\tau}$ that
$$R_{L,T}(\hat{f}_0) - R_{L,T}(f_{L,P}^*) - R_{L,P}(\hat{f}_0) + R_{L,P}(f_{L,P}^*) \;<\; E_P h_2 + \Bigl(1 - \frac{\vartheta}{2}\Bigr)\Bigl(\frac{2 v \tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + \frac{4\tau}{n}. \qquad (15)$$
Let us finally estimate the term in line (11). To this end we write $h_f := L \circ \hat{f} - L \circ f_{L,P}^*$, $f \in \mathcal{F}$. Moreover, for $r > 0$ we define
$$\mathcal{G}_r := \Bigl\{\, \frac{E_P h_f - h_f}{\Omega(p,f) + E_P h_f + r} \,:\, (p,f) \in \mathcal{P} \times \mathcal{F} \,\Bigr\}.$$
Then for $g_{p,f} := \frac{E_P h_f - h_f}{\Omega(p,f) + E_P h_f + r} \in \mathcal{G}_r$ we have $E_P g_{p,f} = 0$ and
$$\|g_{p,f}\|_\infty = \sup_{z \in Z} \frac{E_P h_f - h_f(z)}{\Omega(p,f) + E_P h_f + r} \le \frac{\|E_P h_f - h_f\|_\infty}{\Omega(p,f) + E_P h_f + r} \le \frac{6}{r}.$$
In addition, the inequality $a^\vartheta b^{2-\vartheta} \le (a+b)^2$ and the variance bound assumption (6) imply
$$E_P g_{p,f}^2 \;\le\; \frac{E_P h_f^2}{(E_P h_f + r)^2} \;\le\; \frac{E_P h_f^2}{r^{2-\vartheta}(E_P h_f)^\vartheta} \;\le\; \frac{v}{r^{2-\vartheta}}.$$
Now define
$$\Phi(r) := E_{T \sim P^n} \sup_{(p,f) \in \mathcal{P} \times \mathcal{F}} \frac{E_P h_f - E_T h_f}{\Omega(p,f) + E_P h_f + r}.$$
Standard symmetrization then yields
$$E_{T \sim P^n} \sup_{\substack{(p,f) \in \mathcal{P} \times \mathcal{F} \\ \Omega(p,f) + E_P h_f \le r}} |E_P h_f - E_T h_f| \;\le\; 2\, E_{T \sim P^n} E_{\sigma \sim \nu} \sup_{\substack{(p,f) \in \mathcal{P} \times \mathcal{F} \\ \Omega(p,f) + E_P h_f \le r}} |R_\sigma h_f|,$$
and hence Lemma 3.2 proved below, together with (9), shows $\Phi(r) \le 10\,\phi_n(r)\,r^{-1}$ for all $r > 0$. Therefore, applying Talagrand's inequality in the version of [3] to the class $\mathcal{G}_r$, we obtain
$$P^n\Bigl( T \in Z^n : \sup_{g \in \mathcal{G}_r} E_T g \;\le\; \frac{30\,\phi_n(r)}{r} + \sqrt{\frac{2\tau v}{n r^{2-\vartheta}}} + \frac{7\tau}{nr} \Bigr) \;\ge\; 1 - e^{-\tau}.$$
Let us define $\varepsilon_r := \frac{30\,\phi_n(r)}{r} + \bigl(\frac{2\tau v}{n r^{2-\vartheta}}\bigr)^{1/2} + \frac{7\tau}{nr}$. Then the above inequality gives with probability not less than $1 - e^{-\tau}$ that for all $(p,f) \in \mathcal{P} \times \mathcal{F}$ we have
$$E_P h_f - E_T h_f \;\le\; \varepsilon_r \cdot \bigl(\Omega(p,f) + E_P h_f\bigr) + 30\,\phi_n(r) + \sqrt{\frac{2\tau v r^\vartheta}{n}} + \frac{7\tau}{n},$$
and consequently we have with probability not less than $1 - e^{-\tau}$ that
$$R_{L,P}(\hat{f}_{T,\Omega}) - R_{L,P}(f_{L,P}^*) - R_{L,T}(\hat{f}_{T,\Omega}) + R_{L,T}(f_{L,P}^*) \;\le\; \varepsilon_r \cdot \bigl(\Omega(p_{T,\Omega}, f_{T,\Omega}) + R_{L,P}(\hat{f}_{T,\Omega}) - R_{L,P}(f_{L,P}^*)\bigr) + 30\,\phi_n(r) + \sqrt{\frac{2\tau v r^\vartheta}{n}} + \frac{7\tau}{n}. \qquad (16)$$
Now observe that for the functions $h_1$ and $h_2$ which we defined when estimating (12) and (13) we have
$$E_P h_1 + E_P h_2 = R_{L,P}(f_0) - R_{L,P}^*, \qquad (17)$$
and hence we can combine our estimates (16), (14), and (15) of the terms (11), (12), and (13) to obtain that with probability not less than $1 - 3e^{-\tau}$ we have
$$(1 - \varepsilon_r)\bigl(\Omega(p_{T,\Omega}, f_{T,\Omega}) + R_{L,P}(\hat{f}_{T,\Omega}) - R_{L,P}^*\bigr) \;\le\; 30\,\phi_n(r) + \sqrt{\frac{2\tau v r^\vartheta}{n}} + \Bigl(1 - \frac{\vartheta}{2}\Bigr)\Bigl(\frac{2v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + \frac{(66 + 7B)\tau}{6n} + a_\Omega(p_0, f_0) + R_{L,P}(f_0) - R_{L,P}^* + \epsilon.$$
In particular, for $r$ satisfying assumption (10) we have $\frac{30\,\phi_n(r)}{r} \le \frac{1}{4}$, $\bigl(\frac{2\tau v}{n r^{2-\vartheta}}\bigr)^{1/2} \le \frac{1}{4}$, and $\frac{7\tau}{nr} \le \frac{1}{4}$. This shows $1 - \varepsilon_r \ge \frac{1}{4}$, and hence we obtain with probability not less than $1 - 3e^{-\tau}$ that
$$\Omega(p_{T,\Omega}, f_{T,\Omega}) + R_{L,P}(\hat{f}_{T,\Omega}) - R_{L,P}^* \;\le\; 120\,\phi_n(r) + \sqrt{\frac{32\tau v r^\vartheta}{n}} + 2(2-\vartheta)\Bigl(\frac{2v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + \frac{44\tau}{n} + \frac{14B\tau}{3n} + 4\,a_\Omega(p_0, f_0) + 4\bigl(R_{L,P}(f_0) - R_{L,P}^*\bigr) + 4\epsilon.$$
However, we also have $120\,\phi_n(r) \le r$, $\bigl(\frac{32\tau v r^\vartheta}{n}\bigr)^{1/2} \le r$, $\frac{44\tau}{n} \le \frac{5r}{3}$, and
$$2(2-\vartheta)\Bigl(\frac{2v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} \;\le\; 2(2-\vartheta)\,\frac{r}{4} \;\le\; r,$$
and since $R_{L,P}(f_0) - R_{L,P}^* \le a_\Omega(p_0, f_0)$, we find the assertion.

For the proof of Theorem 3.1 it remains to show the following lemma:

Lemma 3.2 Let $\mathcal{P}$ and $\mathcal{F}$ be as in Theorem 3.1. Furthermore, let $W$ be a map assigning to each $f \in \mathcal{F}$ a measurable function $W(f) : Z \to \mathbb{R}$, and let $a : \mathcal{P} \times \mathcal{F} \to [0, \infty)$. Define
$$\Phi(r) := E_{T \sim P^n} \sup_{(p,f) \in \mathcal{P} \times \mathcal{F}} \frac{E_T W(f) - E_P W(f)}{a(p,f) + r}$$
and suppose that there exists a subroot $\Psi$ such that
$$E_{T \sim P^n} \sup_{\substack{(p,f) \in \mathcal{P} \times \mathcal{F} \\ a(p,f) \le r}} \bigl(E_T W(f) - E_P W(f)\bigr) \;\le\; \Psi(r), \qquad r > 0.$$
Then we have $\Phi(r) \le \frac{5}{r}\Psi(r)$ for all $r > 0$.

Proof: For $x > 1$, $r > 0$, and $T \in (X \times Y)^n$ a standard peeling approach gives
$$\sup_{(p,f) \in \mathcal{P} \times \mathcal{F}} \frac{|E_P W(f) - E_T W(f)|}{a(p,f) + r} \;\le\; \sup_{\substack{(p,f) \\ a(p,f) \le r}} \frac{|E_P W(f) - E_T W(f)|}{a(p,f) + r} + \sum_{i=0}^{\infty}\; \sup_{\substack{(p,f) \\ r x^i \le a(p,f) \le r x^{i+1}}} \frac{|E_P W(f) - E_T W(f)|}{a(p,f) + r}$$
$$\le\; \frac{1}{r} \sup_{\substack{(p,f) \\ a(p,f) \le r}} |E_P W(f) - E_T W(f)| + \sum_{i=0}^{\infty} \frac{1}{r x^i + r}\, \sup_{\substack{(p,f) \\ a(p,f) \le r x^{i+1}}} |E_P W(f) - E_T W(f)|.$$
Taking expectations over $T$ and using the assumption on $\Psi$ therefore yields
$$\Phi(r) \;\le\; \frac{1}{r}\Bigl(\Psi(r) + \sum_{i=0}^{\infty} \frac{\Psi(r x^{i+1})}{x^i + 1}\Bigr).$$
However, since $\Psi$ is a subroot we have $\Psi(r x^{i+1}) \le x^{\frac{i+1}{2}}\,\Psi(r)$, so that we obtain the assertion by setting $x := 4$.
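The data-dependent selection step used in Example 2.4, and formalized by the model selection result of the next section, simply picks the grid point whose predictor (trained on the first half $T_0$) minimizes the empirical risk on the held-out half $T_1$. A small sketch of that step; the helper names and the use of the hinge loss are our illustrative choices, not the paper's notation:

```python
import numpy as np


def hinge_risk(f_vals, y):
    """Empirical hinge risk R_{L,T}(f) = mean of max(0, 1 - y * f(x))."""
    return float(np.mean(np.maximum(0.0, 1.0 - y * f_vals)))


def select_on_holdout(predictors, X1, y1):
    """Return the (sigma, lambda) key whose predictor, trained on T0,
    minimizes the empirical risk on the held-out sample T1 = (X1, y1).

    predictors : dict mapping (sigma, lam) -> callable X -> real-valued scores
    """
    return min(predictors, key=lambda k: hinge_risk(predictors[k](X1), y1))
```

With $\Sigma$ a $1/n$-net of $[1, n^{1/d})$ and $\Lambda$ a $1/n^2$-net of $(0,1]$, this is exactly the procedure that recovers the rates of Example 2.3 without a-priori knowledge of $\alpha$ and $q$.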
4 Proof of Theorem 2.1

Before we begin the proof of Theorem 2.1, let us state the following proposition, which follows directly from [8] (see also [9, Prop. 5.7]) together with simple considerations on covering numbers:

Proposition 4.1 Let $\mathcal{F} := H$ be an RKHS, $\mathcal{P} := \{p_0\}$ be a singleton, and $\Omega(p_0, f) := \lambda\|f\|_H^2$. If (7) is satisfied then there exists a constant $c_p$ depending only on $p$ such that (9) is satisfied for
$$\phi_n(r) := c_p \max\Bigl\{\, v^{\frac{1-p}{2}}\, r^{\frac{\vartheta(1-p)}{2}} \Bigl(\frac{r}{\lambda}\Bigr)^{\frac{p}{2}} \Bigl(\frac{a}{n}\Bigr)^{\frac{1}{2}},\ \Bigl(\frac{r}{\lambda}\Bigr)^{\frac{p}{1+p}} \Bigl(\frac{a}{n}\Bigr)^{\frac{1}{1+p}} \,\Bigr\}.$$

Proof of Theorem 2.1: From the covering bound assumption we observe that Proposition 4.1 yields (9) with $\phi_n(r)$ defined as above, so Theorem 3.1 applies and condition (10) becomes
$$r \;\ge\; \max\Bigl\{\, 120 c_p\, v^{\frac{1-p}{2}}\, r^{\frac{\vartheta(1-p)}{2}} \Bigl(\frac{r}{\lambda}\Bigr)^{\frac{p}{2}} \Bigl(\frac{a}{n}\Bigr)^{\frac{1}{2}},\ 120 c_p \Bigl(\frac{r}{\lambda}\Bigr)^{\frac{p}{1+p}} \Bigl(\frac{a}{n}\Bigr)^{\frac{1}{1+p}},\ \Bigl(\frac{32 v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}},\ \frac{28\tau}{n} \,\Bigr\}; \qquad (18)$$
solving (18) with respect to $r$ yields the conclusion.

Finally, for the parameter selection approach in Example 2.4 we need the following oracle inequality for model selection:

Theorem 4.2 Let $P$ be a distribution on $X \times Y$ and let $L$ be a loss function which satisfies (3), (4), and the variance bound (6). Furthermore, let $\mathcal{F} := \{f_1, \ldots, f_m\}$ be a finite set of functions mapping $X$ into $[-1, 1]$. For $T \in (X \times Y)^n$ we define
$$f_T := \arg\min_{f \in \mathcal{F}} R_{L,T}(f).$$
Then there exists a universal constant $K$ such that for all $\tau \ge 1$ we have with probability not less than $1 - 3e^{-\tau}$ that
$$R_{L,P}(f_T) - R_{L,P}^* \;\le\; 5\Bigl(\frac{K \log m}{n}\Bigr)^{\frac{1}{2-\vartheta}} + 5\Bigl(\frac{32 v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + \frac{5K\log m + 154\tau}{n} + 8 \min_{f \in \mathcal{F}}\bigl(R_{L,P}(f) - R_{L,P}^*\bigr).$$

Proof: Since all functions $f_i$ already map into $[-1, 1]$, we do not have to consider the clipping operator. For $r > 0$ we now define $\mathcal{F}_r := \{f \in \mathcal{F} : R_{L,P}(f) - R_{L,P}^* \le r\}$. Then the cardinality of $\mathcal{F}_r$ is at most $m$, and hence we have $N(L \circ \mathcal{F}_r - L \circ f_{L,P}^*, \varepsilon, L_2(T)) \le m$ for all $\varepsilon > 0$. Using the technique of [8] (cf. also [9, Prop. 5.7]) we hence obtain that (9) is satisfied for
$$\phi_n(r) := \frac{c}{\sqrt{n}} \max\Bigl\{\, \sqrt{v \log m}\; r^{\vartheta/2},\ \frac{\log m}{\sqrt{n}} \,\Bigr\},$$
where $c$ is a universal constant.
Applying Theorem 3.1 then yields the assertion.

References

[1] P.L. Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Trans. Inform. Theory, 44:525-536, 1998.
[2] G. Blanchard, O. Bousquet, and P. Massart. Statistical performance of support vector machines. Technical report, 2004.
[3] O. Bousquet. A Bennett concentration inequality and its application to suprema of empirical processes. C. R. Math. Acad. Sci. Paris, 334:495-500, 2002.
[4] D.R. Chen, Q. Wu, Y.M. Ying, and D.X. Zhou. Support vector machine soft margin classifiers: error analysis. Journal of Machine Learning Research, 5:1143-1175, 2004.
[5] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[6] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
[7] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7:219-269, 1995.
[8] S. Mendelson. Improving the sample complexity using global data. IEEE Trans. Inform. Theory, 48:1977-1991, 2002.
[9] I. Steinwart and C. Scovel. Fast rates for support vector machines using Gaussian kernels. Annals of Statistics, to appear.
[10] I. Steinwart and C. Scovel. Fast rates for support vector machines. In Proceedings of the 18th Annual Conference on Learning Theory (COLT 2005), pages 279-294. Springer, 2005.
[11] Q. Wu, Y. Ying, and D.-X. Zhou. Multi-kernel regularized classifiers. J. Complexity, to appear.
Clustering appearance and shape by learning jigsaws Anitha Kannan, John Winn, Carsten Rother Microsoft Research Cambridge [ankannan, jwinn, carrot]@microsoft.com Abstract Patch-based appearance models are used in a wide range of computer vision applications. To learn such models it has previously been necessary to specify a suitable set of patch sizes and shapes by hand. In the jigsaw model presented here, the shape, size and appearance of patches are learned automatically from the repeated structures in a set of training images. By learning such irregularly shaped ‘jigsaw pieces’, we are able to discover both the shape and the appearance of object parts without supervision. When applied to face images, for example, the learned jigsaw pieces are surprisingly strongly associated with face parts of different shapes and scales such as eyes, noses, eyebrows and cheeks, to name a few. We conclude that learning the shape of the patch not only improves the accuracy of appearance-based part detection but also allows for shape-based part detection. This enables parts of similar appearance but different shapes to be distinguished; for example, while foreheads and cheeks are both skin colored, they have markedly different shapes. 1 Introduction Many computer vision tasks require the use of appearance and shape models to represent objects in the scene. The choices for appearance models range from histogram-based representations that throw away spatial information, to template-based representations that try to capture the entire spatial layout of the objects but cope poorly with articulation, deformation or variation in appearance. In the middle of this spectrum lie patch-based models that aim to find the right balance between the two extremes. However, a central problem with existing patch-based models is that there is no way to choose the shape and size of a patch; typically a predefined set of patch sizes and shapes (often rectangles or circles) are used.
We believe that natural images can provide enough cues to allow patches of varying shape and size to be discovered, corresponding to the shape and size of the object parts present in the images. Indeed, we will show that the patches discovered by the jigsaw model can become strongly associated with semantic object parts. With this motivation, we introduce a generative model for a set of images that learns to extract irregularly shaped and sized patches from a latent image which are combined to generate each training image. We call this latent image a jigsaw as it contains all the necessary ‘jigsaw pieces’ that can be used to generate the target image set. We present an inference algorithm for learning the jigsaw and for finding the jigsaw pieces that make up each image. As our proposed jigsaw model is a generative model for an image, it can be readily used as a component in many computer vision applications for both image understanding and image synthesis. These include object recognition, detection, image segmentation and image classification, object synthesis, image de-noising, super resolution, texture transfer between images and image in-painting. In fact, the jigsaw model is likely to be useable as a direct replacement for a fixed patch model in any existing patch-based system. 2 Related work The closest work to ours is the epitome model of Jojic et al. [1]. This is a generative model for image patches, or alternatively a model for images if patches that share coordinates in the image are averaged together (although this averaging often leads to a blurry result). Epitomes are learned using a set of fixed shaped patches over a small range of sizes. In contrast, in the jigsaw model, the inference process chooses appropriately shaped and sized pieces from the training images when learning the jigsaw. The difference between these two models is illustrated in section 4. Our work also closely relates to the seminal work of Freeman et al.
[2] that proposed a general machinery for inferring underlying scenes from images, with goals such as optical flow estimation and super-resolution. They define a Markov random field over image patches and infer the hidden scene representation using belief propagation. Again, they use a set of fixed size image patches, hoping to reach a reasonable trade-off between capturing sufficient statistics in each patch, and disambiguating different kinds of features. Along these lines, Markov random fields with larger cliques have also been used to capture the statistics of natural images, such as the field of experts model proposed in [3], which represents the field potentials as non-linear functions of linear filters. Again, the underlying linear filters are applied to patches of a fixed size. In the domain of image synthesis the work of Freeman et al. [2] has inspired many patch-based synthesis algorithms including super resolution, texture transfer, image in-painting and photo synthesis. These can be viewed as a data-driven way of sampling from the Markov random field with high-order cliques given by the overlapping patches. The texture synthesis and transfer algorithm of Efros et al. [4] constructs a new image by greedily selecting overlapping patches so that the seam transition is not visible. Whilst this work does allow different patch shapes, it does not learn patch appearance since it works from a supplied texture image. Recently a similar approach has been proposed in [5] for synthesising a collage image from a given set of input images, although in this case a probabilistic model is defined and optimised. Patch-based models are also widely applied in object recognition research [6, 7, 8, 9, 10]. These models use hand-selected patch shapes (typically rectangles) which can lead to poor results given that different object parts have different sizes and shapes.
In fact, the use of fixed patches reduces accuracy when the object part is of different size and shape than the chosen patch; in this case, the patch model has to cope with the variability outside the object part. This effect is particularly evident when the part is at the edge of the object as the model then has to try and capture the variability of the background. In addition, such models ignore the shape of the object part which is frequently much more discriminative than appearance alone. The paper is structured as follows: In section 3 we introduce the probabilistic model and describe a method for performing learning and inference in the model. In section 4 we show results for synthetic and real data and present a comparison to the epitome model. Finally, in section 5, we discuss possible extensions to the model.

3 Probabilistic model

This section describes the probabilistic model that we use to learn a jigsaw from a set of training images. We aim to learn a jigsaw such that, given an image set, pieces of the jigsaw image satisfy the following criteria:

• each piece is similar in appearance and shape to several regions of the training images;
• any of the training images can be approximately reconstructed using only pieces from the jigsaw (a piece may be used more than once in a single image);
• pieces are as large as possible for a particular accuracy of reconstruction.

Thus, while allowing the jigsaw pieces to have arbitrary shape, we ensure that such pieces are shared across the entire image set, exhaustively explain the input image set, and are also large enough to be discriminative. By meeting these criteria, we can capture both the appearance and the shape of repeated image structures, for example, eyes, noses and mouths in a set of face images. We define a jigsaw $J$ to be an image such that each pixel $z$ in $J$ has an intensity value $\mu(z)$ and an associated variance $\lambda^{-1}(z)$ (so $\lambda$ is the inverse variance, also called the precision).
A set of spatially grouped pixels in $J$ is a jigsaw piece. We can combine many of these pieces to generate images, noting that pixels in the jigsaw can be re-used in multiple pieces.

Figure 1: Graphical model showing how the jigsaw $J$ is used to generate a set of images $I_1 \ldots I_N$ by combining the jigsaw pieces in different ways. Each image has a corresponding offset map $L$ which defines the jigsaw pieces used to generate that image (see text for details). Notice that several jigsaw pieces can overlap and hence share parts of their appearance.

Our probabilistic model is a generative image model which generates an image by joining together pieces of the jigsaw and then adding Gaussian noise of variance given by the jigsaw. For each image $I$, we have an associated offset map $L$ of the same size which determines the jigsaw pieces used to make that image. This offset map defines a position in the jigsaw for each pixel in the image (more than one image pixel can map to the same jigsaw pixel). Each entry in the offset map is a two-dimensional offset $l_i = (l_x, l_y)$, which maps a 2D point $i$ in the image to a 2D point $z$ in the jigsaw using $z = (i - l_i) \bmod |J|$, where $|J| = (\text{width}, \text{height})$ are the dimensions of the jigsaw. Notice that if two adjacent pixels in the image have the same offset label, then they map to adjacent pixels in the jigsaw. Figure 1 provides a schematic view of the overall probabilistic model, as it is used to generate a set of $N$ face images. Given this mapping and the jigsaw, the probability distribution of an image is assumed to be independent for each pixel and is given by
$$P(I \mid J, L) = \prod_i N\bigl(I(i);\; \mu(i - l_i),\; \lambda(i - l_i)^{-1}\bigr) \qquad (1)$$
where the product is over image pixel positions and both subtractions are modulo $|J|$. We want the images to consist of coherent pieces of the jigsaw, and so we define a Markov random field on the offset map to encourage neighboring pixels to have the same offsets.
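The mapping $z = (i - l_i) \bmod |J|$ is all that is needed to read an image (here just its mean) out of a jigsaw given an offset map. A minimal single-channel sketch; the array conventions are our own:

```python
import numpy as np


def reconstruct(jigsaw_mean, offsets):
    """Reconstruct an image mean from a jigsaw via z = (i - l_i) mod |J|.

    jigsaw_mean : (Jh, Jw) array of jigsaw means mu
    offsets     : (H, W, 2) integer array; offsets[y, x] = (ly, lx)
    """
    H, W = offsets.shape[:2]
    Jh, Jw = jigsaw_mean.shape
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            ly, lx = offsets[y, x]
            # Both coordinate subtractions wrap around the jigsaw.
            out[y, x] = jigsaw_mean[(y - ly) % Jh, (x - lx) % Jw]
    return out
```

Because neighbouring pixels with the same offset map to neighbouring jigsaw pixels, a constant-offset region simply copies a contiguous jigsaw piece into the image.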
$$P(L) = \frac{1}{Z} \exp\Bigl( -\sum_{(i,j) \in E} \psi(l_i, l_j) \Bigr) \qquad (2)$$
where $E$ is the set of edges in a 4-connected grid. The interaction potential $\psi$ defines a Potts model on the offsets:
$$\psi(l_i, l_j) = \gamma\, \delta(l_i \ne l_j) \qquad (3)$$
where $\gamma$ is a parameter which influences the typical size of the learned jigsaw pieces. Currently, $\gamma$ is set to give the largest pieces whilst maintaining reasonable quality when the image is reconstructed from the jigsaw. When learning the jigsaw, it is possible for regions of the jigsaw to be unused, that is, to have no image pixels mapped to them. To allow for this case, we define a Normal-Gamma prior on $\mu$ and $\lambda$ for each jigsaw pixel $z$,
$$P(J) = \prod_z N\bigl(\mu(z);\, \mu_0,\, (\beta\lambda(z))^{-1}\bigr)\, \mathrm{Gamma}\bigl(\lambda(z);\, a, b\bigr). \qquad (4)$$
This prior means that the behaviour of the model is well defined for unused regions. For our experiments, we fix the hyperparameter $\mu_0$ to 0.5, $\beta$ to 1, $b$ to three times the inverse data variance and $a$ to the square of $b$. The local interaction strength $\gamma$ is set to 5 per channel.

Inference and learning: The model defines the joint probability distribution on a jigsaw $J$, a set of images $I_1 \ldots I_N$, and their offset maps $L_1 \ldots L_N$ to be
$$P\bigl(J, \{I, L\}_1^N\bigr) = P(J) \prod_{n=1}^N P(I_n \mid J, L_n)\, P(L_n). \qquad (5)$$
When learning a jigsaw, the image set $I_1 \ldots I_N$ is known and we aim to achieve MAP learning of the remaining variables. In other words, our goal is to find the jigsaw $J$ and offset maps $L_1 \ldots L_N$ that maximise the joint probability (5). We achieve this in an iterative manner. First, the jigsaw is initialised by setting the precisions $\lambda$ to the expected value under the prior, $b/a$, and the means $\mu$ to Gaussian noise with the same mean and variance as the data. Given this initialisation, the offset maps are updated for each image by applying the alpha-expansion graph-cut algorithm of [11] (note that our energy is submodular, also known as regular).
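The Potts prior of equations (2)-(3) simply charges $\gamma$ for every 4-connected pair of neighbouring pixels whose offsets disagree. A small sketch of the corresponding (unnormalized) energy on an offset map:

```python
import numpy as np


def potts_energy(offsets, gamma):
    """Sum over 4-connected edges of gamma * [l_i != l_j], i.e. the
    exponent of the Potts prior (2)-(3) on an (H, W, 2) offset map."""
    # An edge contributes when the two offset vectors differ in any component.
    diff_v = np.any(offsets[1:, :] != offsets[:-1, :], axis=-1)   # vertical edges
    diff_h = np.any(offsets[:, 1:] != offsets[:, :-1], axis=-1)   # horizontal edges
    return gamma * (int(diff_v.sum()) + int(diff_h.sum()))
```

Since the potential depends only on whether neighbouring labels agree, each alpha-expansion move of the graph-cut step optimizes a submodular binary energy, which is why the update in the text is exact per move.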
Whilst this process will not necessarily find the most probable offset map, it is guaranteed to find at least a strong local optimum, in the sense that no single expansion move can increase (5). Given the inferred offset maps, the jigsaw $J$ that maximises $P\bigl(J, \{I, L\}_1^N\bigr)$ can be found analytically: for a jigsaw pixel $z$, the optimal mean $\mu^\star$ and precision $\lambda^\star$ are given by
$$\mu^\star = \frac{\beta \mu_0 + \sum_{x \in X(z)} I(x)}{\beta + |X(z)|} \qquad (6)$$
$$\lambda^{\star-1} = \frac{b + \beta\mu_0^2 - (\beta + |X(z)|)(\mu^\star)^2 + \sum_{x \in X(z)} I(x)^2}{a + |X(z)|} \qquad (7)$$
where $X(z)$ is the set of image pixels that are mapped to the jigsaw pixel $z$ across all images. We iterate between finding the offset maps holding the jigsaw fixed, and updating the jigsaw using the recently updated offset maps. When inference has converged, we apply a clustering step to determine the jigsaw pieces (in future we plan to extend the model so that this clustering arises directly during learning). Regions of the image are placed in clusters according to the degree of overlap they have in the jigsaw. The degree of overlap is measured as the ratio of the intersection to the union of the two regions of the jigsaw the image regions map to. This has the effect of clustering image regions by both appearance and shape. Each cluster then corresponds to a region of the jigsaw with an (approximately) consistent shape that explains a large number of image regions.

4 Results

A toy example: In this experiment, we applied our model to the hand-crafted 150x150 RGB image shown in Fig. 2a. This image was constructed by placing four distinct objects (star, triangle, square and circle) at random positions on a black background image, with the pixels from the more recently placed object replacing the previously drawn pixels. Hence, we can see a substantial amount of occlusion of parts of these objects. Using this image as the only input, we would like our model to automatically infer the appearances and shapes of the objects present in the image.
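The analytic jigsaw update of equations (6)-(7) above can be sketched for a single pixel and a single channel as follows (the function name and defaults are ours):

```python
import numpy as np


def update_jigsaw_pixel(vals, mu0=0.5, beta=1.0, a=1.0, b=1.0):
    """MAP update (6)-(7) for one jigsaw pixel z.

    vals : intensities of all image pixels currently mapped to z, i.e. the
           set X(z); may be empty, in which case the Normal-Gamma prior
           determines the result.
    Returns (mu_star, lambda_star), the optimal mean and precision.
    """
    vals = np.asarray(vals, dtype=float)
    n = vals.size  # |X(z)|
    mu_star = (beta * mu0 + vals.sum()) / (beta + n)
    var_star = (b + beta * mu0 ** 2 - (beta + n) * mu_star ** 2
                + (vals ** 2).sum()) / (a + n)
    return mu_star, 1.0 / var_star
```

With no observations the update returns the prior mean $\mu_0$ and precision $a/b$, which is precisely what makes unused jigsaw regions well defined under the prior (4).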
Existing patch-based models are not well-suited to analyzing this image for two reasons: first, there is no clear way to choose the appropriate patch shapes and sizes, and secondly, even if such a choice is known, it is difficult for these existing methods (such as epitome [1]) to learn the shape as they cannot allow for occlusion boundaries without having an explicit occlusion model. For instance, in [1], a separate shape epitome is learned in conjunction with the appearance epitome so that image patches can be explained as a two-layered composition of appearance patches using the shape patch. However, this type of image is difficult to model with a small number of layers due to the large number of objects present.

Figure 2: Toy example. (a) The input image. (b) Input image with segmentation boundaries superimposed: red boundary lines have been drawn on the edges between neighboring pixels that have differing offsets. This segmentation illustrates the different shaped jigsaw pieces found when learning the jigsaw shown in (c)-(d). (c) Jigsaw mean, with the four most-used jigsaw pieces outlined in white. (d) The jigsaw variance summed across the RGB channels; white is high, black is low.

In contrast, our model can infer any number of overlapping objects, without any explicit modelling of layers or depth. This is because our learning algorithm has the freedom to appropriately adjust a patch's shape and size to explain only a portion of an object without explicitly having to represent a global layer ordering. Moreover, we have the potential to infer the relative depth ordering of neighboring patches by treating rare transitions as occlusions. Fig. 2b-d shows the results of learning a jigsaw of this toy image. In fig. 2b, we show how the image decomposes into jigsaw pieces.
When two neighboring pixels have different labels, they map to non-neighboring locations in the jigsaw. With this understanding, we can look at the change in the labels of adjacent pixels and plot such a change as a red line. Hence, each region bounded by the red lines indicates a region of the input image being mapped to the jigsaw. From Fig. 2b, we can see that the model has discovered well-defined parts (in this example, objects) present in the image. This is further illustrated in the 36 × 36 learned jigsaw whose mean and variance are shown in Fig. 2c,d. The learned jigsaw has captured the shapes and appearances of the four objects and a black region for modelling the background. Under our Bayesian model, pixels in the jigsaw that have never been used to explain the observations are set to $\mu_0$, which we have fixed to 0.5 (gray). We can obtain jigsaw pieces by applying the clustering step outlined in Section 3. In Fig. 2c, we also show the four most-used jigsaw pieces thus obtained by outlining them in white.

Comparison to epitome model: In this section, we compare the jigsaw model with the epitome model [1], as applied to the dog image in Fig. 3a. We learned a 32 × 32 epitome (Fig. 3d) using all the possible 7 × 7 patches from the input image. We then learned a 32 × 32 jigsaw (Fig. 3c) such that the average patch area was 49 pixels, the same as in the epitome model. This was achieved by modifying the compatibility parameter γ. Fig. 3b shows the segmentation of the image after learning, with patch boundaries overlaid in red.

Figure 3: Comparison between jigsaw and epitome. (a) The input image. (b) The segmentation of the image given by the jigsaw model. (c,d) The means of the learned jigsaw and epitome models. (e) Reconstruction of the image using the jigsaw (mean squared error: .0537). (f) Reconstruction from the epitome where each image pixel is reconstructed using only one fixed-size patch (mean squared error: .0711). (g) Reconstruction from the epitome where each image pixel is the average of 49 patches (mean squared error: .0541); while this reconstruction has similar mean squared error to the jigsaw reconstruction, it is more blurry and less visually pleasing.

We can see that the pieces correspond to meaningful regions such as flowers, and also that patch boundaries tend to follow object boundaries. Comparing Figs. 3c & d, we find that the jigsaw is much less blurred than the epitome and also doesn't have the epitome's artificial 'block' structure. Instead, the boundaries between different textures are placed to allocate the appropriate amount of jigsaw space to each texture; for example, entire flowers are represented as one coherent region. Epitome models can use multi-resolution learning to reduce, but not eliminate, block artifacts. However, whilst this technique can also be applied to jigsaw learning, it has not been found to be necessary in order to obtain a good solution. In Fig. 3e-g, we compare reconstructions of the input image from the learned jigsaw and epitome models. Since the jigsaw is a generative model for an image, we can reconstruct the image by mapping pixel colors from the jigsaw according to the offset map. When reconstructing from the epitome, we can choose to either use one patch per pixel, or to average a number of patches per pixel. The first approach is most comparable to the jigsaw reconstruction, as it requires only one offset per pixel. However, we find that, as shown in Fig. 3f, the reconstruction is very blocky. When we reconstruct each pixel from the 49 overlapping patches (Fig.
3g), we find that the reconstruction is overly blurry compared to the jigsaw, despite having a similar mean squared reconstruction error. In addition, this method requires 49 parameters per pixel rather than one and hence is a significantly less compact representation of the image. The reconstruction from the jigsaw is noticeably less blurry and is more visually pleasing, as there is no averaging in the generative process and patch boundaries tend to occur at actual object boundaries.

Modelling face images: We next applied the jigsaw model to a set of 100 face images from the Olivetti database at AT&T, consisting of 10 different images of 10 people. Each of these grayscale images is of size 64×64 pixels. We set the jigsaw size to 128×128 pixels so that the jigsaw has only 1/25 of the area of the input images combined. Figure 4a shows the inferred segmentation of the images into different shaped and sized pieces (each row contains the images of one person). When the faces depict the same person with similar pose, the resulting segmentations for these images are typically similar, showing that similar jigsaw pieces are being used to explain each image. This can be seen, for instance, from the first row of images in that figure. Figure 4b shows the mean of the learned jigsaw, which can be seen to contain a number of face ‘elements’ such as eyes, noses etc. To obtain the jigsaw pieces, we applied the clustering step outlined in Section 3. The obtained clusters are shown in Figure 5 (left), which also shows the sharing of these jigsaw pieces.

Figure 4: Face images: (a) A set of 100 images, each row containing ten different images of the same person, with the segmentation given by the jigsaw model shown in red. (b) Jigsaw learned from these face images; see Figure 5 for clustering results.

With the jigsaw pieces known, we can now retrieve the regions from the image set that correspond to each jigsaw piece.
In Figure 5 (right), we show a random selection of image regions corresponding to several of the most common jigsaw pieces (shown color-coded). What is surprising is that a particular jigsaw piece becomes very strongly associated with a particular face part (far more so than when clustering by appearance alone). Thus, by learning the shape of each jigsaw piece, our model has effectively identified small and large face parts of widely different shapes and aspect ratios.

Figure 5: Left: The learned face jigsaw of Fig. 4 with overlaid white outlines showing different overlapping jigsaw pieces. For clarity, pieces used five or fewer times are not shown, and areas of the jigsaw not used by the remaining pieces have been blacked out. Seven of the most frequently used jigsaw pieces are shown colored. Right: Unsupervised part learning. For each color-coded jigsaw piece in the left image, a column shows randomly chosen images from the image set for which that piece was selected. Notice how these pieces are very strongly associated with different face parts: the model has achieved unsupervised discovery of two different nose shapes, eyes, eyebrows, cheeks, etc., despite their widely different shapes and sizes.

We can also see from that figure that certain jigsaw pieces are conserved across different people; for example, the nose piece shown in the first column of that figure.

5 Discussion

We have presented a generative jigsaw model which is capable of learning the shape, size and appearance of repeated regions in a set of images. We have also shown that, for a set of face images, the learned jigsaw pieces are strongly associated with particular face parts. Currently, we apply a post-hoc clustering step to learn the jigsaw pieces. This process could be incorporated into the model by extending the pixel offset to include a cluster label and learning the region of jigsaw used by each cluster. We are investigating how best to achieve this.
While we chose a Gaussian as the model for pixel appearance, alternative models, such as histograms, can be used whilst retaining the ability to achieve translation-invariant clustering. Indeed, by using appearance models of other forms, we believe that our model could be used to find repeated structures in other domains such as audio and biology, as well as in images. Other transformations, such as rotations, scalings and flips, can be incorporated in the model, with cost increasing linearly with the number of transformations. We can also extend the model to allow the jigsaw pieces to undergo deformation, by favoring neighboring offsets that are similar as well as ones that are identical, using a scheme similar to that of [12]. A practical issue with learning jigsaws is the computational requirement: every iteration of learning involves solving as many binary graph cuts as there are pixels in the jigsaw. For instance, for the toy example, it took about 30 minutes to learn a 36 × 36 jigsaw from a 150 × 150 image. We have since developed a significantly faster inference algorithm based on sparse belief propagation. This speed-up allows the model to be applied to larger image sets and to learn larger jigsaws. Currently, our model does not explicitly account for multiple sources of appearance variability, such as illumination. This means that the same object under different illuminations, for example, will be modelled by different parts of the jigsaw. To account for this, we are investigating factored variants of the jigsaw which separate out different latent causes of appearance variability. Despite this limitation, however, we are already achieving very promising results when using the jigsaw for image synthesis, motion segmentation and object recognition.

Acknowledgments: We acknowledge helpful discussions with Nebojsa Jojic, and thank the reviewers for their valuable feedback.

References
[1] N. Jojic, B. Frey, and A. Kannan. Epitomic analysis of appearance and shape. In ICCV, 2003.
[2] W. Freeman, E. Pasztor, and O. Carmichael. Learning low-level vision. IJCV, 40(1), 2000.
[3] S. Roth and M. J. Black. Fields of experts: A framework for learning image priors. In Proceedings of IEEE CVPR, 2005.
[4] A. Efros and W. Freeman. Image quilting for texture synthesis and transfer. In ACM Transactions on Graphics (SIGGRAPH), 2001.
[5] C. Rother, S. Kumar, V. Kolmogorov, and A. Blake. Digital tapestry. In Proc. Conf. Computer Vision and Pattern Recognition, 2005.
[6] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In CVPR, volume 2, pages 264–271, June 2003.
[7] B. Leibe and B. Schiele. Interleaved object categorization and segmentation. In BMVC, 2003.
[8] E. Borenstein, E. Sharon, and S. Ullman. Combining top-down and bottom-up segmentation. In Proceedings of the IEEE Workshop on Perceptual Organization in Computer Vision, CVPR, 2004.
[9] E. Borenstein and S. Ullman. Class-specific, top-down segmentation. In Proceedings of ECCV, 2003.
[10] D. Huttenlocher, D. Crandall, and P. Felzenszwalb. Spatial priors for part-based recognition using statistical models. In Proceedings of IEEE CVPR, 2005.
[11] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 23(11), 2001.
[12] J. Winn and J. Shotton. The layout consistent random field for recognizing and segmenting partially occluded objects. In Proceedings of IEEE CVPR, 2006.
Large Margin Component Analysis

Lorenzo Torresani, Riya, Inc. (lorenzo@riya.com)
Kuang-chih Lee, Riya, Inc. (kclee@riya.com)

Abstract

Metric learning has been shown to significantly improve the accuracy of k-nearest neighbor (kNN) classification. In problems involving thousands of features, distance learning algorithms cannot be used due to overfitting and high computational complexity. In such cases, previous work has relied on a two-step solution: first apply dimensionality reduction methods to the data, and then learn a metric in the resulting low-dimensional subspace. In this paper we show that better classification performance can be achieved by unifying the objectives of dimensionality reduction and metric learning. We propose a method that solves for the low-dimensional projection of the inputs which minimizes a metric objective aimed at separating points in different classes by a large margin. This projection is defined by a significantly smaller number of parameters than metrics learned in input space, and thus our optimization reduces the risk of overfitting. Theory and results are presented for both a linear and a kernelized version of the algorithm. Overall, we achieve classification rates similar, and in several cases superior, to those of support vector machines.

1 Introduction

The technique of k-nearest neighbor (kNN) classification is one of the most popular classification algorithms. Several reasons account for the widespread use of this method: it is straightforward to implement, it generally leads to good recognition performance thanks to the nonlinearity of its decision boundaries, and its complexity is independent of the number of classes. In addition, unlike most alternatives, kNN can be applied even in scenarios where not all categories are given at the time of training, such as in face verification applications where the subjects to be recognized are not known in advance.
The distance metric defining the neighbors of a query point plays a fundamental role in the accuracy of kNN classification. In most cases Euclidean distance is used as the similarity measure. This choice is logical when it is not possible to study the statistics of the data prior to classification, or when it is fair to assume that all features are equally scaled and equally relevant. In most cases, however, the data is distributed so that distance analysis along some specific directions of the feature space is more informative than along others. In such cases, and when training data is available in advance, distance metric learning [5, 10, 4, 1, 9] has been shown to yield significant improvements in kNN classification. The key idea of these methods is to apply transformations to the data in order to emphasize the most discriminative directions. Euclidean distance computation in the transformed space is then equivalent to a non-uniform metric analysis in the original input space. In this paper we are interested in cases where the data to be used for classification is very high-dimensional. An example is classification of imagery data, which often involves input spaces of thousands of dimensions, corresponding to the number of pixels. Metric learning in such high-dimensional spaces cannot be carried out due to overfitting and high computational complexity. In these scenarios, even kNN classification is prohibitively expensive in terms of storage and computational costs. The traditional solution is to apply dimensionality reduction methods to the data and then learn a suitable metric in the resulting low-dimensional subspace. For example, Principal Component Analysis (PCA) can be used to compute a linear mapping that reduces the data to tractable dimensions.
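As a point of reference, the PCA step of this two-step pipeline can be sketched in a few lines of numpy (our own illustration; the function name is hypothetical):

```python
import numpy as np

def pca_project(X, d):
    """Project the D-dimensional rows of X onto their top-d principal components."""
    Xc = X - X.mean(axis=0)                  # center the data
    # Right singular vectors of the centered data are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T                     # n x d low-dimensional representation

# In the two-step pipeline, a metric would then be learned on
# pca_project(X, d) instead of on X itself.
```

The projected coordinates are mutually uncorrelated, which is precisely why this step is agnostic to class labels: nothing in it knows anything about classification.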
However, dimensionality reduction methods generally optimize objectives unrelated to classification and, as a consequence, might generate representations that are significantly less discriminative than the original data. Thus, metric learning within the subspace might lead to suboptimal similarity measures. In this paper we show that better performance can be achieved by directly solving for a low-dimensional embedding that optimizes a measure of kNN classification performance. Our approach is inspired by the solution proposed by Weinberger et al. [9]. Their technique learns a metric that attempts to shrink distances of neighboring similarly-labeled points and to separate points in different classes by a large margin. Our contribution over previous work is twofold:

1. We describe the Large Margin Component Analysis (LMCA) algorithm, a technique that solves directly for a low-dimensional embedding of the data such that Euclidean distance in this space minimizes the large margin metric objective described in [9]. Our approach solves for only D · d unknowns, where D is the dimensionality of the inputs and d is the dimensionality of the target space. By contrast, the algorithm of Weinberger et al. [9] learns a Mahalanobis distance of the inputs, which requires solving for a D × D matrix using iterative semidefinite programming methods. This optimization is infeasible for large values of D.

2. We propose a technique that learns Mahalanobis distance metrics in nonlinear feature spaces. Our approach combines the goal of dimensionality reduction with a novel "kernelized" version of the metric learning objective of Weinberger et al. [9]. We describe an algorithm that optimizes this combined objective directly. We demonstrate that, even when the data is low-dimensional and dimensionality reduction is not needed, this technique can be used to learn nonlinear metrics leading to significant improvements in kNN classification accuracy over [9].
2 Linear Dimensionality Reduction for Large Margin kNN Classification

In this section we briefly review the algorithm presented in [9] for metric learning in the context of kNN classification. We then describe how this approach can be generalized to compute low-dimensional projections of the inputs via a novel direct optimization. A fundamental characteristic of kNN is that its performance does not depend on linear separability of the classes in input space: to achieve accurate kNN classification it is sufficient that the majority of the k-nearest points of each test example have the correct label. The work of Weinberger et al. [9] exploits this property by learning a linear transformation of the input space that aims at creating consistently labeled k-nearest neighborhoods, i.e. clusters where each training example and its k-nearest points have the same label, and where differently labeled points are separated by an additional safety margin. Specifically, given n input examples x_1, ..., x_n in ℜ^D and corresponding class labels y_1, ..., y_n, the technique in [9] learns the D × D transformation matrix L that optimizes the following objective function:

\epsilon(L) = \sum_{ij} \eta_{ij} \|L(x_i - x_j)\|^2 + c \sum_{ijl} \eta_{ij}(1 - y_{il}) \, h\left(\|L(x_i - x_j)\|^2 - \|L(x_i - x_l)\|^2 + 1\right),   (1)

where η_{ij} ∈ {0, 1} is a binary variable indicating whether example x_j is one of the k-closest points of x_i that share the same label y_i, c is a positive constant, y_{il} ∈ {0, 1} is 1 iff y_i = y_l, and h(s) = max(s, 0) is the hinge function. The objective ε(L) consists of two contrasting terms. The first aims at pulling closer together points that share the same label and were neighbors in the original space. The second term encourages distancing each example x_i from differently labeled points by an amount equal to 1 plus the distance from x_i to any of its k similarly-labeled closest points. This term corresponds to a margin condition similar to that of SVMs and is used to improve generalization.
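As a concrete illustration, the objective of Equation 1 can be transcribed directly into numpy. This is our own sketch (all names hypothetical); it materializes a dense n × n × n tensor for the triple sum, so it is only suitable for tiny examples:

```python
import numpy as np

def lmca_objective(L, X, y, eta, c=1.0):
    """Equation 1: pull term over target-neighbor pairs plus hinge push term.

    eta[i, j] = 1 if x_j is one of the k same-label neighbors of x_i.
    """
    Z = X @ L.T                                          # projections Lx_i, n x d
    D2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # squared distances d_ij^2
    pull = (eta * D2).sum()                              # sum_ij eta_ij ||L(x_i - x_j)||^2
    same = (y[:, None] == y[None, :]).astype(float)      # y_il indicator
    # Hinge over triples (i, j, l): h(d_ij^2 - d_il^2 + 1) for differently labeled l
    margins = D2[:, :, None] - D2[:, None, :] + 1.0
    push = (eta[:, :, None] * (1.0 - same)[:, None, :]
            * np.maximum(margins, 0.0)).sum()
    return pull + c * push
```

On a toy 1-D example with two nearby same-class points and one far-away point of another class, only the pull term is active, since no margin is violated.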
The constant c controls the relative importance of these two competing terms and can be chosen via cross-validation. Upon optimization of ε(L), a test example x_q is classified according to the kNN rule applied to its projection x'_q = L x_q, using Euclidean distance as the metric. Equivalently, this classification can be interpreted as kNN classification in the original input space under the Mahalanobis distance metric induced by the matrix M = L^T L. Although Equation 1 is non-convex in L, it can be rewritten as a semidefinite program ε(M) in terms of the metric M [9]. Thus, optimizing the objective in M guarantees convergence to the global minimum, regardless of initialization. When the data is very high-dimensional, minimization of ε(M) using semidefinite programming methods is impractical because of slow convergence and overfitting problems. In such cases the authors of [9] propose applying dimensionality reduction methods, such as PCA, followed by metric learning within the resulting low-dimensional subspace. As outlined above, this procedure leads to suboptimal metric learning. In this paper we propose an alternative approach that solves jointly for dimensionality reduction and metric learning. The key idea is to choose the transformation L in Equation 1 to be a nonsquare matrix of size d × D, with d ≪ D. Thus L defines a mapping from the high-dimensional input space to a low-dimensional embedding. Euclidean distance in this low-dimensional embedding is equivalent to Mahalanobis distance in the original input space under the rank-deficient metric M = L^T L (M now has rank at most d). Unfortunately, optimization of ε(M) subject to rank constraints on M leads to a minimization problem that is no longer convex [8] and that is awkward to solve. Here we propose an approach for minimizing the objective that differs from the one used in [9]. The idea is to optimize Equation 1 directly with respect to the nonsquare matrix L.
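The classification rule described above (Euclidean kNN on the projections x'_q = L x_q) can be sketched as follows; a minimal illustration with hypothetical names:

```python
import numpy as np

def knn_predict(L, X_train, y_train, x_query, k=3):
    """kNN rule applied to the projections x' = Lx, with Euclidean distance."""
    Z = X_train @ L.T                  # project the training set
    zq = L @ x_query                   # project the query
    d2 = ((Z - zq) ** 2).sum(axis=1)   # squared Euclidean distances
    nearest = np.argsort(d2)[:k]       # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]   # majority vote
```

With L rectangular (d × D, d ≪ D), the distance computation runs in the d-dimensional space, which is what makes kNN affordable on high-dimensional inputs.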
We argue that minimizing the objective with respect to L, rather than with respect to the rank-deficient D × D matrix M, offers several advantages. First, our optimization involves only d·D rather than D^2 unknowns, which considerably reduces the risk of overfitting. Second, the optimal rectangular matrix L computed with our method automatically satisfies the rank constraints on M, without requiring the solution of difficult constrained minimization problems. Although the objective optimized by our method is also non-convex, we experimentally demonstrate that our solution converges consistently to better metrics than those computed via the application of PCA followed by subspace distance learning (see Section 4). We minimize ε(L) using gradient-based optimizers, such as conjugate gradient methods. Differentiating ε(L) with respect to the transformation matrix L gives the following gradient for the update rule:

\frac{\partial \epsilon(L)}{\partial L} = 2L \sum_{ij} \eta_{ij} (x_i - x_j)(x_i - x_j)^T + 2cL \sum_{ijl} \eta_{ij}(1 - y_{il}) \left[ (x_i - x_j)(x_i - x_j)^T - (x_i - x_l)(x_i - x_l)^T \right] h'\left(\|L(x_i - x_j)\|^2 - \|L(x_i - x_l)\|^2 + 1\right)   (2)

We handle the non-differentiability of h(s) at s = 0 by adopting a smooth hinge function, as in [8].

3 Nonlinear Feature Extraction for Large Margin kNN Classification

In the previous section we described an algorithm that jointly solves for linear dimensionality reduction and metric learning. We now describe how to "kernelize" this method in order to compute nonlinear features of the inputs that optimize our distance-learning objective. Our approach learns a low-rank Mahalanobis distance metric in a high-dimensional feature space F, related to the inputs by a nonlinear map φ : ℜ^D → F. We restrict our analysis to nonlinear maps φ for which there exist kernel functions k that can be used to compute the feature inner products without carrying out the map, i.e. such that k(x_i, x_j) = φ_i^T φ_j, where for brevity we write φ_i = φ(x_i).
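To make the kernel condition concrete: for the degree-2 polynomial kernel on ℜ^2 there is an explicit feature map φ with k(a, b) = (a^T b)^2 = φ(a)^T φ(b). A small numerical check (our own example, not from the paper):

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel on R^2."""
    return np.array([x[0] ** 2, np.sqrt(2.0) * x[0] * x[1], x[1] ** 2])

def k(a, b):
    """Kernel evaluation without carrying out the map: k(a, b) = (a . b)^2."""
    return float(a @ b) ** 2
```

The same identity is what lets the kernelized algorithm avoid ever forming φ explicitly; for the Gaussian RBF kernel used in Section 4 the feature space is infinite-dimensional, so working through kernel evaluations is not just convenient but necessary.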
We modify our objective ε(L) by substituting the features φ(x_i) for the inputs x_i in Equation 1. L is now a transformation from the space F into a low-dimensional space ℜ^d. We seek the transformation L minimizing the modified objective function ε(L). The gradient in feature space can now be written as:

\frac{\partial \epsilon(L)}{\partial L} = 2 \sum_{ij} \eta_{ij} L (\phi_i - \phi_j)(\phi_i - \phi_j)^T + 2c \sum_{ijl} \eta_{ij}(1 - y_{il}) \, h'(s_{ijl}) \, L \left[ (\phi_i - \phi_j)(\phi_i - \phi_j)^T - (\phi_i - \phi_l)(\phi_i - \phi_l)^T \right]   (3)

where s_{ijl} = \|L(\phi_i - \phi_j)\|^2 - \|L(\phi_i - \phi_l)\|^2 + 1. Let Φ = [φ_1, ..., φ_n]^T. We consider parameterizations of L of the form L = ΩΦ, where Ω is some matrix allowing us to write L as a linear combination of the feature points. This form of nonlinear map is analogous to that used in kernel PCA, and it allows us to parameterize the transformation L in terms of only d·n parameters, the entries of the matrix Ω. We now introduce the following Lemma, which we later use to derive an iterative update rule for L.

Lemma 3.1 The gradient in feature space can be computed as ∂ε(L)/∂L = ΓΦ, where Γ depends on the features φ_i solely in terms of dot products φ_i^T φ_j.

Proof. Defining k_i = Φφ_i = [k(x_1, x_i), ..., k(x_n, x_i)]^T, nonlinear feature projections can be computed as Lφ_i = ΩΦφ_i = Ωk_i. From this we derive:

\frac{\partial \epsilon(L)}{\partial L} = 2\Omega \sum_{ij} \eta_{ij} (k_i - k_j)(\phi_i - \phi_j)^T + 2c\Omega \sum_{ijl} \eta_{ij}(1 - y_{il}) h'(s_{ijl}) \left[ (k_i - k_j)(\phi_i - \phi_j)^T - (k_i - k_l)(\phi_i - \phi_l)^T \right]
= 2\Omega \sum_{ij} \eta_{ij} \left[ E_i^{(k_i - k_j)} - E_j^{(k_i - k_j)} \right] \Phi + 2c\Omega \sum_{ijl} \eta_{ij}(1 - y_{il}) h'(s_{ijl}) \left[ E_i^{(k_i - k_j)} - E_j^{(k_i - k_j)} - E_i^{(k_i - k_l)} + E_l^{(k_i - k_l)} \right] \Phi

where E_i^v = [0, ..., v, ..., 0] is the n × n matrix having vector v in the i-th column and zeros in all other columns. Setting

\Gamma = 2\Omega \sum_{ij} \eta_{ij} \left[ E_i^{(k_i - k_j)} - E_j^{(k_i - k_j)} \right] + 2c\Omega \sum_{ijl} \eta_{ij}(1 - y_{il}) h'(s_{ijl}) \left[ E_i^{(k_i - k_j)} - E_j^{(k_i - k_j)} - E_i^{(k_i - k_l)} + E_l^{(k_i - k_l)} \right]   (4)

proves the Lemma. This result allows us to solve implicitly for the transformation without ever computing the features in the high-dimensional space F: the key idea is to iteratively update Ω rather than L.
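The identity underlying the proof, E_i^v Φ = v φ_i^T (the matrix E_i^v selects row i of Φ and forms the outer product with v), is easy to check numerically (names ours):

```python
import numpy as np

def E(v, i, n):
    """n x n matrix with vector v in column i and zeros elsewhere."""
    M = np.zeros((n, n))
    M[:, i] = v
    return M
```

Since E(v, i, n) = v e_i^T, multiplying by Φ on the right picks out row i of Φ, giving exactly the outer product v φ_i^T that appears in each term of Γ.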
For example, using gradient descent as the optimizer we derive the update rule:

L^{new} = L^{old} - \lambda \left. \frac{\partial \epsilon(L)}{\partial L} \right|_{L = L^{old}} = \left[ \Omega^{old} - \lambda \Gamma^{old} \right] \Phi = \Omega^{new} \Phi   (5)

where λ is the learning rate. We carry out this optimization by iterating the update Ω ← (Ω − λΓ) until convergence. For classification, we project points onto the learned low-dimensional space by exploiting the kernel trick: Lφ_q = Ωk_q.

4 Experimental Results

We compared our methods to the metric learning algorithm of Weinberger et al. [9], which we refer to as LMNN (Large Margin Nearest Neighbor). We use KLMCA (kernel-LMCA) to denote the nonlinear version of our algorithm. In all of the experiments reported here, LMCA was initialized using PCA, while KLMCA used the transformation computed by kernel PCA as its initial guess. The objectives of LMCA and KLMCA were optimized using the steepest descent algorithm. We experimented with more sophisticated minimization techniques, including the conjugate gradient method and the Broyden-Fletcher-Goldfarb-Shanno quasi-Newton algorithm [6], but no substantial improvement in performance or speed of convergence was achieved. The KLMCA algorithm was implemented using a Gaussian RBF kernel. The number of nearest neighbors, the weight c in Equation 1, and the variance of the RBF kernel were all tuned automatically using cross-validation.
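Both the update above and the query projection Lφ_q = Ωk_q touch the features only through kernel evaluations. A small sketch of these kernel-side computations with the Gaussian RBF kernel mentioned in Section 4 (function names are ours):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def project(Omega, X_train, x_query, sigma=1.0):
    """Projection of a query point under L = Omega Phi, via L phi_q = Omega k_q."""
    kq = rbf_kernel(X_train, x_query[None, :], sigma)[:, 0]   # k_q, length n
    return Omega @ kq                                          # d-dimensional feature
```

Note that classifying a query requires kernel evaluations against all n training points, which is the cost the paper's Discussion proposes to reduce by making the algorithm sparse.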
The first part of our experimental evaluation focuses on classification results on three high-dimensional datasets: Isolet, AT&T Faces, and StarPlus fMRI.

Figure 1: Classification error rates on the high-dimensional datasets Isolet, AT&T Faces and StarPlus fMRI for different projection dimensions. (a) Training error. (b) Testing error.

• Isolet^1 is a dataset of speech features from the UC Irvine repository, consisting of 6238 training examples and 1559 testing examples with 617 attributes. There are 26 classes, corresponding to the spoken letters to be recognized.
• The AT&T Faces^2 database contains 10 grayscale face images of each of 40 distinct subjects. The images were taken at different times, with varying illumination, facial expressions and poses. As in [9], we downsampled the original 112 × 92 images to size 38 × 31, corresponding to 1178 input dimensions.
• The StarPlus fMRI^3 dataset contains fMRI sequences acquired in the context of a cognitive experiment. In these trials the subject is shown for a few seconds either a picture or a sentence describing a picture. The goal is to recognize the viewing activity of the subject from the fMRI images.
We reduce the size of the data by considering only voxels corresponding to relevant areas of the brain cortex and by averaging the activity in each voxel over the period of the stimulus. This yields data of size 1715 for subject "04847," to which our analysis was restricted. A total of 80 trials are available for this subject. Except for Isolet, for which a separate testing set is specified, we computed all of the experimental results by averaging over 100 runs of random splitting of the examples into training and testing sets. For the fMRI experiment we used at each iteration 70% of the data for training and 30% for testing. For AT&T Faces, training sets were selected by sampling 7 images at random for each person; the remaining 3 images of each individual were used for testing. Unlike LMCA and KLMCA, which directly solve for low-dimensional embeddings of the input data, LMNN cannot be run on datasets of dimensionalities such as those considered here and must be trained on lower-dimensional representations of the inputs. As in [9], we applied the LMNN algorithm to linear projections of the data computed using PCA. Figure 1 summarizes the training and testing performance of kNN classification using the metrics learned by the three algorithms for different subspace dimensions. LMCA and KLMCA give considerably better classification accuracy than LMNN on all datasets, with the kernelized version of our algorithm always outperforming the linear version. The difference in accuracy between our algorithms and LMNN is particularly dramatic when a small number of projection dimensions is used. In such cases, LMNN is unable to find good metrics in the low-dimensional subspace computed by PCA.
By contrast, LMCA and KLMCA solve for the low-dimensional subspace that optimizes the classification-related objective of Equation 1, and therefore achieve good performance even when projecting to very low dimensions.

^1 Available at http://www.ics.uci.edu/∼mlearn/MLRepository.html
^2 Available at http://www.cl.cam.ac.uk/Research/DTG/attarchive/facedatabase.html
^3 Available at http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-81/www/

Figure 2: Image reconstruction from PCA and LMCA features. (a) Input images. (b) Reconstructions using PCA (left) and LMCA (right). (c) Absolute difference between the original images and the reconstructions from features for PCA (left) and LMCA (right). Red denotes large differences, blue indicates similar gray values. LMCA learns invariance to effects that are irrelevant for classification: non-uniform illumination, facial expressions, and glasses (the training data contains images with and without glasses for the same individuals).

In our experiments we found that all three classification algorithms (LMNN, LMCA+kNN, and KLMCA+kNN) performed considerably better than kNN using the Euclidean metric in the PCA and KPCA subspaces. For example, using d = 10 on the AT&T dataset, kNN gives a 10.9% testing error rate when used on the PCA features, and a 9.7% testing error rate when applied to the nonlinear features computed by KPCA. While LMNN is applied to features in a low-dimensional space, LMCA and KLMCA learn a low-rank metric directly from the high-dimensional inputs. Consequently the computational complexity of our algorithms is higher than that of LMNN. However, we have found that LMCA and KLMCA converge to a minimum quite rapidly, typically within 20 iterations, and thus the complexity of these algorithms has not been a limiting factor even when applied to very high-dimensional datasets.
As a reference, using d = 10 and k = 3 on the AT&T dataset, LMNN learns a metric in about 5 seconds, while LMCA and KLMCA converge to a minimum in 21 and 24 seconds, respectively. It is instructive to look at the preimages of the LMCA data embeddings. Figure 2 shows comparative reconstructions of images obtained from PCA and LMCA features by inverting their linear mappings. The PCA and LMCA subspaces in this experiment were computed from cropped face images of size 50 × 50 pixels, taken from a set of consumer photographs. The dataset contains 2459 face images corresponding to 152 distinct individuals. A total of d = 125 components were used. The subjects shown in Figure 2 were not included in the training set. For a given target dimensionality, PCA has the property of computing the linear transformation minimizing the reconstruction error under the L2 norm. Unsurprisingly, the PCA face reconstructions are extremely faithful reproductions of the original images. However, PCA also accurately reconstructs visual effects, such as lighting variations and changes in facial expression, that are unimportant for the task of face verification and that might potentially hamper recognition. By contrast, LMCA seeks a subspace where neighboring examples belong to the same class and differently labeled points are separated by a large margin. As a result, LMCA does not encode effects that are found to be insignificant for classification or that vary largely among examples of the same class. In the case of face verification, LMCA de-emphasizes changes in illumination, the presence or absence of glasses, and smiling expressions (Figure 2). When the input data does not require dimensionality reduction, LMNN and LMCA solve the same optimization problem, but LMNN should be preferred over LMCA in light of its guarantee of convergence to the global minimum of the objective. However, even in such cases, KLMCA can be used in lieu of LMNN in order to extract nonlinear features from the inputs.
We have evaluated this use of KLMCA on the following low-dimensional datasets from the UCI repository: Bal, Wine, Iris, and Ionosphere. All of these datasets except Ionosphere have previously been used in [9] to assess the performance of LMNN. The dimensionality of the data in these sets ranges from 4 to 34.

Figure 3: kNN classification accuracy on the low-dimensional datasets Bal, Wine, Iris, and Ionosphere. (a) Training error. (b) Testing error. The algorithms compared are kNN using Euclidean distance, LMNN [9], kNN in the nonlinear feature space computed by our KLMCA algorithm, and multiclass SVM. The error rates shown in the figure are:

(a) Training error %   kNN (Eucl.)   LMNN   KLMCA + kNN
Bal                    14.1          10.0    6.5
Wine                   30.0           1.1   17.1
Iris                    4.3           3.5    3.0
Ionosphere             15.7           7.6    2.3

(b) Testing error %    kNN (Eucl.)   LMNN   KLMCA + kNN   SVM
Bal                    14.4           9.7    6.7           7.8
Wine                   30.1           2.6   17.6          19.0
Iris                    4.3           4.7    3.4           4.4
Ionosphere             16.5          13.7    5.8           n/a

In order to compare LMNN with KLMCA under identical conditions, KLMCA was restricted to compute a number of features equal to the input dimensionality, although in our experience using additional nonlinear features often results in better classification performance. Figure 3 summarizes the results of this comparison. Again, we averaged the errors over 100 runs with different 70/30 splits of the data for training and testing. On all datasets except Wine, for which the mapping to the high-dimensional space seems to hurt performance (note also the high error rate of SVM), KLMCA gives better classification accuracy than LMNN. Note also that the error rates of KLMCA are consistently lower than those reported in [9] for SVM under identical training and testing conditions.
5 Relationship to other methods

Our method is most similar to the work of Weinberger et al. [9]. Our approach differs in focus, as it specifically addresses the problem of kNN classification of very high-dimensional data. The novelty of our method lies in an optimization that solves for dimensionality reduction and metric learning simultaneously. Additionally, while [9] is limited to learning a global linear transformation of the inputs, we describe a kernelized version of our method that extracts nonlinear features of the inputs. We demonstrate that this representation leads to significant improvements in kNN classification on high-dimensional as well as low-dimensional data. Our approach bears similarities to Linear Discriminant Analysis (LDA) [2], as both techniques solve for a low-rank Mahalanobis distance metric. However, LDA relies on the assumption that the class distributions are Gaussian with identical covariance, conditions that are almost always violated in practice. Like our method, the Neighborhood Component Analysis (NCA) algorithm by Goldberger et al. [4] learns a low-dimensional embedding of the data for kNN classification using a direct gradient-based approach. NCA and our method differ in the definition of the objective function; moreover, unlike our method, NCA provides purely linear embeddings of the data. A contrastive loss function analogous to the one used in this paper is adopted in [1] for training a similarity metric. There, a Siamese architecture consisting of identical convolutional networks is used to parameterize and train the metric; in our work the metric is parameterized by arbitrary nonlinear maps for which kernel functions exist. Recent work by Globerson and Roweis [3] also proposes a technique for learning low-rank Mahalanobis metrics, including an extension for computing low-dimensional nonlinear features using the kernel trick.
However, this approach computes dimensionality reductions through a two-step solution, which involves first solving for a possibly full-rank metric and then estimating the low-rank approximation via spectral decomposition. Besides being suboptimal, this approach is impractical for classification problems with high-dimensional data, as it requires solving for a number of unknowns that is quadratic in the number of input dimensions. Furthermore, their metric is trained with the aim of collapsing all examples in the same class to a single point, a task that is difficult to achieve and not strictly necessary for good kNN classification performance. The Support Vector Decomposition Machine (SVDM) [7] is also similar in spirit to our approach. SVDM optimizes an objective that combines dimensionality reduction and classification: a linear mapping from input space to feature space and a linear classifier applied in feature space are trained simultaneously. As in our work, the results in their paper demonstrate that this joint optimization yields better accuracy than learning a low-dimensional representation and a classifier separately. Unlike our method, which can be applied without any modification to classification problems with more than two classes, SVDM is formulated for binary classification only.

6 Discussion

We have presented a novel algorithm that simultaneously optimizes the objectives of dimensionality reduction and metric learning. Our algorithm seeks, among all possible low-dimensional projections, the one that best satisfies a large margin metric objective. Our approach contrasts with techniques that are unable to learn metrics in high dimensions and must therefore first apply dimensionality reduction methods to the data.
Although our optimization is not convex, we have experimentally demonstrated that the metrics learned by our solution are consistently superior to those computed by globally-optimal methods forced to search in a low-dimensional subspace. The nonlinear version of our technique requires us to compute the kernel distance of a query point to all training examples. Future research will focus on rendering this algorithm "sparse". In addition, we will investigate methods to further reduce overfitting when learning dimensionality reduction from very high dimensions. Acknowledgments We are grateful to Drago Anguelov and Burak Gokturk for discussion. We thank Aaron Hertzmann and the anonymous reviewers for their comments. References [1] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2005. [2] R. A. Fisher. The use of multiple measurements in taxonomic problems. Ann. Eugenics, 7:179–188, 1936. [3] A. Globerson and S. Roweis. Metric learning by collapsing classes. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18. MIT Press, Cambridge, MA, 2006. [4] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, 2005. [5] T. Hastie and R. Tibshirani. Discriminant adaptive nearest neighbor classification. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 18:607–616, 1996. [6] A. Mordecai. Nonlinear Programming: Analysis and Methods. Dover Publishing, 2003. [7] F. Pereira and G. Gordon. The support vector decomposition machine. In Proceedings of the International Conference on Machine Learning (ICML), 2006. [8] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction.
In Proceedings of the 22nd International Conference on Machine Learning (ICML), 2005. [9] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, 2006. [10] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, 2002.
2006
Kernels on Structured Objects Through Nested Histograms Marco Cuturi Institute of Statistical Mathematics Minami-azabu 4-6-7, Minato ku, Tokyo, Japan. Kenji Fukumizu Institute of Statistical Mathematics Minami-azabu 4-6-7, Minato ku, Tokyo, Japan. Abstract We propose a family of kernels for structured objects which is based on the bag-of-components paradigm. However, rather than decomposing each complex object into the single histogram of its components, we use for each object a family of nested histograms, where each histogram in this hierarchy describes the object seen from an increasingly granular perspective. We use this hierarchy of histograms to define elementary kernels which can detect coarse and fine similarities between the objects. We compute through an efficient averaging trick a mixture of such specific kernels, to propose a final kernel value which efficiently weights local and global matches. We present experimental results on an image retrieval task which show that this mixture is an effective template procedure to be used with kernels on histograms. 1 Introduction Kernel methods have been shown to be competitive with other techniques in classification or regression tasks where the input data lie in a vector space. Arguably, this success rests on two factors: first, the good ability of kernel algorithms, such as the support vector machine, to generalize and provide a sparse formulation for the underlying learning problem; second, the capacity of nonlinear kernels, such as the polynomial and Gaussian kernels, to quantify meaningful similarities between vectors, notably non-linear correlations between their components. Using kernel machines with non-vectorial data (e.g., in bioinformatics, image and text analysis or signal processing) requires more arbitrary choices, both to represent the objects in a malleable form, and to choose suitable kernels on these representations.
The challenge of using kernel methods on real-world data has thus recently fostered many proposals for kernels on complex objects, notably strings, trees, images or graphs, to name a few. In common practice, most of these objects can be regarded as structured aggregates of smaller components, and the coarsest approach to study such aggregates is to consider them directly as bags of components. In the field of kernel methods, such a representation has not only been widely adopted (Haussler, 1999; Joachims, 2002; Schölkopf et al., 2004), but it has also spurred the proposal of kernels better suited to the geometry of the underlying histograms (Kondor & Jebara, 2003; Lafferty & Lebanon, 2005; Hein & Bousquet, 2005; Cuturi et al., 2005). However, one of the drawbacks of the bag-of-components representation is that it implicitly assumes that each component sampled in the object has been generated independently from an identical distribution. While this viewpoint may translate into adequate properties for some learning tasks, such as translation or rotation invariance when using histograms of colors to manipulate images (Chapelle et al., 1999), it may appear too restrictive when such a strong invariance is too coarse to be of practical use. A possible way to cope with this limitation is to expand artificially the size of the components' space, either by considering families of larger components to take into account more contextual information, or by considering histograms which index both components and their possible location in the object (Rätsch & Sonnenburg, 2004). As one would expect, these histograms are usually sparse and need to be regularized using ad-hoc rules and prior knowledge (Leslie et al., 2003) before being directly compared using kernels on histograms.
For sequential data, other state-of-the-art methods compute an optimal alignment between the sequences based on elementary operations such as substitutions, deletions and insertions of components. Such alignment scores may yield positive definite (p.d.) kernels if particular care is taken to adapt them (Vert et al., 2004) and have shown very competitive performance. However, their computational cost can be prohibitive when dealing with large datasets, and they can only be applied to sequential data. [Figure 1: From the bag-of-components representation to a set of nested bags, using a set of labels.] Following these contributions, we propose in this paper new families of kernels which can be easily tuned to detect both coarse and fine similarities between the objects, in a range spanning from kernels which only consider coarse histograms to kernels which only detect strict local matches. To quantify such types of similarities between two objects, we elaborate on the elementary bag-of-components perspective and consider instead families of nested histograms (indexed by a set of hierarchical labels to be defined) to describe each object. In this framework, the root label corresponds to the global representation introduced before, while longer labels represent a specific condition under which the components have been sampled. We then define kernels that take into account mixtures of similarities, spanning from detailed resolutions which only compare the smallest bags to the coarsest one. This trade-off between fine and coarse perspectives sets an averaging framework to define kernels, which we introduce formally in Section 2. This theoretical framework would not be tractable without an efficient factorization, detailed in Section 3, which yields computations that grow linearly in time and space with respect to the number of labels used to evaluate the value of the kernel.
We then provide experimental results in Section 4 on an image retrieval task, showing that the methodology improves the performance of kernel-based state-of-the-art techniques in this field at a low extra computational cost. 2 Kernels Defined through Hierarchies of Histograms In the kernel literature, structured objects are usually represented as histograms of components, e.g., images as histograms of colors and/or features, texts as bags of words and sequences as histograms of letters or n-grams. The obvious drawback of this representation is that it usually loses all the contextual information which may be useful to characterize each sampled component in the original object. One may instead create families of histograms, indexed by specific sampling conditions:
• In image analysis, create color or feature histograms following a prior partition of the image into predefined patches, as in (Grauman & Darrell, 2005). Another possibility would be to define families of histograms, all for the same image, which would consider increasingly granular discretizations of the color space.
• In sequence analysis, extract local histograms which may correspond to predefined regions of the original sequence, as in (Matsuda et al., 2005). A different option would be to associate to each histogram a context of arbitrary length, e.g., by considering the 26 histograms of letters sampled just after the letters {A, B, · · · , Z}, or the 26 × 26 histograms of letters after contexts {AA, AB, · · · , ZZ}.
• In text analysis, use histograms of words found after grammatical categories of increasing complexity, such as verbs, nouns, articles or adverbs.
• For synchronous time series (e.g., financial time series or gene expression profiles), define a reference series (e.g., an index or a specific gene) and decompose each of the subsequent series into histograms of values conditioned on the value of the reference series.
We write L for an arbitrary index set used to label such specific histograms.
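One of the sampling conditions above — letter histograms conditioned on the preceding letter, the Markovian context of length one — can be sketched in a few lines; the function name is ours, not the paper's:

```python
from collections import Counter, defaultdict

def conditional_histograms(text):
    """Index histograms of letters by the letter that precedes them:
    label t = previous letter, mu_t = histogram of letters observed
    right after t in the sequence."""
    hists = defaultdict(Counter)
    for prev, cur in zip(text, text[1:]):
        hists[prev][cur] += 1
    return hists

h = conditional_histograms("abab")
print(dict(h["a"]))  # -> {'b': 2}
print(dict(h["b"]))  # -> {'a': 1}
```

The same pattern extends to longer contexts (labels of length two for the 26 × 26 case) by conditioning on pairs of preceding letters instead of single letters.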
Structured objects are thus represented as a family $\mu$ of $\mathcal{M}^L(\mathcal{X}) \stackrel{\mathrm{def}}{=} (M_+^b(\mathcal{X}))^L$, that is $\mu = \{\mu_t\}_{t \in L}$, where for each $t \in L$, $\mu_t$ is a bounded measure of $M_+^b(\mathcal{X})$. We write $|\mu|$ for $\sum_{t \in L} |\mu_t|$. 2.1 Local Similarities Between Measures To compare two objects under the light of any sampling condition $t$, that is, to compare their respective decompositions as measures $\mu_t$ and $\mu'_t$, we make use of an arbitrary p.d. kernel $k$ on $M_+^b(\mathcal{X})$, to which we will refer as the base kernel throughout the paper. For interpretation purposes only, we will assume in the following sections that $k$ is an infinitely divisible kernel which can be written as $k = e^{-\frac{1}{\lambda}\psi}$, $\lambda > 0$, where $\psi$ is a negative definite (Berg et al., 1984) kernel on $M_+^b(\mathcal{X})$, or equivalently $-\psi$ is a conditionally p.d. kernel. Note also that $k$ has to be p.d. not only on probability measures, but on any bounded measure. For two elements $\mu, \mu'$ of $\mathcal{M}^L(\mathcal{X})$ and a given element $t \in L$, the kernel $k_t(\mu, \mu') \stackrel{\mathrm{def}}{=} k(\mu_t, \mu'_t)$ quantifies the similarity of $\mu$ and $\mu'$ by measuring how similarly their components were observed with respect to label $t$. For two different labels $s$ and $t$ of $L$, $k_s$ and $k_t$ can be combined through polynomial combinations with positive coefficients to yield new kernels, notably their sum $k_s + k_t$ or their product $k_s k_t$. This is particularly adequate if some complementarity is assumed between $s$ and $t$, so that their combination can provide new insights for a given learning task. If on the contrary these labels are assumed to be similar, they can be regarded as a grouped label $\{s\} \cup \{t\}$, resulting in the kernel $k_{\{s\}\cup\{t\}}(\mu, \mu') \stackrel{\mathrm{def}}{=} k(\mu_s + \mu_t, \mu'_s + \mu'_t)$, which measures the similarity of $\mu$ and $\mu'$ under both the $s$ and $t$ labels. Let us give an intuition for this definition by considering two texts $A, B$ built up with words from a dictionary $D$.
As an alternative to the general histograms of words $\theta^A$ and $\theta^B$ of $M_+^b(D)$, one may consider for instance $\theta^A_{\mathrm{can}}, \theta^A_{\mathrm{may}}$ and $\theta^B_{\mathrm{can}}, \theta^B_{\mathrm{may}}$, the respective histograms of words that follow the words can and may in texts $A$ and $B$. If one considers that can and may are different words, then the following kernel quantifies the similarity of $A$ and $B$ taking advantage of this difference: $k_{\{\mathrm{can}\},\{\mathrm{may}\}}(A, B) = k(\theta^A_{\mathrm{can}}, \theta^B_{\mathrm{can}}) \times k(\theta^A_{\mathrm{may}}, \theta^B_{\mathrm{may}})$. If on the contrary one decides that can and may are equivalent, an adequate kernel would first merge the histograms and then compare them: $k_{\{\mathrm{can},\mathrm{may}\}}(A, B) = k(\theta^A_{\mathrm{can}} + \theta^A_{\mathrm{may}}, \theta^B_{\mathrm{can}} + \theta^B_{\mathrm{may}})$. The previous formula can be naturally extended to define kernels indexed on a set $T \subset L$ of grouped labels, through $k_T(\mu, \mu') \stackrel{\mathrm{def}}{=} k(\mu_T, \mu'_T)$, where $\mu_T \stackrel{\mathrm{def}}{=} \sum_{t \in T} \mu_t$ and $\mu'_T \stackrel{\mathrm{def}}{=} \sum_{t \in T} \mu'_t$. 2.2 Resolution Specific Kernels Having defined a family of kernels $\{k_T, T \subset L\}$ which can detect conditional similarities between two elements of $\mathcal{M}^L(\mathcal{X})$ given a subset $T$ of $L$, we define in this section different ways to combine them to obtain a kernel which takes into account all of their histograms. Let $P$ be a finite partition of $L$, that is, a finite family $P = (T_1, \ldots, T_n)$ of subsets of $L$ such that $T_i \cap T_j = \emptyset$ if $1 \leq i < j \leq n$ and $\bigcup_{i=1}^n T_i = L$. We write $\mathcal{P}(L)$ for the set of all partitions of $L$. Consider now the kernel defined by a partition $P$ as $k_P(\mu, \mu') \stackrel{\mathrm{def}}{=} \prod_{i=1}^n k_{T_i}(\mu, \mu')$. (1) The kernel $k_P$ quantifies the similarity between two objects by detecting their joint similarity under all possible labels of $L$, assuming a priori that certain labels can be grouped together, following the subsets $T_i$ enumerated in the partition $P$. Note that there is some arbitrariness in this definition, since a simple multiplication of base kernels $k_{T_i}$ is used to define $k_P$ rather than any other polynomial combination.
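The difference between comparing the labels separately and merging them first can be made concrete with a small sketch, using the total-variation distance inside $k = e^{-\psi/\lambda}$ as a stand-in base kernel and hypothetical word histograms:

```python
import math
from collections import Counter

def psi_tv(h1, h2):
    """Total-variation distance between two histograms stored as dicts."""
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(w, 0) - h2.get(w, 0)) for w in keys)

def k_base(h1, h2, lam=1.0):
    # base kernel k = exp(-psi / lambda), with psi the TV distance
    return math.exp(-psi_tv(h1, h2) / lam)

def k_separate(hA, hB, labels, lam=1.0):
    # k_{{s},{t}}: product of per-label comparisons
    out = 1.0
    for t in labels:
        out *= k_base(hA[t], hB[t], lam)
    return out

def k_merged(hA, hB, labels, lam=1.0):
    # k_{{s,t}}: merge the histograms first, then compare once
    mA, mB = Counter(), Counter()
    for t in labels:
        mA.update(hA[t])
        mB.update(hB[t])
    return k_base(mA, mB, lam)

# hypothetical histograms of words following "can" / "may" in two texts
hA = {"can": Counter(go=2), "may": Counter(stay=1)}
hB = {"can": Counter(stay=1), "may": Counter(go=2)}
print(k_separate(hA, hB, ["can", "may"]))  # exp(-6): per-label histograms differ
print(k_merged(hA, hB, ["can", "may"]))    # 1.0: the merged histograms coincide
```

The two texts look identical when the labels are merged but very different when they are kept apart, which is exactly the modeling choice that a partition of L encodes.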
We follow in that sense the convolution kernels (Haussler, 1999) approach, and indeed, for each partition $P$, $k_P$ can be regarded as a convolution kernel. More precisely, the multiplicative structure of Equation (1) quantifies how similar two objects are given a partition $P$, in a way that requires the objects to be similar according to all subsets $T_i$. If the base kernel $k$ can be written as $k = e^{-\frac{1}{\lambda}\psi}$, where $\psi$ is a negative definite kernel, then $k_P$ can be expressed as the exponential of minus $\psi_P(\mu, \mu') \stackrel{\mathrm{def}}{=} \sum_{i=1}^n \psi_{T_i}(\mu, \mu') = \sum_{i=1}^n \psi(\mu_{T_i}, \mu'_{T_i})$, a quantity which penalizes local differences between the decompositions of $\mu$ and $\mu'$ over $L$, as opposed to the coarsest approach where $P = \{L\}$ and only $\psi(\sum_t \mu_t, \sum_t \mu'_t)$ is considered. [Figure 2: A useful set of labels $L$ for images which focuses on pixel localization can be represented by a grid, such as the $8 \times 8$ one represented above. In this case $P_3$ corresponds to the $4^3$ windows presented in the left image, $P_2$ to the 16 larger squares obtained when grouping 4 small windows, $P_1$ to the image divided into 4 equal parts, and $P_0$ is simply the whole image. Any partition $P$ of the image which complies with the hierarchy $\mathcal{P}_0^3$ in the example above can in turn be used to represent an image as a family of sub-probability measures, which reduces in the case of two-color images to binary histograms, as illustrated in the right-most image. For two images, these respective histograms can be directly compared through the kernel $k_P$.] As illustrated in Figure 2, where images are summarized through histograms indexed by patches, a partition of $L$ reflects a given belief on how patches may or may not be associated or split to focus on local dissimilarities. Hence, not all partitions contained in the set $\mathcal{P}(L)$ of all possible partitions¹ are likely to be equally meaningful, given that some labels may admit a natural form of grouping.
If the index is built to highlight differences in locations, one would naturally favor mergers between neighboring indexes. If one uses a Markovian analysis, that is, considers histograms of components conditioned by contexts, a natural way to group contexts would be to group them according to their semantic or grammatical content for text analysis, or according to their suffix for sequence analysis. Such meaningful partitions can be intuitively obtained when a hierarchical structure which groups elements of $L$ together is known a priori. A hierarchy on $L$, such as the triadic hierarchy shown in Figure 3, is a family $(P_d)_{d=0}^D = \{P_0 = \{L\}, \ldots, P_D = \{\{t\}, t \in L\}\}$ of partitions of $L$. To provide hierarchical information, the family $(P_d)_{d=0}^D$ is such that any subset present in a partition $P_d$ is strictly included in a (unique, by definition of a partition) subset from the coarser partition $P_{d-1}$. This is equivalent to stating that each subset $T$ in a partition $P_d$ is divided in $P_{d+1}$ as a partition of $T$ which is not $T$ itself. We write $s(T)$ for this partition (e.g., in Figure 3, $s(1) = \{11, \cdots, 19\}$) and name its elements the siblings of $T$. Consider now the subset $\mathcal{P}_D \subset \mathcal{P}(L)$ of all partitions of $L$ obtained by using only sets contained in the collection $\mathcal{P}_0^D \stackrel{\mathrm{def}}{=} \bigcup_{d=0}^D P_d$, namely $\mathcal{P}_D \stackrel{\mathrm{def}}{=} \{P \in \mathcal{P}(L) \text{ s.t. } \forall T \in P, T \in \mathcal{P}_0^D\}$. The set $\mathcal{P}_D$ contains both the coarsest and the finest resolutions, respectively $P_0$ and $P_D$, but also all variable resolutions built from sets enumerated in $\mathcal{P}_0^D$, as can be seen for instance in the third image of Figure 2. ¹$\mathcal{P}(L)$ is quite a big space: if $L$ is a finite set of cardinal $r$, the cardinal of the set of partitions is known as the Bell number of order $r$, with $B_r = \frac{1}{e}\sum_{u=1}^{\infty} \frac{u^r}{u!} \underset{r \to \infty}{\sim} e^{r \ln r}$. [Figure 3: A hierarchy generated by two successive triadic partitions.]
2.3 Averaging Resolution Specific Kernels Each partition $P$ contained in $\mathcal{P}_D$ provides a resolution to compare two objects, which generates a large family of kernels $k_P$ as $P$ spans $\mathcal{P}_D$. Some partitions are likely to be better suited for certain tasks, which may call for an efficient estimation scheme to select an optimal partition for a given task. This would be similar in spirit to estimating a maximum a posteriori model for the data and using it consequently to compare the objects. We take in this section a different direction, which has a more Bayesian flavor, by considering an averaging of such kernels based on a prior on the set of partitions. In practice, this averaging favours objects which share similarities under a large collection of resolutions, and may also be interpreted as a Bayesian averaging of convolution kernels (Haussler, 1999). Definition 1 Let $L$ be an index set endowed with a hierarchy $(P_d)_{d=0}^D$, $\pi$ be a prior measure on the corresponding set of partitions $\mathcal{P}_D$, and $k$ a base kernel on $M_+^b(\mathcal{X}) \times M_+^b(\mathcal{X})$. The averaged kernel $k_\pi$ on $\mathcal{M}^L(\mathcal{X}) \times \mathcal{M}^L(\mathcal{X})$ is defined as $k_\pi(\mu, \mu') = \sum_{P \in \mathcal{P}_D} \pi(P)\, k_P(\mu, \mu')$. (2) As can be observed in Equation (2), the kernel automatically detects, in the range of all partitions, the ones which provide a good match between the compared objects, and increases the resulting similarity score accordingly. Also note that in an image-analysis context, the pyramid-matching kernel proposed in (Grauman & Darrell, 2005) only considers the original partitions of the hierarchy $(P_d)_{d=0}^D$, while Equation (2) considers all possible partitions of $\mathcal{P}_D$. This can be carried out at little cost if an adequate set of priors $\pi$ is selected, as seen below. 3 Kernel Computation We provide in this section hierarchies $(P_d)_{d=0}^D$ and priors $\pi$ for which the computation of $k_\pi$ is both meaningful and tractable, namely yielding a computational time for $k_\pi$ that is loosely upper-bounded by $D \times \mathrm{card}\, L \times c(k)$, where $c(k)$ is the time required to compute the base kernel.
3.1 Partitions Generated by Branching Processes All partitions $P$ of $\mathcal{P}_D$ can be generated through the following rule, starting from the initial root partition $P := P_0 = \{L\}$. For each set $T$ of $P$: 1. either leave the set as it is in $P$, with probability $1 - \varepsilon_T$; 2. or replace it by its siblings in $s(T)$, with probability $\varepsilon_T$, and reapply this rule to each sibling unless it belongs to the finest partition $P_D$. The resulting prior for $\mathcal{P}_D$ depends on the overall coarseness of the considered partitions, and can be tuned through the parameters $\varepsilon_T$ to adaptively favor coarse or fine partitions. For a partition $P \in \mathcal{P}_D$, $\pi(P) = \prod_{T \in P} (1 - \varepsilon_T) \prod_{T \in \mathring{P}} \varepsilon_T$, where the set $\mathring{P} = \{T \in \mathcal{P}_0^D \text{ s.t. } \exists V \in P, V \subsetneq T\}$ gathers all sets belonging to coarser resolutions than $P$, and can be regarded as the set of all ancestors in $\mathcal{P}_0^D$ of sets enumerated in $P$. 3.2 Factorization of $k_\pi$ The branching-process prior can be used to factorize the formula in Equation (2): Proposition 2 For two elements $\mu, \mu'$ of $\mathcal{M}^L(\mathcal{X})$, define for $T$ spanning recursively all sets contained in $P_D, P_{D-1}, \ldots, P_0$ the quantity $K_T$ below; then $k_\pi(\mu, \mu') = K_L$. $K_T = (1 - \varepsilon_T)\, k_T(\mu, \mu') + \varepsilon_T \prod_{U \in s(T)} K_U$. Proof The proof follows from a factorization which uses the branching-process prior used for the tree generation, and can be derived from the proof of (Catoni, 2004, Proposition 5.2). [Figure: the update rule for the computation of $k_\pi$ incorporates the branching-process prior by combining the kernel on the merged measures $\mu_T = \sum_i \mu_{t_i}$, $\mu'_T = \sum_i \mu'_{t_i}$ with a weighted product of the sibling kernel evaluations $K_U$.] If the hierarchy of $L$ is such that the cardinality of $s(T)$ is fixed to a constant $\alpha$ for any set $T$, typically $\alpha = 4$ for images in the case described in Figure 2, then the computation of $k_\pi$ is upper-bounded by $(\alpha^{D+1} - 1)\, c(k)$.
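A minimal sketch of the recursion in Proposition 2, on a toy two-leaf hierarchy, with a constant branching probability ε for all nodes (the paper allows a distinct ε_T per node) and the total-variation distance inside the base kernel k = exp(−ψ_TV/λ); all names here are ours:

```python
import math

def k_base(mu, mup, lam=1.0):
    # base kernel exp(-psi_TV / lambda) on histograms stored as dicts
    keys = set(mu) | set(mup)
    tv = sum(abs(mu.get(i, 0) - mup.get(i, 0)) for i in keys)
    return math.exp(-tv / lam)

def merge(hists, leaves):
    # mu_T = sum of the leaf measures below T
    out = {}
    for t in leaves:
        for i, v in hists.get(t, {}).items():
            out[i] = out.get(i, 0) + v
    return out

def k_pi(tree, mu, mup, eps=0.5, lam=1.0):
    """Factorized averaged kernel: K_T = (1-eps)*k(mu_T, mu'_T) + eps*prod K_U.
    `tree` maps a label T to the list of its siblings s(T), or to None when
    T belongs to the finest partition; "root" is the label of L."""
    def leaves_under(T):
        if tree[T] is None:
            return [T]
        return [l for U in tree[T] for l in leaves_under(U)]
    def K(T):
        kT = k_base(merge(mu, leaves_under(T)), merge(mup, leaves_under(T)), lam)
        if tree[T] is None:
            return kT
        prod = 1.0
        for U in tree[T]:
            prod *= K(U)
        return (1 - eps) * kT + eps * prod
    return K("root")

# toy binary hierarchy: root -> {l, r}, both of which are leaves
tree = {"root": ["l", "r"], "l": None, "r": None}
mu  = {"l": {"x": 1}, "r": {"y": 1}}
mup = {"l": {"y": 1}, "r": {"x": 1}}
print(k_pi(tree, mu, mup, eps=0.5, lam=1.0))
```

Here the two objects have identical global histograms but swapped local ones, so the result interpolates between the coarse score (1) and the product of fine scores (exp(−4)): with ε = 0.5 the value is 0.5 + 0.5·exp(−4). Each label's kernel is evaluated once, matching the linear-time claim above.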
This complexity is also upper-bounded by the total amount of components considered in the compared objects, as in (Cuturi & Vert, 2005) for instance. 3.3 Choosing the Base Kernel Any kernel on $M_+^b(\mathcal{X})$ can be used to comply with the terms of Definition 1 and apply an averaging scheme on families of measures. We also note that an even more general formulation can be obtained by using a different kernel $k_t$ for each label $t$ of $L$, without altering the overall applicability of the factorization above. However, we only consider in this discussion a unique choice of $k$ for all $t \in L$. First, one can note that kernels such as the information diffusion kernel (Lafferty & Lebanon, 2005) and variance-based kernels (Kondor & Jebara, 2003; Cuturi et al., 2005) may not work in this setting, since they are not p.d., nor sometimes even defined, on the whole of $M_+^b(\mathcal{X})$. The most adequate geometry of $M_+^b(\mathcal{X})$, following the denormalization scheme proposed in (Amari & Nagaoka, 2001, p.47), may arguably be derived from the Riemannian embedding $\nu \mapsto \sqrt{\nu}$, where the Euclidean distance between two measures in this representation is equal to the geodesic distance between $\nu$ and $\nu'$ in $M_+^b(\mathcal{X})$ endowed with the Fisher metric, as expressed in $\psi_{H^2}$ below. More generally, one can consider the whole family of kernels for bounded measures described in (Hein & Bousquet, 2005) to choose the base kernel $k$, namely the family of Hilbertian metrics $\psi$ such that $k = e^{-\frac{1}{\lambda}\psi}$. We thus use in our experiments the Jensen divergence, the $\chi^2$ distance, the total variation, and two variations of the Hellinger distance: $\psi_{JD}(\theta, \theta') = h\!\left(\frac{\theta + \theta'}{2}\right) - \frac{h(\theta) + h(\theta')}{2}$, $\psi_{\chi^2}(\theta, \theta') = \sum_i \frac{(\theta_i - \theta'_i)^2}{\theta_i + \theta'_i}$, $\psi_{TV}(\theta, \theta') = \sum_i |\theta_i - \theta'_i|$, $\psi_{H^2}(\theta, \theta') = \sum_i |\sqrt{\theta_i} - \sqrt{\theta'_i}|^2$, $\psi_{H^1}(\theta, \theta') = \sum_i |\sqrt{\theta_i} - \sqrt{\theta'_i}|$. 4 Experiments in Image Retrieval We present in this section experiments inspired by the image retrieval task first considered in (Chapelle et al., 1999) and reused in (Hein & Bousquet, 2005).
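The five base metrics listed above admit a direct sketch on dense histogram vectors; `_ent` below denotes the (unnormalized) Shannon entropy $h$ used by the Jensen divergence:

```python
import math

def _ent(p):
    # Shannon entropy term h(theta) = -sum_i theta_i log theta_i
    return -sum(v * math.log(v) for v in p if v > 0)

def psi_jd(t, tp):
    # Jensen divergence
    mid = [(a + b) / 2 for a, b in zip(t, tp)]
    return _ent(mid) - (_ent(t) + _ent(tp)) / 2

def psi_chi2(t, tp):
    # chi-squared distance
    return sum((a - b) ** 2 / (a + b) for a, b in zip(t, tp) if a + b > 0)

def psi_tv(t, tp):
    # total variation
    return sum(abs(a - b) for a, b in zip(t, tp))

def psi_h2(t, tp):
    # squared Hellinger distance
    return sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(t, tp))

def psi_h1(t, tp):
    # Hellinger H1 variant
    return sum(abs(math.sqrt(a) - math.sqrt(b)) for a, b in zip(t, tp))

t, tp = [0.5, 0.5], [1.0, 0.0]
for psi in (psi_jd, psi_chi2, psi_tv, psi_h2, psi_h1):
    print(psi.__name__, round(psi(t, tp), 4))
```

Any of these can then be plugged into the base kernel as k = exp(−ψ/λ), since each ψ vanishes on identical histograms and grows with their discrepancy.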
Our dataset was also extracted from the Corel Stock database and includes 12 families of labeled images, each class containing 100 color images of 256 × 384 pixels. [Figure 4: Misclassification rate on the Corel experiment, using the Hellinger $H^1$ distance between histograms coupled with one-vs-all SVM classification ($C = 100$) as a function of $\lambda$ and $\varepsilon$. $\frac{1}{\lambda}$ is taken in $\{2^{-12}, \cdots, 2^2\}$ while $\varepsilon$ spans $\{0, 0.1, \cdots, 0.9, 1\}$. $\varepsilon$ controls the granularity of the averaging kernel, ranging from the coarsest perspective ($\varepsilon = 0$), when only the global histogram is used, to the finest one ($\varepsilon = 1$), when only the finest histograms are considered. Dark values represent error rates greater than or equal to 24%. The central values are roughly 14.5%, while the best values obtained in the columns $\varepsilon = 0$ and $\varepsilon = 1$ are 18.4% and 17.3% respectively.] The families depict images of bears, African specialty animals, monkeys, cougars, fireworks, mountains, office interiors, bonsais, sunsets, clouds, apes, and rocks and gems. The database is randomly split into balanced sets of 800 training images and 400 test images. The task consists in classifying the test images with the rule learned by training 12 one-versus-all SVMs on the learning fold. Note that previous work conducted in (Chapelle et al., 1999) illustrates the competitiveness of SVMs in this context over other algorithms such as nearest neighbors. Our results are averaged over 3 random splits, using the Spider toolbox. We used 9 bits for the color of each pixel to reduce the size of the RGB color space to $8^3 = 512$ colors from the original set of $256^3 = 16{,}777{,}216$, and we defined centered grids of $4$, $4^2 = 16$ and $4^3 = 64$ local patches. We provide results for each of the 5 considered kernels and for each considered depth $D$ ranging from 1 to 3.
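The color preprocessing just described — keeping the top 3 bits of each RGB channel (9 bits total, hence 8³ = 512 bins) and histogramming colors per grid cell — can be sketched as follows; the function names are ours:

```python
def quantize_9bit(r, g, b):
    """Map a 24-bit RGB pixel to one of 8^3 = 512 color bins by keeping
    the top 3 bits of each channel."""
    return (r >> 5) * 64 + (g >> 5) * 8 + (b >> 5)

def patch_histograms(pixels, h, w, grid):
    """Histogram of quantized colors per cell of a grid x grid partition.
    `pixels` is a row-major list of (r, g, b) tuples of length h*w."""
    hists = [dict() for _ in range(grid * grid)]
    for idx, (r, g, b) in enumerate(pixels):
        y, x = divmod(idx, w)
        cell = (y * grid // h) * grid + (x * grid // w)
        c = quantize_9bit(r, g, b)
        hists[cell][c] = hists[cell].get(c, 0) + 1
    return hists

# 2x2 toy "image": one red and one green pixel on top, two blue pixels below
img = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 0, 255)]
hs = patch_histograms(img, h=2, w=2, grid=2)
print(hs)  # four one-pixel cells; e.g. pure red falls in bin 7*64 = 448
```

Running the same routine with grid = 1, 2, 4, 8 yields the nested families of histograms that the hierarchy $\mathcal{P}_0^3$ of Figure 2 operates on.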
Figure 5 presents $15 = 5 \times 3$ plots, where each plot displays the misclassification rate as a function of the width parameter $\frac{1}{\lambda}$ and the branching-process prior $\varepsilon$ set over all nodes of the tree. The constant $C$ is set to 100, but other choices for $C$ (1000 and 10) gave comparable plots, although slightly different in shape. By considering values of $\varepsilon$ ranging from 0 to 1, we aim at giving a sketch of the robustness of the averaging approach, since the SVMs seem to perform better when $0 < \varepsilon < 1$ for a large span of $\lambda$ values. For a better understanding of these plots, the reader may refer to Figure 4, which focuses on $\psi_{H^1}$ and $D = 2$, noting that the color scales used for Figures 4 and 5 are the same. Finally, the Gaussian kernel was also tested, but its very poor performance (with error rates above 22% for all parameters) illustrates once more that the Gaussian kernel is usually a poor choice to compare histograms directly. 5 Discussion The computation of averaged kernels can be performed almost as fast as that of kernels which only rely on fine resolutions, which, along with their robustness and improved performance, might advocate their use, notably as an extension of kernels based on arbitrary partitions (Grauman & Darrell, 2005; Matsuda et al., 2005). Principled ways of estimating in a semi-supervised setting both $\lambda$ and $\varepsilon$, or preferably localized priors $\lambda_T$ and $\varepsilon_T$, $T \in \mathcal{P}_0^D$, might give them an additional edge. This is a topic of current research; at the moment we suggest setting these parameters through cross-validation, while $H^1$ seems to be a reasonable choice to define the base kernel. Our approach is related to the Multiple Kernel Learning framework (Lanckriet et al., 2004), although we do not aim here at learning linear combinations of the kernels $k_T$, but rather start from a hierarchical belief on them to propose an algebraic combination.
Acknowledgments: This research was supported by the Function and Induction Research Project, Transdisciplinary Research Integration Center - Research Organization of Information and Systems. [Figure 5: Error-rate results for the different kernels ($H^1$, $H^2$, TV, $\chi^2$, JD) and depths ($D = 1, 2, 3$) are displayed in the same way as in Figure 4, using the same colorscale across experiments.] References Amari, S.-I., & Nagaoka, H. (2001). Methods of information geometry. AMS vol. 191. Berg, C., Christensen, J. P. R., & Ressel, P. (1984). Harmonic analysis on semigroups. No. 100 in Graduate Texts in Mathematics. Springer Verlag. Catoni, O. (2004). Statistical learning theory and stochastic optimization. No. 1851 in Lecture Notes in Mathematics. Springer Verlag. Chapelle, O., Haffner, P., & Vapnik, V. (1999). SVMs for histogram based image classification. IEEE Transactions on Neural Networks, 10, 1055. Cuturi, M., Fukumizu, K., & Vert, J.-P. (2005). Semigroup kernels on measures. JMLR, 6, 1169–1198. Cuturi, M., & Vert, J.-P. (2005). The context-tree kernel for strings. Neural Networks, 18, 1111–1123. Grauman, K., & Darrell, T. (2005). The pyramid match kernel: Discriminative classification with sets of image features. ICCV (pp. 1458–1465). IEEE Computer Society. Haussler, D. (1999). Convolution kernels on discrete structures (Technical Report). UC Santa Cruz. CRL-99-10. Hein, M., & Bousquet, O. (2005). Hilbertian metrics and positive definite kernels on probability measures. Proceedings of AISTATS. Joachims, T. (2002). Learning to classify text using support vector machines: Methods, theory, and algorithms. Kluwer Academic Publishers. Kondor, R., & Jebara, T. (2003). A kernel between sets of vectors. Proc. of ICML'03 (pp. 361–368). Lafferty, J., & Lebanon, G. (2005). Diffusion kernels on statistical manifolds. JMLR, 6, 129–163. Lanckriet, G., Cristianini, N., Bartlett, P., El Ghaoui, L., & Jordan, M. (2004). Learning the kernel matrix with semidefinite programming.
Journal of Machine Learning Research, 5, 27–72. Leslie, C., Eskin, E., Weston, J., & Noble, W. S. (2003). Mismatch string kernels for SVM protein classification. NIPS 15. MIT Press. Matsuda, A., Vert, J.-P., Saigo, H., Ueda, N., Toh, H., & Akutsu, T. (2005). A novel representation of protein sequences for prediction of subcellular location using support vector machines. Protein Sci., 14, 2804–2813. Rätsch, G., & Sonnenburg, S. (2004). Accurate splice site prediction for Caenorhabditis elegans, 277–298. MIT Press series on Computational Molecular Biology. MIT Press. Schölkopf, B., Tsuda, K., & Vert, J.-P. (2004). Kernel methods in computational biology. MIT Press. Vert, J.-P., Saigo, H., & Akutsu, T. (2004). Local alignment kernels for protein sequences. In B. Schölkopf, K. Tsuda and J.-P. Vert (Eds.), Kernel methods in computational biology. MIT Press.
Learning with Hypergraphs: Clustering, Classification, and Embedding Dengyong Zhou†, Jiayuan Huang‡, and Bernhard Schölkopf§ †NEC Laboratories America, Inc. 4 Independence Way, Suite 200, Princeton, NJ 08540, USA ‡School of Computer Science, University of Waterloo Waterloo ON, N2L3G1, Canada §Max Planck Institute for Biological Cybernetics Spemannstr. 38, 72076 Tübingen, Germany {dengyong.zhou, jiayuan.huang, bernhard.schoelkopf}@tuebingen.mpg.de Abstract We usually endow the investigated objects with pairwise relationships, which can be illustrated as graphs. In many real-world problems, however, relationships among the objects of our interest are more complex than pairwise. Naively squeezing the complex relationships into pairwise ones inevitably leads to loss of information that can be expected to be valuable for our learning tasks. We therefore consider using hypergraphs to completely represent complex relationships among the objects of our interest, and thus the problem of learning with hypergraphs arises. Our main contribution in this paper is to generalize the powerful methodology of spectral clustering, which originally operates on undirected graphs, to hypergraphs, and further to develop algorithms for hypergraph embedding and transductive classification on the basis of the spectral hypergraph clustering approach. Our experiments on a number of benchmarks show the advantages of hypergraphs over usual graphs. 1 Introduction In machine learning problem settings, we generally assume pairwise relationships among the objects of our interest. An object set endowed with pairwise relationships can be naturally illustrated as a graph, in which the vertices represent the objects, and any two vertices that have some kind of relationship are joined together by an edge. The graph can be undirected or directed, depending on whether the pairwise relationships among the objects are symmetric or not.
A finite set of points in Euclidean space associated with a kernel matrix is a typical example of an undirected graph. As to directed graphs, a well-known instance is the World Wide Web. A hyperlink can be thought of as a directed edge because, given an arbitrary hyperlink, we cannot expect that an inverse one necessarily exists; that is, hyperlink-based relationships are asymmetric [20]. However, in many real-world problems, representing a set of complex relational objects as an undirected or directed graph is not complete. To illustrate this point, consider the problem of grouping a collection of articles into different topics. Given an article, assume the only information we have is who wrote it. One may construct an undirected graph in which two vertices are joined by an edge if their corresponding articles have at least one author in common (Figure 1), and then apply an undirected-graph-based clustering approach, e.g. spectral graph techniques [7, 11, 16]. The undirected graph may further be embellished by assigning to each edge a weight equal to the number of authors in common. This method may sound natural, but within its graph representation we obviously lose the information on whether the same person co-wrote three or more of the articles.

Figure 1: Hypergraph vs. simple graph. Left: an author set E = {e1, e2, e3} and an article set V = {v1, v2, v3, v4, v5, v6, v7}. The entry (vi, ej) is set to 1 if ej is an author of article vi, and 0 otherwise. Middle: an undirected graph in which two articles are joined by an edge if they have at least one author in common. This graph cannot tell us whether the same person is the author of three or more articles. Right: a hypergraph that completely illustrates the complex relationships among authors and articles.
Such information loss is undesirable, because articles by the same person are likely to belong to the same topic, so this information is useful for our grouping task. A natural way of remedying the information loss in the above methodology is to represent the data as a hypergraph instead. A hypergraph is a graph in which an edge can connect more than two vertices [2]; in other words, an edge is a subset of vertices. In what follows, we shall uniformly refer to the usual undirected or directed graphs as simple graphs. Moreover, unless otherwise mentioned, the simple graphs we refer to are undirected. Obviously, a simple graph is a special kind of hypergraph in which each edge contains exactly two vertices. In the article clustering problem stated before, it is quite straightforward to construct a hypergraph with the vertices representing the articles and the edges the authors (Figure 1). Each edge contains all articles by its corresponding author. Moreover, we can put positive weights on the edges to encode any prior knowledge we have about the authors' work. For instance, for a person working on a broad range of fields, we may assign a relatively small value to the corresponding edge. Now we can completely represent the complex relationships among the objects by using a hypergraph. However, a new problem arises: how do we partition a hypergraph? This is the main problem that we want to solve in this paper. A powerful technique for partitioning simple graphs is spectral clustering. We therefore generalize spectral clustering techniques to hypergraphs, more specifically the normalized cut approach of [16]. Moreover, as in the case of simple graphs, a real-valued relaxation of the hypergraph normalized cut criterion leads to the eigendecomposition of a positive semidefinite matrix, which can be regarded as an analogue of the so-called Laplacian for simple graphs (cf. [5]); we hence suggestively call it the hypergraph Laplacian.
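The information loss discussed above is easy to see numerically. The following minimal sketch, assuming a hypothetical author/article incidence in the spirit of Figure 1 (the specific 0/1 entries are invented for illustration), builds the clique-expansion simple graph and shows that it only records pairwise co-authorship counts:

```python
import numpy as np

# Hypothetical incidence for the Figure 1 setting: rows = articles v1..v7,
# columns = authors e1..e3; H[v, e] = 1 if author e wrote article v.
H = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
    [0, 0, 1],
])

# Clique-expansion ("simple graph") adjacency: entry (u, v) counts the
# authors shared by articles u and v; the diagonal is zeroed out.
A_simple = H @ H.T
np.fill_diagonal(A_simple, 0)

# Author e1 wrote three articles (v1, v2, v3), but A_simple only contains
# pairwise counts, so this higher-order fact is no longer recoverable.
articles_by_e1 = H[:, 0].sum()
```

The hypergraph keeps each author as a single hyperedge, so the higher-order co-authorship structure survives; the simple graph does not.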
Consequently, we develop algorithms for hypergraph embedding and transductive inference based on the hypergraph Laplacian. There is in fact a large body of literature on hypergraph partitioning, arising from a variety of practical problems, such as partitioning circuit netlists [11], clustering categorical data [9], and image segmentation [1]. Unlike the present work, however, those approaches generally transform hypergraphs into simple graphs, using the heuristics discussed above or other domain-specific heuristics, and then apply spectral clustering techniques for simple graphs. [9] proposed an iterative approach that was indeed designed for hypergraphs; nevertheless, it is not a spectral method. In addition, [6] and [17] considered propagating label distributions on hypergraphs.

The structure of the paper is as follows. We first introduce some basic notions on hypergraphs in Section 2. In Section 3, we generalize the simple graph normalized cut to hypergraphs. As shown in Section 4, the hypergraph normalized cut has an elegant probabilistic interpretation based on a random walk naturally associated with a hypergraph. In Section 5, we introduce the real-valued relaxation used to approximately obtain hypergraph normalized cuts, along with the hypergraph Laplacian derived from this relaxation. In Section 6, we develop a spectral hypergraph embedding technique based on the hypergraph Laplacian. In Section 7, we address transductive inference on hypergraphs, that is, classifying the vertices of a hypergraph given that some of its vertices have been labeled. Experimental results are shown in Section 8, and we conclude the paper in Section 9.

2 Preliminaries

Let $V$ denote a finite set of objects, and let $E$ be a family of subsets $e$ of $V$ such that $\bigcup_{e \in E} e = V$. Then we call $G = (V, E)$ a hypergraph with vertex set $V$ and hyperedge set $E$. A hyperedge containing just two vertices is a simple graph edge.
A weighted hypergraph is a hypergraph with a positive number $w(e)$ associated with each hyperedge $e$, called the weight of hyperedge $e$. Denote a weighted hypergraph by $G = (V, E, w)$. A hyperedge $e$ is said to be incident with a vertex $v$ when $v \in e$. For a vertex $v \in V$, its degree is defined by $d(v) = \sum_{\{e \in E \mid v \in e\}} w(e)$. Given an arbitrary set $S$, let $|S|$ denote the cardinality of $S$. For a hyperedge $e \in E$, its degree is defined to be $\delta(e) = |e|$. We say that there is a hyperpath between vertices $v_1$ and $v_k$ when there is an alternating sequence of distinct vertices and hyperedges $v_1, e_1, v_2, e_2, \dots, e_{k-1}, v_k$ such that $\{v_i, v_{i+1}\} \subseteq e_i$ for $1 \leq i \leq k-1$. A hypergraph is connected if there is a hyperpath between every pair of vertices. In what follows, the hypergraphs we mention are always assumed to be connected. A hypergraph $G$ can be represented by a $|V| \times |E|$ matrix $H$ with entries $h(v, e) = 1$ if $v \in e$ and $0$ otherwise, called the incidence matrix of $G$. Then $d(v) = \sum_{e \in E} w(e) h(v, e)$ and $\delta(e) = \sum_{v \in V} h(v, e)$. Let $D_v$ and $D_e$ denote the diagonal matrices containing the vertex and hyperedge degrees respectively, and let $W$ denote the diagonal matrix containing the weights of the hyperedges. Then the adjacency matrix $A$ of hypergraph $G$ is defined as $A = H W H^T - D_v$, where $H^T$ is the transpose of $H$.

3 Normalized hypergraph cut

For a vertex subset $S \subset V$, let $S^c$ denote the complement of $S$. A cut of a hypergraph $G = (V, E, w)$ is a partition of $V$ into two parts $S$ and $S^c$. We say that a hyperedge $e$ is cut if it is incident with vertices in $S$ and $S^c$ simultaneously. Given a vertex subset $S \subset V$, define the hyperedge boundary $\partial S$ of $S$ to be the set of hyperedges that are cut, i.e. $\partial S := \{e \in E \mid e \cap S \neq \emptyset,\ e \cap S^c \neq \emptyset\}$, and define the volume $\operatorname{vol} S$ of $S$ to be the sum of the degrees of the vertices in $S$, that is, $\operatorname{vol} S := \sum_{v \in S} d(v)$. Moreover, define the volume of $\partial S$ by

$$\operatorname{vol} \partial S := \sum_{e \in \partial S} w(e)\, \frac{|e \cap S|\, |e \cap S^c|}{\delta(e)}. \qquad (1)$$

Clearly, we have $\operatorname{vol} \partial S = \operatorname{vol} \partial S^c$.
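The Section 2 quantities and the boundary volume of Equation (1) can be computed directly from the incidence matrix. A minimal sketch on a toy weighted hypergraph (two hyperedges, invented for illustration):

```python
import numpy as np

# Toy weighted hypergraph: 4 vertices, hyperedge e1 = {v1, v2} with weight 2
# and hyperedge e2 = {v2, v3, v4} with weight 1. H is the |V| x |E| incidence.
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [0, 1]], dtype=float)
w = np.array([2.0, 1.0])

W = np.diag(w)
d = H @ w                 # vertex degrees d(v) = sum_{e : v in e} w(e)
delta = H.sum(axis=0)     # hyperedge degrees delta(e) = |e|
D_v = np.diag(d)

# Adjacency matrix of the hypergraph: A = H W H^T - D_v.
A = H @ W @ H.T - D_v

# Volume of the boundary of S = {v1, v2} via Equation (1).
S = np.array([True, True, False, False])
in_S = H[S].sum(axis=0)            # |e ∩ S| for each hyperedge
in_Sc = H[~S].sum(axis=0)          # |e ∩ S^c| for each hyperedge
cut = (in_S > 0) & (in_Sc > 0)     # the hyperedge boundary ∂S
vol_dS = np.sum(w[cut] * in_S[cut] * in_Sc[cut] / delta[cut])
```

Here only e2 is cut, contributing w(e2)·|e2 ∩ S|·|e2 ∩ Sᶜ|/δ(e2) = 1·1·2/3 to vol ∂S.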
The definition in Equation (1) can be understood as follows. Imagine each hyperedge $e$ as a clique, i.e. a fully connected subgraph. To avoid confusion, we call the edges of such an imaginary subgraph subedges, and we assign the same weight $w(e)/\delta(e)$ to all subedges. Then, when a hyperedge $e$ is cut, exactly $|e \cap S|\,|e \cap S^c|$ subedges are cut, and hence a single summand in Equation (1) is the sum of the weights over the subedges that are cut. Naturally, we seek a partition in which the connections among the vertices in the same cluster are dense while the connections between the two clusters are sparse. Using the definitions introduced above, we may formalize this natural partition as

$$\operatorname*{argmin}_{\emptyset \neq S \subset V}\ c(S) := \operatorname{vol} \partial S \left( \frac{1}{\operatorname{vol} S} + \frac{1}{\operatorname{vol} S^c} \right). \qquad (2)$$

For a simple graph, $|e \cap S| = |e \cap S^c| = 1$ and $\delta(e) = 2$. Thus the right-hand side of Equation (2) reduces to the simple graph normalized cut [16] up to a factor $1/2$. In what follows, we explain the hypergraph normalized cut in terms of random walks.

4 Random walk explanation

We associate each hypergraph with a natural random walk, whose transition rule is as follows. Given the current position $u \in V$, first choose a hyperedge $e$ over all hyperedges incident with $u$, with probability proportional to $w(e)$, and then choose a vertex $v \in e$ uniformly at random. This obviously generalizes the natural random walk defined on simple graphs. Let $P$ denote the transition probability matrix of this hypergraph random walk. Then each entry of $P$ is

$$p(u, v) = \sum_{e \in E} w(e)\, \frac{h(u, e)}{d(u)}\, \frac{h(v, e)}{\delta(e)}. \qquad (3)$$

In matrix notation, $P = D_v^{-1} H W D_e^{-1} H^T$. The stationary distribution $\pi$ of the random walk is

$$\pi(v) = \frac{d(v)}{\operatorname{vol} V}, \qquad (4)$$

which follows from

$$\sum_{u \in V} \pi(u)\, p(u, v) = \sum_{u \in V} \frac{d(u)}{\operatorname{vol} V} \sum_{e \in E} \frac{w(e)\, h(u, e)\, h(v, e)}{d(u)\, \delta(e)} = \frac{1}{\operatorname{vol} V} \sum_{e \in E} \frac{w(e)\, h(v, e)}{\delta(e)} \sum_{u \in V} h(u, e) = \frac{1}{\operatorname{vol} V} \sum_{e \in E} w(e)\, h(v, e) = \frac{d(v)}{\operatorname{vol} V}.$$
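The transition matrix of Equation (3) and the stationary distribution of Equation (4) are easy to verify numerically. A minimal sketch on a toy weighted hypergraph (two hyperedges, invented for illustration):

```python
import numpy as np

# Toy weighted hypergraph: e1 = {v1, v2} with weight 2, e2 = {v2, v3, v4}
# with weight 1; H is the incidence matrix.
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [0, 1]], dtype=float)
w = np.array([2.0, 1.0])

W = np.diag(w)
d = H @ w
delta = H.sum(axis=0)

# Transition matrix of the hypergraph random walk: P = D_v^{-1} H W D_e^{-1} H^T.
P = np.diag(1.0 / d) @ H @ W @ np.diag(1.0 / delta) @ H.T

# Stationary distribution pi(v) = d(v) / vol V, as in Equation (4).
pi = d / d.sum()
```

Each row of P sums to one, and a direct multiplication confirms that pi is left-invariant under P, matching the derivation above.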
We can rewrite $c(S)$ as

$$c(S) = \frac{\operatorname{vol} \partial S}{\operatorname{vol} V} \left( \frac{1}{\operatorname{vol} S / \operatorname{vol} V} + \frac{1}{\operatorname{vol} S^c / \operatorname{vol} V} \right).$$

From Equation (4), we have

$$\frac{\operatorname{vol} S}{\operatorname{vol} V} = \sum_{v \in S} \frac{d(v)}{\operatorname{vol} V} = \sum_{v \in S} \pi(v), \qquad (5)$$

that is, the ratio $\operatorname{vol} S / \operatorname{vol} V$ is the probability with which the random walk occupies some vertex in $S$. Moreover, from Equations (3) and (4), we have

$$\frac{\operatorname{vol} \partial S}{\operatorname{vol} V} = \sum_{e \in \partial S} \frac{w(e)}{\operatorname{vol} V}\, \frac{|e \cap S|\, |e \cap S^c|}{\delta(e)} = \sum_{e \in \partial S} \sum_{u \in e \cap S} \sum_{v \in e \cap S^c} \frac{w(e)}{\operatorname{vol} V}\, \frac{h(u, e)\, h(v, e)}{\delta(e)} = \sum_{u \in S} \sum_{v \in S^c} \frac{d(u)}{\operatorname{vol} V} \sum_{e \in E} \frac{w(e)\, h(u, e)}{d(u)}\, \frac{h(v, e)}{\delta(e)} = \sum_{u \in S} \sum_{v \in S^c} \pi(u)\, p(u, v), \qquad (6)$$

that is, the ratio $\operatorname{vol} \partial S / \operatorname{vol} V$ is the probability with which one sees a jump of the random walk from $S$ to $S^c$ under the stationary distribution. From Equations (5) and (6), we can understand the hypergraph normalized cut criterion as follows: look for a cut such that the probability with which the random walk crosses between the two clusters is as small as possible, while the probability with which it stays within the same cluster is as large as possible. It is worth pointing out that this random walk view is consistent with that for the simple graph normalized cut [13]. This consistency indicates that our generalization of the normalized cut approach from simple graphs to hypergraphs is reasonable.

5 Spectral hypergraph partitioning

As in [16], the combinatorial optimization problem given by Equation (2) is NP-complete, but it can be relaxed into the real-valued optimization problem

$$\operatorname*{argmin}_{f \in \mathbb{R}^{|V|}}\ \frac{1}{2} \sum_{e \in E} \sum_{\{u, v\} \subseteq e} \frac{w(e)}{\delta(e)} \left( \frac{f(u)}{\sqrt{d(u)}} - \frac{f(v)}{\sqrt{d(v)}} \right)^2 \quad \text{subject to} \quad \sum_{v \in V} f^2(v) = 1, \quad \sum_{v \in V} f(v) \sqrt{d(v)} = 0.$$

We define the matrices $\Theta = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}$ and $\Delta = I - \Theta$, where $I$ denotes the identity matrix. Then it can be verified that

$$\sum_{e \in E} \sum_{\{u, v\} \subseteq e} \frac{w(e)}{\delta(e)} \left( \frac{f(u)}{\sqrt{d(u)}} - \frac{f(v)}{\sqrt{d(v)}} \right)^2 = 2 f^T \Delta f.$$

Note that this also shows that $\Delta$ is positive semidefinite. One can check that the smallest eigenvalue of $\Delta$ is $0$, with corresponding eigenvector $\sqrt{d}$ (componentwise).
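The matrices Θ and Δ, the positive semidefiniteness of Δ, its zero eigenvalue with eigenvector √d, and the resulting two-way partition can all be checked numerically. A minimal sketch on a toy weighted hypergraph (two hyperedges, invented for illustration):

```python
import numpy as np

# Toy weighted hypergraph: e1 = {v1, v2} with weight 2, e2 = {v2, v3, v4}
# with weight 1.
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [0, 1]], dtype=float)
w = np.array([2.0, 1.0])

d = H @ w
delta = H.sum(axis=0)
Dv_isqrt = np.diag(1.0 / np.sqrt(d))

# Theta = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} and Delta = I - Theta.
Theta = Dv_isqrt @ H @ np.diag(w) @ np.diag(1.0 / delta) @ H.T @ Dv_isqrt
Delta = np.eye(4) - Theta

# eigh returns eigenvalues in ascending order for a symmetric matrix.
eigvals, eigvecs = np.linalg.eigh(Delta)

# Two-way partition from the sign pattern of the eigenvector associated
# with the smallest nonzero eigenvalue.
phi = eigvecs[:, 1]
S = phi >= 0
```

The smallest eigenvalue comes out as 0 with eigenvector proportional to √d, and the sign pattern of the next eigenvector keeps the two symmetric vertices v3 and v4 on the same side of the cut.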
Therefore, by standard results in linear algebra, the solution to the relaxed optimization problem is an eigenvector $\Phi$ of $\Delta$ associated with its smallest nonzero eigenvalue. The vertex set is then clustered into the two parts $S = \{v \in V \mid \Phi(v) \geq 0\}$ and $S^c = \{v \in V \mid \Phi(v) < 0\}$. For a simple graph, the edge degree matrix $D_e$ reduces to $2I$. Thus

$$\Delta = I - \tfrac{1}{2} D_v^{-1/2} H W H^T D_v^{-1/2} = I - \tfrac{1}{2} D_v^{-1/2} (D_v + A) D_v^{-1/2} = \tfrac{1}{2} \left( I - D_v^{-1/2} A D_v^{-1/2} \right),$$

which coincides with the simple graph Laplacian up to a factor of $1/2$. We therefore suggestively call $\Delta$ the hypergraph Laplacian. As in [20], where the spectral clustering methodology is generalized from undirected to directed simple graphs, we may consider generalizing the present approach to directed hypergraphs [8]. A directed hypergraph is a hypergraph in which each hyperedge $e$ is an ordered pair $(X, Y)$, where $X \subseteq V$ is the tail of $e$ and $Y \subseteq V \setminus X$ is the head. Directed hypergraphs have been used to model various practical problems, from biochemical networks [15] to natural language parsing [12].

6 Spectral hypergraph embedding

As in the simple graph case [4, 10], it is straightforward to extend the spectral hypergraph clustering approach to $k$-way partitioning. Denote a $k$-way partition by $(V_1, \dots, V_k)$, where $V_1 \cup V_2 \cup \dots \cup V_k = V$ and $V_i \cap V_j = \emptyset$ for all $i \neq j$. We may obtain a $k$-way partition by minimizing $c(V_1, \dots, V_k) = \sum_{i=1}^{k} \operatorname{vol} \partial V_i / \operatorname{vol} V_i$ over all $k$-way partitions. Similarly, this combinatorial optimization problem can be relaxed into a real-valued one, whose solution can be any orthogonal basis of the linear space spanned by the eigenvectors of $\Delta$ associated with the $k$ smallest eigenvalues.

Theorem 1. Assume a hypergraph $G = (V, E, w)$ with $|V| = n$. Denote the eigenvalues of the Laplacian $\Delta$ of $G$ by $\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_n$. Define $c_k(G) = \min c(V_1, \dots, V_k)$, where the minimization is over all $k$-way partitions. Then $\sum_{i=1}^{k} \lambda_i \leq c_k(G)$.

Proof. Let $r_i$ be an $n$-dimensional vector defined by $r_i(v) = 1$ if $v \in V_i$, and $0$ otherwise.
Then

$$c(V_1, \dots, V_k) = \sum_{i=1}^{k} \frac{r_i^T (D_v - H W D_e^{-1} H^T)\, r_i}{r_i^T D_v r_i}.$$

Define $s_i = D_v^{1/2} r_i$ and $f_i = s_i / \|s_i\|$, where $\|\cdot\|$ denotes the usual Euclidean norm. Thus

$$c(V_1, \dots, V_k) = \sum_{i=1}^{k} f_i^T \Delta f_i = \operatorname{tr} F^T \Delta F,$$

where $F = [f_1 \cdots f_k]$. Clearly, $F^T F = I$. Allowing the elements of $r_i$ to take arbitrary continuous values rather than Boolean ones only, we have

$$c_k(G) = \min c(V_1, \dots, V_k) \geq \min_{F^T F = I} \operatorname{tr} F^T \Delta F = \sum_{i=1}^{k} \lambda_i.$$

The last equality follows from standard results in linear algebra. This completes the proof.

The above result also shows that the relaxed real-valued optimization problem provides a lower bound on the original combinatorial optimization problem. Unlike 2-way partitioning, however, it is unclear how to utilize multiple eigenvectors simultaneously to obtain a $k$-way partition. Many heuristics have been proposed for simple graphs, and they can be applied here as well. Perhaps the most popular one is the following [14]. First form a matrix $X = [\Phi_1 \cdots \Phi_k]$, where the $\Phi_i$ are the eigenvectors of $\Delta$ associated with the $k$ smallest eigenvalues. Then the row vectors of $X$ are regarded as the representations of the graph vertices in $k$-dimensional Euclidean space. These vectors are generally expected to be well separated, and consequently a good partition can be obtained simply by running $k$-means on them. [18] resorted to a semidefinite relaxation model for the $k$-way normalized cut instead of the relatively loose spectral relaxation, and thereby obtained a more accurate solution. It seems reasonable to expect that the improved solution will lead to improved clustering; as reported in [18], however, the expected improvement does not occur in practice.

7 Transductive inference

We have established algorithms for spectral hypergraph clustering and embedding. Now we consider transductive inference on hypergraphs.
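The k-way heuristic of [14] described above (embed the vertices with the k smallest eigenvectors of Δ, then cluster the rows) can be sketched as follows; the toy hypergraph is invented for illustration, and the bare-bones Lloyd iteration stands in for a library k-means:

```python
import numpy as np

def kway_embed(H, w, k):
    """Rows of the k smallest eigenvectors of the hypergraph Laplacian
    give k-dimensional Euclidean coordinates for the vertices."""
    d = H @ w
    delta = H.sum(axis=0)
    Dv_isqrt = np.diag(1.0 / np.sqrt(d))
    Theta = Dv_isqrt @ H @ np.diag(w) @ np.diag(1.0 / delta) @ H.T @ Dv_isqrt
    Delta = np.eye(H.shape[0]) - Theta
    _, vecs = np.linalg.eigh(Delta)   # eigh: ascending eigenvalue order
    return vecs[:, :k]

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd iteration (a stand-in for a library k-means)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):   # guard against empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy hypergraph: e1 = {v1, v2} with weight 2, e2 = {v2, v3, v4} with weight 1.
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [0, 1]], dtype=float)
w = np.array([2.0, 1.0])

X = kway_embed(H, w, k=2)
labels = kmeans(X, k=2)
```

Vertices v3 and v4 have identical incidence rows, hence identical embedding coordinates, so any k-means assignment places them in the same cluster.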
Specifically, given a hypergraph $G = (V, E, w)$ in which the vertices in a subset $S \subset V$ have labels in $L = \{1, -1\}$, our task is to predict the labels of the remaining unlabeled vertices. Basically, we should try to assign the same label to all vertices contained in the same hyperedge. It is in fact straightforward to derive a transductive inference approach from a clustering scheme. Let $f : V \to \mathbb{R}$ denote a classification function, which assigns the label $\operatorname{sign} f(v)$ to a vertex $v \in V$. Given an objective functional $\Omega(\cdot)$ from some clustering approach, one may choose a classification function by

$$\operatorname*{argmin}_{f \in \mathbb{R}^{|V|}}\ \{ R_{\mathrm{emp}}(f) + \mu\, \Omega(f) \},$$

where $R_{\mathrm{emp}}(f)$ denotes a chosen empirical loss, such as the least squares loss or the hinge loss, and $\mu > 0$ is the regularization parameter. Since normalized cuts are in general thought to be superior to mincuts, the transductive inference approach used in the experiments below is built on the spectral hypergraph clustering method above. Consequently, as shown in [20], with the least squares loss function the classification function is finally given by $f = (I - \xi \Theta)^{-1} y$, where the elements of $y$ denote the initial labels and $\xi$ is a parameter in $(0, 1)$. For a survey of transductive inference, we refer the reader to [21].

8 Experiments

All datasets except a particular version of the 20-newsgroup one are from the UCI Machine Learning Repository. They are usually referred to as so-called categorical data: each instance in those datasets is described by one or more attributes, each attribute takes only a small number of values, and each value corresponds to a specific category. Attribute values cannot be naturally ordered linearly as numerical values can [9]. In our experiments, we constructed a hypergraph for each dataset, in which attribute values were regarded as hyperedges. The weights of all hyperedges were simply set to 1; how to choose suitable weights is certainly an important problem requiring additional exploration, however.
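The experimental pipeline just described — attribute values as hyperedges, unit weights, and the transductive rule f = (I − ξΘ)⁻¹y — can be sketched end-to-end. The tiny categorical table below is invented for illustration (the attribute names are hypothetical):

```python
import numpy as np

# Hypothetical 5-instance categorical dataset with 2 attributes; every
# attribute value becomes one hyperedge, as in the experimental setup.
data = [("round", "red"),
        ("round", "red"),
        ("round", "green"),
        ("flat",  "green"),
        ("flat",  "green")]

# One hyperedge per (attribute index, value) pair; unit weights.
values = sorted({(j, v) for row in data for j, v in enumerate(row)})
H = np.array([[1.0 if row[j] == v else 0.0 for (j, v) in values]
              for row in data])
w = np.ones(H.shape[1])

d = H @ w
delta = H.sum(axis=0)
Dv_isqrt = np.diag(1.0 / np.sqrt(d))
Theta = Dv_isqrt @ H @ np.diag(w) @ np.diag(1.0 / delta) @ H.T @ Dv_isqrt

# Transductive rule f = (I - xi * Theta)^{-1} y: label the first instance
# +1 and the last one -1, leave the rest at 0.
xi = 0.5
y = np.array([1.0, 0.0, 0.0, 0.0, -1.0])
f = np.linalg.solve(np.eye(len(y)) - xi * Theta, y)
labels = np.sign(f)
```

Instances sharing hyperedges with the positive seed come out positive, those sharing hyperedges with the negative seed come out negative, and the perfectly symmetric middle instance receives a score of zero.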
We also constructed a simple graph for each dataset, and the simple-graph spectral clustering based approach of [19] was used as the baseline. Those simple graphs were constructed in the way discussed at the beginning of Section 1, which essentially defines pairwise relationships among the objects via the adjacency matrices of the hypergraphs.

The first task we addressed is embedding the animals of the zoo dataset into Euclidean space. This dataset contains 100 animals with 17 attributes, including hair, feathers, eggs, milk, legs, tail, etc. The animals have been manually classified into 7 different categories. We embedded the animals into Euclidean space using the eigenvectors of the hypergraph Laplacian associated with the smallest eigenvalues (Figure 2). For animals having the same attributes, we randomly chose one as their representative to put in the figures. It is apparent that the animals are well separated in their Euclidean representations.

Figure 2: Embedding the zoo dataset. Left panel: the eigenvectors with the 2nd and 3rd smallest eigenvalues; right panel: the eigenvectors with the 3rd and 4th smallest eigenvalues. Note that dolphin is between class 1 (denoted by ◦), containing the animals that have milk and live on land, and class 4 (denoted by ⋄), containing the animals that live in the sea.

Figure 3: Classification on complex relational data. (a)-(c) Test error versus number of labeled points on the mushroom, 20-newsgroup, and letter datasets, for both the hypergraph based approach and the simple graph based approach. (d) The influence of α in letter recognition with 100 labeled instances.

Moreover, it deserves a further look that seal and dolphin are mapped to positions between class 1, consisting of the animals that have milk and live on land, and class 4, consisting of the animals that live in the sea. A similar observation also holds for seasnake.

The second task is classification on the mushroom dataset, which contains 8124 instances described by 22 categorical attributes, such as shape, color, etc. We removed the 11th attribute, which has missing values. Each instance is labeled as edible or poisonous; the two classes contain 4208 and 3916 instances, respectively. The third task is text categorization on a modified 20-newsgroup dataset with binary occurrence values for 100 words across 16242 articles (see http://www.cs.toronto.edu/~roweis). The articles belong to 4 different topics corresponding to the highest level of the original 20 newsgroups, with sizes 4605, 3519, 2657 and 5461, respectively.
The final task is to predict the letter categories in the letter dataset, in which each instance is described by 16 primitive numerical attributes (statistical moments and edge counts). We used a subset containing the instances of the letters A to E, with sizes 789, 766, 736, 805 and 768, respectively. The experimental results of the above three tasks are shown in Figures 3(a)-3(c). The regularization parameter α is fixed at 0.1. Each test error is averaged over 20 trials. The results show that the hypergraph based method is consistently better than the baseline. The influence of α in the letter recognition task is shown in Figure 3(d). Interestingly, α influences the baseline much more than the hypergraph based approach.

9 Conclusion

We generalized spectral clustering techniques to hypergraphs, and developed algorithms for hypergraph embedding and transductive inference. It would be interesting to apply the present methodology to a broader range of practical problems. We are particularly interested in the following two. One is biological network analysis [17]. Biological networks have so far mainly been modeled as simple graphs; it might be more sensible to model them as hypergraphs instead, so that complex interactions are completely taken into account. The other is social network analysis. As recently pointed out by [3], many social transactions are supra-dyadic: they either involve more than two actors or they involve numerous aspects of the setting of interaction. Standard network techniques are therefore not adequate for analyzing these networks. Consequently, [3] resorted to the concept of a hypergraph, and showed how the concept of network centrality can be adapted to hypergraphs.

References

[1] S. Agarwal, L. Zelnik-Manor, J. Lim, P. Perona, D. Kriegman, and S. Belongie. Beyond pairwise clustering. In IEEE Conf. on Computer Vision and Pattern Recognition, 2005. [2] C. Berge. Hypergraphs.
North-Holland, Amsterdam, 1989. [3] P. Bonacich, A.C. Holdren, and M. Johnston. Hyper-edges and multi-dimensional centrality. Social Networks, 26(3):189–203, 2004. [4] P.K. Chan, M.D.F. Schlag, and J. Zien. Spectral k-way ratio cut partitioning and clustering. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 13(9):1088–1096, 1994. [5] F. Chung. Spectral Graph Theory. Number 92 in CBMS Regional Conference Series in Mathematics. American Mathematical Society, Providence, RI, 1997. [6] A. Corduneanu and T. Jaakkola. Distributed information regularization on graphs. In Advances in Neural Information Processing Systems 17, Cambridge, MA, 2005. MIT Press. [7] M. Fiedler. Algebraic connectivity of graphs. Czechoslovak Mathematical Journal, 23(98):298–305, 1973. [8] G. Gallo, G. Longo, and S. Pallottino. Directed hypergraphs and applications. Discrete Applied Mathematics, 42(2):177–201, 1993. [9] D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamical systems. VLDB Journal, 8(3-4):222–236, 2000. [10] M. Gu, H. Zha, C. Ding, X. He, and H. Simon. Spectral relaxation models and structure analysis for k-way graph clustering and bi-clustering. Technical Report CSE-01-007, Department of Computer Science and Engineering, Pennsylvania State University, 2001. [11] L. Hagen and A.B. Kahng. New spectral methods for ratio cut partitioning and clustering. IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 11(9):1074–1085, 1992. [12] D. Klein and C. Manning. Parsing and hypergraphs. In Proc. 7th Intl. Workshop on Parsing Technologies, 2001. [13] M. Meila and J. Shi. A random walks view of spectral segmentation. In Proc. 8th Intl. Workshop on Artificial Intelligence and Statistics, 2001. [14] A.Y. Ng, M.I. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press. [15] J.S. Oliveira, J.B.
Jones-Oliveira, D.A. Dixon, C.G. Bailey, and D.W. Gull. Hyperdigraph-theoretic analysis of the EGFR signaling network: Initial steps leading to GTP:Ras complex formation. Journal of Computational Biology, 11(5):812–842, 2004. [16] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000. [17] K. Tsuda. Propagating distributions on a hypergraph by dual information regularization. In Proc. 22nd Intl. Conf. on Machine Learning, 2005. [18] E.P. Xing and M.I. Jordan. On semidefinite relaxation for normalized k-cut and connections to spectral clustering. Technical Report CSD-03-1265, Division of Computer Science, University of California, Berkeley, 2003. [19] D. Zhou, O. Bousquet, T.N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16, Cambridge, MA, 2004. MIT Press. [20] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled and unlabeled data on a directed graph. In Proc. 22nd Intl. Conf. on Machine Learning, 2005. [21] X. Zhu. Semi-supervised learning literature survey. Technical Report Computer Sciences 1530, University of Wisconsin-Madison, 2005.
Learning Dense 3D Correspondence Florian Steinke∗, Bernhard Schölkopf∗, Volker Blanz+ ∗Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany {steinke, bs}@tuebingen.mpg.de +Universität Siegen, 57068 Siegen, Germany blanz@mpi-sb.mpg.de

Abstract

Establishing correspondence between distinct objects is an important and nontrivial task: the correctness of a correspondence hinges on properties that are difficult to capture in an a priori criterion. While previous work has used a priori criteria which in some cases led to very good results, the present paper explores whether it is possible to learn a combination of features that, for a given training set of aligned human heads, characterizes the notion of correct correspondence. By optimizing this criterion, we are then able to compute correspondences and morphs for novel heads.

1 Introduction

Establishing 3D correspondence between surfaces such as human faces is a crucial element of class-specific representations of objects in computer vision and graphics. On faces, for example, corresponding points may be the tips of the noses in 3D scans of different individuals. Dense correspondence is a mapping or "warp" from all points of one surface onto another surface (in some cases, including the present work, extending from the surface to the embedding space). Once this mapping is established, it is straightforward, for instance, to compute morphs between objects. More importantly, if correspondence mappings between a class of objects and a reference object have been established, we can represent each object by its mapping, leading to a linear representation that can also describe new objects of similar shape and texture (for further details, see [1]). The practical relevance of surface correspondence has been increasing over recent years. In computer graphics, applications include morphing, shape modeling, and animation.
In computer vision, an increasing number of algorithms for face and object recognition based on 2D images or 3D scans, as well as shape retrieval in databases and 3D surface reconstruction from images, rely on shape representations that are built upon dense surface correspondence. Unlike existing algorithms that define some ad-hoc criteria for identifying corresponding points on two objects, we treat correspondence as a machine learning problem and propose a data-driven approach that learns the relevant criteria from a dataset of given object correspondences. In stereo vision and optical flow [2, 3], a correspondence is correct if and only if it maps a point in one scene to a point in another scene which stems from the same physical point. In contrast, correspondence between different objects is not a well-defined problem. When two faces are compared, only some anatomically unique features such as the corners of the eyes are clearly corresponding, while it may be difficult to define how smooth regions, such as the cheeks and the forehead, are supposed to be mapped onto each other. On a more fundamental level, however, even the problem of matching the eyes is difficult to cast in a formal way, and in fact this matching involves many of the basic problems of computer vision and feature detection. In a given application, the desired correspondence can be dependent on anatomical facts, measures of shape similarity, or the overall layout of features on the surface. However, it may also depend on the properties of human perception, on functional or semantic issues, on the context within a given object class or even on social convention. Due to the problematic and challenging nature of the correspondence problem, our correspondence learning algorithm may be a more appropriate approach than existing techniques, as it is often easier to provide a set of examples of the desired correspondences than a formal criterion for correct correspondence. 
In a nutshell, the main idea of our approach is as follows. Given two objects O1 and O2, we are seeking a correspondence mapping τ such that certain properties of x (relative to O1) are preserved in τ(x) (relative to O2) — they are invariant. These properties depend on the object class and as explained above, we cannot hope to characterize them comprehensively a priori. However, if we are given examples of correct and incorrect correspondences, we can attempt to learn properties which are invariant for correct correspondences, while for incorrect correspondences, they are not. We shall do this by providing a dictionary of potential properties (such as geometric features, or texture properties) and approximating a “true” property characterizing correspondence as an expansion in that dictionary. We will call this property warp-invariant feature and show that its computation can be cast as a problem of oriented PCA. The remainder of the paper is structured as follows: in Section 2 we review some related work, whereas in Section 3 we set up our general framework for computing correspondence fields. Following this, we explain in Section 4 how to learn the characteristic properties for correspondence and continue to explain two new feature functions in Section 5. We give implementation details and experimental results in Section 6 and conclude in Section 7. 2 Related Work The problem of establishing dense correspondence has been addressed in the domain of 2D images, on surfaces embedded in 3D space, and on volumetric data. In the image domain, correspondence from optical flow [2, 3] has been used to describe the transformations of faces with pose changes and facial expressions [4], and to describe the differences in the shapes of individual faces [5]. An algorithm for computing correspondence on parameterized 3D surfaces has been introduced for creating a class-specific representation of human faces [1] and bodies [6]. 
[7] propose a method designed to align three-dimensional medical images using a mutual information criterion. Another interesting approach is [8]: the authors formulate the problem in a probabilistic setup and then apply standard graphical model inference algorithms to compute the correspondence. Their mesh-based method uses a smoothness functional and features based on spin images. See the review [9] for an overview of a wide range of additional correspondence algorithms. Algorithms that are applied to 3D faces typically rely on surface parameterizations, such as cylindrical coordinates, and then compute optical flow on the texture map as well as the depth image [1]. This algorithm yields plausible results, to which we will compare our method. However, the approach cannot be applied unless a parameterization is possible and the distortions are low for all elements of the object class. Even for faces this is a problem, for example around the ears, which makes a more general, truly 3D approach preferable. One such algorithm is presented in [10]: here, the surfaces are embedded into the surrounding space and a 3D volume deformation is computed. The use of the signed distance function as a guiding feature ensures correct surface-to-surface mappings. We build on this approach, which is presented in more detail in Section 3. A common local geometric feature is surface curvature. Though implicit surface representations allow the extraction of such features [11], these differential geometric properties are inherently unstable with respect to noise. [12] propose a related 3D geometric feature which is based on integrals and is thus more stable to compute. We present a slightly modified version that can be computed much more easily from a signed distance function represented as a kernel expansion, avoiding the complete space voxelization step required in [12].
3 General Framework for Computing Correspondence

In order to formalize our understanding of correspondence, let us assume that all the objects $O$ of a class $\mathcal{O}$ are embedded in $X \subseteq \mathbb{R}^3$. Given a reference object $O^r$ and a target $O^t$, the goal of computing a correspondence can then be expressed as determining the deformation function $\tau : X \to X$ which maps each point $x \in X$ on $O^r$ to its corresponding point $\tau(x)$ on $O^t$. We further assume that we can construct a dictionary of so-called feature functions $f_i : X \to \mathbb{R}$, $i = 1, \dots, n$, capturing certain characteristic properties of the objects. [10] propose to use the signed distance function, which assigns to each point $x \in X$ the distance to the object's surface — with positive sign outside the shape and negative sign inside. They also use the first derivative of the signed distance function, which can be interpreted as the surface normal. In Section 5 we will propose two additional features which are characteristic for 3D shapes, namely a curvature-related feature and surface texture. We assume that the warp-invariant feature can be represented, or at least approximated, by an expansion in this dictionary. Let $\gamma : X \to \mathbb{R}^n$ be a weighting function describing the relative importance of the different elements of the dictionary at a given location in $X$. We then express the warp-invariant feature as $f_\gamma : X \to \mathbb{R}$, $f_\gamma(x) = \sum_{i=1}^{n} \gamma_i(x) f_i(x)$, with feature functions $f_i$ that are object specific; for the target object there is a slight modification in that the space-variant weighting $\gamma(x)$ needs to refer to the coordinates of the reference object if we want to avoid comparing apples and oranges. We thus use $f_\gamma^t(x) = \sum_{i=1}^{n} \gamma_i(\tau^{-1}(x)) f_i^t(x)$, where we never have to evaluate $\tau^{-1}$ since we will only require $f_\gamma^t(\tau(x))$ below. To determine a mapping $\tau$ which will establish correct correspondences between $x$ and $\tau(x)$, we minimize the functional

$$C_{\mathrm{reg}} \|\tau\|_H^2 + \int_X \left( f_\gamma^r(x) - f_\gamma^t(\tau(x)) \right)^2 d\mu(x) \qquad (1)$$

The first term expresses a prior belief in a smooth deformation.
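To make the expansion concrete, here is a small Python sketch of $f_\gamma$ and a Monte-Carlo version of the data term in (1). The two dictionary entries and the constant weighting below are invented for the example; the actual dictionary contains the signed distance function, its derivative, and the features of Section 5.

```python
import numpy as np

# Hypothetical two-entry dictionary of feature functions f_i : R^3 -> R.
def f_sdf(x):
    # signed distance to a unit sphere centered at the origin
    return np.linalg.norm(x, axis=-1) - 1.0

def f_radial(x):
    # a second, made-up dictionary entry
    return np.linalg.norm(x, axis=-1) ** 2

dictionary = [f_sdf, f_radial]

def f_gamma(x, gamma):
    """Warp-invariant feature f_gamma(x) = sum_i gamma_i(x) * f_i(x)."""
    feats = np.stack([f(x) for f in dictionary], axis=-1)
    return np.sum(gamma * feats, axis=-1)

def data_term(tau, gamma_fn, samples):
    """Monte-Carlo estimate of the integral in (1): the weights gamma
    are always evaluated in the coordinates of the reference object."""
    vals = [(f_gamma(x, gamma_fn(x)) - f_gamma(tau(x), gamma_fn(x))) ** 2
            for x in samples]
    return float(np.mean(vals))

identity = lambda x: x
pts = np.random.default_rng(0).normal(size=(100, 3))
gamma_const = lambda x: np.array([1.0, 0.5])
# the identity deformation leaves every feature value unchanged
assert data_term(identity, gamma_const, pts) == 0.0
```

For any other deformation the data term is positive wherever the warp-invariant feature is not preserved, which is exactly what the minimization of (1) penalizes.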
This is important in regions where the objects are not sufficiently characteristic to specify a good correspondence. As we will use a Support Vector framework to represent $\tau$, smoothness can readily be expressed as the RKHS norm $\|\tau\|_H$ of the non-parametric part of the deformation function $\tau$ (see Section 6). The second term measures the local similarity of the warp-invariant feature function extracted on the reference object, $f_\gamma^r$, and on the target object, $f_\gamma^t$, and integrates it over the volume of interest. This formulation is a modification of [10], where two feature functions were chosen a priori (the signed distance and its derivative) and used instead of $f_\gamma$. The motivation for this is that for a correct morph, these functions should be reasonably invariant. In contrast, the present approach starts from the notion of invariance and estimates a location-dependent linear combination of feature functions with a maximal degree of invariance for correct correspondences (cf. next section). We consider location-dependent linear combinations since one cannot expect that all the feature functions that define correspondence are equally important for all points of an object. For example, color may be more characteristic around the lips or the eyes than on the forehead. This comes at the cost, however, of increasing the number of free parameters, leading to potential difficulties when performing model selection. As discussed above, it is unclear how to characterize and evaluate correspondence in a principled way. The authors of [10] propose a strategy based on a two-way morph: they first compute a deformation from the reference object to the target, and afterwards vice versa.
A necessary condition for a correct morph is then that the concatenation of the two deformations yields a mapping close to the identity. (It is not a sufficient condition, since the concatenation of, say, two identity mappings will also yield the identity.) Although this method can provide a partial quality criterion even when no ground truth is available, all model selection approaches based on such a criterion need to minimize (1) many times, and the computation of a gradient with respect to the parameters is usually not possible. As the minimization is typically non-convex and rather expensive, the number of free parameters that can be optimized is small. For locally varying parameters as proposed here, such an approach is not practical. We thus propose to learn the parameters from examples using an invariance criterion introduced in the next section.

4 Learning the Optimal Feature Function

We assume that a database of D objects that are already in correspondence is available. This could for example be achieved by manually picking many corresponding point pairs and training a regression to map all the points onto each other, or by (semi-)automatic methods optimized for the given object class (e.g., [1]). We can then determine the optimal approximation of the warp-invariant feature function (as defined in the introduction) that characterizes correspondence using the basic features in our dictionary. The warp-invariant feature function should be such that it varies little or not at all for corresponding points, while its value should not be preserved (and should have large variance) for random non-matching points. To approximate it, we propose to maximize the ratio of these variances over all weighting functions $\gamma$. Thus for each point $x \in X$, we maximize

$$\frac{\mathbb{E}_{d,z_d}\left[\left(f_\gamma^r(x) - f_\gamma^d(z_d)\right)^2\right]}{\mathbb{E}_d\left[\left(f_\gamma^r(x) - f_\gamma^d(\tau_d(x))\right)^2\right]} \qquad (2)$$

Here, $f_\gamma^r$ and $f_\gamma^d$ are the warp-invariant feature functions evaluated on the reference object and the d-th database object, respectively.
$\tau_d(x)$ is the point matching $x$ on the d-th database object and $z_d$ is a random point sampled from it. We take the expectations over all objects in our database, as well as over non-corresponding points randomly sampled from the objects. Because of the linear dependence of $f_\gamma$ on $\gamma$, one can rewrite the problem as the maximization of

$$\frac{\gamma(x)^T C_z(x) \gamma(x)}{\gamma(x)^T C_\tau(x) \gamma(x)} \qquad (3)$$

with the empirical covariances

$$[C_\tau(x)]_{i,j} = \sum_{d=1}^{D} \left(f_i^r(x) - f_i^d(\tau_d(x))\right) \left(f_j^r(x) - f_j^d(\tau_d(x))\right), \qquad (4)$$

$$[C_z(x)]_{i,j} = \sum_{d=1}^{D} \sum_{k=1}^{N} \left(f_i^r(x) - f_i^d(z_{d,k})\right) \left(f_j^r(x) - f_j^d(z_{d,k})\right), \qquad (5)$$

where we have drawn N random sample points from each object in the database. This problem is known as oriented PCA [13], and the maximizing vector $\gamma(x)$ can be determined by solving the generalized eigenvalue problem $C_z(x) v(x) = \lambda(x) C_\tau(x) v(x)$. If $v(x)$ is the normalized eigenvector corresponding to the maximal eigenvalue $\lambda(x)$, we obtain the optimal weight vector $\gamma(x) = \tilde\lambda(x) v(x)$ using the scale factor $\tilde\lambda(x) = \left(v(x)^T C_\tau(x) v(x)\right)^{-1/2}$. Note that by using this scale factor $\tilde\lambda(x)$, the contribution of the feature function $f_\gamma$ in the objective (1) will vary locally compared to the regularizer: as $\tau(x)$ is somewhat arbitrary during the optimization of (1), the average local contribution will then approximately equal $\mathbb{E}_{d,z_d}\left[\left(f_\gamma^r(x) - f_\gamma^d(z_d)\right)^2\right] = \lambda(x)$. This implies that if locally there exists a characteristic combination of features — i.e., $\lambda(x)$ is high — it will have a large influence in (1). If not, the smoothness term $\|\tau\|_H$ gets relatively more weight, implying that the local correspondence is mostly determined through more global contributions. Note, moreover, that while we have described the above for the leading eigenvector only, nothing prevents us from computing several eigenvectors and stacking up the resulting warp-invariant feature functions $f_\gamma^1, f_\gamma^2, \dots$
, $f_\gamma^m$ into a vector-valued warp-invariant feature function $f_\gamma : X \to \mathbb{R}^m$, which is then plugged into the optimization problem (1), using the two-norm to measure deviations instead of the squared distance.

5 Basic Feature Functions

In our dictionary of basic feature functions we included the signed distance function and its derivative. We added a curvature-related feature, the "signed balls", and the surface texture intensity.

5.1 Signed Balls

Imagine a point x on a flat piece of a surface. Take a ball $B_R(x)$ with radius R centered at that point and compute the average of the signed distance function $s : X \to \mathbb{R}$ over the ball's volume:

$$I_s(x) = \frac{1}{V_{B_R(x)}} \int_{B_R(x)} s(x')\, dx' - s(x) \qquad (6)$$

If the surface around x is flat on the scale of the ball, we obtain zero. At points where the surface is bent outwards this value is positive; at concave points it is negative. The normalization to the value of the signed distance function at the center of the ball allows us to compute this feature function also for off-surface points, where the interpretation with respect to the other iso-surfaces does not change. Due to the integration, this feature is stable with respect to surface noise, whereas mean curvature in differential geometry may be affected significantly. Moreover, the integration involves a scale of the feature.

Figure 1: The two figures on the left show the color-coded values of the "signed balls" feature at different radii R (4mm and 28mm). Depending on R, the feature is sensitive to small-scale structures or to large-scale structures only. Convex parts of the surface are assigned positive values (blue), concave parts negative values (red). The three figures on the right show how the surface feature function that was trained with texture intensity extends off the surface (for clarity visualized in false colors) and becomes smoother. In the figure, the function is mapped on surfaces that are offset by 0, 5 and 15 mm.
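A quick numerical sanity check of the "signed balls" feature (6) for the signed distance function of a unit sphere, used here as a stand-in surface (the rejection sampler and the sample count are arbitrary choices for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def sdf_sphere(p):
    # signed distance to the unit sphere: negative inside, positive outside
    return np.linalg.norm(p, axis=-1) - 1.0

def signed_balls(sdf, x, R, n=200_000):
    """Monte-Carlo estimate of I_s(x): mean of s over B_R(x), minus s(x)."""
    # rejection-sample ~n points uniformly from the ball B_R(x)
    u = rng.uniform(-R, R, size=(3 * n, 3))
    u = u[np.linalg.norm(u, axis=1) <= R][:n]
    return sdf(x + u).mean() - sdf(x)

x_surf = np.array([1.0, 0.0, 0.0])   # a convex surface point
val = signed_balls(sdf_sphere, x_surf, R=0.2)
print(val > 0)   # True: the surface bends outwards at this point
```

Flipping the sign of the signed distance function turns the same point into a concave one and the feature value becomes negative, matching the description above.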
We propose to represent the implicit surface function as in [10], where a compactly supported kernel expansion is trained to approximate the signed distance. In this case the integral and the kernel summation can be interchanged, so we only need to evaluate terms of the form $\int_{B_R(x)} k(x_i, x')\, dx'$ and then add them in the same way as the signed distance function is computed. The value of this basic integral only depends on the distance between the kernel center $x_i$ and the test point $x$, and it is compactly supported if the kernel k is. Therefore, we propose to pre-compute these values numerically for different distances and store them in a small lookup table. For the final expansion summation we can then just interpolate between the two closest values. We obtained good interpolation results with about ten to twenty distance values. For the case where the surface looks locally like a sphere, it is easy to show that in the limit of small balls the value of the "signed balls" feature function is related to the differential geometric mean curvature H by $I_s(x) = \frac{3\pi}{20} H^2 R^2 + O(R^3)$.

5.2 Surface Properties — Texture

The volume deformation approach presented in Section 3 requires the use of feature functions defined on the whole domain X. In order to include information $f|_{\partial\Omega}$ which is given only on a surface $\partial\Omega$ of the object whose interior volume is $\Omega$, e.g. the texture intensity, we propose to extend the surface feature $f|_{\partial\Omega}$ into a differentiable feature function $f : X \to \mathbb{R}$ such that $f \to f|_{\partial\Omega}$ as we get closer to the surface. At larger distances from the surface, f should be smoother and tend towards the mean feature value. This is a desirable property during the optimization of (1), as it helps to avoid local minima. Finally, the feature function f and its gradient should be efficient to evaluate.
We propose to use a multi-scale compactly supported kernel regression to determine f: at each scale, from coarse to fine, we select approximately equally spaced points on the surface, at a distance related to the kernel width of that scale. Then we compute the feature value at these points, averaged over a sphere with radius equal to the corresponding kernel support. With standard quadratic SVR regression we fit the residual of what was achieved on larger scales to the training values. Due to the sub-sampling, the kernel regressions do not contain too many kernel centers, and the compact support of the kernel ensures sparse kernel matrices. Thus, efficient regression and evaluation is guaranteed. Because all kernel centers lie on the surface and reach to different extents into the volume X depending on the kernel size of their scale, we can model small-scale variations on the surface and close to it, whereas the regression function varies only on a larger scale further away from the surface.

Figure 2: Locations that are marked yellow show an above-threshold relative contribution (see text) of a given feature in the warp-invariant feature function. C is the surface intensity feature, B the signed balls feature (R = 6mm), N the surface normals in different directions. Note that points where color has a large contribution (yellow points in C) are clustered around regions with characteristic color information, such as the eyes or the mouth.

6 Experiments

Implementation. In order to optimize (1) we followed the approach of [10]: we represent the deformation $\tau$ as a multi-scale compactly supported kernel expansion, i.e., the j-th component, j = 1, 2, 3, of $\tau$ is $\tau^j(x) = x_j + \sum_{s=1}^{S} \sum_{i=1}^{N_s} \alpha_{i,s}^j k(x, x_{i,s})$ with the compactly supported kernel function $k : X \times X \to \mathbb{R}$. The regularizer then is $\|\tau\|_H^2 := \sum_{s=1}^{S} \sum_{j=1}^{3} \sum_{i,l=1}^{N_s} \alpha_{i,s}^j \alpha_{l,s}^j k(x_{l,s}, x_{i,s})$. We approximate the integral in (1) by sampling $N_s$ kernel centers $x_{i,s}$ on each scale $s = 1, \dots$
, S according to the measure $\mu(x)$, and minimize the resulting non-linear optimization problem in the coefficients $\alpha_{i,s}^j$ for each scale, from coarse to fine, using a second-order Newton-like method [14]. As a test object class we used 3D heads with known correspondence [1]. 100 heads were used for the training object database and 10 to test our correspondence algorithm. As a reference head we used the mean head of the database; the faces are all in correspondence, so we can simply linearly average the vertex positions and the texture images. However, the correspondence of the objects in the database is only defined on the surface. In order to extend it to the off-surface points $x_{i,s}$, we generated these locations by first sampling points from the surface and then displacing them along their surface normals. This allowed us to identify the corresponding points also on other heads. For each kernel center $x_{i,s}$, we learned the weighting vector $\gamma(x_{i,s})$ as described in Section 4. In one run through the database we computed, for each head, the values of all proposed basic feature functions at all locations corresponding to kernel centers on the reference head, as well as at 100 randomly sampled points $z$. The points $z$ should be typical of possible target locations $\tau(x_{i,s})$ during the optimization of (1); thus, we sampled points up to distances from the surface proportional to the kernel widths used for the deformation $\tau$. We then estimated the empirical covariance matrices for each kernel center, yielding the weight vectors via a small eigenvalue decomposition of size $n \times n$, where $n$ is the number of basic features used. The parameters $C_{\mathrm{reg}}$ — one for each scale — were determined by computing deformation fields from the reference head to some of the training database heads and minimizing the mismatch to the correspondence given in the database.

Feature functions. In Figure 1, our new feature functions are visualized on an example head.
Each feature extracts specific plausible information, and the surface color can be extended off the surface.

Learned weights. In Figure 2, we have marked those points on the surface where a given feature has a high relative contribution to the warp-invariant feature function. As a measure of contribution we took the component of the weight vector $\gamma(x_{i,s})$ that corresponds to the feature of interest and multiplied it by the standard deviation of this feature over all heads and all positions. Note that the weight vector is not invariant to rescaling of the basic feature functions, unlike the proposed measure. Finally, we normalized the contributions of all features at a given point $x_{i,s}$ to sum to one, yielding the relative contribution. The table below lists the relative contribution of each feature:

                              S      N horiz.  N vert.  N depth.  C      B 8%   B 3%
  average rel. contribution   0.832  0.092     0.023    0.038     0.008  0.006  0.003
  max rel. contribution       0.997  0.701     0.429    0.446     0.394  0.272  0.333

Here and below, S is the signed distance, N the surface normals, C the proposed surface feature function trained with the intensity values on the faces, and B the "signed balls" feature with radii given by the percentage numbers scaled to the diameter of the head. The signed distance function is the best preserved feature (e.g. all surface points take the value zero up to small approximation errors). The resulting large weight of this feature is plausible, as a surface-to-surface mapping is a necessary condition for a morph.

Figure 3: The average head of the database – the reference – is deformed to match four of the target heads of the test set. Correct correspondence deforms the shape of the reference head to the target face, with the texture of the mean face well aligned to the shape details.

However, combined with Figure 2,
we can see that the method assigns plausible non-zero values also to other features, where these can be assumed to be most characteristic for a good correspondence.

Correspondence. We applied our correspondence algorithm to compute the correspondence for the test set of 10 heads. Some example deformations are shown in Figure 3, for a dictionary consisting of S, N (horiz., vert., depth), C, and B (radii 3% and 8%). Numerical evaluation of the morphs is difficult. We compare our method with the results of the correspondence algorithm of [1] on points that are uniformly drawn from the surface (first column) and on 24 selected marker points (second column). These markers were placed at locations around the eyes or the mouth, where correspondence can be assumed to be better defined than, for example, on the forehead. Still, the error made by humans when picking these positions has turned out to be around 1–2 mm. The table below shows mean results in mm for different settings:

                                                 uniform  markers  signed distance error
  (a) all weights equal                           5.97     4.49     1.49
  (b) our method (independent of x)               3.74     1.48     0.05
  (c) our method (1 eigenvector)                  3.74     1.34     0.04
  (d) our method (2 eigenvectors)                 3.62     1.19     0.04
  (e) our method (4 eigenvectors)                 3.56     1.11     0.04
  (f) our method (6 eigenvectors)                 3.55     1.10     0.04
  (g) our method (1 eigenvector, without B, C)    3.76     1.42     0.04

If all weights are equal, independent of location and feature (a), the result is not acceptable. A careful weighting of each feature separately, but independent of location (b) — as could potentially be achieved by [10] — improves the quality of the correspondence. To obtain these weights we averaged the covariance matrices $C_z(x)$, $C_\tau(x)$ over all points and applied the algorithm proposed in Section 4, but independent of $x$. However, a locally adapted weighting (c) outperforms the above methods, and using more than one eigenvector (d–f) further enhances the correspondence.
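The per-location weight estimation used in rows (b)–(g) boils down to the small generalized eigenproblem of Section 4. A toy sketch with synthetic features (shapes and noise levels are invented for the example; Cholesky whitening is one standard way to solve $C_z v = \lambda C_\tau v$ with plain NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)

def learn_gamma(F_ref, F_corr, F_rand):
    """Oriented-PCA weight vector for one location x.
    F_ref:  (n,)   basic features at x on the reference object
    F_corr: (D, n) features at the corresponding points tau_d(x)
    F_rand: (M, n) features at random non-matching points z
    """
    d_corr = F_ref - F_corr            # small for well-preserved features
    d_rand = F_ref - F_rand            # large for characteristic features
    C_tau = d_corr.T @ d_corr          # Eq. (4)
    C_z = d_rand.T @ d_rand            # Eq. (5)
    # solve C_z v = lambda C_tau v via Cholesky whitening of C_tau
    Linv = np.linalg.inv(np.linalg.cholesky(C_tau))
    M = Linv @ C_z @ Linv.T            # symmetric whitened matrix
    w, U = np.linalg.eigh(M)           # eigenvalues in ascending order
    v = Linv.T @ U[:, -1]              # back-transform the top eigenvector
    lam_tilde = (v @ C_tau @ v) ** -0.5
    return lam_tilde * v

# toy data: feature 0 is nearly invariant under correspondence,
# feature 1 is pure noise
F_ref = np.array([1.0, 0.0])
F_corr = F_ref + np.column_stack([0.01 * rng.normal(size=50),
                                  1.0 * rng.normal(size=50)])
F_rand = rng.normal(size=(500, 2))
g = learn_gamma(F_ref, F_corr, F_rand)
print(abs(g[0]) > abs(g[1]))   # True: the invariant feature dominates
```

The scale factor at the end reproduces the normalization $\tilde\lambda(x) = (v^T C_\tau v)^{-1/2}$, so the learned weights automatically balance the data term against the smoothness regularizer.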
Note that although the results are not identical to [1], our algorithm's accuracy is consistent with the human labeling on the scale of the latter's presumed accuracy (1–2 mm). For uniformly sampled points, the differences are slightly larger (4 mm), but we need to bear in mind that the results of that algorithm cannot be considered ground truth. Experiment (g), which is identical to (c) but with the color and signed balls features omitted, demonstrates the usefulness of these additional basic feature functions. Computation times ranged between 5 minutes and one hour, depending significantly on the number of scales used (here 4), the number of kernel centers generated, and the number of basic features included in the dictionary. For large radii R, the signed balls feature becomes quite expensive to compute, since many summands of the signed distance function expansion have to be accumulated. Our method of selecting the important features for each point in advance, i.e. before the optimization is started, would allow for a potentially high speed-up: at locations where a certain feature has a very low weight, we could simply omit it in the evaluation of the cost function (1).

Figure 4: A morph between a human head and the head of the character Gollum (available from www.turbosquid.com), shown at 25%, 50% and 75% between the reference and the target. As Gollum's head falls outside our object class (human heads), we assisted the training procedure with 28 manually placed markers.

7 Conclusion

We have proposed a new approach to the challenging problem of defining criteria that characterize a valid correspondence between 3D objects of a given class. Our method learns an appropriate criterion from examples of correct correspondences. The approach thus applies machine learning to computer graphics at the early level of feature construction. The learning technique has been implemented efficiently in a correspondence algorithm for textured surfaces. In the future, we plan to test our method with other object classes.
Even though we have concentrated in our experiments on 3D surface data, the method may also be applicable in other fields, for example to align CT or MR scans in medical imaging. It would also be intriguing to explore whether our paradigm of learning the features characterizing correspondences might reflect some of the cognitive processes that are involved when humans learn about similarities within object classes.

References

[1] V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In SIGGRAPH '99 Conference Proceedings, pages 187–194, Los Angeles, 1999. ACM Press.
[2] B. D. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In IJCAI '81, pages 674–679, 1981.
[3] B. K. P. Horn and B. G. Schunck. Determining optical flow. Artificial Intelligence, 17(1-3):185–203, 1981.
[4] D. Beymer and T. Poggio. Image representations for visual learning. Science, 272:1905–1909, 1996.
[5] T. Vetter and T. Poggio. Linear object classes and image synthesis from a single example image. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(7):733–742, 1997.
[6] B. Allen, B. Curless, and Z. Popovic. The space of human body shapes: reconstruction and parameterization from range scans. In Proc. SIGGRAPH, pages 612–619, 2002.
[7] D. Rueckert and A. F. Frangi. Automatic construction of 3-D statistical deformation models of the brain using nonrigid registration. IEEE Trans. on Medical Imaging, 22(8):1014–1025, 2003.
[8] D. Anguelov, P. Srinivasan, H.-C. Pang, D. Koller, S. Thrun, and J. Davis. The correlated correspondence algorithm for unsupervised registration of nonrigid surfaces. In Neural Information Processing Systems 17, pages 33–40. MIT Press, 2005.
[9] M. Alexa. Recent advances in mesh morphing. Computer Graphics Forum, 21(2):173–196, 2002.
[10] B. Schölkopf, F. Steinke, and V. Blanz. Object correspondence as a machine learning problem.
In Proceedings of the 22nd International Conference on Machine Learning (ICML '05), July 2005.
[11] J.-P. Thirion and A. Gourdon. Computing the differential characteristics of isointensity surfaces. Journal of Computer Vision and Image Understanding, 61(2):190–202, March 1995.
[12] N. Gelfand, N. J. Mitra, L. J. Guibas, and H. Pottmann. Robust global registration. In Proc. Eurographics Symposium on Geometry Processing, pages 197–206, 2005.
[13] K. I. Diamantaras and S. Y. Kung. Principal Component Neural Networks: Theory and Applications. John Wiley & Sons, Inc., 1996.
[14] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Math. Program., 45(3):503–528, 1989.
Detecting Humans via Their Pose

Alessandro Bissacco, Computer Science Department, University of California, Los Angeles, Los Angeles, CA 90095, bissacco@cs.ucla.edu
Ming-Hsuan Yang, Honda Research Institute, 800 California Street, Mountain View, CA 94041, mhyang@ieee.org
Stefano Soatto, Computer Science Department, University of California, Los Angeles, Los Angeles, CA 90095, soatto@cs.ucla.edu

Abstract

We consider the problem of detecting humans and classifying their pose from a single image. Specifically, our goal is to devise a statistical model that simultaneously answers two questions: 1) is there a human in the image? and, if so, 2) what is a low-dimensional representation of her pose? We investigate models that can be learned in an unsupervised manner on unlabeled images of human poses, and that provide information which can be used to match the pose of a new image to the ones present in the training set. Starting from a set of descriptors recently proposed for human detection, we apply the Latent Dirichlet Allocation framework to model the statistics of these features, and use the resulting model to answer the above questions. We show how our model can efficiently describe the space of images of humans with their pose, by providing an effective representation of poses for tasks such as classification and matching, while performing remarkably well in human/non-human decision problems, thus enabling its use for human detection. We validate the model with extensive quantitative experiments and comparisons with other approaches on human detection and pose matching.

1 Introduction

Human detection and localization from a single image is an active area of research that has witnessed a surge of interest in recent years [9, 18, 6]. Simply put, given an image, we want to devise an automatic procedure that locates the regions containing human bodies in arbitrary pose. This is hard because of the wide variability that images of humans exhibit.
Given that it is impractical to explicitly model nuisance factors such as clothing, lighting conditions, viewpoint, body pose, and partial and/or self-occlusions, one can learn a descriptive model of human/non-human statistics. The problem then reduces to a binary classification task to which we can directly apply general statistical learning techniques. Consequently, the main focus of research on human detection so far has been on deriving a suitable representation [9, 18, 6], i.e. one that is most insensitive to typical appearance variations, so that it provides good features to a standard classifier. Recently, local descriptors based on histograms of gradient orientations, such as [6], have proven particularly successful for human detection tasks. The main idea is to use distributions of gradient orientations in order to be insensitive to color, brightness and contrast changes and, to some extent, local deformations. However, to account for more macroscopic variations, due for example to changes in pose, a more complex statistical model is warranted. We show how a special class of hierarchical Bayesian processes can be used as generative models for these features and applied to the problem of detection and pose classification. This work can be interpreted as an attempt to bridge the gap between the two related problems of human detection and pose estimation in the literature. In human detection, since a simple yes/no answer is required, there is no need to introduce a complex model with latent variables associated to physical quantities. In pose estimation, on the other hand, the goal is to infer these quantities, and therefore a full generative model is a natural approach. Between these extremes lies our approach. We estimate a probabilistic model with a set of latent variables, which do not necessarily admit a direct interpretation in terms of configurations of objects in the image.
However, these quantities are instrumental to both the human detection and the pose classification problem. The main difficulty is in the representation of the pose information. Humans are highly articulated objects with many degrees of freedom, which makes defining pose classes a remarkably difficult problem. Even with manual labeling, how does one judge the distance between two poses or cluster them? In such situations, we believe that the only avenue is an unsupervised method. We propose an approach which allows for unsupervised clustering of images of humans and provides a low-dimensional representation encoding essential information on their pose. The chief difference from standard clustering or dimensionality reduction techniques is that we derive a full probabilistic framework, which provides principled ways to combine and compare different models, as required for tasks such as human detection, pose classification and matching.

2 Context and Motivation

The literature on human detection and pose estimation is too broad for us to review here, so we focus on the case of a single image, neglecting scenarios where temporal information or a background model are available and effective algorithms based on silhouettes [20, 12, 1] or motion patterns [18] can be applied. Detecting humans and estimating poses from single images is a fundamental problem with a range of practical applications, such as image retrieval and understanding. It makes sense to tackle this problem, as we know humans are capable of telling the locations and poses of people from the visual information contained in photographs. The question is how to represent such information, and the answer we give constitutes the main novelty of this work. Numerous representation schemes have been exploited for human detection, e.g., Haar wavelets [18], edges [9], gradient orientations [6], gradients and second derivatives [19], and regions from image segmentation [15].
With these representations, detection algorithms such as template matching [9], support vector machines [19, 6], AdaBoost [18], and grouping [15] have been applied, to name a few. Most approaches to pose estimation are based on body part detectors, using either edge, shape, color and texture cues [7, 21, 15], or cues learned from training data [19]. The optimal configuration of the part assembly is then computed using dynamic programming, as first introduced in [7], or by performing inference on a generative probabilistic model, using either Data-Driven Markov Chain Monte Carlo, Belief Propagation, or its non-Gaussian extensions [21]. These works focus on only one of the two problems, either detection or pose estimation. Our approach is different, in that our goal is to extract more information than a simple yes/no answer, while at the same time not reaching the full level of detail of determining the precise location of all body parts. Thus we want to simultaneously perform detection and pose classification, and we want to do it in an unsupervised manner. In this aspect, our work is related to the constellation models of Weber et al. [23], although we do not have an explicit decomposition of the object into parts. We start from the representation of [6], based on gradient histograms and recently applied to human detection with excellent results, and derive a probabilistic model for it. We show that with this model one can successfully detect humans and classify their poses. The statistical tools used in this work, Latent Dirichlet Allocation (LDA) [3] and related algorithms [5, 4], were introduced in the text analysis context and have recently been applied to the problem of recognizing object and action classes [8, 22, 2, 16].
Contrary to most approaches (all but [8]), where the image is treated as a “bag of features” and all spatial information is lost, we encode the location and orientation of edges in the basic elements (words), so that this essential information is explicitly represented by the model. 3 A Probabilistic Model for Gradient Orientations We first describe the features that we use as the basic representation of images, and then propose a probabilistic model for the feature generation process. 3.1 Histogram of Oriented Gradients Local descriptors based on gradient orientations are among the most successful representations for image-based detection and matching, as first demonstrated by Lowe in [14]. Among the various approaches within this class, the best performer for humans appears to be [6]. This descriptor is obtained by computing weighted histograms of gradient orientations over a grid of spatial neighborhoods (cells), which are then grouped in overlapping regions (blocks) and normalized for brightness and contrast changes. Given a patch of 64 × 128 pixels, we divide it into cells of 8 × 8 pixels, and for each cell a gradient orientation histogram is computed. The histogram is a quantization into 9 bins of the gradient orientations in the range 0◦−180◦. Each pixel contributes to the neighboring bins, both in orientation and space, by an amount proportional to the gradient magnitude and linearly decreasing with the distance from the bin center. The cells are grouped in 2 × 2 blocks, and the contribution of each pixel is also weighted by a Gaussian kernel with σ = 8, centered on the block. Finally, the vector v of cell histograms within each block is normalized in L2 norm: ¯v = v/(||v||2 + ϵ). The final descriptor is a collection of the histograms from overlapping blocks (each cell is shared by 4 blocks).
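The cell/block construction above can be sketched as follows. This is a simplified illustration of the descriptor, not the reference implementation of [6]: the trilinear bin interpolation and the Gaussian block weighting are omitted for brevity, and the function name is ours.

```python
import numpy as np

def hog_descriptor(patch, cell=8, bins=9, eps=1e-3):
    """Simplified HOG sketch: per-cell orientation histograms over
    `cell` x `cell` cells, grouped into overlapping 2x2 blocks and
    L2-normalized as in the text, v_bar = v / (||v||_2 + eps)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0        # fold into [0, 180)
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)

    h, w = patch.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for k in range(bins):
                hist[i, j, k] = m[b == k].sum()          # magnitude-weighted bin

    blocks = []
    for i in range(ch - 1):                              # overlapping 2x2 blocks
        for j in range(cw - 1):
            v = hist[i:i+2, j:j+2].ravel()
            blocks.append(v / (np.linalg.norm(v) + eps))
    return np.concatenate(blocks)
```

For a 64 × 128 patch this gives 8 × 16 cells, hence (8−1) × (16−1) = 105 blocks of 36 values each, a 3780-dimensional descriptor.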
The main characteristic of such a representation is robustness to local deformations, illumination changes and, to a limited extent, viewpoint and pose changes, due to the coarsening of the histograms. In order to handle the larger variations typical of human body images, we need to complement this representation with a model. We propose a probabilistic model that can accurately describe the generation process of these features. 3.2 Latent Dirichlet Allocation Latent Dirichlet Allocation (LDA) [3] is a hierarchical model for sparse discrete mixture distributions, where the basic elements (words) are sampled from a mixture of component distributions, and each component defines a discrete distribution over the set of words. We are given a collection of documents where words w, the basic units of our data, take values in a dictionary of W unique elements, w ∈ {1, · · · , W}. A document w = (w1, w2, · · · , wW) is a collection of word counts wj, with $\sum_{j=1}^{W} w_j = N$. The standard LDA model does not include the distribution of N, so it can be omitted in what follows. The corpus D = {w1, w2, · · · , wM} is a collection of M documents. The LDA model introduces a set of K latent variables, called topics. Each word in the document is assumed to be generated by one of the topics. Under this model, the generative process for each document w in the corpus is as follows: 1. Choose θ ∼ Dirichlet(α). 2. For each word j = 1, · · · , W in the dictionary, choose a word count wj ∼ p(wj|θ, β), where the word counts wj are drawn from a discrete distribution conditioned on the topic proportions θ: $p(w_j|\theta, \beta) = \beta_{j\cdot}\,\theta$. Recently several variants of this model have been developed, notably Multinomial PCA [4], where the discrete distributions are replaced by multinomials, and the Gamma-Poisson process [5], where the number of words θi from each component are independent Gamma samples and p(wj|θ, β) is Poisson.
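As a concrete illustration, the two-step generative process above can be simulated as follows (a minimal sketch of our own, not code from the paper: the N word tokens are drawn from the mixture βθ and returned as bin counts):

```python
import numpy as np

def generate_document(alpha, beta, N, rng):
    """LDA generative process: draw topic proportions theta ~ Dirichlet(alpha),
    then draw N word tokens from the W-dimensional discrete distribution
    beta @ theta, i.e. p(w_j | theta, beta) = (beta theta)_j.
    Returns the vector of word counts (w_1, ..., w_W)."""
    theta = rng.dirichlet(alpha)             # topic proportions, sums to 1
    word_probs = beta @ theta                # mixture over the W words
    return rng.multinomial(N, word_probs)    # word counts summing to N
```

Here `beta` is a W × K matrix whose k-th column is the word distribution of topic k, so `beta @ theta` is again a distribution over the W words.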
The hyperparameter $\alpha \in \mathbb{R}_+^K$ represents the prior on the topic distribution, $\theta \in \mathbb{R}_+^K$ are the topic proportions, and $\beta \in \mathbb{R}_+^{W \times K}$ are the parameters of the word distributions conditioned on topics. In the context of this work, words correspond to oriented gradients, while documents and corpus correspond to images and sets of images, respectively. The topics derived by the LDA model encode the pose information of interest in this work. For now we can safely assume that the topic distributions β are deterministic parameters; later, for the purpose of inference, we will treat them as random variables and assign them a Dirichlet prior: $\beta_{\cdot k} \sim \mathrm{Dirichlet}(\eta)$, where $\beta_{\cdot k}$ denotes the k-th column of β. The likelihood of a document w is then: $p(\mathbf{w}|\alpha, \beta) = \int p(\theta|\alpha) \prod_{j=1}^{W} p(w_j|\theta, \beta)\, d\theta$ (1) so that documents are represented as a continuous mixture distribution. The advantage over a standard mixture of discrete distributions [17] is that we allow each document to be generated by more than one topic. 3.3 A Bayesian Model for Gradient Orientation Histograms We now show how the described two-level Bayesian process finds a natural application in modeling the spatial distribution of gradient orientations. Here we consider the histogram of oriented gradients [6] as the basic feature from which we build our generative model, but let us point out that the framework we introduce is more general and can be applied to any descriptor based on histograms1. In this descriptor, each bin represents the intensity of the gradient at a particular location, defined by a range of orientations and a local neighborhood (cell). Thus the bin height denotes the strength and number of the edges in the cell. The first thing to notice in deriving a generative model for this class of features is that, since they represent a weighted histogram, they have non-negative elements. Thus a proper generative model for these descriptors must impose non-negativity constraints.
As we will see in the experiments, a linear approach such as Non-negative Matrix Factorization [13] leads to extremely poor performance, probably due to the high curvature of the space. At the opposite end, representing the nonlinearity of the space with a set of samples by Vector Quantization is feasible only using a large number of samples, which is against our goal of deriving an economical representation of the pose. We propose using the Latent Dirichlet Allocation model to represent the statistics of the gradient orientation features. In order to do so we need to quantize the feature values. While not investigated in the original paper [6], quantization is common practice for similar histogram-based descriptors, such as [14]. We tested the effect of quantization on the performance of the human detector based on Histogram of Oriented Gradient descriptors and linear Support Vector Machines described in [6]. As evident in Figure 1, with 16 or more discrete levels we obtain practically the same performance as with the original continuous descriptors. Thus in what follows we can safely assume that the basic features are collections of small integers, the histogram bin counts wj. If we quantize histogram bins and assign a unique word to each bin, we obtain a representation to which we can directly apply the LDA framework. Analogous to document analysis, an orientation histogram computed on an image patch is a document w represented as a bag of words (w1, · · · , wW), where the word counts wj are the bin heights. We assume that such a histogram is generated by a mixture of basic components (topics), where each topic z induces a discrete distribution $p(r|\beta_{\cdot z})$ over bins, representing a typical configuration of edges common to a class of elements in the dataset. By summing the contributions from each topic we obtain the total count wj for each bin, distributed according to p(wj|θ, β).
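One possible realization of the quantization step discussed above is uniform rounding of the (normalized, hence roughly [0, 1]-valued) bin heights to a small number of integer levels. This particular scheme is our own illustrative choice, since [6] does not prescribe one; 16 levels matches the operating point at which Figure 1 shows no loss in detection accuracy.

```python
import numpy as np

def quantize_to_words(descriptor, levels=16):
    """Quantize a non-negative histogram descriptor to small integer
    word counts: each bin becomes a unique word, its quantized height
    the word count. Values are clipped to [0, 1] and rounded to the
    nearest of `levels` uniformly spaced levels."""
    d = np.clip(descriptor, 0.0, 1.0)
    return np.floor(d * (levels - 1) + 0.5).astype(int)   # integers in 0..levels-1
```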
The main property of this feature formation process, desirable for our application, is that topics combine additively. That is, the same bin may have contributions from multiple topics; this models the fact that the bin height is the count of edges in a neighborhood which may include parts generated by different components. Finally, let us point out that by assigning a unique word to each bin we model spatial information, encoded in the word identity, whereas most previous approaches (e.g. [22]) using similar probabilistic models for object class recognition did not exploit this kind of information. 4 Probabilistic Detection and Pose Estimation The first application of our approach is human detection. Notice that our main goal is to develop a model to represent the statistics of images for human pose classification. We use the human detection problem as a convenient testbed for validating the quality of our representation, since for this application large labelled datasets and efficient algorithms are available. By no means do we intend to compete with state-of-the-art discriminative approaches for human detection alone, which are optimized to represent the decision boundary and are thus expected to perform better than generative approaches in binary classification tasks. However, if the generative model is good at capturing the statistics of human images, we expect it to perform well also in discriminating humans from the background. In human detection, given a set of positive and negative examples and a previously unseen image Inew, we are asked to choose between two hypotheses: either it contains a human or it is a background image. The first step is to compute the gradient histogram representation w(I) for the test and training images.
Then we learn a model for human and background images and use a threshold on the likelihood ratio2 for detection: $L = \frac{P(\mathbf{w}(I_{new})\,|\,\text{Human})}{P(\mathbf{w}(I_{new})\,|\,\text{Background})}$ (2) (Footnote 1: Notice that, due to the particular normalization procedure applied, the histogram features we consider here do not have unit norm; in fact, they are zero on uniform regions.) For the LDA and related models [5, 4], the likelihoods p(w(I)|α, β) are computed as in (1), where α, β are model parameters that can be learned from data. In practice, we assume α is known and compute an estimate of β from the training corpus. In doing so, we can choose between two main inference algorithms: mean field or variational inference [3], and Gibbs sampling [10]. Mean field algorithms provide a lower bound on the likelihood, while Gibbs sampling gives statistics based on a sequential sampling scheme. As shown in Figure 1, in our experiments Gibbs sampling exhibited superior performance over mean field in terms of classification accuracy. We have experimented with two variations, a direct method and Rao-Blackwellised sampling (see [4] for details). Both methods gave similar performance; here we report the results obtained using the direct method, whose main iteration is as follows: 1. For each document $\mathbf{w}_i = (w_{i,1}, \cdots, w_{i,W})$: first sample $\theta^{(i)} \sim p(\theta|\mathbf{w}_i, \alpha, \beta)$, then sample $v^{(i)}_{j\cdot} \sim \mathrm{Multinomial}(\beta_{j\cdot}\theta^{(i)}, w_{i,j})$. 2. For each topic k: sample $\beta_{\cdot k} \sim \mathrm{Dirichlet}(\sum_i v^{(i)}_{\cdot k} + \eta)$. In pose classification, we start from a set of unlabeled training examples of human poses and learn the topic distributions β. This defines a probabilistic mapping to the topic variables, which can be seen as an economical representation encoding essential information about the pose. That is, from an image Inew, we estimate the topic proportions $\hat{\theta}(I_{new})$ as: $\hat{\theta}(I_{new}) = \int \theta\, p(\theta\,|\,\mathbf{w}(I_{new}), \alpha, \beta)\, d\theta$ (3) Pose information can then be recovered by matching the new image Inew to an image I in the training set.
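The Gibbs iteration above can be sketched as one alternating sweep. The following is an uncollapsed variant written for clarity, in the spirit of the direct method of [4] rather than its exact implementation: topic assignments v are sampled given (θ, β), then θ and the topic distributions β are resampled from their Dirichlet conditionals.

```python
import numpy as np

def gibbs_sweep(docs, theta, beta, alpha, eta, rng):
    """One alternating Gibbs sweep for the multinomial topic model.
    docs: (M, W) word counts; theta: (M, K) topic proportions;
    beta: (W, K) topic distributions; alpha, eta: Dirichlet priors."""
    M, W = docs.shape
    K = beta.shape[1]
    v_total = np.zeros((W, K))
    for i in range(M):
        v = np.zeros((W, K))
        for j in range(W):
            if docs[i, j] == 0:
                continue
            p = beta[j] * theta[i]                 # unnormalized responsibilities
            v[j] = rng.multinomial(docs[i, j], p / p.sum())
        theta[i] = rng.dirichlet(alpha + v.sum(axis=0))   # resample proportions
        v_total += v
    for k in range(K):                             # resample topic distributions
        beta[:, k] = rng.dirichlet(eta + v_total[:, k])
    return theta, beta
```

After each sweep the rows of θ and the columns of β remain valid probability distributions.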
For matching, ideally we would like to compute the matching score as $S_{opt}(I, I_{new}) = P(\mathbf{w}(I_{new})\,|\,\mathbf{w}(I), \alpha, \beta)$, i.e. the posterior probability of the test image Inew given the training image I and the model α, β. However, this would be computationally expensive, since for each pair I, Inew it requires computing an expectation of the form (3); we therefore opted for a suboptimal solution. For each training image I, in the learning step we compute the posterior topic proportions $\hat{\theta}(I)$ as in (3). The matching score S between Inew and I is then given by the dot product between the two vectors $\hat{\theta}(I)$ and $\hat{\theta}(I_{new})$: $S(I, I_{new}) = \langle \hat{\theta}(I), \hat{\theta}(I_{new}) \rangle$ (4) The computation of this score requires only a dot product between low-dimensional unit vectors $\hat{\theta}$, so our approach represents an efficient method for matching and clustering poses in large datasets. 5 Experiments We first tested the efficacy of our model on the human detection task. We used the dataset provided by [6], consisting of 2340 64×128 images of pedestrians in various configurations and 1671 images of outdoor scenes not containing humans. We collected negative examples by randomly sampling 10 patches from each of the first 1218 non-human images. These, together with 1208 positive examples and their left-right reflections, constituted our training set. We used the learned model to classify the remaining 1132 positive examples and 5889 patches randomly extracted from the remaining background images. We first computed the histograms of oriented gradients from the image patches following the procedure outlined in Section 3.1. These features are then quantized so that they can be represented by our discrete stochastic model.
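With the matching score (4) defined above, ranking the whole training set for a test image reduces to a single matrix-vector product followed by a sort; a minimal sketch (the function name is ours):

```python
import numpy as np

def top_matches(theta_new, theta_train, n=10):
    """Rank training images by the score (4), the dot product between
    posterior topic proportions. theta_train is (M, K), one row per
    training image; theta_new is the (K,) vector of the test image."""
    scores = theta_train @ theta_new         # S(I, I_new) for every I at once
    return np.argsort(scores)[::-1][:n]      # indices of the n best matches
```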
We tested the effect of different quantization levels on the performance of the boosted SVM classifier [6]: an initial training on the provided dataset is followed by a boosting round in which the trained classifier is applied to the background images to find false positives; these hard examples are then added to the training set for a second training of the classifier. As Figure 1 shows, the effect of quantization is significant only if we use fewer than 4 bits. Therefore, we chose to discretize the features to 16 quantization levels. (Footnote 2: Ideally we would like to use the posterior ratio R = P(Human|Inew)/P(Background|Inew). However, notice that R is equal to (2) if we assume equal priors P(Human) = P(Background).) Given the number of topics K and the prior hyperparameters α, η, we learned the topic distributions β and topic proportions $\hat{\theta}(I)$ using either Gibbs sampling or Mean Field. We tested both Gamma [5] and Dirichlet [3, 4] distributions for the topic priors, obtaining the best results with the multinomial model [4] with scalar priors αi = a, ηi = b; in these experiments a = 2/K and b = 0.5. The number of topics K is an important parameter that should be carefully chosen by trading off modeling power against complexity. With a higher number of topics we can more accurately fit the data, which can be measured by the increase in the likelihood of the training set. This does not come for free: we have a larger number of parameters and an increased computational cost for learning. Eventually, an excessive number of topics causes overfitting, which manifests as a decrease of the likelihood on the test dataset. For the INRIA data, experimental evaluations suggested that a good tradeoff is obtained with K = 24. We learned two models, one for positive and one for negative examples. For learning we ran the Gibbs sampling algorithm described in Section 4 for a total of 300 samples per document, including 50 samples to compute the likelihoods (1).
We also trained the model using the Mean Field approximation, but as we can see in Figures 1 and 4, the results using Gibbs sampling are better. For details on the implementation we refer to [4]. We then obtain a detector by computing the likelihood ratio (2) and comparing it with a threshold. In Figure 1 we show the performance of our detector on the INRIA dataset, where for the sake of comparison with other approaches boosting is not performed. We show the results for: • Linear SVM classifier: trained as described, using the SVMLight software package. • Vector Quantization: positive and negative models learned as collections of K clusters using the K-Means algorithm. The decision rule is then Nearest Neighbor, i.e., whether the closest cluster belongs to the positive or the negative model. • Non-negative Matrix Factorization: feature vectors are collected in a matrix V, and the factorization minimizing $\|V - WH\|_2^2$ with W, H non-negative is computed using the multiplicative update algorithm of [13]. In analogy with the LDA model, the columns of W contain the topic distributions, while the columns of H represent the component weights. A classifier is obtained from the difference of the residuals of the feature projections on the positive and negative models. From the plot we see that the results of our approach are comparable with the performance of the Linear SVM, while being far superior to the other generative approaches. We would like to stress that a comparison on detection performance alone with state-of-the-art discriminative classifiers would be inappropriate, since our model targets pose classification, which is harder than binary detection. A fair comparison should divide the dataset in classes and compare our model with a multiclass classifier. But then we would face the difficult problem of how to label human poses. For the experiments on pose classification and matching, we used the CMU Mobo dataset [11].
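For reference, the NMF baseline can be sketched as follows, with the multiplicative updates of [13] and the residual-based decision rule described above. The projection step for a single test vector (updating H with W held fixed) is our own straightforward choice; it is not specified in the text.

```python
import numpy as np

def nmf(V, K, iters=200, rng=None, eps=1e-9):
    """Multiplicative updates of Lee & Seung [13], minimizing
    ||V - W H||_F^2 with W, H non-negative. V is (d, n): one feature
    vector per column; W holds the K components."""
    if rng is None:
        rng = np.random.default_rng(0)
    d, n = V.shape
    W = rng.random((d, K)) + eps
    H = rng.random((K, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def residual(v, W, iters=200, eps=1e-9):
    """Projection residual of one test vector on model W; the detector
    compares residuals under the positive and negative models."""
    h = np.full((W.shape[1], 1), 1.0 / W.shape[1])
    v = v.reshape(-1, 1)
    for _ in range(iters):                 # update h only, W fixed
        h *= (W.T @ v) / (W.T @ W @ h + eps)
    return float(np.linalg.norm(v - W @ h))
```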
It consists of sequences of subjects performing different motion patterns, each sequence taken from 6 different views. In the experiments we used 22 sequences of fast walking motion, picking the first 100 frames from each sequence. In the first experiment we trained the model with all the views and set the number of topics equal to the number of views, K = 6. As expected, each topic distribution represents a view, and by assigning every image I to the topic with highest proportion, $k^* = \arg\max_k \hat{\theta}_k(I)$, we correctly associated all the images from the same view to the same topic. To obtain a more challenging setup, we restricted the data to a single view and tested the classification performance of our approach in matching poses. We learned a model with K = 8 topics from 16 training sequences, and used the remaining 6 for testing. In Figure 2 we show sample topic distributions from this model. In Figure 3, for each test sequence we display a sample frame and the associated top ten matches from the training data according to the score (4). We can see how the pose is matched despite changes of appearance and motion style; specifically, a test subject's pose is matched to similar poses of different subjects in the training set. This shows how the topic representation factors out most of the appearance variations and retains only essential information on the pose. In order to give a quantitative evaluation of the pose matching performance and compare with other approaches, we labeled the dataset by mapping the set of walking poses to the interval [0, 1].
We manually assigned 0 to the frames at the beginning of the double support phase, when the swinging foot touches the ground, and 1 to the frames where the legs are approximately parallel.

Figure 1: Human detection results. (Left) Effect on human detection performance of quantizing the histogram of oriented gradients descriptor [6] for a boosted linear SVM classifier based on these features; the plot shows miss rate vs. false positives per window (FPPW) on log scale for the continuous descriptor and for 32, 16, 8 and 4 quantization levels. We can see that for 16 quantization levels or more the differences are negligible, thus validating our discrete approach. (Right) Performance of five detectors using HOG features, trained without boosting and tested on the INRIA dataset: LDA detectors learned by Gibbs Sampling and Mean Field, Vector Quantization, Non-negative Matrix Factorization - all with K = 24 components/codewords - and Linear SVM. We can see how Gibbs LDA outperforms the other unsupervised clustering techniques by far and scores comparably with the Linear SVM, which is specifically optimized for the simpler binary classification problem.

Figure 2: Topic distributions and clusters. We show sample topics (2 out of 8) from the LDA model trained on the single-view Mobo sequences. For each topic k, we show 12 images in 2 rows. The first column shows the distribution of local orientations associated with topic k: (top) visualization of the orientations and (bottom) average gradient intensities for each cell. The right 5 columns show the ten images in the dataset with highest topic proportion $\hat{\theta}_k$, shown below each image. We can see that topics are tightly related to pose classes.
We labeled the remaining frames automatically using linear interpolation between keyframes. The average interval between keyframes is 8.1 frames, which motivates our choice of the number of topics, K = 8. For each test frame, we computed the pose error as the difference between the associated pose value and the average pose of the top 10 matches in the training dataset. We obtained an average error of 0.16, corresponding to 1.3 frames. In Figure 4 we show the average pose error per test sequence obtained with our approach, compared with Vector Quantization, where the pose is obtained as the average of the labels associated with the closest clusters, and Non-negative Matrix Factorization, where, as in LDA, similarity of poses is computed as the dot product of the component weights. In all the models we set the number of components/clusters to K = 8. We can see that our approach performs best on all testing sequences. In Figure 4 we also show the average pose error when matching test frames to a single training sequence. Although the different appearance affects the matching performance, overall the results show how our approach can be successfully applied to automatically match poses of different subjects. 6 Conclusions We introduce a novel approach to human detection, pose classification and matching from a single image. Starting from a representation robust to a limited range of variations in the appearance of humans in images, we derive a generative probabilistic model which allows for automatic discovery of pose information. The model can successfully perform detection and provides a low-dimensional representation of the pose. It automatically clusters the images using representative distributions and allows for an efficient approach to pose matching. Our experiments show that our approach matches or exceeds the state of the art in human detection, pose classification and matching. Figure 3: Pose matching examples.
On the left, a sample frame from each test sequence; on the right, the top 10 matches in the training set based on the similarity score (4), reported below each image. We can see how our approach matches poses even despite large changes in appearance, and the same pose is correctly matched across different subjects.

Figure 4: Pose matching error. (Left) Average pose error in matching test sequences to the training set, for our model (both Gibbs and Mean Field learning), Non-negative Matrix Factorization and Vector Quantization. We see how our model trained with Gibbs sampling clearly outperforms the other approaches. (Right) Average pose error in matching test and training sequence pairs with our approach, where each row is a test sequence and each column a training sequence. The highest error corresponds to about 2 frames, while the mean error is 0.16 and amounts to approximately 1.3 frames.

Acknowledgments This work was conducted while the first author was an intern at Honda Research Institute in 2005. Work at UCLA was supported by AFOSR F49620-03-1-0095 and ONR N00014-03-1-0850:P0001.

References [1] A. Agarwal and B. Triggs. 3D human pose from silhouettes by relevance vector regression. CVPR, 2004. [2] A. Agarwal and B. Triggs. Hyperfeatures: Multilevel local coding for visual recognition. ECCV, 2006. [3] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 2003. [4] W. Buntine and A. Jakulin. Discrete principal component analysis. HIIT Technical Report, 2005. [5] J. Canny. GaP: a factor model for discrete data. ACM SIGIR, pages 122–129, 2004. [6] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. CVPR, 2005. [7] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient matching of pictorial structures. CVPR, 2000. [8] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from Google's image search.
Proc. ICCV, pages 1816–1823, 2005. [9] D. M. Gavrila and V. Philomin. Real-time object detection for smart vehicles. Proc. ICCV, 1999. [10] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proc. National Academy of Sciences, 2004. [11] R. Gross and J. Shi. The CMU motion of body (Mobo) database. Technical report, CMU, 2001. [12] G. Shakhnarovich, P. Viola, and T. Darrell. Fast pose estimation with parameter-sensitive hashing. ICCV, 2003. [13] D. Lee and H. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 1999. [14] D. G. Lowe. Object recognition from local scale-invariant features. Proc. ICCV, pages 1150–1157, 1999. [15] G. Mori, X. Ren, A. A. Efros, and J. Malik. Recovering human body configurations: Combining segmentation and recognition. Proc. CVPR, 2:326–333, 2004. [16] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatial-temporal words. Proc. BMVC, 2006. [17] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, pages 1–34, 2000. [18] P. Viola, M. Jones, and D. Snow. Detecting pedestrians using patterns of motion and appearance. ICCV, 2003. [19] R. Ronfard, C. Schmid, and B. Triggs. Learning to parse pictures of people. ECCV, 2002. [20] R. Rosales and S. Sclaroff. Inferring body pose without tracking body parts. Proc. CVPR, 2:506–511, 2000. [21] L. Sigal, M. Isard, B. H. Sigelman, and M. Black. Attractive people: Assembling loose-limbed models using non-parametric belief propagation. Proc. NIPS, pages 1539–1546, 2003. [22] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. Discovering object categories in image collections. Proc. ICCV, 2005. [23] M. Weber, M. Welling, and P. Perona. Toward automatic discovery of object categories. CVPR, 2000.
Fast Computation of Graph Kernels S.V. N. Vishwanathan svn.vishwanathan@nicta.com.au Statistical Machine Learning, National ICT Australia, Locked Bag 8001, Canberra ACT 2601, Australia Research School of Information Sciences & Engineering, Australian National University, Canberra ACT 0200, Australia Karsten M. Borgwardt borgwardt@dbs.ifi.lmu.de Institute for Computer Science, Ludwig-Maximilians-University Munich, Oettingenstr. 67, 80538 Munich, Germany Nicol N. Schraudolph nic.schraudolph@nicta.com.au Statistical Machine Learning, National ICT Australia, Locked Bag 8001, Canberra ACT 2601, Australia Research School of Information Sciences & Engineering, Australian National University, Canberra ACT 0200, Australia Abstract Using extensions of linear algebra concepts to Reproducing Kernel Hilbert Spaces (RKHS), we define a unifying framework for random walk kernels on graphs. Reduction to a Sylvester equation allows us to compute many of these kernels in O(n^3) worst-case time. This includes kernels whose previous worst-case time complexity was O(n^6), such as the geometric kernels of Gärtner et al. [1] and the marginal graph kernels of Kashima et al. [2]. Our algebra in RKHS allows us to exploit sparsity in directed and undirected graphs more effectively than previous methods, yielding sub-cubic computational complexity when combined with conjugate gradient solvers or fixed-point iterations. Experiments on graphs from bioinformatics and other application domains show that our algorithms are often more than 1000 times faster than existing approaches. 1 Introduction Machine learning in domains such as bioinformatics, drug discovery, and web data mining involves the study of relationships between objects. Graphs are natural data structures to model such relations, with nodes representing objects and edges the relationships between them. In this context, one often encounters the question: How similar are two graphs?
Simple ways of comparing graphs, based on pairwise comparison of nodes or edges, are possible in quadratic time, yet may neglect information represented by the structure of the graph. Graph kernels, as originally proposed by Gärtner et al. [1], Kashima et al. [2], and Borgwardt et al. [3], take the structure of the graph into account. They work by counting the number of common random walks between two graphs. Even though the number of common random walks could potentially be exponential, polynomial time algorithms exist for computing these kernels. Unfortunately for the practitioner, these kernels are still prohibitively expensive, since their computation scales as O(n^6), where n is the number of vertices in the input graphs. This severely limits their applicability to large-scale problems, as commonly found in areas such as bioinformatics. In this paper, we extend common concepts from linear algebra to Reproducing Kernel Hilbert Spaces (RKHS), and use these extensions to define a unifying framework for random walk kernels. We show that computing many random walk graph kernels, including those of Gärtner et al. [1] and Kashima et al. [2], can be reduced to the problem of solving a large linear system, which can then be solved efficiently by a variety of methods that exploit the structure of the problem. 2 Extending Linear Algebra to RKHS Let φ : X → H denote the feature map from an input space X to the RKHS H associated with the kernel κ(x, x′) = ⟨φ(x), φ(x′)⟩H. Given an n × m matrix X ∈ X^{n×m} of elements Xij ∈ X, we extend φ to matrix arguments by defining Φ : X^{n×m} → H^{n×m} via [Φ(X)]ij := φ(Xij). We can now borrow concepts from tensor calculus to extend certain linear algebra operations to H: Definition 1 Let A ∈ X^{n×m}, B ∈ X^{m×p}, and C ∈ R^{m×p}. The matrix products Φ(A)Φ(B) ∈ R^{n×p} and Φ(A) C ∈ H^{n×p} are defined as $[\Phi(A)\Phi(B)]_{ik} := \sum_j \langle \phi(A_{ij}), \phi(B_{jk}) \rangle_{\mathcal{H}}$ and $[\Phi(A)\, C]_{ik} := \sum_j \phi(A_{ij})\, C_{jk}$.
Given A ∈ R^{n×m} and B ∈ R^{p×q}, the Kronecker product A ⊗ B ∈ R^{np×mq} and the vec operator are defined as
$$A \otimes B := \begin{bmatrix} A_{11}B & A_{12}B & \ldots & A_{1m}B \\ \vdots & \vdots & & \vdots \\ A_{n1}B & A_{n2}B & \ldots & A_{nm}B \end{bmatrix}, \qquad \mathrm{vec}(A) := \begin{bmatrix} A_{*1} \\ \vdots \\ A_{*m} \end{bmatrix}, \quad (1)$$
where A∗j denotes the j-th column of A. They are linked by the well-known property: $\mathrm{vec}(ABC) = (C^\top \otimes A)\,\mathrm{vec}(B). \quad (2)$ Definition 2 Let A ∈ X^{n×m} and B ∈ X^{p×q}. The Kronecker product Φ(A) ⊗ Φ(B) ∈ R^{np×mq} is defined as $[\Phi(A) \otimes \Phi(B)]_{ip+k,\, jq+l} := \langle \phi(A_{ij}), \phi(B_{kl}) \rangle_{\mathcal{H}}. \quad (3)$ It is easily shown that the above extensions to RKHS obey an analogue of (2): Lemma 1 If A ∈ X^{n×m}, B ∈ R^{m×p}, and C ∈ X^{p×q}, then $\mathrm{vec}(\Phi(A)\, B\, \Phi(C)) = (\Phi(C)^\top \otimes \Phi(A))\,\mathrm{vec}(B). \quad (4)$ If p = q = n = m, direct computation of the right-hand side of (4) requires O(n^4) kernel evaluations. For an arbitrary kernel the left-hand side also requires a similar effort. But if the RKHS H is isomorphic to R^r, in other words if the feature map φ(·) ∈ R^r, the left-hand side of (4) can be computed in O(n^3 r) operations. Our efficient computation schemes described in Section 4 will exploit this observation. 3 Random Walk Kernels Random walk kernels on graphs are based on a simple idea: given a pair of graphs, perform a random walk on both of them and count the number of matching walks [1, 2, 3]. These kernels mainly differ in the way the similarity between random walks is computed. For instance, Gärtner et al. [1] count the number of nodes in the random walk which have the same label. They also include a decay factor to ensure convergence. Kashima et al. [2] and Borgwardt et al. [3], on the other hand, use a kernel defined on nodes and edges in order to compute similarity between random walks, and define an initial probability distribution over nodes in order to ensure convergence. In this section we present a unifying framework which includes the above mentioned kernels as special cases.
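Property (2), which underlies the efficient computation schemes, is easy to verify numerically; note that vec() stacks the *columns* of a matrix, which in NumPy is a Fortran-order ravel:

```python
import numpy as np

# Numerical check of property (2): vec(A B C) = (C^T kron A) vec(B).
rng = np.random.default_rng(0)
A = rng.random((3, 4))
B = rng.random((4, 5))
C = rng.random((5, 2))
vec = lambda M: M.ravel(order="F")   # column-stacking vec operator
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))
```

This is exactly the identity that lets the O(n^4)-entry Kronecker system be handled through the O(n^3) matrix product on the left-hand side.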
3.1 Notation

We use e_i to denote the i-th standard basis vector (i.e., a vector of all zeros with the i-th entry set to one), e to denote a vector with all entries set to one, 0 to denote the vector of all zeros, and I to denote the identity matrix. When it is clear from context we will not mention the dimensions of these vectors and matrices.

A graph G ∈ G consists of an ordered and finite set of n vertices V, denoted by {v_1, v_2, ..., v_n}, and a finite set of edges E ⊂ V × V. A vertex v_i is said to be a neighbor of another vertex v_j if they are connected by an edge. G is said to be undirected if (v_i, v_j) ∈ E ⟺ (v_j, v_i) ∈ E for all edges. The unnormalized adjacency matrix of G is an n × n real matrix P with P_ij = 1 if (v_i, v_j) ∈ E, and 0 otherwise. If G is weighted then P can contain non-negative entries other than zeros and ones, i.e., P_ij ∈ (0, ∞) if (v_i, v_j) ∈ E and zero otherwise. Let D be an n × n diagonal matrix with entries D_ii = Σ_j P_ij. The matrix A := P D^{-1} is then called the normalized adjacency matrix, or simply the adjacency matrix.

A walk w on G is a sequence of indices w_1, w_2, ..., w_{t+1} where (v_{w_i}, v_{w_{i+1}}) ∈ E for all 1 ≤ i ≤ t. The length of a walk is equal to the number of edges encountered during the walk (here: t). A graph is said to be connected if any pair of vertices can be joined by a walk; here we always work with connected graphs. A random walk is a walk where P(w_{i+1} | w_1, ..., w_i) = A_{w_i, w_{i+1}}, i.e., the probability at w_i of picking w_{i+1} next is directly proportional to the weight of the edge (v_{w_i}, v_{w_{i+1}}). The t-th power of the transition matrix A describes the probability of t-length walks; in other words, [A^t]_ij denotes the probability of a transition from vertex v_i to vertex v_j via a walk of length t. We use this intuition to define random walk kernels on graphs.

Let X be a set of labels which includes the special label ε.
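As a concrete illustration of the notation, the following NumPy sketch (helper name is ours, not from the paper) builds the unnormalized adjacency matrix P of a small undirected graph and normalizes it to A = P D^{-1}. Since D holds the row sums and P is symmetric for an undirected graph, each column of A then sums to one:

```python
import numpy as np

def normalized_adjacency(edges, n):
    """Build A = P D^{-1} from an undirected edge list on vertices 0..n-1."""
    P = np.zeros((n, n))
    for i, j in edges:
        P[i, j] = P[j, i] = 1.0          # unnormalized adjacency
    D = np.diag(P.sum(axis=1))           # D_ii = sum_j P_ij
    return P @ np.linalg.inv(D)

A = normalized_adjacency([(0, 1), (1, 2), (2, 0), (2, 3)], n=4)
assert np.allclose(A.sum(axis=0), 1.0)   # columns of A sum to one
```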
Every edge-labeled graph G is associated with a label matrix L ∈ X^{n×n} such that L_ij = ε iff (v_i, v_j) ∉ E; in other words, only those edges which are present in the graph get a non-ε label. Let H be the RKHS endowed with the kernel κ : X × X → R, and let φ : X → H denote the corresponding feature map, which maps ε to the zero element of H. We use Φ(L) to denote the feature matrix of G. For ease of exposition we do not consider labels on vertices here, though our results hold for that case as well. Henceforth we use the term labeled graph to denote an edge-labeled graph.

3.2 Product Graphs

Given two graphs G(V, E) and G′(V′, E′), the product graph G×(V×, E×) is a graph with nn′ vertices, each representing a pair of vertices from G and G′, respectively. An edge exists in E× iff the corresponding vertices are adjacent in both G and G′. Thus

$$V_\times = \{(v_i, v'_{i'}) : v_i \in V \wedge v'_{i'} \in V'\}, \tag{5}$$
$$E_\times = \{((v_i, v'_{i'}), (v_j, v'_{j'})) : (v_i, v_j) \in E \wedge (v'_{i'}, v'_{j'}) \in E'\}. \tag{6}$$

If A and A′ are the adjacency matrices of G and G′, respectively, the adjacency matrix of the product graph G× is A× = A ⊗ A′. An edge exists in the product graph iff an edge exists in both G and G′; therefore, performing a simultaneous random walk on G and G′ is equivalent to performing a random walk on the product graph [4]. Let p and p′ denote initial probability distributions over the vertices of G and G′. Then the initial probability distribution p× of the product graph is p× := p ⊗ p′. Likewise, if q and q′ denote stopping probabilities (i.e., the probability that a random walk ends at a given vertex), the stopping probability q× of the product graph is q× := q ⊗ q′. If G and G′ are edge-labeled, we can associate a weight matrix W× ∈ R^{nn′×nn′} with G×, using our Kronecker product in RKHS (Definition 2): W× = Φ(L) ⊗ Φ(L′). As a consequence of the definition of Φ(L) and Φ(L′), the entries of W× are non-zero only if the corresponding edge exists in the product graph.
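In matrix form the product graph construction is just a Kronecker product. The sketch below (our own example graphs) forms A× = A ⊗ A′ and p× = p ⊗ p′ for two small graphs and checks the basic properties:

```python
import numpy as np

A  = np.array([[0, 1.0],
               [1.0, 0]])                     # 2-vertex graph G
Ap = np.array([[0, .5, .5],
               [.5, 0, .5],
               [.5, .5, 0]])                  # 3-vertex graph G'

A_prod = np.kron(A, Ap)                       # adjacency of the product graph
p, pp = np.full(2, 1 / 2), np.full(3, 1 / 3)  # uniform initial distributions
p_prod = np.kron(p, pp)                       # p_x = p ⊗ p'

assert A_prod.shape == (6, 6)                 # nn' = 2*3 vertices
assert abs(p_prod.sum() - 1.0) < 1e-12        # p_x is again a distribution
```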
The weight matrix is closely related to the adjacency matrix: assume that H = R endowed with the usual dot product, and φ(L_ij) = 1 if (v_i, v_j) ∈ E and zero otherwise. Then Φ(L) = A and Φ(L′) = A′, and consequently W× = A×, i.e., the weight matrix is identical to the adjacency matrix of the product graph.

To extend the above discussion, assume that H = R^d endowed with the usual dot product, and that there are d distinct edge labels {1, 2, ..., d}. For each edge (v_i, v_j) ∈ E we have φ(L_ij) = e_l if the edge (v_i, v_j) is labeled l. All other entries of Φ(L) are set to 0. κ is therefore a delta kernel, i.e., its value between any two edges is one iff the labels on the edges match, and zero otherwise. The weight matrix W× has a non-zero entry iff an edge exists in the product graph and the corresponding edges in G and G′ have the same label. Let ˡA denote the adjacency matrix of the graph filtered by the label l, i.e., ˡA_ij = A_ij if L_ij = l and zero otherwise. Some simple algebra (omitted for the sake of brevity) shows that the weight matrix of the product graph can then be written as

$$W_\times = \sum_{l=1}^{d} {}^{l}\!A \otimes {}^{l}\!A'. \tag{7}$$

3.3 Kernel Definition

Performing a random walk on the product graph G× is equivalent to performing a simultaneous random walk on the graphs G and G′ [4]. Therefore, the (in+j, i′n′+j′)-th entry of A×^k represents the probability of simultaneous k-length random walks on G (starting from vertex v_i and ending in vertex v_j) and G′ (starting from vertex v′_{i′} and ending in vertex v′_{j′}). The entries of W× represent similarity between edges: the (in+j, i′n′+j′)-th entry of W×^k represents the similarity between simultaneous k-length random walks on G and G′, measured via the kernel function κ. Given the weight matrix W×, initial and stopping probability distributions p× and q×, and an appropriately chosen discrete measure µ, we can define a random walk kernel on G and G′ as

$$k(G, G') := \sum_{k=0}^{\infty} \mu(k)\, q_\times^\top W_\times^k\, p_\times. \tag{8}$$
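For the delta kernel on a small number of discrete edge labels, equation (7) can be checked directly: filtering the adjacency matrix by label and summing the label-wise Kronecker products reproduces the entrywise definition of W×. A NumPy sketch with made-up label matrices:

```python
import numpy as np

# Edge-label matrices for two 2-vertex graphs; 0 plays the role of the
# no-edge label epsilon.  Labels here are invented for illustration.
L  = np.array([[0, 1], [2, 0]])
Lp = np.array([[0, 2], [1, 0]])
A  = (L  > 0).astype(float)          # adjacency matrices (unit edge weights)
Ap = (Lp > 0).astype(float)
d  = 2                               # number of distinct edge labels

# Equation (7): sum over labels of label-filtered Kronecker products.
W = sum(np.kron(np.where(L == l, A, 0.0), np.where(Lp == l, Ap, 0.0))
        for l in range(1, d + 1))

# Entrywise definition (Definition 2 with a delta kernel on labels).
n1, n2 = 2, 2
W_ref = np.zeros((n1 * n2, n1 * n2))
for i in range(n1):
    for j in range(n1):
        for k in range(n2):
            for l in range(n2):
                if L[i, j] > 0 and L[i, j] == Lp[k, l]:
                    W_ref[i * n2 + k, j * n2 + l] = 1.0

assert np.allclose(W, W_ref)
```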
In order to show that (8) is a valid Mercer kernel we need the following technical lemma.

Lemma 2 For all k ∈ N₀: W×^k p× = vec[Φ(L′)^k p′ (Φ(L)^k p)⊤].

Proof By induction over k. Base case: k = 0. Since Φ(L′)⁰ = Φ(L)⁰ = I, using (2) we can write

$$W_\times^0 p_\times = p_\times = (p \otimes p')\,\mathrm{vec}(1) = \mathrm{vec}(p'\, 1\, p^\top) = \mathrm{vec}[\Phi(L')^0 p'\,(\Phi(L)^0 p)^\top].$$

Induction from k to k + 1: using Lemma 1 we obtain

$$W_\times^{k+1} p_\times = W_\times W_\times^k p_\times = (\Phi(L) \otimes \Phi(L'))\,\mathrm{vec}[\Phi(L')^k p'\,(\Phi(L)^k p)^\top] = \mathrm{vec}[\Phi(L')\Phi(L')^k p'\,(\Phi(L)^k p)^\top \Phi(L)^\top] = \mathrm{vec}[\Phi(L')^{k+1} p'\,(\Phi(L)^{k+1} p)^\top].$$

Lemma 3 If the measure µ(k) is such that (8) converges, then it defines a valid Mercer kernel.

Proof Using Lemmas 1 and 2 we can write

$$q_\times^\top W_\times^k p_\times = (q \otimes q')^\top \mathrm{vec}[\Phi(L')^k p'\,(\Phi(L)^k p)^\top] = \mathrm{vec}[q'^\top \Phi(L')^k p'\,(\Phi(L)^k p)^\top q] = \underbrace{(q^\top \Phi(L)^k p)^\top}_{\psi_k(G)^\top}\; \underbrace{(q'^\top \Phi(L')^k p')}_{\psi_k(G')}.$$

Each individual term of (8) equals ψ_k(G)⊤ ψ_k(G′) for some function ψ_k, and is therefore a valid kernel. The lemma follows since a convex combination of kernels is itself a valid kernel.

3.4 Special Cases

A popular choice to ensure convergence of (8) is to assume µ(k) = λ^k for some λ > 0. If λ is sufficiently small¹ then (8) is well defined, and we can write

$$k(G, G') = \sum_k \lambda^k q_\times^\top W_\times^k p_\times = q_\times^\top (I - \lambda W_\times)^{-1} p_\times. \tag{9}$$

Kashima et al. [2] use marginalization and probabilities of random walks to define kernels on graphs. Given transition probability matrices P and P′ associated with graphs G and G′ respectively, their kernel can be written as (see Eq. 1.19, [2])

$$k(G, G') = q_\times^\top (I - T_\times)^{-1} p_\times, \tag{10}$$

where T× := (vec(P) vec(P′)⊤) ⊙ (Φ(L) ⊗ Φ(L′)), using ⊙ to denote element-wise (Hadamard) multiplication. Setting the edge kernel to κ̂(L_ij, L′_{i′j′}) := P_ij P′_{i′j′} κ(L_ij, L′_{i′j′}) and λ = 1 in (9) recovers this kernel.

¹The values of λ which ensure convergence depend on the spectrum of W×.

Gärtner et al. [1] use the adjacency matrix of the product graph to define the so-called geometric kernel

$$k(G, G') = \sum_{i=1}^{n} \sum_{j=1}^{n'} \sum_{k=0}^{\infty} \lambda^k [A_\times^k]_{ij}. \tag{11}$$
To recover their kernel in our framework, assume a uniform distribution over the vertices of G and G′, i.e., set p = q = e/n and p′ = q′ = e/n′. The initial as well as final probability distribution over vertices of G× is then given by p× = q× = e/(nn′). Setting Φ(L) := A, and hence Φ(L′) = A′ and W× = A×, we can rewrite (8) to obtain

$$k(G, G') = \sum_{k=0}^{\infty} \lambda^k q_\times^\top A_\times^k p_\times = \frac{1}{n^2 n'^2} \sum_{i=1}^{n} \sum_{j=1}^{n'} \sum_{k=0}^{\infty} \lambda^k [A_\times^k]_{ij},$$

which recovers (11) up to a constant factor.

4 Efficient Computation

In this section we show that iterative methods, including those based on Sylvester equations, conjugate gradients, and fixed-point iterations, can be used to greatly speed up the computation of (9).

4.1 Sylvester Equation Methods

Consider the following equation, commonly known as the Sylvester or Lyapunov equation:

$$X = SXT + X_0. \tag{12}$$

Here, S, T, X₀ ∈ R^{n×n} are given and we need to solve for X ∈ R^{n×n}. These equations can be readily solved in O(n³) time with freely available code [5], e.g., Matlab's dlyap method. The generalized Sylvester equation

$$X = \sum_{i=1}^{d} S_i X T_i + X_0 \tag{13}$$

can also be solved efficiently, albeit at a slightly higher computational cost of O(dn³).

We now show that if the weight matrix W× can be written as (7), then the problem of computing the graph kernel (9) can be reduced to the problem of solving the following Sylvester equation:

$$X = \lambda \sum_i {}^{i}\!A'\, X\, {}^{i}\!A^\top + X_0, \tag{14}$$

where vec(X₀) = p×. We begin by flattening the above equation:

$$\mathrm{vec}(X) = \lambda \sum_i \mathrm{vec}({}^{i}\!A'\, X\, {}^{i}\!A^\top) + p_\times. \tag{15}$$

Using Lemma 1 we can rewrite (15) as

$$\Big(I - \lambda \sum_i {}^{i}\!A \otimes {}^{i}\!A'\Big)\,\mathrm{vec}(X) = p_\times, \tag{16}$$

use (7), and solve for vec(X):

$$\mathrm{vec}(X) = (I - \lambda W_\times)^{-1} p_\times. \tag{17}$$

Multiplying both sides by q×⊤ yields

$$q_\times^\top \mathrm{vec}(X) = q_\times^\top (I - \lambda W_\times)^{-1} p_\times. \tag{18}$$

The right-hand side of (18) is the graph kernel (9). Given the solution X of the Sylvester equation (14), the graph kernel can thus be obtained as q×⊤ vec(X) in O(n²) time.
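The whole kernel computation therefore collapses to one linear solve. The NumPy sketch below (our stand-in for the Matlab pipeline the paper describes; unlabeled case, so W× = A×) computes the geometric kernel both via the linear system of (9)/(17) and via the truncated power series, and checks that they agree:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 3, 4
A  = rng.random((n1, n1)); A  /= A.sum(axis=0, keepdims=True)  # column-normalized
Ap = rng.random((n2, n2)); Ap /= Ap.sum(axis=0, keepdims=True)
lam = 0.01

W = np.kron(A, Ap)                    # unlabeled case: W_x = A_x
N = n1 * n2
p = q = np.full(N, 1.0 / N)           # uniform start/stop distributions

# Equation (9)/(17): the kernel via a single linear solve.
k_solve = q @ np.linalg.solve(np.eye(N) - lam * W, p)

# Truncated power series sum_k lam^k q^T W_x^k p as a sanity check.
k_series, v = 0.0, p.copy()
for _ in range(60):
    k_series += q @ v
    v = lam * (W @ v)

assert np.isclose(k_solve, k_series)
```

With λ small, the series converges geometrically, so 60 terms are far more than needed; the linear solve gives the same value exactly.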
Since solving the generalized Sylvester equation takes O(dn³) time, computing the graph kernel in this fashion is significantly faster than the O(n⁶) time required by the direct approach. When the number of labels d is large, the computational cost may be reduced further by computing matrices S and T such that W× ≈ S ⊗ T; we then simply solve the simple Sylvester equation (12) involving these matrices. Finding the nearest Kronecker product approximating a matrix such as W× is a well-studied problem in numerical linear algebra, and efficient algorithms which exploit the sparsity of W× are readily available [6].

4.2 Conjugate Gradient Methods

Given a matrix M and a vector b, conjugate gradient (CG) methods solve the system of equations Mx = b efficiently [7]. While they are designed for symmetric positive semi-definite matrices, CG solvers can also be used to solve other linear systems efficiently. They are particularly efficient if the matrix is rank deficient, or has a small effective rank, i.e., a small number of distinct eigenvalues. Furthermore, if computing matrix-vector products is cheap — because M is sparse, for instance — the CG solver can be sped up significantly [7]. Specifically, if computing Mv for an arbitrary vector v requires O(k) time, and the effective rank of the matrix is m, then a CG solver requires only O(mk) time to solve Mx = b.

The graph kernel (9) can be computed by a two-step procedure: first we solve the linear system

$$(I - \lambda W_\times)\, x = p_\times \tag{19}$$

for x, then we compute q×⊤ x. We now focus on efficient ways to solve (19) with a CG solver. Recall that if G and G′ contain n vertices each, then W× is an n² × n² matrix. Directly computing the matrix-vector product W× r requires O(n⁴) time. Key to our speed-ups is the ability to exploit Lemma 1 to compute this matrix-vector product more efficiently: recall that W× = Φ(L) ⊗ Φ(L′). Letting r = vec(R), we can use Lemma 1 to write

$$W_\times r = (\Phi(L) \otimes \Phi(L'))\,\mathrm{vec}(R) = \mathrm{vec}(\Phi(L')\, R\, \Phi(L)^\top). \tag{20}$$
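The crucial trick is never forming W× explicitly: by Lemma 1, the matvec costs two n × n matrix products instead of one n² × n² product. A NumPy sketch of this implicit matrix-vector product (function name is ours):

```python
import numpy as np

def kron_matvec(L, Lp, r):
    """Compute (L ⊗ Lp) r without forming the Kronecker product.

    By Lemma 1, (L ⊗ Lp) vec(R) = vec(Lp R L^T), where vec stacks
    columns (NumPy's Fortran-order ravel).
    """
    n, npr = L.shape[0], Lp.shape[0]
    R = r.reshape((npr, n), order="F")        # un-vec the input vector
    return (Lp @ R @ L.T).ravel(order="F")    # re-vec the small product

rng = np.random.default_rng(2)
L, Lp = rng.random((4, 4)), rng.random((3, 3))
r = rng.random(4 * 3)
assert np.allclose(kron_matvec(L, Lp, r), np.kron(L, Lp) @ r)
```

Plugged into any iterative solver that only needs matrix-vector products, this turns the O(n⁴) matvec into an O(n³) one (or O(n²) for sparse feature matrices, as noted next).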
If φ(·) ∈ R^r for some r, then the above matrix-vector product can be computed in O(n³r) time. If Φ(L) and Φ(L′) are sparse, however, then Φ(L′) R Φ(L)⊤ can be computed yet more efficiently: if there are O(n) non-ε entries in Φ(L) and Φ(L′), then computing (20) requires only O(n²) time.

4.3 Fixed-Point Iterations

Fixed-point methods begin by rewriting (19) as

$$x = p_\times + \lambda W_\times x. \tag{21}$$

Now, solving for x is equivalent to finding a fixed point of the above iteration [7]. Letting x_t denote the value of x at iteration t, we set x₀ := p×, then compute

$$x_{t+1} = p_\times + \lambda W_\times x_t \tag{22}$$

repeatedly until ‖x_{t+1} − x_t‖ < ε, where ‖·‖ denotes the Euclidean norm and ε some pre-defined tolerance. This is guaranteed to converge if all eigenvalues of λW× lie inside the unit disk; this can be ensured by setting λ < 1/ξ_max, where ξ_max is the largest-magnitude eigenvalue of W×. The above is closely related to the power method used to compute the largest eigenvalue of a matrix [8]; efficient preconditioners can also be used to speed up convergence [8]. Since each iteration of (22) involves computation of the matrix-vector product W× x_t, all speed-ups for computing the matrix-vector product discussed in Section 4.2 are applicable here. In particular, we exploit the fact that W× is a sum of Kronecker products to reduce the worst-case time complexity to O(n³) in our experiments, in contrast to Kashima et al. [2], who computed the matrix-vector product explicitly.

5 Experiments

To assess the practical impact of our algorithmic improvements, we compared our techniques from Section 4 with Gärtner et al.'s [1] direct approach as a baseline. All code was written in Matlab Release 14, and experiments were run on a 2.6 GHz Intel Pentium 4 PC with 2 GB of main memory running SuSE Linux. The Matlab function dlyap was used to solve the Sylvester equation. By default, we used a value of λ = 0.001, and set the tolerance for both the CG solver and the fixed-point iteration to 10⁻⁶ for all our experiments.
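Putting the pieces together, the fixed-point scheme (22) combined with the implicit matvec of Lemma 1 gives a complete kernel routine in a few lines. The NumPy sketch below (unlabeled graphs, so Φ(L) = A; names and defaults are ours) iterates until the update falls below a tolerance and checks the result against a dense solve:

```python
import numpy as np

def rw_kernel_fixed_point(A, Ap, lam=1e-3, tol=1e-10, max_iter=1000):
    """Geometric random walk kernel via the fixed-point iteration (22)."""
    n, npr = A.shape[0], Ap.shape[0]
    p = q = np.full(n * npr, 1.0 / (n * npr))    # uniform distributions
    x = p.copy()
    for _ in range(max_iter):
        R = x.reshape((npr, n), order="F")       # Lemma 1: implicit W_x matvec
        x_new = p + lam * (Ap @ R @ A.T).ravel(order="F")
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return q @ x

rng = np.random.default_rng(3)
A  = rng.random((5, 5)); A  /= A.sum(axis=0, keepdims=True)
Ap = rng.random((4, 4)); Ap /= Ap.sum(axis=0, keepdims=True)

lam, N = 1e-3, 20
W = np.kron(A, Ap)
p = q = np.full(N, 1.0 / N)
k_direct = q @ np.linalg.solve(np.eye(N) - lam * W, p)
assert np.isclose(rw_kernel_fixed_point(A, Ap, lam), k_direct)
```

With λ this small, the iteration contracts very quickly, which matches the fixed-point method's strong showing in the paper's timing tables.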
We used Lemma 1 to speed up matrix-vector multiplication for both the CG and fixed-point methods (cf. Section 4.2). Since all our methods are exact and produce the same kernel values (to numerical precision), we only report their runtimes below.

We tested the practical feasibility of the presented techniques on four real-world datasets whose size mandates fast graph kernel computation: two datasets of molecular compounds (MUTAG and PTC), and two datasets with hundreds of graphs describing protein tertiary structure (Protein and Enzyme). Graph kernels provide useful measures of similarity for all these graphs; please refer to the addendum for more details on these datasets and for applications of graph kernels to them.

Figure 1: Time (in seconds, on a log scale) to compute the 100×100 kernel matrix for unlabeled (left) resp. labeled (right) graphs from several datasets. Compare the conventional direct method (black) to our fast Sylvester equation, conjugate gradient (CG), and fixed-point iteration (FP) approaches.

5.1 Unlabeled Graphs

In a first series of experiments, we compared graph topology only on our four datasets, i.e., without considering node and edge labels. We report the time taken to compute the full graph kernel matrix for various sizes (number of graphs) in Table 1, and show the results for computing a 100 × 100 sub-matrix in Figure 1 (left). On unlabeled graphs, conjugate gradient and fixed-point iteration — sped up via our Lemma 1 — are consistently about two orders of magnitude faster than the conventional direct method. The Sylvester approach is very competitive on smaller graphs (outperforming CG on MUTAG), but slows down with increasing number of nodes per graph; this is because we were unable to incorporate Lemma 1 into Matlab's black-box dlyap solver. Even so, the Sylvester approach still greatly outperforms the direct method.

5.2 Labeled Graphs

In a second series of experiments, we compared graphs with node and edge labels.
On our two protein datasets we employed a linear kernel to measure similarity between edge labels representing distances (in ångströms) between secondary structure elements. On our two chemical datasets we used a delta kernel to compare edge labels reflecting the types of bonds in molecules. We report results in Table 2 and Figure 1 (right). On labeled graphs, our three methods outperform the direct approach by about a factor of 1000 when using the linear kernel. In the experiments with the delta kernel, conjugate gradient and fixed-point iteration are still at least two orders of magnitude faster. Since we did not have access to a solver for the generalized Sylvester equation (13), we had to use a Kronecker product approximation [6], which dramatically slowed down the Sylvester equation approach.

Table 1: Time to compute kernel matrix for given number of unlabeled graphs from various datasets.

dataset       MUTAG            PTC              Enzyme           Protein
nodes/graph   17.7             26.7             32.6             38.6
edges/node    2.2              1.9              3.8              3.7
#graphs       100     230      100     417      100     600      100     1128
Direct        18'09"  104'31"  142'53" 41h*     31h*    46.5d*   36d*    12.5y*
Sylvester     25.9"   2'16"    73.8"   19'30"   48.3"   36'43"   69'15"  6.1d*
Conjugate     42.1"   4'04"    58.4"   19'27"   44.6"   34'58"   55.3"   97'13"
Fixed-Point   12.3"   1'09"    32.4"   5'59"    13.6"   15'23"   31.1"   40'58"
*: Extrapolated; run did not finish in time available.

Table 2: Time to compute kernel matrix for given number of labeled graphs from various datasets.

kernel        delta                             linear
dataset       MUTAG            PTC              Enzyme           Protein
#graphs       100     230      100     417      100     600      100     1128
Direct        7.2h    1.6d*    1.4d*   25d*     2.4d*   86d*     5.3d*   18y*
Sylvester     3.9d*   21d*     2.7d*   46d*     89.8"   53'55"   25'24"  2.3d*
Conjugate     2'35"   13'46"   3'20"   53'31"   124.4"  71'28"   3'01"   4.1h
Fixed-Point   1'05"   6'09"    1'31"   26'52"   50.1"   35'24"   1'47"   1.9h
*: Extrapolated; run did not finish in time available.

6 Outlook and Discussion

We have shown that computing random walk graph kernels is essentially equivalent to solving a large linear system.
We have extended a well-known identity for Kronecker products, which allows us to exploit the structure inherent in this problem. From this we have derived three efficient techniques to solve the linear system, employing either Sylvester equations, conjugate gradients, or fixed-point iterations. Experiments on real-world datasets have shown our methods to be scalable and fast, in some instances outperforming the conventional approach by more than three orders of magnitude.

Even though the Sylvester equation method has a worst-case complexity of O(n³), the conjugate gradient and fixed-point methods tend to be faster on all our datasets. This is because computing matrix-vector products via Lemma 1 is quite efficient when the graphs are sparse, so that the feature matrices Φ(L) and Φ(L′) contain only O(n) non-ε entries. Matlab's black-box dlyap solver is unable to exploit this sparsity; we are working on more capable alternatives. An efficient generalized Sylvester solver requires extensive use of tensor calculus and is part of ongoing work.

As more and more graph-structured data becomes available in areas such as biology and web data mining, graph classification will gain importance over the coming years. Hence there is a pressing need to speed up the computation of similarity metrics on graphs. We have shown that sparsity, low effective rank, and Kronecker product structure can be exploited to greatly reduce the computational cost of graph kernels; taking advantage of other forms of structure in W× remains a challenge. Now that the computation of random walk graph kernels is viable for practical problem sizes, the door is open to their application in hitherto unexplored domains. The algorithmic challenge now is to integrate higher-order structures, such as spanning trees, into graph comparisons.
Acknowledgments

National ICT Australia is funded by the Australian Government's Department of Communications, Information Technology and the Arts and by the Australian Research Council through Backing Australia's Ability and the ICT Center of Excellence program. This work is supported by the IST Program of the European Community, under the Pascal Network of Excellence, IST-2002-506778, and by the German Ministry for Education, Science, Research and Technology (BMBF) under grant no. 031U112F within the BFAM (Bioinformatics for the Functional Analysis of Mammalian Genomes) project, part of the German Genome Analysis Network (NGFN).

References

[1] T. Gärtner, P. Flach, and S. Wrobel. On graph kernels: Hardness results and efficient alternatives. In B. Schölkopf and M. K. Warmuth, editors, Proc. Annual Conf. Computational Learning Theory. Springer, 2003.
[2] H. Kashima, K. Tsuda, and A. Inokuchi. Kernels on graphs. In K. Tsuda, B. Schölkopf, and J. Vert, editors, Kernels and Bioinformatics, Cambridge, MA, 2004. MIT Press.
[3] K. M. Borgwardt, C. S. Ong, S. Schönauer, S. V. N. Vishwanathan, A. J. Smola, and H. P. Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21(Suppl 1):i47–i56, 2005.
[4] F. Harary. Graph Theory. Addison-Wesley, Reading, MA, 1969.
[5] J. D. Gardiner, A. L. Laub, J. J. Amato, and C. B. Moler. Solution of the Sylvester matrix equation AXB⊤ + CXD⊤ = E. ACM Transactions on Mathematical Software, 18(2):223–231, 1992.
[6] C. F. Van Loan. The ubiquitous Kronecker product. Journal of Computational and Applied Mathematics, 123:85–100, 2000.
[7] J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research, 1999.
[8] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, MD, 3rd edition, 1996.
Multi-dynamic Bayesian Networks

Karim Filali and Jeff A. Bilmes
Departments of Computer Science & Engineering and Electrical Engineering
University of Washington, Seattle, WA 98195
{karim@cs,bilmes@ee}.washington.edu

Abstract

We present a generalization of dynamic Bayesian networks to concisely describe complex probability distributions, such as in problems with multiple interacting variable-length streams of random variables. Our framework incorporates recent graphical model constructs to account for existence uncertainty, value-specific independence, aggregation relationships, and local and global constraints, while still retaining a Bayesian network interpretation and efficient inference and learning techniques. We introduce one such general technique, which is an extension of Value Elimination, a backtracking search inference algorithm. Multi-dynamic Bayesian networks are motivated by our work on Statistical Machine Translation (MT). We present results on MT word alignment in support of our claim that MDBNs are a promising framework for the rapid prototyping of new MT systems.

1 INTRODUCTION

The description of factorization properties of families of probabilities using graphs (i.e., graphical models, or GMs) has proven very useful in modeling a wide variety of statistical and machine learning domains such as expert systems, medical diagnosis, decision making, speech recognition, and natural language processing. There are many different types of graphical model, each with its own properties and benefits, including Bayesian networks, undirected Markov random fields, and factor graphs. Moreover, for different types of scientific modeling, different types of graphs are more or less appropriate. For example, static Bayesian networks are quite useful when the size of the set of random variables in the domain does not grow or shrink for all data instances and queries of interest.
Hidden Markov models (HMMs), on the other hand, are such that the number of underlying random variables changes depending on the desired length (which can be a random variable), and HMMs are applicable even without knowing this length as they can be extended indefinitely using online inference. HMMs have been generalized to dynamic Bayesian networks (DBNs) and temporal conditional random fields (CRFs), where an underlying set of variables gets repeated as needed to fill any finite but unbounded length. Probabilistic relational models (PRMs) [5] allow for a more complex template that can be expanded in multiple dimensions simultaneously. An attribute common to all of the above cases is that the specification of rules for expanding any particular instance of a model is finite. In other words, these forms of GM allow the specification of models with an unlimited number of random variables (RVs) using a finite description. This is achieved using parameter tying, so while the number of RVs increases without bound, the number of parameters does not. In this paper, we introduce a new class of model we call multi-dynamic Bayesian networks. MDBNs are motivated by our research into the application of graphical models to the domain of statistical machine translation (MT) and they have two key attributes from the graphical modeling perspective. First, an MDBN generalizes a DBN in that there are multiple “streams” of variables that can get unrolled, but where each stream may be unrolled by a differing amount. In the most general case, connecting these different streams together would require the specification of conditional probability tables with a varying and potentially unlimited number of parents. To avoid this problem and retain the template’s finite description length, we utilize a switching parent functionality (also called value-specific independence). 
Second, in order to capture the notion of fertility in MT systems (defined later in the text), we employ a form of existence uncertainty [7] (that we call switching existence), whereby the existence of a given random variable might depend on the value of other random variables in the network. Being fully propositional, MDBNs lie between DBNs and PRMs in terms of expressiveness. While PRMs are capable of describing any MDBN, there are, in general, advantages to restricting ourselves to a more specific class of model. For example, in the DBN case, it is possible to provide a bound on inference costs just by looking at attributes of the DBN template only (e.g., the left or right interfaces [12, 2]). Restricting the model can also make it simpler to use in practice. MDBNs are still relatively simple, while at the same time making possible the easy expression of MT systems, and opening doors to novel forms of probabilistic inference, as we show below. In Section 2, we introduce MDBNs and describe their application to machine translation, showing how it is possible to represent even complex MT systems. In Section 3, we describe MDBN learning and decoding algorithms. In Section 4, we present experimental results in the area of statistical machine translation, and future work is discussed in Section 5.

2 MDBNs

A standard DBN [4] template consists of a directed acyclic graph G = (V, E) = (V1 ∪ V2, E1 ∪ E2 ∪ E2→) with node set V and edge set E. For t ∈ {1, 2}, the sets Vt are the nodes at slice t, Et are the intra-slice edges between nodes in Vt, and Et→ are the inter-slice edges between nodes in V1 and V2. To unroll a DBN to length T, the nodes V2, along with the edges adjacent to any node in V2, are cloned T − 1 times (where parameters of cloned variables are constrained to be the same as in the template) and re-connected at the corresponding places.
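The unrolling rule above is mechanical and easy to state in code. The sketch below (our own minimal data structures, not from the paper) represents a 2-slice template and clones the slice-2 nodes and their incoming edges T − 1 times:

```python
def unroll_dbn(v1, v2, e1, e2, e2_inter, T):
    """Unroll a 2-slice DBN template to length T.

    v1, v2: node names in slices 1 and 2; e1, e2: intra-slice edges;
    e2_inter: inter-slice edges as (previous-slice node, next-slice node).
    Returns nodes and edges of the unrolled network, nodes tagged (name, t).
    """
    nodes = [(v, 0) for v in v1]
    edges = [((a, 0), (b, 0)) for a, b in e1]
    for t in range(1, T):
        nodes += [(v, t) for v in v2]                         # clone slice 2
        edges += [((a, t), (b, t)) for a, b in e2]            # intra-slice copies
        edges += [((a, t - 1), (b, t)) for a, b in e2_inter]  # inter-slice copies
    return nodes, edges

# An HMM viewed as a DBN: hidden state S emits O; S at t-1 feeds S at t.
nodes, edges = unroll_dbn(["S", "O"], ["S", "O"],
                          [("S", "O")], [("S", "O")], [("S", "S")], T=3)
assert len(nodes) == 6
assert (("S", 1), ("S", 2)) in edges
```

Parameter tying (the constraint that cloned variables share the template's parameters) is what keeps the parameter count fixed even as T grows.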
An MDBN with K streams consists of the union of K DBN templates along with a template structure specifying rules to connect the various streams together. An MDBN template is a directed graph

$$G = (V, E) = \Big( \bigcup_k V^{(k)},\ \bigcup_k E^{(k)} \cup E^{(k)}_{\updownarrow} \Big),$$

where (V^{(k)}, E^{(k)}) is the k-th DBN, and the edges E^{(k)}_{\updownarrow} are rules specifying how to connect stream k to the other streams. These rules are general in that they specify the set of edges for all values of T_k. There can be arbitrary nesting of the streams, such that, for example, it is possible to specify a model that can grow along several dimensions simultaneously.

An MDBN also utilizes "switching existence", meaning some subset of the variables in V bestow existence onto other variables in the network. We call these variables existence-bestowing (or eb-) nodes. The idea of bestowing existence is well defined over a discrete space, and is not dissimilar to a variable-length DBN. For example, we may have a joint distribution over lengths as follows:

$$p(X_1, \ldots, X_n, N = n) = p(X_1, \ldots, X_n \mid N = n)\, p(N = n),$$

where here N is an eb-node that determines the number of other random variables in the DGM. Our notion of eb-nodes allows us to model certain characteristics found within machine translation systems, such as "fertility" [3], where a given English word is cloned a random number of times in the generative process that explains a translation from French into English. This random cloning might happen simultaneously at all points along a given MDBN stream. This means that even for a given fixed stream length T_i = t_i, each stream could have a randomly varying number of random variables. Our graphical notation for eb-nodes consists of the eb-node as a square box containing the variables whose existence is determined by the eb-node.
We start by providing a simple example of an expanded MDBN for three well-known MT systems, namely the IBM Models 1 and 2 [3], and the "HMM" model [15].¹ We adopt the convention in [3] that our goal is to translate from a string of French words F = f of length M = m into a string of English words E = e of length L = l — of course these can be any two languages. The basic generative (noisy channel) approach when translating from French to English is to represent the joint distribution P(f, e) = P(f|e)P(e). P(e) is a language model specifying the prior over the word string e. The key goal is to produce a finite-description-length representation for P(f|e) where f and e are of arbitrary length. A hidden alignment string, a, specifies how the English words align to the French words, leading to P(f|e) = Σ_a P(f, a|e). Figure 1(a) is a 2-stream MDBN expanded representation of the three models, in this case with ℓ = 4 and m = 3. As shown, it appears that the fan-in to node f_i will be ℓ and thus will grow without bound. However, a switching mechanism whereby P(f_i|e, a_i) = P(f_i|e_{a_i}) limits the number of parameters regardless of L. This means that the alignment variable a_i indicates the English word e_{a_i} that should be aligned to French word f_i. The variable e_0 is a null word that connects to French words not explained by any of e_1, ..., e_ℓ. The graph expresses all three models — the difference is that, in Models 1 and 2, there are no edges between a_j and a_{j+1}. In Model 1, p(a_j = ℓ) is uniform on the set {1, ..., L}; in Model 2, the distribution over a_j is a function only of its position j and of the English and French lengths ℓ and m, respectively. In the M-HMM model, the a_i variables form a first-order Markov chain.

¹We will refer to it as M-HMM to avoid confusion with regular HMMs.
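Putting IBM Model 1's independence assumptions into code makes the switching-parent idea concrete: each alignment variable a_j uniformly picks an English position (including the null word e_0), and P(f|e) factorizes over French positions. A small Python sketch with a toy translation table (probabilities invented purely for illustration):

```python
# Toy lexical translation probabilities t(f | e); values invented for illustration.
t = {
    ("maison", "house"): 0.8, ("maison", "the"): 0.1, ("maison", "NULL"): 0.1,
    ("la",     "house"): 0.1, ("la",     "the"): 0.8, ("la",     "NULL"): 0.1,
}

def model1_likelihood(f, e):
    """P(f|e) under IBM Model 1 (up to the length term): each alignment
    a_j is uniform over the l+1 English positions, position 0 being NULL."""
    e = ["NULL"] + e
    prob = 1.0
    for fj in f:
        # Marginalize the hidden alignment a_j: sum over English positions.
        prob *= sum(t.get((fj, ei), 0.0) for ei in e) / len(e)
    return prob

p = model1_likelihood(["la", "maison"], ["the", "house"])   # p == 1/9 here
```

Because the sum over alignments factorizes per French word, the marginalization is O(lm) rather than exponential in m, which is exactly the structure the switching parent P(f_i|e, a_i) = P(f_i|e_{a_i}) exposes.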
[Figure 1: (a) Expanded 2-stream MDBN description of IBM Models 1 and 2, and the M-HMM model for MT; (b) the expanded MDBN description of IBM Model 3 with fertility assignment φ0 = 2, φ1 = 3, φ2 = 1, φ3 = 0.]

From the above, we see that it would be difficult to express this model graphically using a standard DBN, since L and M are unequal random variables. Indeed, there are two DBNs in operation, one consisting of the English string, and the other consisting of the French string and its alignment. Moreover, the fully connected structure of the graph in the figure can represent the appropriate family of models, but it also represents models whose parameter space grows without bound — the switching function allows the model template to stay finite regardless of L and M. With our MDBN descriptive abilities complete, it is now possible to describe the more complex IBM Models 3 and 4 [3] (an MDBN for Model 3 is depicted in Fig. 1(b)). The topmost random variable, ℓ, is a hidden switching existence variable corresponding to the length of the English string. The box abutting ℓ includes all the nodes whose existence depends on the value of ℓ. In the figure, ℓ = 3, thus resulting in three English words e1, e2, and e3 connected using a second-order Markov chain. To each English word e_i corresponds a conditionally dependent fertility eb-node φ_i, which indicates how many times e_i is used by words in the French string. Each φ_i in turn controls the existence of a set of variables under it. Given the fertilities (the figure depicts the case φ1 = 3, φ2 = 1, φ3 = 0), for each word e_i, φ_i French word variables are granted existence, denoted by τ_{i1}, τ_{i2}, ..., τ_{iφ_i}, what is called the tablet [3] of e_i.
The values taken by the τ variables need to match the actual observed French sequence f_1, . . . , f_m. This is represented as a shared constraint between all the f, π, and τ variables, which have incoming edges into the observed variable v. v's conditional probability table is such that it is one only when the associated constraint is satisfied. (This type of encoding of constraints corresponds to the standard mechanism used by Pearl [14]. A naive implementation, however, would enumerate a number of configurations exponential in the number of constrained variables, while typically only a small fraction of the configurations would have positive probability.) The variable π_{i,k} ∈ {1, . . . , m} is a switching dependency parent with respect to the constraint variable v and determines which f_j participates in an equality constraint with τ_{i,k}. The bottom variable m is a switching existence node (observed to be 6 in the figure) with corresponding French word sequence and alignment variables. The French sequence participates in the v constraint described above, while the alignment variables a_j ∈ {1, . . . , ℓ}, j ∈ {1, . . . , m}, constrain the fertilities to take their unique allowable values (for the given alignment). Alignments also restrict the domain of the permutation variables, π, using the constraint variable x. Finally, the domain size of each a_j has to lie in the interval [0, ℓ], and that is enforced by the variable u. The dashed edges connecting the alignment a variables represent an extension to implement an M3/M-HMM hybrid. The null submodel involving the deterministic node m′ (= Σ_{i=1}^{ℓ} φ_i) and the eb-node φ_0 accounts for French words that are not explained by any of the English words e_1, . . . , e_ℓ. In this submodel, successive permutation variables are ordered, and this constraint is implemented using the observed child w of π_{0i} and π_{0(i+1)}.
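The observed-child encoding of a hard constraint can be illustrated by brute force on a toy network (the function and the example network are our own, not the paper's): an indicator child V with P(V=1 | config) ∈ {0, 1} is observed to be 1, which restricts posterior mass to the satisfying configurations.

```python
import itertools

def posterior_with_constraint(domains, priors, holds):
    """Posterior over root configurations when an observed indicator child V has
    P(V=1 | config) = 1 iff holds(config); otherwise 0 (Pearl's mechanism)."""
    weights = {}
    for config in itertools.product(*domains):
        p = 1.0
        for value, prior in zip(config, priors):
            p *= prior[value]
        if holds(config):                 # only satisfying configurations keep mass
            weights[config] = p
    p_evidence = sum(weights.values())    # probability of the evidence, P(V = 1)
    return {c: w / p_evidence for c, w in weights.items()}, p_evidence
```

As the text notes, enumerating all configurations like this is exponential in the number of constrained variables; the point of the search-based inference below is to avoid exactly this blow-up.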
Model 4 [3] is similar to Model 3, except that it is based on a more elaborate distortion model that uses relative instead of absolute positions, both within and between tablets.

3 Inference, Parameter Estimation and MPE

Multi-dynamic Bayesian Networks are amenable to any type of inference that is applicable to regular Bayesian networks, as long as switching existence relationships are respected and all the constraints (aggregation, for example) are satisfied. Unfortunately, DBN inference procedures that take advantage of the repeatable template and can preprocess it offline are not easy to apply to MDBNs. A case in point is the Junction Tree algorithm [11]. Triangulation algorithms exist that create an offline triangulated version of the input graph and do not re-triangulate it for each different instance of the input data [12, 2]. In MDBNs, due to the flexibility to unroll templates in several dimensions and to specify dependencies and constraints spanning the entire unrolled graph, it is not obvious how we can exploit any repetitive patterns in a Junction Tree-style offline triangulation of the graph template. In section 4, we discuss sampling inference methods we have used. Here we discuss our extension of a backtracking search algorithm with the same performance guarantees as the JT algorithm, but with the advantage of easily handling determinism, existence uncertainty, and constraints, both learned and explicitly stated. Value Elimination (VE) [1] is a backtracking Bayesian network inference technique that caches factors associated with portions of the search tree and uses them to avoid iterating again over the same subtrees. We follow the notation introduced in [1] and refer the reader to that paper for details about VE inference. We have extended the VE inference approach to handle explicitly encoded constraints and existence uncertainty, and to perform approximate local domain pruning (see section 4).
We omit these details, as well as others in the original paper, and briefly describe the main data structure required by VE; we then sketch the algorithm we refer to as FirstPass (alg. 1), since it constitutes the first step of the learning procedure, our main contribution in this section. A VE factor, F, is such that we can write the following marginal of the joint distribution:

Σ_{X=x} P(X = x, Y = y, Z) = F.val × f(Z)

such that (X ∪ Y) ∩ Z = ∅, F.val is a constant, and f(Z) is a function of Z only. Y is a set of variables previously instantiated in the current branch of the search tree to the value vector y. The pair (Y, y) is referred to as a dependency set (F.Dset). X is referred to as a subsumed set (F.Sset). By caching the tuple (F.Dset, F.Sset, F.val), we avoid recomputing the marginal whenever (1) F.Dset is active, meaning all nodes stored in F.Dset are assigned their cached values in the current branch of the search tree; and (2) none of the variables in F.Sset are assigned yet. FirstPass (alg. 1) visits nodes in the graph in depth-first fashion. In line 7, we get the values of all Newly Single-Valued (NSV) CPTs, i.e., CPTs that involve the current node, V, and in which all other variables are already assigned (these variables and their values are accumulated into Dset). (We use a general directed domain pruning constraint. Deterministic relationships then become a special case of our constraint, whereby the domain of the child variable is constrained to a single value with probability one.)

Figure 2: Learning example using the Markov chain A → B → C → D → E, where E is observed. Variable traversal order: A, B, C, and D; factors are numbered by order of creation, and *Fi denotes the activation of factor i. In the first pass, factors (Dset, Sset and val) are learned in a bottom-up fashion, and the normalization constant P(E = e) (probability of evidence) is obtained. In the second pass, tau values are propagated recursively in a top-down fashion and used to calculate expected counts c(F.head, pa(F.head)) corresponding to each F.head (the figure shows the derivations for (A=0) and (C=0,B=0), but all counts are updated in the same pass).
We also check for factors that are active, multiply their values in, and accumulate subsumed vars in Sset (to avoid branching on them). In line 10, we add V to the Sset. In line 11, we cache a new factor F with value F.val = sum. We store V in F.head, a pointer to the last variable to be inserted into F.Sset, which is needed for the parameter estimation described below. F.Dset consists of all the variables, except V, that appeared in any NSV CPT or in the Dset of an activated factor at line 6. Regular Value Elimination is query-based, similar to variable elimination and recursive conditioning: to answer a query of the type P(Q|E = e), where Q is the query variable and E a set of evidence nodes, we force Q to be at the top of the search tree, run the backtracking algorithm, and then read the answers to the queries P(Q = q|E = e), q ∈ Dom[Q], along each of the outgoing edges of Q. Parameter estimation would therefore require running a number of queries on the order of the number of parameters to estimate. We extend VE into an algorithm that allows us to obtain Expectation Maximization sufficient statistics in a single run of Value Elimination plus a second pass, which can never take longer than the first one (and in practice is much faster). This two-pass procedure is analogous to the collect-distribute evidence procedure in the Junction Tree algorithm, but here we do it via a search tree. Let θ_{X=x|pa(X)=y} be a parameter associated with variable X with value x and parents Y = pa(X) when they have value y. Assuming a maximum likelihood learning scenario,³ to estimate θ_{X=x|pa(X)=y} we need to compute

f(X = x, pa(X) = y, E = e) = Σ_{W \ {X, pa(X)}} P(W, X = x, pa(X) = y, E = e),

which is a sum of joint probabilities over all configurations that are consistent with the assignment {X = x, pa(X) = y}. If we were to turn off factor caching, we would enumerate all such variable configurations and could compute the sum.
When standard VE factors are used, however, this is no longer possible whenever X or any of its parents becomes subsumed. Fig. 2 illustrates an example of a VE tree and the factors that are learned in the case of a Markov chain with an evidence node at the end. We can readily estimate the parameters associated with variables A and B, as they are not subsumed along any branch. C and D become subsumed, however, and we cannot obtain the correct counts along all the branches that would lead to C and D in the full-enumeration case. To address this issue, we store a special value, F.tau, in each factor. F.tau holds the sum over all path probabilities from the first level of the search tree to the level at which the factor F was either created or activated. For example, F6.tau in fig. 2 is simply P(A = 1). Although we can compute F3.tau directly, we can also compute it recursively using F5.tau and F6.tau, as shown in the figure. This is because both F5 and F6 subsume F3: in the context {F5.Dset}, there exists a (unique) value d_sub of F5.head⁴ such that F3 becomes activable. Likewise for F6. We cannot compute F1.tau directly, but we can compute it recursively from F3.tau and F4.tau by taking advantage of a similar subsumption relationship. In general, we can show that the following recursive relationship holds:

F.tau ← Σ_{F^pa ∈ 𝐅^pa} F^pa.tau × NSV_{F^pa.head = d_sub} × ( Π_{F^act ∈ 𝐅^act} F^act.val ) / F.val    (1)

where 𝐅^pa is the set of factors that subsume F, 𝐅^act is the set of all factors (including F) that become active in the context of {F^pa.Dset, F^pa.head = d_sub}, and NSV_{F^pa.head = d_sub} is the product of all newly single-valued CPTs under the same context. For top-level factors (not subsumed by any factor), F.tau = P_evidence/F.val, which is 1.0 when there is a unique top-level factor.

³ For Bayesian networks, the likelihood function decomposes such that maximizing the expectation of the complete likelihood is equivalent to maximizing the "local likelihood" of each variable in the network.

Alg.
2 is a simple recursive computation of eq. (1) for each factor. We visit learned factors in the reverse order in which they were learned to ensure that, for any factor F′, F′.tau is incremented (line 13) by any F that might have activated F′ (line 12). For example, in fig. 2, F4 uses F1 and F2, so F4.tau needs to be updated before F1.tau and F2.tau. In line 11, we can increment the counts for any NSV CPT entries, since F.tau will account for the possible ways of reaching the configuration {F.Dset, F.head = d} in an equivalent full enumeration tree.

Algorithm 1: FirstPass(level)
  Input: Graph G
  Output: A list of learned factors and P_evidence
  1:  Select var V to branch on
  2:  if V == NONE then return
  3:  Sset = {}, Dset = {}
  4:  for d ∈ Dom[V] do
  5:      V ← d
  6:      prod = productOfAllNSVsAndActiveFactors(Dset, Sset)
  7:      if prod != 0 then FirstPass(level+1)
  8:      sum += prod
  9:  Sset = Sset ∪ {V}
  10: cacheNewFactor(F.head ← V, F.val ← sum, F.Sset ← Sset, F.Dset ← Dset)

Algorithm 2: SecondPass()
  Input: 𝐅: list of factors in the reverse order learned in the first pass, and P_evidence
  Result: Updated counts
  1:  foreach F ∈ 𝐅 do
  2:      if F.Dset = {} then
  3:          F.tau ← P_evidence / F.val
  4:      else
  5:          F.tau ← 0.0
  6:      Assign vars in F.Dset to their values
  7:      V ← F.head   (last node to have been subsumed in this factor)
  8:      foreach d ∈ Dom[V] do
  9:          prod = productOfAllNSVsAndActiveFactors()
  10:         prod *= F.tau
  11:         foreach newly single-valued CPT C do count(C.child, C.parents) += prod / P_evidence
  12:         𝐅′ = getListOfActiveFactors()
  13:         for F′ ∈ 𝐅′ do F′.tau += prod / F′.val

Most Probable Explanation. We compute MPE using a very similar two-pass algorithm. In the first pass, factors are used to store a maximum instead of a summation over the variables in the Sset. We also keep track of the value of F.head at which the maximum is achieved. In the second pass, we recursively find the optimal variable configuration by following the trail of factors that are activated when we assign each F.head variable to its maximum value, starting from the last learned factor.
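To make the val/tau bookkeeping concrete, here is a sketch specialized to the binary chain A → B → C → D → E of Fig. 2. This is our own simplification: on a chain with evidence only at the end, the first pass reduces to backward messages (the analogue of F.val) and the second pass to forward prior marginals (the analogue of F.tau); the general algorithm operates on arbitrary search trees.

```python
import itertools

def two_pass_chain(prior, T, obs, e):
    """Two-pass EM statistics on a binary chain X_0 -> ... -> X_{n-1} -> E.
    prior: P(X_0); T[k][x][x2] = P(X_{k+1}=x2 | X_k=x); obs[x][e] = P(E=e | X_{n-1}=x)."""
    n = len(T) + 1
    # First pass (bottom-up): val[k][x] = P(E=e | X_k=x), analogous to F.val
    val = [None] * n
    val[n - 1] = [obs[x][e] for x in range(2)]
    for k in range(n - 2, -1, -1):
        val[k] = [sum(T[k][x][x2] * val[k + 1][x2] for x2 in range(2)) for x in range(2)]
    p_e = sum(prior[a] * val[0][a] for a in range(2))   # probability of evidence
    # Second pass (top-down): tau[k][x] = P(X_k=x), propagated recursively like F.tau
    tau = [None] * n
    tau[0] = list(prior)
    for k in range(1, n):
        tau[k] = [sum(tau[k - 1][x] * T[k - 1][x][x2] for x in range(2)) for x2 in range(2)]
    # Expected counts c(X_{k+1}=x2, X_k=x | E=e) = tau * CPT * val / P(e)
    counts = [[[tau[k][x] * T[k][x][x2] * val[k + 1][x2] / p_e
                for x2 in range(2)] for x in range(2)] for k in range(n - 1)]
    return p_e, counts

def brute_force_chain(prior, T, obs, e):
    """Same quantities by full enumeration over all hidden configurations."""
    n = len(T) + 1
    joint = {}
    for xs in itertools.product(range(2), repeat=n):
        p = prior[xs[0]]
        for k in range(n - 1):
            p *= T[k][xs[k]][xs[k + 1]]
        joint[xs] = p * obs[xs[-1]][e]
    p_e = sum(joint.values())
    counts = [[[sum(p for xs, p in joint.items() if xs[k] == x and xs[k + 1] == x2) / p_e
                for x2 in range(2)] for x in range(2)] for k in range(n - 1)]
    return p_e, counts
```

On this chain the two-pass procedure reproduces the brute-force expected counts exactly, while touching each CPT entry only a constant number of times per pass.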
⁴ Recall that F.head is the last variable to be added to a newly created factor, in line 10 of alg. 1.

4 MACHINE TRANSLATION WORD ALIGNMENT EXPERIMENTS

A major motivation for pursuing the type of representation and inference described above is to make it possible to solve computationally intensive real-world problems using large amounts of data, while retaining the full generality and expressiveness afforded by the MDBN modeling language. In the experiments below, we compare running times of MDBNs to GIZA++ on IBM Models 1 through 4 and the M-HMM model. GIZA++ is a special-purpose, optimized MT word alignment C++ tool that is widely used in current state-of-the-art phrase-based MT systems [10] and, at the time of this writing, is the only publicly available software that implements all of the IBM Models. We test on 107 hand-aligned French-English sentences⁵ from a corpus of the European parliament proceedings (Europarl [9]) and train on 10000 sentence pairs from the same corpus, each with at most 40 words. The Alignment Error Rate (AER) [13] evaluation metric quantifies how well the MPE assignment to the hidden alignment variables matches human-generated alignments. Several pruning and smoothing techniques are used by GIZA and MDBNs. GIZA prunes low lexical (P(f|e)) probability values and uses a small default value for unseen (or pruned) probability table entries. For Models 3 and 4, for which there is no known polynomial-time algorithm to perform the full E-step or compute the MPE, GIZA generates a set of high-probability alignments using an M-HMM and hill-climbing, and collects EM counts over these alignments using M3 or M4. For MDBN models we use the following pruning strategy: at each level of the search tree, we prune values which, together, account for the lowest specified percentage of the total probability mass of the product of all newly active CPTs in line 6 of alg. 1.
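Our reading of this mass-based pruning rule can be sketched as follows (the function name and the greedy formulation are ours): starting from the lowest-probability values, discard values as long as the total discarded mass stays within the specified fraction of the level's total mass.

```python
def prune_lowest_mass(values, frac):
    """values: list of (label, probability) pairs at one search-tree level.
    Discard the lowest-mass entries whose combined mass is at most
    frac * total; keep everything else, highest mass first."""
    total = sum(p for _, p in values)
    kept = sorted(values, key=lambda vp: vp[1], reverse=True)
    discarded, cut = 0.0, len(kept)
    # walk from the lowest-mass end, discarding while the budget allows
    for i in range(len(kept) - 1, -1, -1):
        if discarded + kept[i][1] > frac * total:
            break
        discarded += kept[i][1]
        cut = i
    return kept[:cut]
```

With frac = 0 nothing is pruned; with frac close to 1 only the dominant values survive, which mirrors the "drastic" top-value-only threshold used for Model 3 below.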
This is a more effective pruning than simply removing low-probability values of each CPD, because it factors in the joint contribution of multiple active variables. Table 1 shows a comparison of timing numbers obtained with GIZA++ and with MDBNs. The runtime numbers shown are for the combined tasks of training and decoding; however, training time dominates, given the difference in size between the train and test sets. For Models 1 and 2, neither GIZA nor MDBNs perform any pruning. For the M-HMM, we prune 60% of the probability mass at each level and use a Dirichlet prior over the alignment variables such that long-range transitions are exponentially less likely than shorter ones.⁶ This model achieves times and AER similar to GIZA's. Interestingly, without any pruning, the MDBN M-HMM takes 160 minutes to complete while only marginally improving upon the pruned model. Experimenting with several pruning thresholds, we found that AER worsens much more slowly than runtime decreases. Models 3 and 4 have treewidth equal to the number of alignment variables (because of the global constraints tying them) and therefore require approximate inference. Using Model 3 and a drastic pruning threshold that keeps only the top-probability value at each level, we were able to achieve an AER not much higher than GIZA's. For Model 4, GIZA achieves its best AER of 31.7%, while we do not improve upon Model 3, most likely because of too restrictive a pruning. Nevertheless, a simple variation on Model 3 in the MDBN framework achieves a lower AER than our regular M3 (with pruning still the same): the M3-HMM hybrid model combines the Markov alignment dependencies from the M-HMM model with the fertility model of M3.

MCMC Inference. Sampling is widely used for inference in high-treewidth models. Although MDBNs support likelihood weighting, it is very inefficient when the probability of evidence is very small, as is the case in our MT models.
Besides being slow, Markov chain Monte Carlo can be problematic when the joint distribution is not positive everywhere, in particular in the presence of determinism and hard constraints. Techniques such as blocking Gibbs sampling [8] try to address this problem; often, however, one has to carefully choose a problem-dependent proposal distribution. We used MCMC to improve training of the M3-HMM model, and were able to achieve an AER of 32.8% (down from 39.1%), but at a cost of 400 minutes of uniprocessor time.

⁵ Available at http://www.cs.washington.edu/homes/karim
⁶ French and English have similar word orders. On a different language pair, a different prior might be more appropriate. With a uniform prior, the MDBN M-HMM has 36.0% AER.

Table 1: MDBN VE-based learning versus GIZA++ timings and %AER using 5 EM iterations. The columns M1 and M-HMM correspond to the model used to initialize the model in the corresponding row. The last row is a hybrid Model3-HMM model that we implemented using MDBNs and is not expressible using GIZA.

Model    | GIZA++ (init M1) | GIZA++ (init M-HMM) | MDBN (init M1)  | MDBN (init M-HMM)
M1       | 1m45s (47.7%)    | N/A                 | 3m20s (48.0%)   | N/A
M2       | 2m02s (41.3%)    | N/A                 | 5m30s (41.0%)   | N/A
M-HMM    | 4m05s (35.0%)    | N/A                 | 4m15s (33.0%)   | N/A
M3       | 2m50s (45%)      | 5m20s (38.5%)       | 12m (43.6%)     | 9m (42.5%)
M4       | 5m20s (34.8%)    | 7m45s (31.7%)       | 25m (43.6%)     | 23m (42.6%)
M3-HMM   | N/A              | N/A                 | 9m30s (41.0%)   | 9m15s (39.1%); MCMC: 400m (32.8%)

5 CONCLUSION

The existing classes of graphical models are not ideally suited for representing SMT models, because "natural" semantics for specifying the latter combine flavors of different GM types on top of standard directed Bayesian network semantics: switching parents found in Bayesian Multinets [6], aggregation relationships such as in Probabilistic Relational Models [5], and existence uncertainty [7]. We have introduced a generalization of dynamic Bayesian networks to easily and concisely build models consisting of varying-length, parallel, asynchronous and interacting data streams.
We have shown that our framework is useful for expressing various statistical machine translation models. We have also introduced new parameter estimation and decoding algorithms using exact and approximate search-based probability computation. While our timing results are not yet as fast as a hand-optimized C++ program on the equivalent model, we have shown that even in this general-purpose framework of MDBNs, our timing numbers are competitive and usable. Our framework can of course do much more than the IBM and HMM models. One of our goals is to use this framework to rapidly prototype novel MT systems and develop methods to statistically induce an interlingua. We also intend to use MDBNs in other domains, such as multi-party social interaction analysis.

References

[1] F. Bacchus, S. Dalmao, and T. Pitassi. Value elimination: Bayesian inference via backtracking search. In UAI-03, pages 20–28, San Francisco, CA, 2003. Morgan Kaufmann.
[2] J. Bilmes and C. Bartels. On triangulating dynamic graphical models. In Uncertainty in Artificial Intelligence: Proceedings of the 19th Conference, pages 47–56. Morgan Kaufmann, 2003.
[3] P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. A statistical approach to machine translation. Computational Linguistics, 16(2):79–85, June 1990.
[4] T. Dean and K. Kanazawa. Probabilistic temporal reasoning. In AAAI, pages 524–528, 1988.
[5] N. Friedman, L. Getoor, D. Koller, and A. Pfeffer. Learning probabilistic relational models. In IJCAI, pages 1300–1309, 1999.
[6] D. Geiger and D. Heckerman. Knowledge representation and inference in similarity networks and Bayesian multinets. Artif. Intell., 82(1-2):45–74, 1996.
[7] L. Getoor, N. Friedman, D. Koller, and B. Taskar. Learning probabilistic models of link structure. Journal of Machine Learning Research, 3(4-5):697–707, May 2003.
[8] C. Jensen, A. Kong, and U. Kjaerulff.
Blocking Gibbs sampling in very large probabilistic expert systems. International Journal of Human-Computer Studies, Special Issue on Real-World Applications of Uncertain Reasoning, 1995.
[9] P. Koehn. Europarl: A multilingual corpus for evaluation of machine translation. http://www.isi.edu/koehn/publications/europarl, 2002.
[10] P. Koehn, F. Och, and D. Marcu. Statistical phrase-based translation. In NAACL/HLT 2003, 2003.
[11] S. Lauritzen. Graphical Models. Oxford Science Publications, 1996.
[12] K. Murphy. Dynamic Bayesian Networks: Representation, Inference and Learning. PhD thesis, U.C. Berkeley, Dept. of EECS, CS Division, 2002.
[13] F. J. Och and H. Ney. Improved statistical alignment models. In ACL, pages 440–447, Oct 2000.
[14] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 2nd printing edition, 1988.
[15] S. Vogel, H. Ney, and C. Tillmann. HMM-based word alignment in statistical translation. In Proceedings of the 16th Conference on Computational Linguistics, pages 836–841, Morristown, NJ, USA, 1996.
2006
Max-margin classification of incomplete data

Gal Chechik¹, Geremy Heitz², Gal Elidan¹, Pieter Abbeel¹, Daphne Koller¹
¹ Department of Computer Science, Stanford University, Stanford, CA 94305
² Department of Electrical Engineering, Stanford University, Stanford, CA 94305
Email for correspondence: gal@ai.stanford.edu

Abstract

We consider the problem of learning classifiers for structurally incomplete data, where some objects have a subset of features inherently absent due to complex relationships between the features. The common approach for handling missing features is to begin with a preprocessing phase that completes the missing features, and then to use a standard classification procedure. In this paper we show how incomplete data can be classified directly, without any completion of the missing features, using a max-margin learning framework. We formulate this task using a geometrically inspired objective function and discuss two optimization approaches: the linearly separable case is written as a set of convex feasibility problems, and the non-separable case has a non-convex objective that we optimize iteratively. By avoiding the preprocessing phase in which the data is completed, these approaches offer considerable computational savings. More importantly, we show that by elegantly handling complex patterns of missing values, our approach is competitive with other methods when the values are missing at random and outperforms them when the missing values have non-trivial structure. We demonstrate our results on two real-world problems: edge prediction in metabolic pathways, and automobile detection in natural images.

1 Introduction

In the traditional formulation of supervised learning, data instances are viewed as vectors of features in some high-dimensional space. However, in many real-world tasks, data instances have a complex pattern of missing features.
While features may sometimes be missing due to measurement noise or corruption, different samples often have different sets of observable features due to inherent properties of the instances. For example, in the case of recognizing objects in natural images, an object is often classified using a set of image patches corresponding to parts of the object (like the license plate for cars); but some images may not contain all parts, either because a part was not captured in the image or because the specific instance does not have this part in the first place. In other scenarios, some features cannot even be defined for all instances. Such situations arise when the objects to be learned are organized according to a known graph structure, since their features may rely on local properties of the graph. For example, we might wish to classify the attributes of a web page given the attributes of neighboring web pages [8]. In analyzing genomic data, we may wish to predict the edges in networks of interacting proteins or chemical reactions [9, 15]. In these cases, the local neighborhood of an instance in the graph often varies drastically, and it has already been observed that this variation can introduce statistical biases [8]. In the web-page task, for instance, a useful feature is the most common topic of other sites that point to a given page. When a page has no such parents, however, this feature is meaningless and should be considered structurally absent. The common approach for classification with missing features is imputation, a two-phase procedure in which the values of the missing attributes are first filled in during a preprocessing phase, after which a standard classifier is applied to the completed data [10].
Most imputation techniques make the most sense when the features are missing due to noise, especially in the setting of missing at random (MAR, when the missingness pattern is conditionally independent of the unobserved features given the observations) or missing completely at random (MCAR, when it is independent of both observed and unobserved measurements). In common practice, missing attributes in continuous data are often filled with zeros, with the average over all data instances, or by using the k nearest neighbors (kNN) of each instance to find a plausible value for its missing features. A second family of imputation methods builds probabilistic generative models of the features using raw maximum likelihood or algorithms such as expectation maximization (EM) [4]. Such model-based methods allow the designer to introduce prior knowledge and are extremely useful when priors can be explicitly modeled. These methods work very well in MAR settings, because they assume that the missing features are generated by the same model that generates the observed features. However, model-based approaches can be computationally expensive and require significant prior knowledge about the data. More importantly, they will produce meaningless completions for features that are structurally absent. As an extreme example, consider two subpopulations of instances (e.g., animals and buildings) having no overlapping features (e.g., body parts and architectural aspects), in which filling in missing values (e.g., the body parts of buildings) is clearly meaningless and may harm classification performance. As a result, for structurally absent features, it would be useful if we could avoid unnecessary prediction of hypothetical undefined values and classify instances directly. We approach this problem directly from the geometric interpretation of the classification task as finding a separating hyperplane in feature space.
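For concreteness, the zero, mean, and kNN imputation baselines mentioned above can be sketched as follows (these are our own minimal implementations, not the paper's code; `np.nan` marks missing entries):

```python
import numpy as np

def impute(X, strategy="mean", k=2):
    """Baseline imputers for a data matrix X with np.nan marking missing entries."""
    X = np.asarray(X, dtype=float)
    out, miss = X.copy(), np.isnan(X)
    if strategy == "zero":
        out[miss] = 0.0
    elif strategy == "mean":
        col_mean = np.nanmean(X, axis=0)        # per-feature average over observed rows
        out[miss] = col_mean[np.where(miss)[1]]
    elif strategy == "knn":
        for i in range(len(X)):
            if not miss[i].any():
                continue
            # distance to other rows, computed on features observed in both
            dists = []
            for j in range(len(X)):
                shared = ~miss[i] & ~miss[j]
                if j == i or not shared.any():
                    dists.append(np.inf)
                else:
                    dists.append(np.linalg.norm(X[i, shared] - X[j, shared]))
            order = np.argsort(dists)
            for f in np.where(miss[i])[0]:
                donors = [X[j, f] for j in order[:k] if not miss[j][f]]
                if donors:
                    out[i, f] = np.mean(donors)
    return out
```

Note that all three strategies invent a concrete value for every missing entry, which is exactly what the approach developed next avoids for structurally absent features.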
We view instances with different feature sets as lying in subspaces of the full feature space, and suggest a modified optimization objective, within the framework of support vector machines (SVMs), that explicitly considers the subspace of each instance. We show how the linearly separable case can be efficiently solved using convex optimization (second-order cone programming, SOCP). The objective of the non-separable case is non-convex, and we propose an iterative procedure that is found to converge in practice. These approaches may be viewed as model-free methods for handling missing data in cases where the MAR assumption fails to hold. We evaluate the performance of our approach in two real-world applications: prediction of missing enzymes in a metabolic network, and automobile detection in natural images. In both tasks, features may be inherently absent due to the mechanisms described above, and our methods are found to be superior to other simple imputation methods.

2 Max-Margin Formulation for Missing Features

Let x_1 . . . x_n be a set of samples with binary labels y_i ∈ {−1, 1}. Each sample x_i is characterized by a subset of features F(x_i), from a full set F of size d. A sample that has all features, F(x_i) = F, is viewed as a vector in R^d, where the kth coordinate corresponds to the kth feature. A sample x_i with partially valid features can be viewed as embedded in the relevant subspace R^{|F(x_i)|} ⊆ R^d. For simplicity of notation, we treat each x_i as if it were a vector in R^d where only its F(x_i) entries are valid, and define the inner product with another vector in R^d as wx_i = Σ_{k: f_k ∈ F(x_i)} w_k x_{ik}. Importantly, since instances share features, the learned classifier must be consistent across instances, assigning the same weight to a given feature in different samples, even if those instances do not lie in the same subspace.
In the classical SVM approach [14, 13], a linear classifier w is optimized to maximize the margin, defined as min_i y_i(wx_i + b)/∥w∥, and the learning problem is reduced to the quadratic constrained optimization problem

min_{w,ξ,b} (1/2)∥w∥² + C Σ_{i=1}^{n} ξ_i    s.t.  y_i(wx_i + b) ≥ 1 − ξ_i,  i = 1 . . . n,    (1)

where b is a threshold, the ξ_i are slack variables necessary when the training instances are not linearly separable, and C is the error penalty. Eq. (1) can be extended to nonlinear classifiers using kernels [13].

Figure 1: The margin is incorrectly scaled when a sample that has missing features is treated as if the missing features have a value of zero. In this example, the margin of a sample that only has one feature (the x dimension) is measured both in the higher-dimensional space (ρ2) and in the lower one (ρ1). If all features are assumed to exist, and we give the missing feature (along the y axis) a value of zero, the margin ρ2 measured in the higher-dimensional space is shorter than the margin ρ1 measured in the relevant subspace.

Consider now learning such a classifier in the presence of missing data. At first glance, it may appear that since the x's only affect the optimization through inner products with w, missing features can merely be skipped (or, equivalently, replaced with zeros), thereby preserving the values of the inner products. However, this does not properly normalize the different entries of w, and it damages classification accuracy. The reason is illustrated in Fig. 1, where a single sample in R² has one valid and one missing feature. Due to the missing feature, measuring the margin in the full space, ρ2, underestimates the correct geometric margin of the sample in the valid subspace, ρ1. This is different from the case where the feature exists but is unknown, in which the sample's margin could be either over- or under-estimated. In the next sections, we explore how Eq. (1) can be solved while properly taking this normalization into account.
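The scaling effect of Fig. 1 is easy to reproduce numerically (a toy sketch with invented numbers, purely for illustration):

```python
import math

def margin_full(w, x, y):
    """Margin when missing features are zero-filled: normalize by ||w|| in R^d."""
    dot = sum(wk * xk for wk, xk in zip(w, x))
    return y * dot / math.sqrt(sum(wk * wk for wk in w))

def margin_subspace(w, x, y, valid):
    """Margin measured in the instance's own subspace: normalize by ||w^(i)||,
    the norm of w restricted to the indices in `valid`."""
    dot = sum(w[k] * x[k] for k in valid)
    norm = math.sqrt(sum(w[k] ** 2 for k in valid))
    return y * dot / norm
```

For w = (3, 4) and a positive sample x = (2, 0) whose second feature is missing, the zero-filled margin is 6/5 = 1.2, while the margin in the valid one-dimensional subspace is 6/3 = 2: zero-filling systematically shrinks the margin of incomplete instances.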
We start by reminding the reader of the geometric interpretation of SVMs.

3 Geometric interpretation

The derivation of the SVM classifier [14] is motivated by the goal of finding a hyperplane that maximally separates the positive examples from the negative ones, as measured by the geometric margin ρ(w) = min_i y_i wx_i / ∥w∥. The task of maximizing the margin,

max_w ρ(w) = max_w ( min_i y_i wx_i / ∥w∥ ),    (2)

is transformed into the quadratic programming problem of Eq. (1) in two steps. First, ∥w∥ is taken out of the minimization, yielding max_w (1/∥w∥)(min_i y_i wx_i). Then the following invariance is used: for every solution, there exists a solution that achieves the same target function value but with a margin that equals 1. This allows us to write the SVM problem as the constrained optimization problem max_w ∥w∥^{-1} s.t. y_i(wx_i) ≥ 1, which is equivalent to minimizing ∥w∥² under the same constraints, i.e., the SVM problem of Eq. (1). In the case of missing features, this derivation no longer optimizes the correct geometric margin (Fig. 1). To address this problem, we treat the margin of each instance in its own subspace, defining the instance margin of the ith instance as ρ_i(w) = y_i w^(i) x_i / ∥w^(i)∥, where ∥w^(i)∥ = sqrt( Σ_{k: f_k ∈ F(x_i)} w_k² ). The geometric margin is, as before, the minimum over all instance margins, yielding the new optimization problem

max_w ( min_i y_i w^(i) x_i / ∥w^(i)∥ ).    (3)

Unfortunately, since the different margin terms are normalized by different norms ∥w^(i)∥, we can no longer take the denominator out of the minimization as above. In addition, each of the terms y_i w^(i) x_i / ∥w^(i)∥ is non-convex in w, which makes the problem difficult to solve directly in an efficient way. We now discuss two approaches for solving it. In the linearly separable case, the optimization problem of Eq. (3) is equivalent to

max_{w,γ} γ    s.t.  y_i w^(i) x_i ≥ γ ∥w^(i)∥,  i = 1 . . . n.    (4)

This is a convex feasibility problem for any fixed value of γ, a real scalar that corresponds to the margin.
It can be solved efficiently using a bisection search over γ ∈ R+, where in each iteration we solve a convex second order cone program (SOCP) [11]. Unfortunately, extending this formulation to the non-separable case while preserving the geometric margin interpretation makes the problem non-convex (this is discussed elsewhere).

A second approach for solving Eq. (3) is to treat each instance margin individually. We represent each of the norms ∥w^(i)∥ as a scaling of the full norm by defining scaling coefficients $s_i = \|w^{(i)}\|/\|w\|$, and rewriting Eq. (3) as

$$\max_w \left( \min_i \frac{y_i w x_i}{s_i \|w\|} \right) = \max_w \frac{1}{\|w\|} \left( \min_i \frac{y_i w x_i}{s_i} \right), \qquad s_i = \frac{\|w^{(i)}\|}{\|w\|}. \qquad (5)$$

The s_i factors are scalars, and had we known them, we could have solved a standard SVM problem. Unfortunately, they depend on w^(i) and are unknown. This formalism allows us to again use the invariance to the rescaling of ∥w∥ and rewrite Eq. (5) as a constrained optimization problem over s_i and w. In the non-separable case, Eq. (5) becomes

$$\min_{w,b,\xi,s} \; \frac{1}{2}\|w\|^2 + C\sum_i \xi_i \quad \text{s.t.} \quad \frac{1}{s_i}\bigl(y_i(w x_i + b)\bigr) \ge 1 - \xi_i \,,\quad s_i = \|w^{(i)}\|/\|w\| \,,\; i = 1 \ldots n \qquad (6)$$

This constrained optimization problem is no longer a QP. In fact, due to the normalization constraint it is not even convex in w. One solution is a projected gradient approach, in which one iterates between steps in the direction of the gradient of the Lagrangian and projections onto the constrained space, by calculating $s_i = \|w^{(i)}\|/\|w\|$. For the right choices of step sizes, such approaches are guaranteed to converge to local minima [2]. We can use a faster iterative algorithm based on the fact that the problem is a QP for any given set of s_i's: iterate between (1) solving a QP for w given the s_i, and (2) using the resulting w to calculate new s_i's. This algorithm differs from a projected gradient approach in that rather than taking a series of small gradient steps, it takes bigger leaps, and projects back to the constrained space after each step.
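The alternation just described can be sketched as follows. This is an illustration under our own simplifications, not the authors' solver: the bias term is dropped, and the inner QP is replaced by plain subgradient descent on the hinge loss.

```python
import numpy as np

def iterative_missing_svm(X, y, C=1.0, outer=4, inner=500, lr=0.01):
    """Alternate between (1) fitting a linear no-bias SVM with each instance
    margin rescaled by s_i, and (2) recomputing s_i = ||w^(i)|| / ||w||.
    Missing features are marked with NaN."""
    valid = ~np.isnan(X)
    Xz = np.where(valid, X, 0.0)           # zeros keep the inner products intact
    n, d = X.shape
    s = np.ones(n)                         # start with unscaled margins
    w = np.zeros(d)
    for _ in range(outer):
        for _ in range(inner):             # subgradient step on 0.5||w||^2 + C*hinge
            margins = y * (Xz @ w) / s
            active = margins < 1.0
            grad = w - C * (((y * active) / s)[:, None] * Xz).sum(axis=0)
            w -= lr * grad
        # Project back to the constraint: s_i = ||w^(i)|| / ||w||.
        sub_norms = np.sqrt((valid * w**2).sum(axis=1))
        s = np.clip(sub_norms / (np.linalg.norm(w) + 1e-12), 1e-6, None)
    return w

# Toy data (made up): feature 0 separates the classes; feature 1 is sometimes missing.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0] + 0.1)
X[rng.random(40) < 0.3, 1] = np.nan
w = iterative_missing_svm(X, y)
```

Each outer iteration solves a standard-looking SVM problem for fixed s, then recomputes the scalings from the new w, mirroring the "bigger leaps" scheme in the text.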
Since the convergence of this iterative algorithm is not guaranteed, we used cross validation to choose an early stopping point, and found that the best solutions were obtained within 2–5 steps. Typically, the objective improved in the first 1–3 iterations, but then, in about 75% of the cases, the objective oscillated. In the remaining cases the algorithm converged to a fixed point. It is easy to see that a fixed point of this iterative procedure achieves an optimal solution of Eq. (6), since it achieves a minimal ∥w∥ while obeying the s_i constraints. As a result, when this algorithm converges, the solution is also guaranteed to be a locally optimal solution of the original problem, Eq. (3).

The power of the SVM approach can be largely attributed to the flexibility and efficiency of nonlinear classification allowed through the use of kernels. The dual of the above QP can be kernelized as in a standard SVM, yielding

$$\max_{\alpha \in \mathbb{R}^n} \; \sum_{i=1}^n \alpha_i - \frac{1}{2} \sum_{i,j=1}^n \alpha_i \frac{y_i}{s_i} K(x_i, x_j) \frac{y_j}{s_j} \alpha_j \quad \text{s.t.} \quad 0 \le \alpha_i \le C \,;\; \sum_{i=1}^n \alpha_i y_i = 0\,, \qquad (7)$$

where K(x_i, x_j) is the kernel function that simulates an inner product in the higher dimensional feature space. Classification of new samples is obtained as in a standard SVM by calculating the margin $\rho(x_{new}) = \sum_j y_j \alpha_j \frac{1}{s_j} K(x_j, x_{new}) \frac{1}{s_{new}}$.

Kernels in this formulation operate over vectors with missing features, hence we have to develop kernels that handle them correctly. Fortunately, many kernels depend on their inputs only through their inner product. In this case there is an easy procedure to construct a modified kernel that takes missing values into account. For example, for a polynomial kernel $K(x_i, x_j) = (\langle x_i, x_j\rangle + 1)^d$, define $K'(x_i, x_j) = (\langle x_i, x_j\rangle_F + 1)^d$, with the inner product calculated over valid features, $\langle x_i, x_j\rangle_F = \sum_{k:\, f_k \in F(x_i) \cap F(x_j)} x_{ik}\, x_{jk}$. This can easily be proved to be a kernel.

Figure 2: Car classification results.
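The modified kernel K′ described above can be sketched directly; NaN marks a structurally missing feature (this encoding is our choice):

```python
import numpy as np

def poly_kernel_valid(xi, xj, degree=2):
    """K'(x_i, x_j) = (<x_i, x_j>_F + 1)^d with the inner product restricted
    to features valid in both inputs (NaN marks a missing feature)."""
    both = ~np.isnan(xi) & ~np.isnan(xj)
    return (xi[both] @ xj[both] + 1.0) ** degree

a = np.array([1.0, np.nan, 2.0])
b = np.array([2.0, 1.0, np.nan])
print(poly_kernel_valid(a, b))   # only coordinate 0 is valid in both: (1*2 + 1)^2 = 9.0
```

Because the restricted sum is still an inner product (over the intersection of valid feature sets), the resulting function remains a valid kernel.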
(a) An easy instance where all local features are approximately in agreement. (b) A hard instance where local features are divided into two distinct groups. This instance was correctly classified by the 'geometric margin' approach but misclassified by all other methods. (c) Classification accuracy of the different methods for the task of object recognition in real images. Error bars are standard errors of the mean (SEM) over the five cross validation sets.

4 Experiments

We evaluated our approaches in three different missingness scenarios. First, as a sanity check, we explored performance when features are missing at random, in a series of five standard UCI benchmarks, and also in a large digit recognition task using MNIST data. In this setting our methods performed as well as, or slightly better than, other approaches. The full details of these experiments are provided in a longer version of this work. Second, we studied a visual object recognition application where some features are missing because they cannot be located in the image. Finally, we applied our methods to a problem of biological network completion, where the missingness pattern of the features is determined by the known structure of the network.

For all applications, we compare our iterative algorithm with five common approaches for completing missing features:

1. Zero: Missing values were set to zero.
2. Mean: Missing values were set to the average feature value.
3. Aggregated Flags: Features were annotated with an explicit additional feature noting whether a feature is valid or missing. To reduce the number of added features, we added a single flag for each group of features that were valid or invalid together across all instances. For example, in the vision application, all features of a landmark candidate are grouped together, since they are all invalid if the match is wrong (see below).
4.
kNN: Missing features were set to the mean value obtained from the K nearest neighbor instances; neighborhood was measured using a Euclidean distance in the subspace relevant to each pair of samples. The number of neighbors was varied as K = 3, 5, 10, 20, and the best result is the one reported.

5. EM: A generative model in the spirit of [4]. A Gaussian mixture model is learned by iterating between (1) learning a GMM model of the filled data and (2) re-filling missing values using cluster means, weighted by the posterior probability that a cluster generated the sample. Covariances were assumed spherical. The number of clusters was varied as K = 3, 5, 10, 15, 20, and the best result is the one reported.

6. Geometric margin: Our non-separable approach described in Sec. 3.

In all of the experiments, we used a 5-fold cross validation procedure and evaluated performance using a testing set that was not used during training. In addition, 20% of the training set was used for choosing optimization parameters, such as the kernel type, its parameters, and an early stopping point for the iterative algorithm.

4.1 Visual object recognition

We now consider a visual object recognition task where instances have structurally missing features. In this task we attempt to determine whether an object from a certain class (automobiles) is present in a given input image. The task of classifying images based on the object class that they contain has seen much work in recent years [1, 5], and discriminative approaches have typically produced very good results [5, 12]. Features in these methods are commonly constructed from regions of interest (patches) in the image. These patches typically cover "landmarks" of the object, like the trunk or a headlight for a car. A typical set of patches includes several candidates for any object part, and some images may have more candidates for a given part than others.
For example, the trunk of a car may not be found in a picture of a hatchback, hence all its corresponding features are considered to be structurally missing from that image. Our object model contains a set of "landmarks", for which we find several matches in a given image (details are omitted due to lack of space). Fig. 2 shows examples of matches for the front windshield landmark. Because of the noisy matches, the highest scoring match often does not match the true landmark, and the number of high-quality matches (features) varies in practice. It is precisely in such a scenario that we expect our proposed algorithm to be effective.

In some cases, landmark models could provide confidence levels for each match. These could in principle be used as additional features to help the classifiers give more weight to better matches, and are expected to improve classification when the confidence measure is reliable. While this is a potentially useful approach for the current application, this paper takes a different approach: it does not use any soft confidence values but rather treats the low-confidence matches as wrong, removing them from the data. Concretely, we located up to 10 candidate patches (21 × 21 pixels) that were promising (likelihood above a given threshold) for each of the 19 landmarks in the car model. For each candidate, we computed the first 10 principal component coefficients of the image patch and concatenated these coefficients to form the image feature vector. If the number of patches for a given landmark is less than ten, we consider the rest to be structurally absent. We evaluated performance for this task using two levels of a 5-fold cross validation procedure, as explained above. We compared several kernels and report results using the kernel that fared best on the validation set, which was usually a second order polynomial kernel. Fig. 2c compares the accuracy of the different methods.
We found the geometric approach to be significantly superior to all other methods. To further evaluate our method, we qualitatively examined the classification results for several images across the various methods. Fig. 2a shows the top 10 matches for the front windshield landmark for a representative "easy" test instance, where all local features are approximately in agreement. This instance was correctly classified by all methods. In contrast, Fig. 2b shows a representative "hard" test instance where the local features cluster into two different groups. In this case, the cluster of bad matches was automatically excluded, yielding missing features, and our geometric approach was the only method able to classify the instance correctly.

4.2 Metabolic pathway reconstruction

As a final application, we consider the problem of predicting missing enzymes in metabolic pathways, a long-standing and important challenge in computational biology [15, 9]. Instances in this task have missing features due to the structure of the biochemical network. Cells use a complex network of chemical reactions to produce their chemical building blocks (Fig. 3). Each reaction transforms a set of molecular compounds (called substrates) into another set of molecules (products), and requires the presence of an enzyme to catalyze it. It is often unknown which enzyme catalyzes a given reaction, and it is desirable to predict the identity of such missing enzymes computationally.

Our approach for predicting missing enzymes is based on the observation that enzymes in local network neighborhoods usually participate in related functions. As a result, neighboring enzyme pairs have non-trivial correlations over their features that reflect their functional relations. Importantly, different types of neighborhood relations between enzyme pairs lead to different relations between their properties. For example, an enzyme in a linear chain depends on the preceding enzyme's product as its substrate.
Hence it is expected that the corresponding genes are co-expressed [9, 15]. On the other hand, enzymes in forking motifs (same substrate, different products) often have anti-correlated expression profiles [7]. To preserve the distinction between different neighbor relations, we defined a set of network motifs, including forks, funnels, and linear chains. Each enzyme is represented as a vector of features that measure its relatedness to each of its neighbors. A feature vector has structurally missing entries if the enzyme does not have all types of neighbors. For example, the enzyme PHA2 in Fig. 3 does not have a neighbor of type fork, and therefore all features assigned to such a neighbor are absent in the representation of the reaction "Prephenate → Phenylpyruvate".

Figure 3: Left: A fragment of the full metabolic pathway network in S. cerevisiae. Chemical reactions (arrows) transform a set of molecular compounds into other compounds. Small molecules like CO2 were omitted from this drawing for clarity. Reactions are catalyzed by enzymes (boxed names, e.g., ARO7), but in some cases these enzymes are unknown. The network imposes various neighborhood relations between enzymes assigned to reactions, like linear chains (ARO7, PHA2), forks (TRP2, ARO7) and funnels (ARO9, PHA2). Top right: Classification accuracy for compared methods. The classification task is to identify if a candidate enzyme is in the right "neighborhood". Error bars are SEMs over 5 cross validation sets. Bottom right: ROC curves for the same task.

We used the metabolic network of S. cerevisiae, as reconstructed by Palsson and colleagues [3], after removing 14 metabolic currencies and reactions with unknown enzymes, leaving 1265 directed reactions.
We used three data types: (1) A compendium of 645 gene expression experiments; each experimental condition k contributed one feature, the point-wise Pearson correlation $\frac{x_i(k)\, x_j(k)}{\|x_i\|\,\|x_j\|}$, where $x_i$ is the vector of expression levels across conditions. (2) The protein-domain content of each enzyme as found by the Prosite database; each domain k contributed one feature, the point-wise symmetric $D_{KL}$ measure $x_i(k)\log\frac{x_i(k)}{(x_i(k)+x_j(k))/2} + x_j(k)\log\frac{x_j(k)}{(x_i(k)+x_j(k))/2}$. (3) The cellular localization of the protein [6]; each cellular localization contributed one feature, the point-wise Hamming distance. In total, the feature vector length was about 3900.

Pathway reconstruction requires that we rank candidate enzymes by their potential to match a reaction. As a first step towards this goal, we train a binary classifier to predict whether an enzyme fits its neighborhood. We created a set of positive examples from the reactions with known enzymes (∼520 reactions), and also created negative examples by plugging impostor genes into 'wrong' neighborhoods. We trained an SVM classifier using a 5-fold cross validation procedure as described above. Figure 3 shows the classification error of the different methods in the gene filling task. The geometric margin approach achieves significantly better performance in this task. kNN achieved very poor performance compared to all other methods. One reason could be that the Euclidean distance is inappropriate for the current task, and that a more elaborate distance measure needs to be developed for this type of data. Learning metrics is a complicated task in general, and more so in the current problem, since the feature vectors contain entries of several different types, making it unlikely that a naive distance measure would work well. Finally, the resulting classifier is used for predicting missing enzymes, by ranking all candidate enzymes according to their match to a given neighborhood.
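The per-coordinate pair features listed above (data types 1 and 2) can be transcribed as follows; this is a sketch, and the guard against zero counts in the KL-style term is our addition:

```python
import numpy as np

def pair_features(xi, xj):
    """Point-wise features for an enzyme pair: each coordinate k contributes
    x_i(k)x_j(k)/(||x_i|| ||x_j||) (Pearson-style term), and a symmetric
    KL-style term x_i(k)log(x_i(k)/m) + x_j(k)log(x_j(k)/m), m = (x_i(k)+x_j(k))/2."""
    corr = xi * xj / (np.linalg.norm(xi) * np.linalg.norm(xj))
    m = (xi + xj) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        dkl = np.where((xi > 0) & (xj > 0),
                       xi * np.log(xi / m) + xj * np.log(xj / m),
                       0.0)  # coordinates with a zero count contribute nothing
    return corr, dkl
```

For identical profiles the KL-style term vanishes coordinate-wise, so agreement between neighbors shows up as zeros there and large values in the correlation term.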
Evaluating the quality of the ranking on known enzymes (by cross validation) shows that it significantly outperforms previous approaches [9] (not shown here due to space limitations). We attribute this to the ability of the current approach to preserve different types of network neighbors as separate features, in spite of creating missing values.

5 Discussion

We presented a novel method for max-margin training of classifiers in the presence of missing features, where the pattern of missing features is an inherent part of the domain. Instead of completing missing features in a preprocessing phase, we developed a max-margin learning objective based on a geometric interpretation of the margin when different instances essentially lie in different spaces. Using two challenging real-life problems, we showed that our method is significantly superior when the pattern of missing features has structure.

The standard treatment of missing features is based on the concept that missing features exist but are unobserved. This assumption underlies the approach of completing features before the data is used in classification. This paper focuses on a different scenario, in which features are inherently absent. In such cases, it is not clear why we should guess hypothetical values for undefined features: the completed values are filled in based on other observed values, and do not add information to our classifiers. In fact, by completing features that are not supposed to be part of an instance, we may be confusing the learning algorithm by presenting it with a problem that may be harder than the one we actually need to solve.

Interestingly, the problem of classifying with missing features is related to another problem, in which individual reliability measures are available for the features of each instance separately. This is a common case in the analysis of scientific measurements, where the reliability of each experiment could be provided separately.
For example, DNA microarray experiments have inherent measures of experimental noise levels, and biological variability is often estimated using replicates. This problem can be viewed in the same framework described in this paper: the geometric margin must be defined separately for each instance, since the different noise levels distort the relative scale of each coordinate of each instance separately. Relative to this setting, the completely missing and fully valid features discussed in this paper are extreme points on the spectrum of reliability. It will be interesting to see which aspects of the geometric formulation discussed in this paper can be extended to this new problem.

Acknowledgement: This paper was supported by an NSF grant DBI-0345474.

References

[1] A. Berg, T. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondence. In CVPR, 2005.
[2] P.H. Calamai and J.J. Moré. Projected gradient methods for linearly constrained problems. Math. Program., 39(1):93–116, 1987.
[3] J. Forster, I. Famili, P. Fu, B. Palsson, and J. Nielsen. Genome-scale reconstruction of the Saccharomyces cerevisiae metabolic network. Genome Research, 13(2):244–253, February 2003.
[4] Z. Ghahramani and M.I. Jordan. Supervised learning from incomplete data via an EM approach. In J.D. Cowan, G. Tesauro, and J. Alspector, editors, NIPS, volume 6, pages 120–127, 1994.
[5] K. Grauman and T. Darrell. Pyramid match kernels: Discriminative classification with sets of image features. In ICCV, 2005.
[6] W.K. Huh, J.V. Falvo, L.C. Gerke, A.S. Carroll, R.W. Howson, J.S. Weissman, and E.K. O'Shea. Global analysis of protein localization in budding yeast. Nature, 425:686–691, 2003.
[7] J. Ihmels, R. Levy, and N. Barkai. Principles of transcriptional control in the metabolic network of Saccharomyces cerevisiae. Nature Biotechnology, 22:86–92, 2003.
[8] D. Jensen and J. Neville. Linkage and autocorrelation cause feature selection bias in relational learning. In ICML, 2002.
[9] P. Kharchenko, D. Vitkup, and G.M. Church. Filling gaps in a metabolic network using expression information. Bioinformatics, 20:I178–I185, 2003.
[10] R.J.A. Little and D.B. Rubin. Statistical Analysis with Missing Data. Wiley, New York, 1987.
[11] M.S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second-order cone programming. Linear Algebra and its Applications, 284:193–228, 1998.
[12] A. Quattoni, M. Collins, and T. Darrell. Conditional random fields for object recognition. In L.K. Saul, Y. Weiss, and L. Bottou, editors, NIPS 17, pages 1097–1104, 2005.
[13] B. Schölkopf and A.J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, 2002.
[14] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[15] J.P. Vert and Y. Yamanishi. Supervised graph inference. In L.K. Saul, Y. Weiss, and L. Bottou, editors, NIPS 17, pages 1433–1440, 2004.
2006
Scalable Discriminative Learning for Natural Language Parsing and Translation

Joseph Turian, Benjamin Wellington, and I. Dan Melamed
{lastname}@cs.nyu.edu
Computer Science Department, New York University, New York, New York 10003

Abstract

Parsing and translating natural languages can be viewed as problems of predicting tree structures. For machine learning approaches to these predictions, the diversity and high dimensionality of the structures involved mandate very large training sets. This paper presents a purely discriminative learning method that scales up well to problems of this size. Its accuracy was at least as good as other comparable methods on a standard parsing task. To our knowledge, it is the first purely discriminative learning algorithm for translation with tree-structured models. Unlike other popular methods, this method does not require a great deal of feature engineering a priori, because it performs feature selection over a compound feature space as it learns. Experiments demonstrate the method's versatility, accuracy, and efficiency. Relevant software is freely available at http://nlp.cs.nyu.edu/parser and http://nlp.cs.nyu.edu/GenPar.

1 Introduction

Discriminative machine learning methods have led to better solutions for many problems in natural language processing (NLP), such as various kinds of sequence labeling. However, only limited advances have been made on NLP problems involving tree-structured prediction. State-of-the-art methods for both parsing and translation use discriminative methods, but they are still limited by their reliance on generative models that can be estimated relatively cheaply. For example, some parsers and translators use a generative model to generate a list of candidates, and then rerank them using a discriminative reranker (e.g., Henderson, 2004; Charniak & Johnson, 2005; Cowan et al., 2006).
Others use a generative model as a feature in a discriminative framework, because otherwise training is impractically slow (Collins & Roark, 2004; Taskar et al., 2004; Riezler & Maxwell, 2006). Similarly, the best machine translation (MT) systems use discriminative methods only to calibrate the weights of a handful of different knowledge sources, which are either enumerated by hand or learned automatically but not discriminatively (e.g., Chiang, 2005). The problem with generative models is that they are typically not regularized in a principled way, and it is difficult to make up for their unregularized risk post-hoc. It is also difficult to come up with a generative model for certain kinds of data, especially the kind used to train MT systems, so approaches that rely on generative models are hard to adapt.

This paper proposes a discriminative learning method that can scale up to large structured prediction problems, without using generative models in any way. The proposed method employs the traditional AI technique of predicting a structure by searching over possible sequences of inferences, where each inference predicts a part of the eventual structure. However, unlike most approaches employed in NLP, the proposed method makes no independence assumptions: the function that evaluates each inference can use arbitrary information not only from the input, but also from all previous inferences.

Let us define some terms to help explain how our algorithm predicts a tree. An item is a node in the tree. Every state in the search space consists of a set of items, representing nodes that have been inferred since the algorithm started. States whose items form a complete tree1 are final states. An inference is a (state, item) pair, i.e., a state and an item to be added to it. Each inference represents a transition from one state to another. A state is correct if it is possible to infer zero or more items to obtain the final state that corresponds to the training data tree.
Similarly, an inference is correct if it leads to a correct state. Given input s, the inference engine searches the possible complete trees T(s) for the tree $\hat{t} \in T(s)$ that has minimum cost $C_\Theta(t)$ under model Θ:

$$\hat{t} = \arg\min_{t \in T(s)} C_\Theta(t) = \arg\min_{t \in T(s)} \sum_{j=1}^{|t|} c_\Theta(i_j) \qquad (1)$$

The $i_j$ are the inferences involved in constructing tree t, and $c_\Theta(i)$ is the cost of an individual inference i. The number of states in the search space is exponential in the size of the input. The freedom to compute $c_\Theta(i)$ using arbitrary non-local information from anywhere in inference i's state precludes exact solutions by ordinary dynamic programming. We know of two effective ways to approach such large search problems. The first, which we use for our parsing experiments, is to severely restrict the order in which items can be inferred. The second, which we use for translation, is to make the simplifying assumption that the cost of adding a given item to a state is the same for all states. Under this assumption, the fraction of any state's cost due to a particular item can be computed just once per item, instead of once per state. However, in contrast to traditional context-free parsing algorithms, that computation can involve context-sensitive features.

An important design decision in learning the inference cost function $c_\Theta$ is the choice of feature set. Given the typically very large number of possible features, the learning method must satisfy two criteria. First, it must be able to learn effectively even if the number of irrelevant features is exponential in the number of examples. It is too time-consuming to manually figure out the right feature set for such problems. Second, the learned function must be sparse. Otherwise, it would be too large for the memory of an ordinary computer, and therefore impractical. Section 2 presents an algorithm that satisfies these criteria.
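The search of Eq. (1) can be sketched generically as best-first search over states scored by cumulative inference cost. Here `expand`, `cost`, and `is_final` are placeholders for the problem-specific machinery in the text, and the tie counter avoids comparing states inside the heap:

```python
import heapq

def best_first_search(initial_state, expand, cost, is_final):
    """Find a final state whose summed inference costs are minimal.

    expand(state)      -> iterable of candidate items to add
    cost(state, item)  -> cost of one inference (may inspect the whole state)
    is_final(state)    -> True when the items form a complete tree
    """
    frontier = [(0.0, 0, initial_state)]   # (cumulative cost, tiebreak, state)
    tie = 0
    while frontier:
        c, _, state = heapq.heappop(frontier)
        if is_final(state):
            return state, c
        for item in expand(state):
            tie += 1
            heapq.heappush(frontier, (c + cost(state, item), tie, state | {item}))
    return None, float("inf")
```

With non-negative costs this pops states in order of cumulative cost, so the first final state popped is optimal; in practice the paper controls the exponential frontier by restricting inference order (parsing) or assuming state-independent item costs (translation).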
This algorithm is in the family that has been shown to converge to an ℓ1-optimal separating hyperplane, which maximizes the minimum ℓ1-margin on separable training data (Rosset et al., 2004). Sections 3 and 4 present experiments on parsing and translation, respectively, illustrating the advantages of this algorithm. For lack of space, the experiments are described tersely; for details see Turian and Melamed (2006a) and Wellington et al. (2006). Also, Turian and Melamed (2006b) show how to reduce training time.

2 Learning Method

2.1 The Training Set

The training data used for both parsing and translation initially comes in the form of trees.2 These gold-standard trees are used to generate training examples, each of which is a candidate inference: Starting at the initial state, we randomly choose a sequence of correct inferences that lead to the (gold-standard) final state. All the candidate inferences that can possibly follow each state in this sequence become part of the training set. The vast majority of these inferences will lead to incorrect states, which makes them negative examples. An advantage of this method of generating training examples is that it does not require a working inference engine and can be run prior to any training. A disadvantage of this approach is that it does not teach the model to recover from mistakes. We conjecture that this approach is not subject to label bias because states can "dampen" the mass they receive, as recommended by Lafferty et al. (2001).

The training set I consists of training examples i, where each i is a tuple ⟨X(i), y(i), b(i)⟩. X(i) is a feature vector describing i, with each element in {0, 1}. We will use $X_f(i)$ to refer to the element of X(i) that pertains to feature f.

1 What counts as a complete tree is problem-specific. E.g., in parsing, a complete tree is one that covers the input and has a root labeled TOP.
2 Section 4 shows how to do MT by predicting a certain kind of tree.
y(i) = +1 if i is correct, and y(i) = −1 if not. Some training examples might be more important than others, so each is given a bias b(i) ∈ R+. By default, all b(i) = 1.

A priori, we define only a set A of simple atomic features (described later). The learner then induces compound features, each of which is a conjunction of possibly negated atomic features. Each atomic feature can have one of three values (yes/no/don't care), so the size of the compound feature space is $3^{|A|}$, exponential in the number of atomic features. In our experiments, it was also exponential in the number of training examples, because |A| ≈ |I|. For this reason, we expect that the number of irrelevant (compound) features is exponential in the number of training examples.

2.2 Objective Function

The training method induces a real-valued inference evaluation function $h_\Theta(i)$. In the present work, $h_\Theta$ is a linear model parameterized by a real vector Θ, which has one entry for each feature f:

$$h_\Theta(i) = \Theta \cdot X(i) = \sum_f \Theta_f \cdot X_f(i) \qquad (2)$$

The sign of $h_\Theta(i)$ predicts the y-value of i, and the magnitude gives the confidence in this prediction. The training procedure adjusts Θ to minimize the expected risk $R_\Theta$ over the training set I. $R_\Theta$ is the objective function, which is the sum of the loss function $L_\Theta$ and the regularization term $\Omega_\Theta$. We use the log-loss and ℓ1 regularization, so we have

$$R_\Theta(I) = L_\Theta(I) + \Omega_\Theta = \sum_{i \in I} l_\Theta(i) + \Omega_\Theta = \sum_{i \in I} \bigl[ b(i) \cdot \ln\bigl(1 + \exp(-\mu_\Theta(i))\bigr) \bigr] + \lambda \cdot \sum_f |\Theta_f| \qquad (3)$$

where λ is a parameter that controls the strength of the regularizer and $\mu_\Theta(i) = y(i) \cdot h_\Theta(i)$ is the margin of example i. The tree cost $C_\Theta$ (Equation 1) is obtained by computing the objective function with y(i) = +1 and b(i) = 1 for every inference in the tree, and treating the penalty term $\Omega_\Theta$ as constant, i.e., $c_\Theta(i) = \ln(1 + \exp(-h_\Theta(i)))$.
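Eq. (3) and the per-inference cost can be transcribed directly; this is a sketch (array shapes are our choice, not the authors' implementation):

```python
import numpy as np

def risk(theta, X, y, b, lam):
    """R_Theta(I): L1-regularized log-loss. Rows of X are feature vectors X(i),
    y holds labels in {-1, +1}, b the example biases, lam the L1 penalty."""
    margins = y * (X @ theta)                 # mu(i) = y(i) * h_Theta(i)
    loss = np.sum(b * np.log1p(np.exp(-margins)))
    return loss + lam * np.sum(np.abs(theta))

def inference_cost(theta, x):
    """c_Theta(i) = ln(1 + exp(-h_Theta(i))): the loss with y = +1, b = 1,
    and the penalty term held constant."""
    return np.log1p(np.exp(-(x @ theta)))
```

At Θ = 0 every margin is zero, so the risk reduces to $\sum_i b(i)\ln 2$, a handy sanity check.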
This choice of objective function was motivated by Ng (2004), who showed that it is possible to achieve sample complexity that is logarithmic in the number of irrelevant features by minimizing the ℓ1-regularized log-loss. On the other hand, Ng showed that most other discriminative learning algorithms used for structured prediction in NLP will overfit in this setting, including: the perceptron algorithm, unregularized logistic regression, logistic regression with an ℓ2 penalty (a Gaussian prior), SVMs using most kernels, and neural nets trained by back-propagation.

2.3 Boosting ℓ1-Regularized Decision Trees

We use an ensemble of confidence-rated decision trees (Schapire & Singer, 1999) to represent $h_\Theta$.3 Each internal node is split on an atomic feature. The path from the root to each node n in a decision tree corresponds to a compound feature f, and we write ϕ(n) = f. An inference i percolates down to node n iff $X_{\phi(n)}(i) = 1$. Each leaf node n keeps track of the parameter value $\Theta_{\phi(n)}$. To score an inference i using a decision tree, we percolate the inference down to a leaf n and return the confidence $\Theta_{\phi(n)}$. The score $h_\Theta(i)$ given to an inference i by the whole ensemble is the sum of the confidences returned by all trees in the ensemble.

Listing 1 Outline of training algorithm.

procedure T(I)
  ensemble ← ∅
  ℓ1 parameter λ ← ∞
  while not converged do
    t ← tree with one (root) node
    while the root node cannot be split do
      decay λ
    MT(t, I)

procedure MT(t, I)
  while some leaf in t can be split do
    split the leaf to maximize gain
  percolate every i ∈ I to a leaf node
  for each leaf n in t do
    update Θϕ(n) to minimize RΘ
  append t to ensemble

Listing 1 presents our training algorithm. At the beginning of training, the ensemble is empty, Θ = 0, and λ is set to ∞.
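The per-inference scoring described above (percolate down each tree, sum the leaf confidences) can be sketched with a toy tree encoding of our own: a leaf is a float confidence, and an internal node is a (feature, yes-branch, no-branch) triple.

```python
def ensemble_score(ensemble, x):
    """h(i) for one inference: x maps atomic feature names to 0/1; each tree
    contributes the confidence of the leaf the inference percolates to."""
    total = 0.0
    for tree in ensemble:
        node = tree
        while not isinstance(node, float):   # descend until we hit a leaf
            feat, yes, no = node
            node = yes if x.get(feat, 0) else no
        total += node
    return total

# Toy ensemble: a stump on atomic feature "a", and a depth-2 tree on "b" then "a".
ensemble = [("a", 0.7, -0.3),
            ("b", ("a", 0.2, 0.1), -0.5)]
print(ensemble_score(ensemble, {"a": 1, "b": 1}))  # 0.7 + 0.2
```

Each root-to-leaf path is a conjunction of (possibly negated) atomic features, so the leaves of one tree partition the examples, which is what later lets their parameters be optimized independently.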
We grow the ensemble until the objective cannot be further reduced for the current choice of λ. We then relax the regularization penalty by decreasing λ and continue training. In this way, instead of choosing the best λ heuristically, we can optimize it during a single training run.

Each invocation of MT has several steps. First, we choose some compound features that will allow us to decrease the objective function. We do this by building a decision tree, whose leaf node paths represent the chosen compound features. Second, we confidence-rate each leaf to minimize the objective over the examples that percolate down to that leaf. Finally, we append the decision tree to the ensemble and update the parameter vector Θ accordingly. In this manner, compound feature selection is performed incrementally during training, as opposed to a priori.

Our strategy for feature selection is a variant of steepest descent (Perkins et al., 2003), extended to work over the compound feature space. The construction of each decision tree begins with a root node, which corresponds to a dummy "always true" feature. To avoid the discontinuity at $\Theta_f = 0$ of the gradient of the regularization term in the objective (Equation 3), we define the gain of feature f as:

$$G_\Theta(I; f) = \max\left(0,\; \left|\frac{\partial L_\Theta(I)}{\partial \Theta_f}\right| - \lambda\right) \qquad (4)$$

The gain function indicates how the polyhedral structure of the ℓ1 norm tends to keep the model sparse (Riezler & Vasserman, 2004). Unless the magnitude of the gradient of the loss, $|\partial L_\Theta(I)/\partial \Theta_f|$, exceeds the penalty term λ, the gain is zero and the objective cannot be reduced by adjusting parameter $\Theta_f$ away from zero. However, if the gain is non-zero, $G_\Theta(I; f)$ is the magnitude of the gradient of the objective as we adjust $\Theta_f$ in the direction that reduces $R_\Theta$.

3 Turian and Melamed (2005) built more accurate parsers more quickly using decision trees rather than decision stumps, so we build full decision trees.
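The gain test of Eq. (4), together with the splitting criterion that follows from it, can be checked numerically; a minimal sketch with toy inputs (the arrays below are made up):

```python
import numpy as np

def gain(margins, b, y, f_mask, lam):
    """G(I; f) = max(0, |dL/dTheta_f| - lam) for the L1-regularized log-loss.
    Differentiating b(i)*ln(1 + exp(-mu(i))) through mu(i) gives
    dL/dTheta_f = -sum_{i: X_f(i)=1} b(i) * y(i) / (1 + exp(mu(i)))."""
    w = b / (1.0 + np.exp(margins))        # per-example weights
    grad = -np.sum((w * y)[f_mask])
    return max(0.0, abs(grad) - lam)

def worth_splitting(margins, b, y, f_mask, a_mask, lam):
    """Split leaf f on atomic feature a only if the children's total gain
    exceeds the gain of the unsplit node."""
    return (gain(margins, b, y, f_mask & a_mask, lam)
            + gain(margins, b, y, f_mask & ~a_mask, lam)
            > gain(margins, b, y, f_mask, lam))
```

With all margins zero, each example carries weight b(i)/2, so a compound feature that isolates one label dominates the split decision even when the parent's gradient cancels to zero.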
Let us define the weight of an example i under the current model as the rate at which loss decreases as the margin of i increases:

    w_Θ(i) = −∂l_Θ(i)/∂µ_Θ(i) = b(i) · 1/(1 + exp(µ_Θ(i)))    (5)

Now, to compute the gain (Equation 4), we note that:

    ∂L_Θ(I)/∂Θ_f = Σ_{i∈I} ∂l_Θ(i)/∂Θ_f
                 = Σ_{i∈I} [∂l_Θ(i)/∂µ_Θ(i)] · [∂µ_Θ(i)/∂Θ_f]
                 = −Σ_{i∈I} w_Θ(i) · [y(i) · X_f(i)]
                 = −Σ_{i∈I: X_f(i)=1} w_Θ(i) · y(i)    (6)

We recursively split leaf nodes by choosing the best atomic splitting feature that will allow us to increase the gain. Specifically, we consider splitting each leaf node n using atomic feature â, where

    â = argmax_{a∈A} [G_Θ(I; f ∧ a) + G_Θ(I; f ∧ ¬a)]    (7)

Splitting using â would create children nodes n1 and n2, with ϕ(n1) = f ∧ â and ϕ(n2) = f ∧ ¬â. We split node n using â only if the total gain of these two children exceeds the gain of the unsplit node, i.e. if:

    G_Θ(I; f ∧ â) + G_Θ(I; f ∧ ¬â) > G_Θ(I; f)    (8)

Otherwise, n remains a leaf node of the decision tree, and Θ_ϕ(n) becomes one of the values to be optimized during the parameter update step. Parameter update is done sequentially on only the most recently added compound features, which correspond to the leaves of the new decision tree. After the entire tree is built, we percolate each example down to its appropriate leaf node. A convenient property of decision trees is that the leaves' compound features are mutually exclusive, so their parameters can be directly optimized independently of each other. We use a line search to choose for each leaf node n the parameter Θ_ϕ(n) that minimizes the objective over the examples in n.

3 Parsing

The parsing algorithm starts from an initial state that contains one terminal item per input word, labeled with a part-of-speech (POS) tag by the method of Ratnaparkhi (1996). For simplicity and efficiency, we impose a (deterministic) bottom-up right-to-left order for adding items to a state. The resulting search space is still exponential, and one might worry about search errors.
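Before turning to the experiments, the weight, gain, and split computations of Section 2.3 (Equations 4–8) can be sketched on a toy problem. This is a minimal illustration, not the authors' implementation: the data layout, the uniform bias b(i) = 1, and all numbers below are our own assumptions.

```python
import numpy as np

def weights(margins, b=1.0):
    # Eq. 5: w(i) = b(i) / (1 + exp(mu(i)))
    return b / (1.0 + np.exp(margins))

def loss_gradient(X_f, y, w):
    # Eq. 6: dL/dTheta_f = -sum over {i : X_f(i) = 1} of w(i) * y(i)
    active = X_f == 1
    return -np.sum(w[active] * y[active])

def gain(X_f, y, w, lam):
    # Eq. 4: G(I; f) = max(0, |dL/dTheta_f| - lambda)
    return max(0.0, abs(loss_gradient(X_f, y, w)) - lam)

def best_split(X, y, w, lam, f_mask):
    # Eqs. 7-8: pick the atomic feature a maximizing
    # G(f ^ a) + G(f ^ ~a); split only if that beats G(f).
    base = gain(f_mask.astype(int), y, w, lam)
    best_a, best_total = None, base
    for a in range(X.shape[1]):
        pos = (f_mask & (X[:, a] == 1)).astype(int)
        neg = (f_mask & (X[:, a] == 0)).astype(int)
        total = gain(pos, y, w, lam) + gain(neg, y, w, lam)
        if total > best_total:
            best_a, best_total = a, total
    return best_a  # None means: leave the node as a leaf

# Four inferences with two atomic features; all margins are zero,
# so every example starts with weight 1/2.
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])
y = np.array([+1.0, +1.0, -1.0, -1.0])
w = weights(np.zeros(4))
root = np.ones(4, dtype=bool)  # the dummy "always true" root feature
a_hat = best_split(X, y, w, 0.1, root)  # feature 0 separates the labels
```

On this toy data the root feature has zero gain (the positive and negative weights cancel), while splitting on feature 0 yields positive gain on both children, so the split is accepted.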
However, in our experiments, the inference evaluation function was learned accurately enough to guide the parser to the optimal parse reasonably quickly without pruning, and thus without search errors.

Following Taskar et al. (2004), we trained and tested a parser using the algorithm in Section 2 on ≤15 word sentences from the English Penn Treebank (Taylor et al., 2003). We used sections 02–21 for training, section 22 for development, and section 23 for testing. There were 40 million training inferences.

Table 1: Accuracy on the English Penn Treebank, training and testing on sentences of ≤15 words.

                               % Recall   % Precision     F1
  Turian and Melamed (2005)      86.47       87.80       87.13
  Bikel (2004)                   87.85       88.75       88.30
  Taskar et al. (2004)           89.10       89.14       89.12
  our parser                     89.26       89.55       89.40

Turian and Melamed (2005) observed that uniform example biases b(i) produced lower accuracy as training progressed, because the model minimized the error per example. To minimize the error per state, we assigned every training state equal value, shared half the value uniformly among the negative examples generated from that state, and gave the other half to the positive examples.

Our atomic feature set A contained features of the form "is there an item in group J whose label/headword/headtag/headtagclass is X?". Possible values of X for each predicate were collected from the training data. Some examples of possible values for J are the last n child items, the first n left-context items, all right-context items, and the terminal items dominated by the non-head child items. These feature templates gave rise to 1.1 million different atomic features. Significantly smaller feature sets lowered accuracy on the development set.

To situate our results in the literature, we compared them to those reported by Taskar et al.
(2004) and Turian and Melamed (2005) for their discriminative parsers, which were also trained and tested on ≤15 word sentences.⁴ We also compared our parser to a representative non-discriminative parser (Bikel, 2004)⁵, the only one that we were able to train and test under exactly the same experimental conditions, including the use of POS tags from Ratnaparkhi (1996). The comparison was in terms of the standard PARSEVAL measures (Black et al., 1991): labeled precision, labeled recall, and labeled F-measure, which are based on the number of non-terminal items in the parser's output that match those in the gold-standard parse. Table 1 shows the results of these four parsers on the test set. The accuracy of our parser is at least as high as that of comparable parsers in the literature.

An advantage of our choice of loss function is that each of the binary classifiers can be learned independently of the others. We parallelized training by inducing 26 separate classifiers, one for each non-terminal label in the Penn Treebank. It took less than five CPU-days to build each of the ensembles used at test time by the final parser. By comparison, it took several CPU-months to train the parser of Taskar et al. (2004) (Dan Klein, p.c.).

4 Translation

The experiments in this section employed the tree transduction approach to translation, which is used by today's best MT systems (Marcu et al., 2006). To translate by tree transduction, we assume that the input sentence has already been parsed by a parser like the one described in Section 3. The transduction algorithm performs a sequence of inferences to transform this input parse tree into an output parse tree, which has words of the target language in its leaves, often in a different order than the corresponding words in the source tree. The words are then read off the target tree and output; the rest of the tree is discarded.
Inferences are ordered by their cost, just as in ordinary parsing, and tree transduction stops when each source node has been transduced.

The data for our experiments came from the English and French components of the EuroParl corpus (Koehn, 2005). From this corpus, we extracted sentence pairs where both sentences had between 5 and 40 words, and where the ratio of their lengths was no more than 2:1. We then extracted disjoint training, tuning, development, and test sets. The tuning, development, and test sets were 1000 sentence pairs each. Typical MT systems in the literature are trained on hundreds of thousands of sentence pairs, so our main experiment used 100K sentence pairs of training data. Where noted, preliminary experiments were performed using 10K sentence pairs of training data.

We computed parse trees for all the English sentences in all data sets. For each of our two training sets, we induced word alignments using the default configuration of GIZA++ (Och & Ney, 2003). The training set word alignments and English parse trees were fed into the default French-English hierarchical alignment algorithm distributed with the GenPar system (Burbank et al., 2005) to produce binarized tree alignments. Tree alignments are the ideal form of training data for tree transducers, because they fully specify the relation between nodes in the source tree and nodes in the target tree.

⁴ The results reported by Taskar et al. (2004) were not for a purely discriminative parser. Their parser beat the generative model of Bikel (2004) only after using the output from a generative model as a feature.
⁵ Bikel (2004) is a "clean room" reimplementation of the Collins (1999) model with comparable accuracy.

We experimented with a simplistic tree transducer that involves only two types of inferences. The first type transduces words at the leaves of the source tree; the second type transduces internal nodes.
To transduce a word w at a leaf, the transducer replaces it with a single word v that is a translation of w. v can be empty ("NULL"); leaves that are transduced to NULL are deterministically erased. Internal nodes are transduced merely by permuting the order of their children, where one of the possible permutations is to retain the original order. E.g., for a node with two children, the permutation classifier predicts either (1,2) or (2,1). This transducer is grossly inadequate for modeling real translations (Galley et al., 2004): it cannot account for many kinds of noise nor for many real translingual phenomena, such as head-switching and discontinuous constituents, which are important for accurate MT. It cannot even capture common "phrasal" translations such as English there is to French il y a. However, it is sufficient for controlled comparison of learning methods. One could apply the same learning methods to more sophisticated tree transducers.

When inducing leaf transducers using 10K training sentence pairs, there were 819K training inferences and 80.9K tuning inferences. For 100K training sentence pairs, there were 36.8M and 375K, respectively. For inducing internal node transducers using 100K training sentence pairs, there were 1.0M and 9.2K, respectively. 362K leaf transduction inferences were used for development.

We parallelized training of the word transducers according to the source and target word pair (w, v). Prior to training, we filtered out word translation examples that were likely to be noise.⁶ Given this filtering, we induced 11.6K different word transducers over 10K training sentence pairs, and 41.3K over 100K sentence pairs.

We used several kinds of features to evaluate leaf transductions. "Window" features included the source words and part-of-speech (POS) tags within a 2-word window around the word in the leaf (the "focus" word), along with their relative positions (from −2 to +2).
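The window features just described can be sketched as follows; the string encoding of each feature is our own illustrative choice, not the paper's actual feature representation.

```python
# Sketch of "window" feature extraction: each source word and POS tag
# within a k-word window around the focus word, paired with its
# relative position. The feature-string format is an assumption.

def window_features(words, tags, focus, k=2):
    feats = []
    for offset in range(-k, k + 1):
        j = focus + offset
        if 0 <= j < len(words):
            feats.append(f"word[{offset:+d}]={words[j]}")
            feats.append(f"tag[{offset:+d}]={tags[j]}")
    return feats

# Focus on "cat": offsets -1, 0, +1 fall inside the sentence.
feats = window_features(["the", "cat", "sat"], ["DT", "NN", "VBD"], focus=1)
```

Each returned string would then be treated as one atomic feature by the classifier.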
"Co-occurrence" features included all words and POS tags from the whole source sentence, without position information. "Dependency" features were compiled from the automatically generated English parse trees. The literature on monolingual parsing gives a standard procedure for annotating each node in an English parse tree with its "lexical head word." The dependency features of each word were the label of its maximal projection⁷, the label and lexical head of the parent of the maximal projection, the label and lexical head of all dependents of the maximal projection, and all the labels of all head-children, recursively, of the maximal projection. The features used to evaluate transductions of internal nodes included all those listed for leaf transduction above, where the focus words were the head words of the children of the internal node. Using these features, we applied the method of Section 2 to induce confidence-rating binary classifiers for each word pair in the lexicon, and additional binary classifiers for predicting the permutations of the children of internal tree nodes.

Before attempting the whole transduction task, we compared the model of Section 2 with the model of Vickrey et al. (2005), which learned word transduction classifiers using logistic regression with ℓ2 regularization. The ℓ2 parameters were optimized using the conjugate gradient implementation of Daumé (2004). We induced word transduction classifiers over the 10K training data using this model and our own, and tested them on the development set. The accuracy of the two models was statistically indistinguishable (about 54%). However, the models were vastly different in their size. The boosted decision trees had a total of about 38.7K non-zero compound features over an even smaller number of atomic features. In contrast, the ℓ2-regularized model had about 6.5 million non-zero features, an increase of more than two orders of magnitude.
We estimated that, to scale up to the training data sizes typically used by modern statistical MT systems, the ℓ2 classifiers would not fit in memory. To make them fit, we set all but the heaviest feature weights to zero; the number of features allowed to remain active in the ℓ2 classifier was the number of active features in the ℓ1 classifier. With the playing field leveled, the accuracy of the ℓ2 classifiers was only 45%, even worse than the baseline accuracy of 48% obtained by always predicting the most common translation.

⁶ Specifically: v was retained as a possible translation of w if v was the most frequent translation of w, or if v occurred as a translation of w at least three times and accounted for at least 20% of the translations of w in the training data.
⁷ I.e., the highest node that has the focus word as its lexical head; if it is a leaf, then that label is a POS tag.

Table 2: Accuracy of tree transducers using 100K sentence pairs of training data.

                      exponent = 1.0              exponent = 2.0
                 Precision  Recall    F1     Precision  Recall    F1
  generative       51.29     38.30   43.85     22.62     16.90   19.35
  discriminative   62.36     39.06   48.04     28.02     17.55   21.59

In the main experiment, we compared two models of the inference cost function c_Θ: one generative and one discriminative. The generative model was a top-down tree transducer (Comon et al., 1997), which stochastically generates the target tree top-down given the source tree. Under this model, the loss of an inference i is the negative log-probability of the node n(i) that it infers. We estimated the parameters of this transducer using the Viterbi approximation to the inside-outside algorithm described by Graehl and Knight (2004). We lexicalized the nodes so that their probabilities could capture bilexical dependencies.
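The weight-truncation used above to level the playing field can be sketched as follows. This is a minimal sketch of the truncation rule only; the weight values are illustrative.

```python
import numpy as np

def truncate_to_k(theta, k):
    """Zero all but the k largest-magnitude entries of theta,
    mimicking 'set all but the heaviest feature weights to zero'."""
    out = np.zeros_like(theta)
    if k > 0:
        idx = np.argsort(-np.abs(theta))[:k]
        out[idx] = theta[idx]
    return out

# A toy dense l2 weight vector, truncated to the sparsity of a
# hypothetical l1 model with k = 2 active features.
dense = np.array([0.05, -2.0, 0.3, 1.1, -0.01])
sparse = truncate_to_k(dense, k=2)  # keeps -2.0 and 1.1
```

The resulting model has exactly as many active features as the ℓ1-regularized one, so any remaining accuracy difference reflects which features each method chose to keep.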
Our hypothesis was that the discriminative approach would be more accurate than the generative model, because its evaluation of each inference could take into account a greater variety of information in the tree, including its entire yield (string), not just the information in nearby nodes.

We used the second search technique described in Section 1 to find the minimum cost target tree. For efficiency, we used a chart to keep track of item costs, and pruned items whose cost was more than 10³ times the cost of the least expensive item in the same chart cell. We also pruned items whenever the number of items in the same cell exceeded 40. Our entire tree transduction algorithm was equivalent to bottom-up synchronous parsing (Melamed, 2004) where the source side of the output bi-tree is constrained by the input (source) tree.

We compared the generative and discriminative models by reading out the string encoded in their predicted trees, and computing the F-measure between that string and the reference target sentence in the test corpus. Turian et al. (2003) show how to compute precision, recall, and the F-measure over pairs of strings without double-counting. Their family of measures is parameterized by an exponent. With the exponent set to 1.0, the F-measure is essentially the unigram overlap ratio. With the exponent set to 2.0, the F-measure rewards longer n-gram matches without double-counting.

The generative transducer achieved its highest F-measure when the input parse trees were computed by the generative parser of Bikel (2004). The discriminatively trained transducer was most accurate when the source trees were computed by the parser in Section 3. Table 2 shows the results: the discriminatively trained transducer was much more accurate on all measures, at a statistical significance level of 0.001 using the Wilcoxon signed ranks test.

Conclusion

We have demonstrated how to predict tree structures using binary classifiers.
These classifiers are discriminatively induced by boosting confidence-rated decision trees to minimize the ℓ1-regularized log-loss. For large problems in tree-structured prediction, such as natural language parsing and translation, this learning algorithm has several attractive properties. It learned a purely discriminative machine over 40 million training examples and 1.1 million atomic features, using no generative model of any kind. The method did not require a great deal of feature engineering a priori, because it performed feature selection over a compound feature space as it learned. To our knowledge, this is the first purely discriminatively trained constituent parser that surpasses a generative baseline, as well as the first published method for purely discriminative training of a syntax-driven MT system that makes no use of generative translation models, either in training or translation. In future work, we plan to integrate the parsing and translation methods described in our experiments, to reduce compounded error.

Acknowledgments

The authors would like to thank Léon Bottou, Patrick Haffner, Fernando Pereira, Cynthia Rudin, and the anonymous reviewers for their helpful comments and constructive criticism. This research was sponsored by NSF grants #0238406 and #0415933.

References

Bikel, D. M. (2004). Intricacies of Collins' parsing model. Computational Linguistics, 30(4), 479–511.
Black, E., Abney, S., Flickenger, D., Gdaniec, C., Grishman, R., Harrison, P., et al. (1991). A procedure for quantitatively comparing the syntactic coverage of English grammars. In Speech and Natural Language.
Burbank, A., Carpuat, M., Clark, S., Dreyer, M., Fox, P., Groves, D., et al. (2005). Final report on statistical machine translation by parsing (Tech. Rep.). Johns Hopkins University Center for Speech and Language Processing. http://www.clsp.jhu.edu/ws2005/groups/statistical/report.html.
Charniak, E., & Johnson, M. (2005).
Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In ACL.
Chiang, D. (2005). A hierarchical phrase-based model for statistical machine translation. In ACL.
Collins, M. (1999). Head-driven statistical models for natural language parsing. Doctoral dissertation, University of Pennsylvania.
Collins, M., & Roark, B. (2004). Incremental parsing with the perceptron algorithm. In ACL.
Comon, H., Dauchet, M., Gilleron, R., Jacquemard, F., Lugiez, D., Tison, S., et al. (1997). Tree automata techniques and applications. Available at http://www.grappa.univ-lille3.fr/tata. (Released October 1, 2002.)
Cowan, B., Kučerová, I., & Collins, M. (2006). A discriminative model for tree-to-tree translation. In EMNLP.
Daumé, H. (2004). Notes on CG and LM-BFGS optimization of logistic regression. (Paper available at http://pub.hal3.name#daume04cg-bfgs, implementation available at http://hal3.name/megam/.)
Galley, M., Hopkins, M., Knight, K., & Marcu, D. (2004). What's in a translation rule? In HLT-NAACL.
Graehl, J., & Knight, K. (2004). Training tree transducers. In HLT-NAACL.
Henderson, J. (2004). Discriminative training of a neural network statistical parser. In ACL.
Koehn, P. (2005). Europarl: A parallel corpus for statistical machine translation. In MT Summit X.
Lafferty, J., McCallum, A., & Pereira, F. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML.
Marcu, D., Wang, W., Echihabi, A., & Knight, K. (2006). SPMT: Statistical machine translation with syntactified target language phrases. In EMNLP.
Melamed, I. D. (2004). Statistical machine translation by parsing. In ACL.
Ng, A. Y. (2004). Feature selection, ℓ1 vs. ℓ2 regularization, and rotational invariance. In ICML.
Och, F. J., & Ney, H. (2003). A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1), 19–51.
Perkins, S., Lacker, K., & Theiler, J. (2003).
Grafting: Fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 3, 1333–1356.
Ratnaparkhi, A. (1996). A maximum entropy part-of-speech tagger. In EMNLP.
Riezler, S., & Maxwell, J. T. (2006). Grammatical machine translation. In HLT-NAACL.
Riezler, S., & Vasserman, A. (2004). Incremental feature selection and ℓ1 regularization for relaxed maximum-entropy modeling. In EMNLP.
Rosset, S., Zhu, J., & Hastie, T. (2004). Boosting as a regularized path to a maximum margin classifier. Journal of Machine Learning Research, 5, 941–973.
Schapire, R. E., & Singer, Y. (1999). Improved boosting using confidence-rated predictions. Machine Learning, 37(3), 297–336.
Taskar, B., Klein, D., Collins, M., Koller, D., & Manning, C. (2004). Max-margin parsing. In EMNLP.
Taylor, A., Marcus, M., & Santorini, B. (2003). The Penn Treebank: An overview. In A. Abeillé (Ed.), Treebanks: Building and using parsed corpora (chap. 1).
Turian, J., & Melamed, I. D. (2005). Constituent parsing by classification. In IWPT.
Turian, J., & Melamed, I. D. (2006a). Advances in discriminative parsing. In ACL.
Turian, J., & Melamed, I. D. (2006b). Computational challenges in parsing by classification. In HLT-NAACL Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing.
Turian, J., Shen, L., & Melamed, I. D. (2003). Evaluation of machine translation and its evaluation. In MT Summit IX.
Vickrey, D., Biewald, L., Teyssier, M., & Koller, D. (2005). Word-sense disambiguation for machine translation. In EMNLP.
Wellington, B., Turian, J., Pike, C., & Melamed, I. D. (2006). Scalable purely-discriminative training for word and tree transducers. In AMTA.
On the Relation Between Low Density Separation, Spectral Clustering and Graph Cuts

Hariharan Narayanan, Department of Computer Science, University of Chicago, Chicago IL 60637, hari@cs.uchicago.edu
Mikhail Belkin, Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, mbelkin@cse.ohio-state.edu
Partha Niyogi, Department of Computer Science, University of Chicago, Chicago IL 60637, niyogi@cs.uchicago.edu

Abstract

One of the intuitions underlying many graph-based methods for clustering and semi-supervised learning is that class or cluster boundaries pass through areas of low probability density. In this paper we provide some formal analysis of that notion for a probability distribution. We introduce a notion of weighted boundary volume, which measures the length of the class/cluster boundary weighted by the density of the underlying probability distribution. We show that the sizes of the cuts of certain commonly used data adjacency graphs converge to this continuous weighted volume of the boundary.

Keywords: clustering, semi-supervised learning

1 Introduction

Consider the probability distribution with density p(x) depicted in Fig. 1, where darker color denotes higher probability density. Asked to cluster this probability distribution, we would probably separate it into two roughly Gaussian bumps as shown in the left panel. The same intuition applies to semi-supervised learning. Asked to point out more likely groups of data of the same type, we would be inclined to believe that these two bumps contain data points with the same labels. On the other hand, the class boundary shown in the right panel seems rather less likely. One way to state this basic intuition is the Low Density Separation assumption [5], which says that the class/cluster boundary tends to pass through regions of low density. In this paper we propose a formal measure of the complexity of the boundary, which intuitively corresponds to the Low Density Separation assumption.
We will show that given a class boundary, this measure can be computed from a finite sample from the probability distribution. Moreover, we show that this is done by computing the size of a cut for a partition of a certain standard adjacency graph defined on that sample, and we point out some interesting connections to spectral clustering.

To fix our intuitions, let us consider the question of what makes the cut in the left panel more intuitively acceptable than the cut in the right. Two features of the left cut make it more pleasing: the cut is shorter in length, and the cut passes through a low-density area.

Figure 1: A likely cut and a less likely cut.

Note that a very jagged cut through a low-density area or a short cut through the middle of a high-density bump would be unsatisfactory. It therefore seems reasonable to take the length of the cut as a measure of its complexity, but to weight it by the density of the probability distribution p through which it passes. In other words, we propose the weighted length of the boundary, represented by the contour integral along the boundary ∫_cut p(s) ds, to measure the complexity of a cut. It is clear that the boundary in the left panel has a considerably lower weighted length than the boundary in the right panel of Fig. 1.

To formalize this notion further, consider a (marginal) probability distribution with density p(x) supported on some domain or manifold M. This domain is partitioned into two disjoint clusters/parts. Assuming that the boundary S is a smooth hypersurface, we define the weighted volume of the cut to be ∫_S p(s) ds. Note that, just as in the example above, the integral is taken over the surface of the boundary. We will show how this quantity can be approximated given empirical data and establish connections with some popular graph-based methods.
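The weighted length ∫_cut p(s) ds just introduced can be approximated numerically for any parametrized planar curve; the toy density and curve below are our own illustrative choices.

```python
import math

def weighted_length(curve, p, n=10000):
    """Approximate the integral of p along a parametrized curve
    (t in [0,1] -> (x,y)) by summing p at segment midpoints times
    segment lengths."""
    total = 0.0
    for i in range(n):
        (x0, y0), (x1, y1) = curve(i / n), curve((i + 1) / n)
        ds = math.hypot(x1 - x0, y1 - y0)
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        total += p(mx, my) * ds
    return total

# A vertical cut x = 1/2 through the unit square under the uniform
# density p = 1 has weighted length 1 (ordinary length times density).
L = weighted_length(lambda t: (0.5, t), lambda x, y: 1.0)
```

A jagged cut through the same low-density region would show up here as a larger value, since the ds terms accumulate extra length, which matches the intuition in the text.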
2 Connections and related work

2.1 Spectral Clustering

Over the last two decades there has been considerable interest in various spectral clustering techniques (see, e.g., [6] for an overview). The idea of spectral clustering can be expressed very simply. Given a graph, we would often like to construct a balanced partitioning of the vertex set, i.e. a partitioning which minimizes the number (or total weight) of edges across the cut. This is generally an NP-hard optimization problem. It turns out, however, that a simple real-valued relaxation can be used to reduce it to standard linear algebra, typically to finding eigenvectors of a certain graph Laplacian. We note that the quality of a partition is usually measured in terms of the corresponding cut size.

A critical question, when this notion is applied to general purpose clustering in the context of machine learning, is how to construct the graph given data points. A typical choice here is Gaussian weights (e.g., [14]). To summarize, a graph is obtained from a point cloud, using Gaussian or other weights, and partitioned using spectral clustering or a different algorithm which attempts to approximate the smallest (balanced) cut. We note that while the intuition is that spectral clustering is an approximation to the minimum cut, and is closely related to random walks and diffusions on graphs and the underlying probability distributions ([13, 12]), existing results on convergence of spectral clustering ([11]) do not provide a formal interpretation of the limiting partition or connect it to the size of the resulting cut.

2.2 Graph-based semi-supervised learning

Similarly to spectral clustering, graph-based semi-supervised learning constructs a graph from the data. In contrast to clustering, however, some of the data is labeled. The problem is typically either to label the unlabeled points (transduction) or, more generally, to build a classifier defined on the whole space.
This may be done by trying to find the minimum cut which respects the labels of the data directly ([3]), or by using the graph Laplacian as a penalty functional (e.g., [15, 1]).

Figure 2: Curves of small and high condition number, respectively.

One of the important intuitions of semi-supervised learning is the cluster assumption (e.g., [4]) or, more specifically, the low density separation assumption suggested in [5], which states that the class boundary passes through a low density region. We argue that this intuition needs to be slightly modified: cutting through a high density region may be acceptable as long as the length of the cut is very short. For example, imagine two high-density round clusters connected by a very thin high-density thread. Cutting the thread is appropriate as long as the width of the thread is much smaller than the radii of the clusters.

2.3 Convergence of Manifold Laplacians

Another closely related line of research concerns the connections between point-cloud graph Laplacians and Laplace-Beltrami operators on manifolds, which have been explored recently in [9, 2, 10]. A typical result in that setting shows that for a fixed function f and points sampled from a probability distribution on a manifold or a domain, the graph Laplacian applied to f converges to the manifold Laplace-Beltrami operator ∆_M f. We note that the results of those papers cannot be directly applied in our situation, as for us f is the indicator function of a subset and is not differentiable. Even more importantly, this paper establishes an explicit connection between the point-cloud Laplacian applied to such characteristic functions (weighted graph cuts) and a geometric quantity, which is the weighted volume of the cut boundary. This geometric connection does not easily follow from the results of those papers, and the techniques used in the proof of our Theorem 3 are significantly different.

3 Summary of the Main Results

Let p be a probability density function on a domain M ⊆ R^d.
Let S be a smooth hypersurface that separates M into two parts, S1 and S2. The smoothness of S will be quantified by a condition number 1/τ, where τ is the radius of the largest ball that can be placed tangent to the manifold at any point while intersecting the manifold at only one point. It bounds the curvature of the manifold.

Definition 1. Let K_t(x, y) be the heat kernel in R^d, given by

    K_t(x, y) := (4πt)^{−d/2} e^{−∥x−y∥²/4t}.

Let M_t := K_t(x, x) = (4πt)^{−d/2}. Let X := {x_1, . . . , x_N} be a set of N points chosen independently at random from p. Consider the complete graph whose vertices are associated with the points in X, and where the weight of the edge between x_i and x_j, i ≠ j, is given by

    W_ij = K_t(x_i, x_j).

Let W be the weight matrix. Let X1 = X ∩ S1 and X2 = X ∩ S2 be the data points which land in S1 and S2, respectively. Let D be the diagonal matrix whose entries are the row sums of W (degrees of the corresponding vertices),

    D_ii = Σ_j W_ij.

The normalized Laplacian associated to the data X (and parameter t) is the matrix

    L(t, X) := I − D^{−1/2} W D^{−1/2}.

Let f = (f_1, . . . , f_N) be the indicator vector for X1: f_i = 1 if x_i ∈ X1, and f_i = 0 otherwise.

There are two quantities of interest:

1. ∫_S p(s) ds, which measures the quality of the partition S in accordance with the weighted volume of the boundary.
2. f^T L f, which measures the quality of the empirical partition in terms of its cut size.

Our main theorem shows that after an appropriate scaling, the empirical cut size converges to the weighted volume of the boundary.

Theorem 1. Let the number of points |X| = N tend to infinity, and let {t_N} be a sequence of values of t tending to zero such that t_N > N^{−1/(2d+2)}. Then with probability 1,

    lim [√π / (N √t_N)] f^T L(t_N, X) f = ∫_S p(s) ds.

Further, for any δ ∈ (0, 1) and any ε ∈ (0, 1/2), there exist a positive constant C and an integer N_0 (depending on ε, δ and certain generic invariants of p and S) such that with probability 1 − δ, for all N > N_0,

    | [√π / (N √t_N)] f^T L(t_N, X) f − ∫_S p(s) ds | < C t_N^ε.
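As a purely illustrative sanity check of Theorem 1, one can compute the scaled cut size β = [√π / (N √t)] f^T L(t, X) f for points drawn uniformly from the unit square, cut by the line x = 1/2. For this density the weighted boundary volume ∫_S p(s) ds is 1, and β should approach it as N grows and t shrinks; the sample size, bandwidth, and seed below are arbitrary choices, and boundary effects of the square keep the finite-sample value only roughly near 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N, t, d = 2000, 0.01, 2
X = rng.random((N, d))                 # uniform sample from [0,1]^2
f = (X[:, 0] < 0.5).astype(float)      # indicator vector of S1

# Heat-kernel weights W_ij = K_t(x_i, x_j), with zero diagonal
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq / (4 * t)) / (4 * np.pi * t) ** (d / 2)
np.fill_diagonal(W, 0.0)

# L = I - D^{-1/2} W D^{-1/2}, applied to f without forming L
Dm = 1.0 / np.sqrt(W.sum(1))           # D^{-1/2} as a vector
Lf = f - Dm * (W @ (Dm * f))
beta = (np.sqrt(np.pi) / (N * np.sqrt(t))) * (f @ Lf)
```

Note that the un-scaled quantity f^T L f by itself grows with N; it is the √π / (N √t) normalization of the theorem that makes it comparable to the geometric boundary volume.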
This theorem is proved by first relating the empirical quantity [√π / (N √t)] f^T L(t_N, X) f to a heat flow across the relevant cut (on the continuous domain), and then relating the heat flow to the measure of the cut. In order to state these results, we need the following notation.

Definition 2. Let

    ψ_t(x) = p(x) / √( ∫_M K_t(x, z) p(z) dz ).

Let

    β(t, X) := [√π / (N √t)] f^T L(t, X) f

and

    α(t) := √(π/t) ∫_{S1} ∫_{S2} K_t(x, y) ψ_t(x) ψ_t(y) dx dy.

Where t and X are clear from context, we shall abbreviate β(t, X) to β and α(t) to α.

In Theorem 2 we show that for a fixed t, as the number of points |X| = N tends to infinity, with probability 1, β(t, X) tends to α(t). In Theorem 3 we show that α(t) can be made arbitrarily close to the weighted volume of the boundary by letting t tend to 0.

Figure 3: Heat flow α tends to ∫_S p(s) ds.

Theorem 2. Let 0 < µ < 1 and let u := 1/√(t^{2d+1} N^{1−µ}). Then there exist positive constants C1, C2, depending only on p and S, such that with probability greater than 1 − exp(−C1 N^µ),

    |β(t, X) − α(t)| < C2 (1 + t^{(d+1)/2}) u α(t).    (1)

Theorem 3. For any ε ∈ (0, 1/2), there exists a constant C such that for all t with 0 < t < τ (2d)^{−e/(e−1)},

    | √(π/t) ∫_{S2} ∫_{S1} K_t(x, y) ψ_t(x) ψ_t(y) dx dy − ∫_S p(s) ds | < C t^ε.    (2)

By letting N → ∞ and t_N → 0 at suitable rates and putting together Theorems 2 and 3, we obtain the following theorem.

Theorem 4. Let the number of random data points N → ∞ and t_N → 0 at rates such that u := 1/√(t^{2d+1} N^{1−µ}) → 0. Then for any ε ∈ (0, 1/2), there exist positive constants C1, C2, depending only on p and S, such that for any N > 1, with probability greater than 1 − exp(−C1 N^µ),

    | β(t_N, X) − ∫_S p(s) ds | < C2 (t^ε + u).    (3)

4 Outline of Proofs

Theorem 1 is a corollary of Theorem 4, obtained by setting u to be t^ε and setting µ to (1 − 2ε)/(2d + 2).
$N_0$ is chosen to be a large enough integer so that an application of the union bound over all $N > N_0$ still gives a probability $\sum_{N > N_0} \exp(-C_1 N^{\mu}) < \delta$ of the rate of convergence being worse than stated in Theorem 1. Theorem 4 is a direct consequence of Theorems 2 and 3.

Theorem 2: We prove Theorem 2 using a generalization of McDiarmid's inequality from [7, 8]. McDiarmid's inequality asserts that a function of a large number of independent random variables that is not strongly influenced by the value of any one of them takes a value close to its mean. In the generalization that we use, the function is permitted to be highly influenced by some of the random variables on a bad set of small probability mass. In our setting, it can be shown that our measure of a cut, $f^T L f$, is such a function of the independent random points in X, and so the result is applicable. There is another step involved, since the mean of $f^T L f$ is not $\alpha$, the quantity to which we wish to prove convergence. We therefore need to prove that the mean $E\left[\frac{\sqrt{\pi}}{N \sqrt{t}}\, f^T L(t, X) f\right]$ tends to $\alpha(t)$ as N tends to infinity. Now,
$$\frac{\sqrt{\pi}}{N \sqrt{t}}\, f^T L(t, X) f = \frac{1}{N}\sqrt{\frac{\pi}{t}} \sum_{x \in X_1} \sum_{y \in X_2} \frac{K_t(x, y)}{\left\{ \left( \sum_{z \neq x} K_t(x, z) \right) \left( \sum_{z \neq y} K_t(y, z) \right) \right\}^{1/2}}.$$
If, instead, we had in the denominator of the right-hand side
$$\sqrt{\int_M p(z) K_t(x, z)\,dz \int_M p(z) K_t(y, z)\,dz},$$
then, using the linearity of expectation,
$$E\left[ \frac{1}{N(N-1)} \sqrt{\frac{\pi}{t}} \sum_{x \in X_1} \sum_{y \in X_2} \frac{K_t(x, y)}{\sqrt{\left( \int_M p(z) K_t(x, z)\,dz \right) \left( \int_M p(z) K_t(y, z)\,dz \right)}} \right] = \alpha(t).$$
Using Chernoff bounds, we can show that with high probability, for all $x \in X$,
$$\frac{\sum_{z \neq x} K_t(x, z)}{N - 1} \approx \int_M p(z) K_t(x, z)\,dz.$$
Putting the last two facts together and using the generalization of McDiarmid's inequality from [7, 8], the result follows. Since the exact details require fairly technical calculations, we leave them to the journal version.
Theorem 3: The quantity
$$\alpha := \sqrt{\frac{\pi}{t}} \int_{S_1} \int_{S_2} K_t(x, y)\, \psi_t(x)\, \psi_t(y)\,dx\,dy$$
is similar to the heat that would flow in time t from one part to the other if the first part were heated proportionally to p. Intuitively, the heat that would flow from one part to the other in a small interval ought to be related to the volume of the boundary between these two parts, which in our setting is $\int_S p(s)\,ds$. To prove this relationship, we bound $\alpha$ both above and below in terms of the weighted volume and condition number of the boundary. These bounds are obtained by making comparisons with the "worst case" for condition number $\frac{1}{\tau}$, which is when S is a sphere of radius $\tau$. In order to obtain a lower bound on $\alpha$, we observe that if $B_2$ is the nearest ball of radius $\tau$ contained in $S_1$ to a point P in $S_2$ that is within $\tau$ of $S_1$, then
$$\int_{S_1} K_t(x, P)\, \psi_t(x)\, \psi_t(P)\,dx \geq \int_{B_2} K_t(x, P)\, \psi_t(x)\, \psi_t(P)\,dx,$$
as in Figure 4. Similarly, to obtain an upper bound on $\alpha$, we observe that if $B_1$ is a ball of radius $\tau$ in $S_2$, tangent to $B_2$ at the point of S nearest to P, then
$$\int_{S_1} K_t(x, P)\, \psi_t(x)\, \psi_t(P)\,dx \leq \int_{B_1^c} K_t(x, P)\, \psi_t(x)\, \psi_t(P)\,dx.$$
We now indicate how a lower bound is obtained for $\int_{B_2} K_t(x, P)\, \psi_t(x)\, \psi_t(P)\,dx$. A key observation is that for $R = \sqrt{2dt \ln(1/t)}$,
$$\int_{\|x - P\| > R} K_t(x, P)\,dx \ll 1.$$
For this reason, only the portions of $B_2$ near P contribute to the integral $\int_{B_2} K_t(x, P)\, \psi_t(x)\, \psi_t(P)\,dx$. It turns out that a good lower bound can be obtained by considering the integral over $H_2$ instead, where $H_2$ is a halfspace whose boundary is at a distance $\tau - \frac{R^2}{2\tau}$ from the center, as in Figure 5.

Figure 4: The density of heat diffusing to point P from $S_1$ in the left panel is less than or equal to the density of heat diffusing to P from $B_2$ in the right panel.
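A key step above is that the heat kernel carries negligible mass beyond the radius $R = \sqrt{2dt\ln(1/t)}$. For intuition, this is easy to check numerically in one dimension, where $K_t$ is a Gaussian with variance $2t$; the values of t below are made up for illustration and this check is not part of the proof:

```python
import math

def heat_kernel_tail(t):
    """Mass of the 1-d heat kernel K_t (a Gaussian with variance 2t)
    outside the radius R = sqrt(2*t*ln(1/t)) from the proof sketch."""
    R = math.sqrt(2.0 * t * math.log(1.0 / t))
    # For X ~ N(0, 2t), P(|X| > R) = erfc(R / (2*sqrt(t))).
    return math.erfc(R / (2.0 * math.sqrt(t)))
```

For t = 0.01 the tail mass is already only a few percent, and it shrinks as t decreases, consistent with the claim that only the portion of $B_2$ near P matters.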
Figure 5: The density of heat received by point P from $B_2$ in the left panel can be approximated by the density of heat received by P from the halfspace $H_2$ in the right panel.

An upper bound for $\int_{B_1^c} K_t(x, P)\, \psi_t(x)\, \psi_t(P)\,dx$ is obtained along similar lines. The details of this proof will be presented in the journal version.

5 Conclusion

In this paper we take a step towards a probabilistic analysis of graph-based methods for clustering. The nodes of the graph are identified with data points drawn at random from an underlying probability distribution on a continuous domain. For a fixed partition, we show that the cut size of the graph partition converges to the weighted volume of the boundary separating the two regions of the domain, and we analyze the rates of this convergence. If one were able to generalize this result uniformly over all partitions, it would relate graph-based partitioning to the ideas surrounding Low Density Separation. The most important future direction is therefore to achieve similar results uniformly over balanced partitions.

References
[1] M. Belkin and P. Niyogi. "Semi-supervised learning on Riemannian manifolds." Machine Learning 56, Special Issue on Clustering, 209-239, 2004.
[2] M. Belkin and P. Niyogi. "Toward a theoretical foundation for Laplacian-based manifold methods." COLT 2005.
[3] A. Blum and S. Chawla. "Learning from labeled and unlabeled data using graph mincuts." ICML 2001.
[4] O. Chapelle, J. Weston, and B. Schölkopf. "Cluster kernels for semi-supervised learning." NIPS 2002.
[5] O. Chapelle and A. Zien. "Semi-supervised classification by low density separation." AISTATS 2005.
[6] C. Ding. "Spectral clustering." ICML 2004 Tutorial.
[7] S. Kutin and P. Niyogi. "Almost-everywhere algorithmic stability and generalization error." UAI 2002, 275-282.
[8] S.
Kutin. "Extensions to McDiarmid's inequality when differences are bounded with high probability." Technical Report TR-2002-04, Department of Computer Science, University of Chicago, 2002.
[9] S. Lafon. Diffusion Maps and Geodesic Harmonics. Ph.D. thesis, Yale University, 2004.
[10] M. Hein, J.-Y. Audibert, and U. von Luxburg. "From graphs to manifolds: weak and strong pointwise consistency of graph Laplacians." COLT 2005.
[11] U. von Luxburg, M. Belkin, and O. Bousquet. "Consistency of spectral clustering." Max Planck Institute for Biological Cybernetics, Technical Report TR 134, 2004.
[12] M. Meila and J. Shi. "A random walks view of spectral segmentation." NIPS 2001.
[13] B. Nadler, S. Lafon, R. R. Coifman, and I. G. Kevrekidis. "Diffusion maps, spectral clustering and eigenfunctions of Fokker-Planck operators." NIPS 2006.
[14] J. Shi and J. Malik. "Normalized cuts and image segmentation."
[15] X. Zhu, J. Lafferty, and Z. Ghahramani. "Semi-supervised learning using Gaussian fields and harmonic functions." ICML 2003.
A Kernel Method for the Two-Sample-Problem

Arthur Gretton, MPI for Biological Cybernetics, Tübingen, Germany, arthur@tuebingen.mpg.de
Karsten M. Borgwardt, Ludwig-Maximilians-Univ., Munich, Germany, kb@dbs.ifi.lmu.de
Malte Rasch, Graz Univ. of Technology, Graz, Austria, malte.rasch@igi.tu-graz.ac.at
Bernhard Schölkopf, MPI for Biological Cybernetics, Tübingen, Germany, bs@tuebingen.mpg.de
Alexander J. Smola, NICTA, ANU, Canberra, Australia, Alex.Smola@anu.edu.au

Abstract

We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. The test statistic can be computed in $O(m^2)$ time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.

1 Introduction

We address the problem of comparing samples from two probability distributions, by proposing a statistical test of the hypothesis that these distributions are different (this is called the two-sample or homogeneity problem). This test has application in a variety of areas. In bioinformatics, it is of interest to compare microarray data from different tissue types, either to determine whether two subtypes of cancer may be treated as statistically indistinguishable from a diagnosis perspective, or to detect differences in healthy and cancerous tissue.
In database attribute matching, it is desirable to merge databases containing multiple fields, where it is not known in advance which fields correspond: the fields are matched by maximising the similarity in the distributions of their entries. In this study, we propose to test whether distributions p and q are different on the basis of samples drawn from each of them, by finding a smooth function which is large on the points drawn from p, and small (as negative as possible) on the points from q. We use as our test statistic the difference between the mean function values on the two samples; when this is large, the samples are likely from different distributions. We call this statistic the Maximum Mean Discrepancy (MMD). Clearly the quality of the MMD as a statistic depends heavily on the class F of smooth functions that define it. On one hand, F must be "rich enough" so that the population MMD vanishes if and only if p = q. On the other hand, for the test to be consistent, F needs to be "restrictive" enough for the empirical estimate of MMD to converge quickly to its expectation as the sample size increases. We shall use the unit balls in universal reproducing kernel Hilbert spaces [22] as our function class, since these will be shown to satisfy both of the foregoing properties. On a more practical note, MMD is cheap to compute: given m points sampled from p and n from q, the cost is $O((m+n)^2)$ time. We define two non-parametric statistical tests based on MMD. The first, which uses distribution-independent uniform convergence bounds, provides finite sample guarantees of test performance, at the expense of being conservative in detecting differences between p and q. The second test is based on the asymptotic distribution of MMD, and is in practice more sensitive to differences in distribution at small sample sizes. These results build on our earlier work in [6] on MMD for the two-sample problem, which addresses only the second kind of test.
In addition, the present approach employs a more accurate approximation to the asymptotic distribution of the test statistic. We begin our presentation in Section 2 with a formal definition of the MMD, and a proof that the population MMD is zero if and only if p = q when F is the unit ball of a universal RKHS. We also give an overview of hypothesis testing as it applies to the two-sample problem, and review previous approaches. In Section 3, we provide a bound on the deviation between the population and empirical MMD, as a function of the Rademacher averages of F with respect to p and q. This leads to a first hypothesis test. We take a different approach in Section 4, where we use the asymptotic distribution of an unbiased estimate of the squared MMD as the basis for a second test. Finally, in Section 5, we demonstrate the performance of our method on problems from neuroscience, bioinformatics, and attribute matching using the Hungarian marriage approach. Our approach performs well on high dimensional data with low sample size; in addition, we are able to successfully apply our test to graph data, for which no alternative tests exist. Proofs and further details are provided in [13], and software may be downloaded from http://www.kyb.mpg.de/bs/people/arthur/mmd.htm

2 The Two-Sample-Problem

Our goal is to formulate a statistical test that answers the following question:

Problem 1 Let p and q be distributions defined on a domain $\mathcal{X}$. Given observations $X := \{x_1, \ldots, x_m\}$ and $Y := \{y_1, \ldots, y_n\}$, drawn independently and identically distributed (i.i.d.) from p and q respectively, is $p \neq q$?

To start with, we wish to determine a criterion that, in the population setting, takes on a unique and distinctive value only when p = q. It will be defined based on [10, Lemma 9.3.2].

Lemma 1 Let $(\mathcal{X}, d)$ be a separable metric space, and let p, q be two Borel probability measures defined on $\mathcal{X}$.
Then p = q if and only if $E_p(f(x)) = E_q(f(x))$ for all $f \in C(\mathcal{X})$, where $C(\mathcal{X})$ is the space of continuous bounded functions on $\mathcal{X}$. Although $C(\mathcal{X})$ in principle allows us to identify p = q uniquely, such a rich function class is not practical in the finite sample setting. We thus define a more general class of statistic, for as yet unspecified function classes $\mathcal{F}$, to measure the discrepancy between p and q, as proposed in [11].

Definition 2 Let $\mathcal{F}$ be a class of functions $f : \mathcal{X} \to \mathbb{R}$ and let $p, q, X, Y$ be defined as above. Then we define the maximum mean discrepancy (MMD) and its empirical estimate as
$$\text{MMD}[\mathcal{F}, p, q] := \sup_{f \in \mathcal{F}} \left( E_{x \sim p}[f(x)] - E_{y \sim q}[f(y)] \right), \qquad (1)$$
$$\text{MMD}[\mathcal{F}, X, Y] := \sup_{f \in \mathcal{F}} \left( \frac{1}{m} \sum_{i=1}^{m} f(x_i) - \frac{1}{n} \sum_{i=1}^{n} f(y_i) \right). \qquad (2)$$
We must now identify a function class that is rich enough to uniquely establish whether p = q, yet restrictive enough to provide useful finite sample estimates (the latter property will be established in subsequent sections). To this end, we select $\mathcal{F}$ to be the unit ball in a universal RKHS $\mathcal{H}$ [22]; we will henceforth use $\mathcal{F}$ only to denote this function class. With the additional restriction that $\mathcal{X}$ be compact, a universal RKHS is dense in $C(\mathcal{X})$ with respect to the $L_\infty$ norm. It is shown in [22] that Gaussian and Laplace kernels are universal.

Theorem 3 Let $\mathcal{F}$ be a unit ball in a universal RKHS $\mathcal{H}$, defined on the compact metric space $\mathcal{X}$, with associated kernel $k(\cdot, \cdot)$. Then $\text{MMD}[\mathcal{F}, p, q] = 0$ if and only if p = q.

This theorem is proved in [13]. We next express the MMD in a more easily computable form. This is simplified by the fact that in an RKHS, function evaluations can be written $f(x) = \langle \varphi(x), f \rangle$, where $\varphi(x) = k(x, \cdot)$. Denote by $\mu[p] := E_{x \sim p}[\varphi(x)]$ the expectation of $\varphi(x)$ (assuming that it exists; a sufficient condition for this is $\|\mu[p]\|_{\mathcal{H}}^2 < \infty$, which is rearranged as $E_p[k(x, x')] < \infty$, where x and x' are independent random variables drawn according to p). Since $E_p[f(x)] = \langle \mu[p], f \rangle$, we may rewrite
$$\text{MMD}[\mathcal{F}, p, q] = \sup_{\|f\|_{\mathcal{H}} \leq 1} \langle \mu[p] - \mu[q], f \rangle = \|\mu[p] - \mu[q]\|_{\mathcal{H}}.$$
Using $\mu[X] := \frac{1}{m} \sum_{i=1}^{m} \varphi(x_i)$ and $k(x, x') = \langle \varphi(x), \varphi(x') \rangle$, an empirical estimate of MMD is
$$\text{MMD}[\mathcal{F}, X, Y] = \left[ \frac{1}{m^2} \sum_{i,j=1}^{m} k(x_i, x_j) - \frac{2}{mn} \sum_{i,j=1}^{m,n} k(x_i, y_j) + \frac{1}{n^2} \sum_{i,j=1}^{n} k(y_i, y_j) \right]^{\frac{1}{2}}. \qquad (4)$$
Eq. (4) provides us with a test statistic for $p \neq q$. We shall see in Section 3 that this estimate is biased, although it is straightforward to upper bound the bias (we give an unbiased estimate, and an associated test, in Section 4). Intuitively we expect $\text{MMD}[\mathcal{F}, X, Y]$ to be small if p = q, and the quantity to be large if the distributions are far apart. Computing (4) costs $O((m+n)^2)$ time.

Overview of Statistical Hypothesis Testing, and of Previous Approaches

Having defined our test statistic, we briefly describe the framework of statistical hypothesis testing as it applies in the present context, following [9, Chapter 8]. Given i.i.d. samples $X \sim p$ of size m and $Y \sim q$ of size n, the statistical test $T(X, Y) : \mathcal{X}^m \times \mathcal{X}^n \to \{0, 1\}$ is used to distinguish between the null hypothesis $H_0 : p = q$ and the alternative hypothesis $H_1 : p \neq q$. This is achieved by comparing the test statistic $\text{MMD}[\mathcal{F}, X, Y]$ with a particular threshold: if the threshold is exceeded, then the test rejects the null hypothesis (bearing in mind that a zero population MMD indicates p = q). The acceptance region of the test is thus defined as any real number below the threshold. Since the test is based on finite samples, it is possible that an incorrect answer will be returned: we define the Type I error as the probability of rejecting p = q based on the observed sample, despite the null hypothesis being true. Conversely, the Type II error is the probability of accepting p = q despite the underlying distributions being different. The level $\alpha$ of a test is an upper bound on the Type I error: this is a design parameter of the test, and is used to set the threshold to which we compare the test statistic (finding the test threshold for a given $\alpha$ is the topic of Sections 3 and 4).
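The biased statistic of Eq. (4) reduces to sums over three Gram matrices. The sketch below is illustrative only: the Gaussian RBF kernel and the synthetic samples are our own choices, not the paper's experimental setup:

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Gaussian RBF Gram matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def mmd_biased(X, Y, sigma=1.0):
    """Biased empirical MMD of Eq. (4), for F the RBF-kernel unit ball."""
    m, n = len(X), len(Y)
    val = (rbf(X, X, sigma).sum() / m**2
           - 2.0 * rbf(X, Y, sigma).sum() / (m * n)
           + rbf(Y, Y, sigma).sum() / n**2)
    return np.sqrt(max(val, 0.0))  # guard against tiny negative rounding

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y_same = rng.normal(0.0, 1.0, size=(500, 2))
Y_diff = rng.normal(3.0, 1.0, size=(500, 2))
```

As expected, the statistic is near zero for two samples from the same distribution and large for well-separated distributions.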
A consistent test achieves a level $\alpha$, and a Type II error of zero, in the large sample limit. We will see that both of the tests proposed in this paper are consistent. We next give a brief overview of previous approaches to the two-sample problem for multivariate data. Since our later experimental comparison is with respect to certain of these methods, we give abbreviated algorithm names in italics where appropriate: these should be used as a key to the tables in Section 5. We provide further details in [13]. A generalisation of the Wald-Wolfowitz runs test to the multivariate domain was proposed and analysed in [12, 17] (Wolf), which involves counting the number of edges in the minimum spanning tree over the aggregated data that connect points in X to points in Y. The computational cost of this method using Kruskal's algorithm is $O((m+n)^2 \log(m+n))$, although more modern methods improve on the $\log(m+n)$ term. Two possible generalisations of the Kolmogorov-Smirnov test to the multivariate case were studied in [4, 12]. The approach of Friedman and Rafsky (Smir) in this case again requires a minimal spanning tree, and has a similar cost to their multivariate runs test. A more recent multivariate test was proposed in [20], which is based on the minimum distance non-bipartite matching over the aggregate data, at cost $O((m+n)^3)$. Another recent test was proposed in [15] (Hall): for each point from p, it requires computing the closest points in the aggregated data, and counting how many of these are from q (the procedure is repeated for each point from q with respect to points from p). The test statistic is costly to compute; [15] consider only tens of points in their experiments. Yet another approach is to use some distance (e.g. $L_1$ or $L_2$) between Parzen window estimates of the densities as a test statistic [1, 3], based on the asymptotic distribution of this distance given p = q.
When the $L_2$ norm is used, the test statistic is related to those we present here, although it is arrived at from a different perspective (see [13]: the test in [1] is obtained in a more restricted setting where the RKHS kernel is an inner product between Parzen windows. Since we are not doing density estimation, however, we need not decrease the kernel width as the sample grows. In fact, decreasing the kernel width reduces the convergence rate of the associated two-sample test, compared with the $(m+n)^{-1/2}$ rate for fixed kernels). The $L_1$ approach of [3] (Biau) requires the space to be partitioned into a grid of bins, which becomes difficult or impossible for high dimensional problems. Hence we use this test only for low-dimensional problems in our experiments.

3 A Test based on Uniform Convergence Bounds

In this section, we establish two properties of the MMD. First, we show that regardless of whether or not p = q, the empirical MMD converges in probability at rate $1/\sqrt{m+n}$ to its population value. This establishes the consistency of statistical tests based on MMD. Second, we give probabilistic bounds for large deviations of the empirical MMD in the case p = q. These bounds lead directly to a threshold for our first hypothesis test. We begin our discussion of the convergence of $\text{MMD}[\mathcal{F}, X, Y]$ to $\text{MMD}[\mathcal{F}, p, q]$.

Theorem 4 Let $p, q, X, Y$ be defined as in Problem 1, and assume $|k(x, y)| \leq K$. Then
$$\Pr\left\{ \left| \text{MMD}[\mathcal{F}, X, Y] - \text{MMD}[\mathcal{F}, p, q] \right| > 2\left( (K/m)^{1/2} + (K/n)^{1/2} \right) + \epsilon \right\} \leq 2 \exp\left( \frac{-\epsilon^2 mn}{2K(m+n)} \right).$$
Our next goal is to refine this result in a way that allows us to define a test threshold under the null hypothesis p = q. Under this circumstance, the constants in the exponent are slightly improved.

Theorem 5 Under the conditions of Theorem 4, where additionally p = q and m = n,
$$\text{MMD}[\mathcal{F}, X, Y] > \underbrace{m^{-1/2} \sqrt{2 E_p\left[ k(x, x) - k(x, x') \right]}}_{B_1(\mathcal{F}, p)} + \epsilon \qquad \text{and} \qquad \text{MMD}[\mathcal{F}, X, Y] > \underbrace{2 (K/m)^{1/2}}_{B_2(\mathcal{F}, p)} + \epsilon,$$
both with probability less than $\exp\left( \frac{-\epsilon^2 m}{4K} \right)$ (see [13] for the proof).
In this theorem, we illustrate two possible bounds $B_1(\mathcal{F}, p)$ and $B_2(\mathcal{F}, p)$ on the bias in the empirical estimate (4). The first inequality is interesting inasmuch as it provides a link between the bias bound $B_1(\mathcal{F}, p)$ and kernel size (for instance, if we were to use a Gaussian kernel with large $\sigma$, then $k(x, x)$ and $k(x, x')$ would likely be close, and the bias small). In the context of testing, however, we would need to provide an additional bound to show convergence of an empirical estimate of $B_1(\mathcal{F}, p)$ to its population equivalent. Thus, in the following test for p = q based on Theorem 5, we use $B_2(\mathcal{F}, p)$ to bound the bias.

Lemma 6 A hypothesis test of level $\alpha$ for the null hypothesis p = q (equivalently $\text{MMD}[\mathcal{F}, p, q] = 0$) has the acceptance region
$$\text{MMD}[\mathcal{F}, X, Y] < 2\sqrt{K/m}\left( 1 + \sqrt{\log \alpha^{-1}} \right).$$
We emphasise that Theorem 4 guarantees the consistency of the test, and that the Type II error probability decreases to zero at rate $1/\sqrt{m}$ (assuming m = n). To put this convergence rate in perspective, consider a test of whether two normal distributions have equal means, given they have unknown but equal variance [9, Exercise 8.41]. In this case, the test statistic has a Student-t distribution with $n + m - 2$ degrees of freedom, and its error probability converges at the same rate as our test.

4 An Unbiased Test Based on the Asymptotic Distribution of the U-Statistic

We now propose a second test, which is based on the asymptotic distribution of an unbiased estimate of $\text{MMD}^2$. We begin by defining this test statistic.

Lemma 7 Given x and x' independent random variables with distribution p, and y and y' independent random variables with distribution q, the population $\text{MMD}^2$ is
$$\text{MMD}^2[\mathcal{F}, p, q] = E_{x, x' \sim p}\left[ k(x, x') \right] - 2 E_{x \sim p,\, y \sim q}\left[ k(x, y) \right] + E_{y, y' \sim q}\left[ k(y, y') \right] \qquad (5)$$
(see [13] for details). Let $Z := (z_1, \ldots, z_m)$ be m i.i.d. random variables, where $z_i := (x_i, y_i)$ (i.e. we assume m = n).
An unbiased empirical estimate of $\text{MMD}^2$ is
$$\text{MMD}^2_u[\mathcal{F}, X, Y] = \frac{1}{m(m-1)} \sum_{i \neq j}^{m} h(z_i, z_j), \qquad (6)$$
which is a one-sample U-statistic with $h(z_i, z_j) := k(x_i, x_j) + k(y_i, y_j) - k(x_i, y_j) - k(x_j, y_i)$. The empirical statistic is an unbiased estimate of $\text{MMD}^2$, although it does not have minimum variance (the minimum variance estimate is almost identical: see [21, Section 5.1.4]). We remark that these quantities can easily be linked with a simple kernel between probability measures: (5) is a special case of the Hilbertian metric [16, Eq. (4)] with the associated kernel $K(p, q) = E_{p,q} k(x, y)$ [16, Theorem 4]. The asymptotic distribution of this test statistic under $H_1$ is given by [21, Section 5.5.1], and the distribution under $H_0$ is computed based on [21, Section 5.5.2] and [1, Appendix]; see [13] for details.

Theorem 8 We assume $E[h^2] < \infty$. Under $H_1$, $\text{MMD}^2_u$ converges in distribution (defined e.g. in [14, Section 7.2]) to a Gaussian according to
$$m^{1/2} \left( \text{MMD}^2_u - \text{MMD}^2[\mathcal{F}, p, q] \right) \xrightarrow{D} N\left( 0, \sigma_u^2 \right),$$
where $\sigma_u^2 = 4 \left( E_z\left[ \left( E_{z'} h(z, z') \right)^2 \right] - \left[ E_{z, z'} h(z, z') \right]^2 \right)$, uniformly at rate $1/\sqrt{m}$ [21, Theorem B, p. 193]. Under $H_0$, the U-statistic is degenerate, meaning $E_{z'} h(z, z') = 0$. In this case, $\text{MMD}^2_u$ converges in distribution according to
$$m\, \text{MMD}^2_u \xrightarrow{D} \sum_{l=1}^{\infty} \lambda_l \left( z_l^2 - 2 \right), \qquad (7)$$
where $z_l \sim N(0, 2)$ i.i.d., the $\lambda_i$ are the solutions to the eigenvalue equation
$$\int_{\mathcal{X}} \tilde{k}(x, x')\, \psi_i(x)\,dp(x) = \lambda_i \psi_i(x'),$$
and $\tilde{k}(x_i, x_j) := k(x_i, x_j) - E_x k(x_i, x) - E_x k(x, x_j) + E_{x, x'} k(x, x')$ is the centred RKHS kernel. Our goal is to determine whether the empirical test statistic $\text{MMD}^2_u$ is so large as to be outside the $1 - \alpha$ quantile of the null distribution in (7) (consistency of the resulting test is guaranteed by the form of the distribution under $H_1$). One way to estimate this quantile is using the bootstrap [2] on the aggregated data. Alternatively, we may approximate the null distribution by fitting Pearson curves to its first four moments [18, Section 18.8].
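The U-statistic of Eq. (6) also has a short vectorised form. As before, this is only a sketch: the RBF kernel and the synthetic data are our own choices for illustration:

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate MMD^2_u of Eq. (6); assumes m = n."""
    m = len(X)
    assert len(Y) == m
    def k(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-sq / (2.0 * sigma**2))
    Kxy = k(X, Y)
    # h(z_i, z_j) = k(x_i, x_j) + k(y_i, y_j) - k(x_i, y_j) - k(x_j, y_i)
    H = k(X, X) + k(Y, Y) - Kxy - Kxy.T
    np.fill_diagonal(H, 0.0)  # U-statistic sums over i != j only
    return H.sum() / (m * (m - 1))

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, size=(300, 2))
m2_diff = mmd2_unbiased(X, rng.normal(2.0, 1.0, size=(300, 2)))
m2_same = mmd2_unbiased(X, rng.normal(0.0, 1.0, size=(300, 2)))
```

Unlike the biased statistic of Eq. (4), the unbiased estimate can take slightly negative values under $H_0$, since the expectation being estimated is zero.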
Taking advantage of the degeneracy of the U-statistic, we obtain (see [13])
$$E\left[ \left( \text{MMD}^2_u \right)^2 \right] = \frac{2}{m(m-1)}\, E_{z, z'}\left[ h^2(z, z') \right]$$
and
$$E\left[ \left( \text{MMD}^2_u \right)^3 \right] = \frac{8(m-2)}{m^2 (m-1)^2}\, E_{z, z'}\left[ h(z, z')\, E_{z''}\left( h(z, z'')\, h(z', z'') \right) \right] + O(m^{-4}). \qquad (8)$$
The fourth moment $E\left[ \left( \text{MMD}^2_u \right)^4 \right]$ is not computed, since it is both very small ($O(m^{-4})$) and expensive to calculate ($O(m^4)$). Instead, we replace the kurtosis with its lower bound
$$\text{kurt}\left( \text{MMD}^2_u \right) \geq \left( \text{skew}\left( \text{MMD}^2_u \right) \right)^2 + 1.$$

5 Experiments

We conducted distribution comparisons using our MMD-based tests on datasets from three real-world domains: database applications, bioinformatics, and neurobiology. We investigated the uniform convergence approach (MMD), the asymptotic approach with bootstrap ($\text{MMD}^2_u$ B), and the asymptotic approach with moment matching to Pearson curves ($\text{MMD}^2_u$ M). We also compared against several alternatives from the literature (where applicable): the multivariate t-test, the Friedman-Rafsky Kolmogorov-Smirnov generalisation (Smir), the Friedman-Rafsky Wald-Wolfowitz generalisation (Wolf), the Biau-Györfi test (Biau), and the Hall-Tajvidi test (Hall). Note that we do not apply the Biau-Györfi test to high-dimensional problems (see end of Section 2), and that MMD is the only method applicable to structured data such as graphs. An important issue in the practical application of the MMD-based tests is the selection of the kernel parameters. We illustrate this with a Gaussian RBF kernel, where we must choose the kernel width $\sigma$ (we use this kernel for univariate and multivariate data, but not for graphs). The empirical MMD is zero both for kernel size $\sigma = 0$ (where the aggregate Gram matrix over X and Y is a unit matrix), and also approaches zero as $\sigma \to \infty$ (where the aggregate Gram matrix becomes uniformly constant). We set $\sigma$ to be the median distance between points in the aggregate sample, as a compromise between these two extremes: this remains a heuristic, however, and the optimum choice of kernel size is an ongoing area of research.
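The median heuristic just described is a one-liner in practice; a minimal sketch (with synthetic data of our own choosing):

```python
import numpy as np

def median_heuristic(X, Y):
    """Median pairwise distance over the aggregate sample, used as the
    RBF kernel width sigma (the heuristic described in the text)."""
    Z = np.vstack([X, Y])
    dists = np.sqrt(np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=2))
    iu = np.triu_indices(len(Z), k=1)  # distinct pairs only
    return np.median(dists[iu])

rng = np.random.default_rng(3)
sigma = median_heuristic(rng.normal(size=(100, 3)), rng.normal(size=(100, 3)))
```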
Data integration As a first application of MMD, we performed distribution testing for data integration: the objective is to aggregate two datasets into a single sample, with the understanding that both original samples are generated from the same distribution. Clearly, it is important to check this last condition before proceeding, or an analysis could detect patterns in the new dataset that are caused by combining the two different source distributions, and not by real-world phenomena. We chose several real-world settings to perform this task: we compared microarray data from normal and tumor tissues (Health status), microarray data from different subtypes of cancer (Subtype), and local field potential (LFP) electrode recordings from the Macaque primary visual cortex (V1) with and without spike events (Neural Data I and II). In all cases, the two data sets have different statistical properties, but the detection of these differences is made difficult by the high data dimensionality. We applied our tests to these datasets in the following fashion. Given two datasets A and B, we either chose one sample from A and the other from B (attributes = different); or both samples from either A or B (attributes = same). We then repeated this process up to 1200 times. Results are reported in Table 1. Our asymptotic tests perform better than all competitors besides Wolf: in the latter case, we have greater Type II error for one neural dataset, lower Type II error on the Health Status data (which has very high dimension and low sample size), and identical (error-free) performance on the remaining examples. We note that the Type I error of the bootstrap test on the Subtype dataset is far from its design value of 0.05, indicating that the Pearson curves provide a better threshold estimate for these low sample sizes. For the remaining datasets, the Type I errors of the Pearson and Bootstrap approximations are close. 
Thus, for larger datasets, the bootstrap is to be preferred, since it costs $O(m^2)$, compared with a cost of $O(m^3)$ for Pearson (due to the cost of computing (8)). Finally, the uniform convergence-based test is too conservative, finding differences in distribution only for the data with largest sample size.

Table 1: Distribution testing for data integration on multivariate data. Numbers indicate the percentage of repetitions for which the null hypothesis (p = q) was accepted, given α = 0.05. Sample size (dimension; repetitions of experiment): Neural I 4000 (63; 100); Neural II 1000 (100; 1200); Health Status 25 (12,600; 1000); Subtype 25 (2,118; 1000).

Dataset         Attr.      MMD    MMD2u B  MMD2u M  t-test  Wolf  Smir  Hall
Neural Data I   Same       100.0  96.5     96.5     100.0   97.0  95.0  96.0
Neural Data I   Different  50.0   0.0      0.0      42.0    0.0   10.0  49.0
Neural Data II  Same       100.0  94.6     95.2     100.0   95.0  94.5  96.0
Neural Data II  Different  100.0  3.3      3.4      100.0   0.8   31.8  5.9
Health status   Same       100.0  95.5     94.4     100.0   94.7  96.1  95.6
Health status   Different  100.0  1.0      0.8      100.0   2.8   44.0  35.7
Subtype         Same       100.0  99.1     96.4     100.0   94.6  97.3  96.5
Subtype         Different  100.0  0.0      0.0      100.0   0.0   28.4  0.2

Attribute matching. Our second series of experiments addresses automatic attribute matching. Given two databases, we want to detect corresponding attributes in the schemas of these databases, based on their data content (as a simple example, two databases might have respective fields Wage and Salary, which are assumed to be observed via a subsampling of a particular population, and we wish to automatically determine that both Wage and Salary denote the same underlying attribute). We use a two-sample test on pairs of attributes from two databases to find corresponding pairs.¹ This procedure is also called table matching for tables from different databases. We performed attribute matching as follows: first, the dataset D was split into two halves A and B. Each of the n attributes

¹ Note that corresponding attributes may have different distributions in real-world databases.
Hence, schema matching cannot solely rely on distribution testing. Advanced approaches to schema matching using MMD as one key statistical test are a topic of current research.

in A (and B, resp.) was then represented by its instances in A (resp. B). We then tested all pairs of attributes from A and from B against each other, to find the optimal assignment of attributes $A_1, \ldots, A_n$ from A to attributes $B_1, \ldots, B_n$ from B. We assumed that A and B contain the same number of attributes. As a naive approach, one could assume that any possible pair of attributes might correspond, and thus that every attribute of A needs to be tested against all the attributes of B to find the optimal match. We report results for this naive approach, aggregated over all pairs of possible attribute matches, in Table 2. We used three datasets: the census income dataset from the UCI KDD archive (CNUM), the protein homology dataset from the 2004 KDD Cup (BIO) [8], and the forest dataset from the UCI ML archive [5]. For the final dataset, we performed univariate matching of attributes (FOREST) and multivariate matching of tables (FOREST10D) from two different databases, where each table represents one type of forest. Both our asymptotic $\text{MMD}^2_u$-based tests perform as well as or better than the alternatives, notably for CNUM, where the advantage of $\text{MMD}^2_u$ is large. Unlike in Table 1, the next best alternatives are not consistently the same across all data: e.g. in BIO they are Wolf or Hall, whereas in FOREST they are Smir, Biau, or the t-test. Thus, $\text{MMD}^2_u$ appears to perform more consistently across the multiple datasets. The Friedman-Rafsky tests do not always return a Type I error close to the design parameter: for instance, Wolf has a Type I error of 9.7% on the BIO dataset (on these data, $\text{MMD}^2_u$ has the joint best Type II error without compromising the designed Type I performance).
Finally, our uniform convergence approach performs much better than in Table 1, although surprisingly it fails to detect differences in FOREST10D. A more principled approach to attribute matching is also possible. Assume that $\varphi(A) = (\varphi_1(A_1), \varphi_2(A_2), \ldots, \varphi_n(A_n))$: in other words, the kernel decomposes into kernels on the individual attributes of A (and also decomposes this way on the attributes of B). In this case, $\text{MMD}^2$ can be written $\sum_{i=1}^{n} \|\mu_i(A_i) - \mu_i(B_i)\|^2$, where we sum over the MMD terms on each of the attributes. Our goal of optimally assigning attributes from B to attributes of A via MMD is equivalent to finding the optimal permutation $\pi$ of attributes of B that minimizes $\sum_{i=1}^{n} \|\mu_i(A_i) - \mu_i(B_{\pi(i)})\|^2$. If we define $C_{ij} = \|\mu_i(A_i) - \mu_i(B_j)\|^2$, then this is the same as minimizing the sum over $C_{i, \pi(i)}$. This is the linear assignment problem, which costs $O(n^3)$ time using the Hungarian method [19].

Table 2: Naive attribute matching on univariate (BIO, FOREST, CNUM) and multivariate data (FOREST10D). Numbers indicate the percentage of accepted null hypothesis (p = q) pooled over attributes, α = 0.05. Sample size (dimension; attributes; repetitions of experiment): BIO 377 (1; 6; 100); FOREST 538 (1; 10; 100); CNUM 386 (1; 13; 100); FOREST10D 1000 (10; 2; 100).

Dataset    Attr.      MMD    MMD2u B  MMD2u M  t-test  Wolf  Smir  Hall  Biau
BIO        Same       100.0  93.8     94.8     95.2    90.3  95.8  95.3  99.3
BIO        Different  20.0   17.2     17.6     36.2    17.2  18.6  17.9  42.1
FOREST     Same       100.0  96.4     96.0     97.4    94.6  99.8  95.5  100.0
FOREST     Different  4.9    0.0      0.0      0.2     3.8   0.0   50.1  0.0
CNUM       Same       100.0  94.5     93.8     94.0    98.4  97.5  91.2  98.5
CNUM       Different  15.2   2.7      2.5      19.17   22.5  11.6  79.1  50.5
FOREST10D  Same       100.0  94.0     94.0     100.0   93.5  96.5  97.0  100.0
FOREST10D  Different  100.0  0.0      0.0      0.0     0.0   1.0   72.0  100.0

We tested this "Hungarian approach" to attribute matching via $\text{MMD}^2_u$ B on three univariate datasets (BIO, CNUM, FOREST) and for table matching on a fourth (FOREST10D).
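The assignment step can be sketched with `scipy.optimize.linear_sum_assignment`, a modern implementation of the Hungarian method. Here the per-attribute costs use a biased RBF MMD² estimate, and the three univariate attributes are synthetic stand-ins chosen by us so that the true matching is known; none of this is taken from the paper's databases:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_attributes(A_attrs, B_attrs, sigma=1.0):
    """Assign each attribute of A to one of B by minimising the total
    MMD^2 cost C_ij, i.e. the linear assignment problem from the text."""
    def mmd2(x, y):
        def k(a, b):
            return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * sigma**2))
        return k(x, x).mean() - 2.0 * k(x, y).mean() + k(y, y).mean()
    n = len(A_attrs)
    C = np.array([[mmd2(A_attrs[i], B_attrs[j]) for j in range(n)]
                  for i in range(n)])
    row, col = linear_sum_assignment(C)  # Hungarian method, O(n^3)
    return col  # col[i] = index of the B attribute matched to A_i

rng = np.random.default_rng(4)
# Three clearly distinct univariate attributes, shuffled in B.
A = [rng.normal(0, 1, 200), rng.normal(5, 1, 200), rng.normal(-5, 1, 200)]
B = [rng.normal(-5, 1, 200), rng.normal(0, 1, 200), rng.normal(5, 1, 200)]
match = match_attributes(A, B)
```

With these well-separated distributions, the recovered permutation pairs each attribute of A with the B attribute drawn from the same distribution.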
To study MMD²_u B on structured data, we obtained two datasets of protein graphs (PROTEINS and ENZYMES) and used the graph kernel for proteins from [7] for table matching via the Hungarian method (the other tests were not applicable to this graph data). The challenge here is to match tables representing one functional class of proteins (or enzymes) from dataset A to the corresponding tables (functional classes) in B. Results are shown in Table 3. Apart from the BIO dataset, MMD²_u B made no errors.

Dataset    Data type     No. attributes  Sample size  Repetitions  % correct matches
BIO        univariate    6               377          100          90.0
CNUM       univariate    13              386          100          99.8
FOREST     univariate    10              538          100          100.0
FOREST10D  multivariate  2               1000         100          100.0
ENZYME     structured    6               50           50           100.0
PROTEINS   structured    2               200          50           100.0

Table 3: Hungarian method for attribute matching via MMD²_u B on univariate (BIO, CNUM, FOREST), multivariate (FOREST10D), and structured data (ENZYMES, PROTEINS) (α = 0.05; '% correct matches' is the percentage of the correct attribute matches detected over all repetitions).

6 Summary and Discussion

We have established two simple multivariate tests for comparing two distributions p and q. The test statistics are based on the maximum deviation of the expectation of a function evaluated on each of the random variables, taken over a sufficiently rich function class. We do not require density estimates as an intermediate step. Our method either outperforms competing methods, or is close to the best performing alternative. Finally, our test was successfully used to compare distributions on graphs, for which it is currently the only option. Acknowledgements: The authors thank Matthias Hein for helpful discussions, Patrick Warnat (DKFZ, Heidelberg) for providing the microarray datasets, and Nikos Logothetis for providing the neural datasets. NICTA is funded through the Australian Government's Backing Australia's Ability initiative, in part through the ARC.
This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778.

References

[1] N. Anderson, P. Hall, and D. Titterington. Two-sample test statistics for measuring discrepancies between two multivariate probability density functions using kernel-based density estimates. Journal of Multivariate Analysis, 50:41-54, 1994.
[2] M. Arcones and E. Giné. On the bootstrap of u and v statistics. The Annals of Statistics, 20(2):655-674, 1992.
[3] G. Biau and L. Gyorfi. On the asymptotic properties of a nonparametric l1-test statistic of homogeneity. IEEE Transactions on Information Theory, 51(11):3965-3973, 2005.
[4] P. Bickel. A distribution free version of the Smirnov two sample test in the p-variate case. The Annals of Mathematical Statistics, 40(1):1-23, 1969.
[5] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
[6] K. M. Borgwardt, A. Gretton, M. J. Rasch, H.-P. Kriegel, B. Schölkopf, and A. J. Smola. Integrating structured biological data by kernel maximum mean discrepancy. In ISMB, 2006.
[7] K. M. Borgwardt, C. S. Ong, S. Schonauer, S. V. N. Vishwanathan, A. J. Smola, and H.-P. Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21(Suppl 1):i47-i56, Jun 2005.
[8] R. Caruana and T. Joachims. KDD Cup. http://kodiak.cs.cornell.edu/kddcup/index.html, 2004.
[9] G. Casella and R. Berger. Statistical Inference. Duxbury, Pacific Grove, CA, 2nd edition, 2002.
[10] R. M. Dudley. Real Analysis and Probability. Cambridge University Press, Cambridge, UK, 2002.
[11] R. Fortet and E. Mourier. Convergence de la répartition empirique vers la répartition théorique. Ann. Scient. École Norm. Sup., 70:266-285, 1953.
[12] J. Friedman and L. Rafsky. Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. The Annals of Statistics, 7(4):697-717, 1979.
[13] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two sample problem. Technical Report 157, MPI for Biological Cybernetics, 2007.
[14] G. R. Grimmet and D. R. Stirzaker. Probability and Random Processes. Oxford University Press, Oxford, third edition, 2001.
[15] P. Hall and N. Tajvidi. Permutation tests for equality of distributions in high-dimensional settings. Biometrika, 89(2):359-374, 2002.
[16] M. Hein, T. N. Lal, and O. Bousquet. Hilbertian metrics on probability measures and their application in SVMs. In Proceedings of the 26th DAGM Symposium, pages 270-277, Berlin, 2004. Springer.
[17] N. Henze and M. Penrose. On the multivariate runs test. The Annals of Statistics, 27(1):290-298, 1999.
[18] N. L. Johnson, S. Kotz, and N. Balakrishnan. Continuous Univariate Distributions, Volume 1, second edition. John Wiley and Sons, 1994.
[19] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2:83-97, 1955.
[20] P. Rosenbaum. An exact distribution-free test comparing two multivariate distributions based on adjacency. Journal of the Royal Statistical Society B, 67(4):515-530, 2005.
[21] R. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, New York, 1980.
[22] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. J. Mach. Learn. Res., 2:67-93, 2002.
Modeling General and Specific Aspects of Documents with a Probabilistic Topic Model Chaitanya Chemudugunta, Padhraic Smyth Department of Computer Science University of California, Irvine Irvine, CA 92697-3435, USA {chandra,smyth}@ics.uci.edu Mark Steyvers Department of Cognitive Sciences University of California, Irvine Irvine, CA 92697-5100, USA msteyver@uci.edu Abstract Techniques such as probabilistic topic models and latent-semantic indexing have been shown to be broadly useful at automatically extracting the topical or semantic content of documents, or more generally for dimension-reduction of sparse count data. These types of models and algorithms can be viewed as generating an abstraction from the words in a document to a lower-dimensional latent variable representation that captures what the document is generally about beyond the specific words it contains. In this paper we propose a new probabilistic model that tempers this approach by representing each document as a combination of (a) a background distribution over common words, (b) a mixture distribution over general topics, and (c) a distribution over words that are treated as being specific to that document. We illustrate how this model can be used for information retrieval by matching documents both at a general topic level and at a specific word level, providing an advantage over techniques that only match documents at a general level (such as topic models or latent-semantic indexing) or that only match documents at the specific word level (such as TF-IDF). 1 Introduction and Motivation Reducing high-dimensional data vectors to robust and interpretable lower-dimensional representations has a long and successful history in data analysis, including recent innovations such as latent semantic indexing (LSI) (Deerwester et al., 1990) and latent Dirichlet allocation (LDA) (Blei, Ng, and Jordan, 2003).
These types of techniques have found broad application in modeling of sparse high-dimensional count data such as the “bag of words” representations for documents or transaction data for Web and retail applications. Approaches such as LSI and LDA have both been shown to be useful for “object matching” in their respective latent spaces. In information retrieval for example, both a query and a set of documents can be represented in the LSI or topic latent spaces, and the documents can be ranked in terms of how well they match the query based on distance or similarity in the latent space. The mapping to latent space represents a generalization or abstraction away from the sparse set of observed words, to a “higher-level” semantic representation in the latent space. These abstractions in principle lead to better generalization on new data compared to inferences carried out directly in the original sparse high-dimensional space. The capability of these models to provide improved generalization has been demonstrated empirically in a number of studies (e.g., Deerwester et al., 1990; Hofmann, 1999; Canny, 2004; Buntine et al., 2005). However, while this type of generalization is broadly useful in terms of inference and prediction, there are situations where one can over-generalize. Consider trying to match the following query to a historical archive of news articles: election + campaign + Camejo. The query is intended to find documents that are about US presidential campaigns and also about Peter Camejo (who ran as vice-presidential candidate alongside independent Ralph Nader in 2004). LSI and topic models are likely to highly rank articles that are related to presidential elections (even if they don’t necessarily contain the words election or campaign). However, a potential problem is that the documents that are highly ranked by LSI or topic models need not include any mention of the name Camejo.
The reason is that the combination of words in this query is likely to activate one or more latent variables related to the concept of presidential campaigns. However, once this generalization is made the model has “lost” the information about the specific word Camejo and it will only show up in highly ranked documents if this word happens to frequently occur in these topics (unlikely in this case given that this candidate received relatively little media coverage compared to the coverage given to the candidates from the two main parties). But from the viewpoint of the original query, our preference would be to get documents that are about the general topic of US presidential elections with the specific constraint that they mention Peter Camejo. Word-based retrieval techniques, such as the widely-used term-frequency inverse-document-frequency (TF-IDF) method, have the opposite problem in general. They tend to be overly specific in terms of matching words in the query to documents. In general, of course, one would like to have a balance between generality and specificity. One ad hoc approach is to combine scores from a general method such as LSI with those from a more specific method such as TF-IDF in some manner, and indeed this technique has been proposed in information retrieval (Vogt and Cottrell, 1999). Similarly, in the ad hoc LDA approach (Wei and Croft, 2006), the LDA model is linearly combined with document-specific word distributions to capture both general as well as specific information in documents. However, neither method is entirely satisfactory since it is not clear how to trade off generality and specificity in a principled way. The contribution of this paper is a new graphical model based on latent topics that handles the tradeoff between generality and specificity in a fully probabilistic and automated manner. The model, which we call the special words with background (SWB) model, is an extension of the LDA model.
The new model allows words in documents to be modeled as either originating from general topics, or from document-specific “special” word distributions, or from a corpus-wide background distribution. The idea is that words in a document such as election and campaign are likely to come from a general topic on presidential elections, whereas a name such as Camejo is much more likely to be treated as “non-topical” and specific to that document. Words in queries are automatically interpreted (in a probabilistic manner) as either being topical or special, in the context of each document, allowing for a data-driven document-specific trade-off between the benefits of topic-based abstraction and specific word matching. Daumé and Marcu (2006) independently proposed a probabilistic model using similar concepts for handling different training and test distributions in classification problems. Although we have focused primarily on documents in information retrieval in the discussion above, the model we propose can in principle be used on any large sparse matrix of count data. For example, transaction data sets where rows are individuals and columns correspond to items purchased or Web sites visited are ideally suited to this approach. The latent topics can capture broad patterns of population behavior and the “special word distributions” can capture the idiosyncrasies of specific individuals. Section 2 reviews the basic principles of the LDA model and introduces the new SWB model. Section 3 illustrates how the model works in practice using examples from New York Times news articles. In Section 4 we describe a number of experiments with 4 different document sets, including perplexity experiments and information retrieval experiments, illustrating the trade-offs between generalization and specificity for different models. Section 5 contains a brief discussion and concluding comments.
2 A Topic Model for Special Words

Figure 1(a) shows the graphical model for what we will refer to as the “standard topic model” or LDA. There are D documents and document d has N_d words. α and β are fixed parameters of symmetric Dirichlet priors for the D document-topic multinomials represented by θ and the T topic-word multinomials represented by φ.

[Figure 1: Graphical models for (a) the standard LDA topic model (left) and (b) the proposed special words topic model with a background distribution (SWB) (right).]

In the generative model, for each document d, the N_d words are generated by drawing a topic t from the document-topic distribution p(z|θ_d) and then drawing a word w from the topic-word distribution p(w|z = t, φ_t). As shown in Griffiths and Steyvers (2004), the topic assignments z for each word token in the corpus can be efficiently sampled via Gibbs sampling (after marginalizing over θ and φ). Point estimates for the θ and φ distributions can be computed conditioned on a particular sample, and predictive distributions can be obtained by averaging over multiple samples. We will refer to the proposed model as the special words topic model with background distribution (SWB) (Figure 1(b)). SWB has a similar general structure to the LDA model (Figure 1(a)) but with additional machinery to handle special words and background words. In particular, associated with each word token is a latent random variable x, taking value x = 0 if the word w is generated via the topic route, value x = 1 if the word is generated as a special word (for that document), and value x = 2 if the word is generated from a background distribution specific for the corpus.
The variable x acts as a switch: if x = 0, the previously described standard topic mechanism is used to generate the word, whereas if x = 1 or x = 2, words are sampled from a document-specific multinomial Ψ or a corpus-specific multinomial Ω (with symmetric Dirichlet priors parametrized by β_1 and β_2) respectively. x is sampled from a document-specific multinomial λ, which in turn has a symmetric Dirichlet prior, γ. One could also use a hierarchical Bayesian approach to introduce another level of uncertainty about the Dirichlet priors (e.g., see Blei, Ng, and Jordan, 2003); we have not investigated this option, primarily for computational reasons. In all our experiments, we set α = 0.1, β_0 = β_2 = 0.01, β_1 = 0.0001 and γ = 0.3, all weak symmetric priors. The conditional probability of a word w given a document d can be written as:

p(w|d) = p(x = 0|d) Σ_{t=1}^{T} p(w|z = t) p(z = t|d) + p(x = 1|d) p′(w|d) + p(x = 2|d) p′′(w)

where p′(w|d) is the special word distribution for document d, and p′′(w) is the background word distribution for the corpus. Note that when compared to the standard topic model the SWB model can explain words in three different ways, via topics, via a special word distribution, or via a background word distribution.
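The three-route mixture above is easy to sketch directly. All toy numbers below are invented for illustration (hand-picked, not fit to any corpus), and the function name is hypothetical:

```python
def p_word(w, d, lam, theta, phi, psi, omega):
    """p(w|d) for the SWB model: a lambda-weighted mixture of the topic
    route, the document-specific special-word route, and the corpus-wide
    background route."""
    topic_route = sum(theta[d][t] * phi[t].get(w, 0.0) for t in range(len(phi)))
    return (lam[d][0] * topic_route
            + lam[d][1] * psi[d].get(w, 0.0)
            + lam[d][2] * omega.get(w, 0.0))

# Toy example: two topics, one document, three-word vocabulary.
phi = [{"election": 0.6, "campaign": 0.4},   # topic 0
       {"election": 0.5, "campaign": 0.5}]   # topic 1
theta = [[0.7, 0.3]]                         # document 0's topic weights
psi = [{"camejo": 1.0}]                      # document 0's special words
omega = {"election": 1/3, "campaign": 1/3, "camejo": 1/3}
lam = [[0.8, 0.15, 0.05]]                    # switch probabilities p(x|d)

vocab = ["election", "campaign", "camejo"]
# Each component is normalized, so p(w|d) sums to 1 over the vocabulary.
assert abs(sum(p_word(w, 0, lam, theta, phi, psi, omega) for w in vocab) - 1.0) < 1e-9
# "camejo" has no topic mass; it is explained by the special and background routes.
print(round(p_word("camejo", 0, lam, theta, phi, psi, omega), 4))  # → 0.1667
```

Note how a word like "camejo", invisible to every topic, still receives probability through the special-word route, which is exactly the behavior the model is designed to provide.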
Given the graphical model above, it is relatively straightforward to derive Gibbs sampling equations that allow joint sampling of the z_i and x_i latent variables for each word token w_i. For x_i = 0:

p(x_i = 0, z_i = t | w, x_−i, z_−i, α, β_0, γ) ∝ [(N_{d0,−i} + γ) / (N_{d,−i} + 3γ)] × [(C^{TD}_{td,−i} + α) / (Σ_{t′} C^{TD}_{t′d,−i} + Tα)] × [(C^{WT}_{wt,−i} + β_0) / (Σ_{w′} C^{WT}_{w′t,−i} + Wβ_0)]

for x_i = 1:

p(x_i = 1 | w, x_−i, z_−i, β_1, γ) ∝ [(N_{d1,−i} + γ) / (N_{d,−i} + 3γ)] × [(C^{WD}_{wd,−i} + β_1) / (Σ_{w′} C^{WD}_{w′d,−i} + Wβ_1)]

and for x_i = 2:

p(x_i = 2 | w, x_−i, z_−i, β_2, γ) ∝ [(N_{d2,−i} + γ) / (N_{d,−i} + 3γ)] × [(C^W_{w,−i} + β_2) / (Σ_{w′} C^W_{w′,−i} + Wβ_2)]

[Figure 2: Examples of two news articles with special words (as inferred by the model) shaded in gray. (a) upper, email article with several colloquialisms; (b) lower, article about CSX corporation.]

where the subscript −i indicates that the count for word token i is removed, N_d is the number of words in document d and N_{d0}, N_{d1} and N_{d2} are the number of words in document d assigned to the latent topics, special words and background component, respectively, C^{WT}_{wt}, C^{WD}_{wd} and C^W_w are the number of times word w is assigned to topic t, to the special-words distribution of document d, and to the background distribution, respectively, and W is the number of unique words in the corpus. Note that when there is not strong supporting evidence for x_i = 0 (i.e., the conditional probability of this event is low), then the probability of the word being generated by the special words route, x_i = 1, or the background route, x_i = 2, increases. One iteration of the Gibbs sampler corresponds to a sampling pass through all word tokens in the corpus. In practice we have found that around 500 iterations are often sufficient for the in-sample perplexity (or log-likelihood) and the topic distributions to stabilize. We also pursued a variant of SWB, the special words (SW) model, that excludes the background distribution Ω and has a symmetric Beta prior, γ, on λ (which in SW is a document-specific Bernoulli distribution). In all our SW model runs, we set γ = 0.5, resulting in a weak symmetric prior that is equivalent to adding one pseudo-word to each document.
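A single collapsed-Gibbs draw for one token can be sketched as follows. Everything here is illustrative: the `cnt` dictionary of "−i" counts and its field names are invented bookkeeping, and only the shape of the three conditionals and the hyperparameter settings come from the text.

```python
import random

def sample_x_z(w, d, cnt, T, W, alpha=0.1, beta0=0.01, beta1=1e-4,
               beta2=0.01, gamma=0.3, rng=random.Random(0)):
    """One collapsed Gibbs draw of (x, z) for a single word token.
    `cnt` is hypothetical count bookkeeping with the current token removed:
      cnt["ndx"][d][x]  tokens of doc d on route x     (N_d0, N_d1, N_d2)
      cnt["ctd"][t][d]  tokens of doc d in topic t     (C^TD)
      cnt["cwt"][w][t]  word w assigned to topic t     (C^WT)
      cnt["ct"][t]      total tokens in topic t
      cnt["cwd"][w][d]  word w special in doc d        (C^WD)
      cnt["nspec"][d]   total special tokens in doc d
      cnt["cw"][w]      word w in background           (C^W)
      cnt["nbg"]        total background tokens
    """
    nd = sum(cnt["ndx"][d])
    sum_ctd = sum(cnt["ctd"][t][d] for t in range(T))
    labels, weights = [], []
    for t in range(T):                           # route x = 0, topic t
        labels.append((0, t))
        weights.append((cnt["ndx"][d][0] + gamma) / (nd + 3 * gamma)
                       * (cnt["ctd"][t][d] + alpha) / (sum_ctd + T * alpha)
                       * (cnt["cwt"][w][t] + beta0) / (cnt["ct"][t] + W * beta0))
    labels.append((1, None))                     # route x = 1, special words
    weights.append((cnt["ndx"][d][1] + gamma) / (nd + 3 * gamma)
                   * (cnt["cwd"][w][d] + beta1) / (cnt["nspec"][d] + W * beta1))
    labels.append((2, None))                     # route x = 2, background
    weights.append((cnt["ndx"][d][2] + gamma) / (nd + 3 * gamma)
                   * (cnt["cw"][w] + beta2) / (cnt["nbg"] + W * beta2))
    return labels[rng.choices(range(len(labels)), weights=weights, k=1)[0]]

# Token of word 0 in document 0, where nearly all prior mass sits on topic 0:
cnt = {"ndx": [[1000, 0, 0]], "ctd": [[1000], [0]], "cwt": [[1000, 0]],
      "ct": [1000, 0], "cwd": [[0]], "nspec": [0], "cw": [0], "nbg": 0}
print(sample_x_z(0, 0, cnt, T=2, W=3))
```

With counts overwhelmingly favoring the topic route and topic 0, the draw returns (0, 0) almost surely; a full sampler would decrement the counts for token i before the draw and increment them for the sampled assignment afterwards.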
Experimental results (not shown) indicate that the final word-topic assignments are not sensitive to either the value of the prior or the initial assignments to the latent variables, x and z.

3 Illustrative Examples

We illustrate the operation of the SW model with a data set consisting of 3104 articles from the New York Times (NYT) with a total of 1,399,488 word tokens. This small set of NYT articles was formed by selecting all NYT articles that mention the word “Enron.” The SW topic model was run with T = 100 topics. In total, 10 Gibbs samples were collected from the model. Figure 2 shows two short fragments of articles from this NYT dataset. The background color of words indicates the probability of assigning words to the special words topic: darker colors are associated with higher probability that, over the 10 Gibbs samples, a word was assigned to the special topic. The words with gray foreground colors were treated as stopwords and were not included in the analysis.

Collection  # of Docs  Total Word Tokens  Median Doc Length  Mean Doc Length  # of Queries
NIPS        1740       2,301,375          1310               1322.6           N/A
PATENTS     6711       15,014,099         1858               2237.2           N/A
AP          10000      2,426,181          235.5              242.6            142
FR          2500       6,332,681          516                2533.1           30

Table 1: General characteristics of document data sets used in experiments.

Figure 2(a) shows how intentionally misspelled words such as “biznesmen” and “beeznessmen” and rare
words such as “pinkos” are likely to be assigned to the special words topic. Figure 2(b) shows how a last name such as “Snow” and the corporation name “CSX” that are specific to the document are likely to be assigned to the special topic. The words “Snow” and “CSX” do not occur often in other documents but are mentioned several times in the example document. This combination of low document-frequency and high term-frequency within the document is one factor that makes these words more likely to be treated as “special” words.

[Figure 3: Examples of background distributions (10 most likely words) learned by the SWB model for 4 different document corpora.]

4 Experimental Results: Perplexity and Precision

We use 4 different document sets in our experiments, as summarized in Table 1. The NIPS and PATENTS document sets are used for perplexity experiments and the AP and FR data sets for retrieval experiments. The NIPS data set is available online¹ and PATENTS, AP, and FR consist of documents from the U.S. Patents collection (TREC Vol-3), Associated Press news articles from 1998 (TREC Vol-2), and articles from the Federal Register (TREC Vol-1, 2), respectively. To create the sampled AP and FR data sets, all documents relevant to queries were included first and the rest of the documents were chosen randomly. In the results below all LDA/SWB/SW models were fit using T = 200 topics.
Figure 3 demonstrates the background component learned by the SWB model on the 4 different document data sets. The background distributions learned for each set of documents are quite intuitive, with words that are commonly used across a broad range of documents within each corpus. The ratios of words assigned to the special words distribution and the background distribution are (respectively for each data set) 25%:10% (NIPS), 58%:5% (PATENTS), 11%:6% (AP), and 50%:11% (FR). Of note is the fact that a much larger fraction of words are treated as special in collections containing long documents (NIPS, PATENTS, and FR) than in short “abstract-like” collections (such as AP). This makes sense since short documents are more likely to contain general summary information while longer documents will have more specific details.

4.1 Perplexity Comparisons

The NIPS and PATENTS document sets do not have queries and relevance judgments, but nonetheless are useful for evaluating perplexity. We compare the predictive performance of the SW and SWB topic models with the standard topic model by computing the perplexity of unseen words in test documents. Perplexity of a test set under a model is defined as follows:

Perplexity(w_test | D_train) = exp( − [ Σ_{d=1}^{D_test} log p(w_d | D_train) ] / [ Σ_{d=1}^{D_test} N_d ] )

where w_test is a vector of words in the test data set, w_d is a vector of words in document d of the test set, and D_train is the training set.

[Figure 4: Average perplexity of the two special words models and the standard topics model as a function of the percentage of words observed in test documents on the NIPS data set (left) and the PATENTS data set (right).]

¹ From http://www.cs.toronto.edu/~roweis/data.html
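The perplexity bookkeeping itself is a small computation; a sketch, where the per-token probabilities are hypothetical stand-ins for model output:

```python
import math

def perplexity(per_token_probs):
    """Perplexity over a test set: exp of the negative average per-token
    log-likelihood, pooled over all test documents."""
    log_lik = sum(math.log(p) for doc in per_token_probs for p in doc)
    n_tokens = sum(len(doc) for doc in per_token_probs)
    return math.exp(-log_lik / n_tokens)

# Sanity check: a model assigning 1/V to every token over a V-word
# vocabulary scores a perplexity of exactly V.
print(round(perplexity([[0.25] * 4, [0.25] * 8]), 6))  # → 4.0
```

Lower is better; a model that concentrates probability on the tokens actually observed will score below the uniform baseline.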
For the SWB model, we approximate p(w_d | D_train) as follows:

p(w_d | D_train) ≈ (1/S) Σ_{s=1}^{S} p(w_d | {Θ^s, Φ^s, Ψ^s, Ω^s, λ^s})

where Θ^s, Φ^s, Ψ^s, Ω^s and λ^s are point estimates from s = 1:S different Gibbs sampling runs. The probability of the words w_d in a test document d, given its parameters, can be computed as follows:

p(w_d | {Θ^s, Φ^s, Ψ^s, Ω^s, λ^s}) = Π_{i=1}^{N_d} [ λ^s_{1d} Σ_{t=1}^{T} φ^s_{w_i t} θ^s_{td} + λ^s_{2d} ψ^s_{w_i d} + λ^s_{3d} Ω^s_{w_i} ]

where N_d is the number of words in test document d and w_i is the ith word being predicted in the test document. θ^s_{td}, φ^s_{w_i t}, ψ^s_{w_i d}, Ω^s_{w_i} and λ^s_d are point estimates from sample s. When a fraction of the words of a test document d is observed, a Gibbs sampler is run on the observed words to update the document-specific parameters θ_d, ψ_d and λ_d, and these updated parameters are used in the computation of perplexity. For the NIPS data set, documents from the last year of the data set were held out to compute perplexity (D_test = 150), and for the PATENTS data set 500 documents were randomly selected as test documents. From the perplexity figures, it can be seen that once a small fraction of the test document words is observed (20% for NIPS and 10% for PATENTS), the SW and SWB models have significantly lower perplexity values than LDA, indicating that the SW and SWB models are using the special words “route” to better learn predictive models for individual documents.

4.2 Information Retrieval Results

Returning to the point of capturing both specific and general aspects of documents as discussed in the introduction of the paper, we generated 500 queries of length 3-5 using randomly selected low-frequency words from the NIPS corpus and then ranked documents relative to these queries using several different methods. Table 2 shows, for the top k-ranked documents (k = 1, 10, 50, 100), how many of the retrieved documents contained at least one of the words in the query.
Note that we are not assessing relevance here in a traditional information retrieval sense, but instead are assessing how often specific query words occur in retrieved documents.

Method  1 Ret Doc  10 Ret Docs  50 Ret Docs  100 Ret Docs
TF-IDF  100.0      100.0        100.0        100.0
LSI     97.6       82.7         64.6         54.3
LDA     90.0       80.6         67.0         58.7
SW      99.2       97.1         79.1         67.3
SWB     99.4       96.6         78.7         67.2

Table 2: Percentage of retrieved documents containing at least one query word (NIPS corpus).

AP, MAP                            AP, Pr@10d
Method  Title  Desc   Concepts    Method  Title  Desc   Concepts
TF-IDF  .353   .358   .498        TF-IDF  .406   .434   .549
LSI     .286   .387   .459        LSI     .455   .469   .523
LDA     .424   .394   .498        LDA     .478   .463   .556
SW      .466*  .430*  .550*       SW      .524*  .509*  .599*
SWB     .460*  .417   .549*       SWB     .513*  .495   .603*

FR, MAP                            FR, Pr@10d
Method  Title  Desc   Concepts    Method  Title  Desc   Concepts
TF-IDF  .268   .272   .391        TF-IDF  .300   .287   .483
LSI     .329   .295   .399        LSI     .366   .327   .487
LDA     .344   .271   .396        LDA     .428   .340   .487
SW      .371   .323*  .448*       SW      .469   .407*  .550*
SWB     .373   .328*  .435        SWB     .462   .423*  .523

Figure 5: Information retrieval experimental results (* = significant difference w.r.t. LDA).

TF-IDF has 100% matches, as one would expect, and the techniques that generalize (such as LSI and LDA) have far fewer exact matches. The SWB and SW models have more specific matches than either LDA or LSI, indicating that they have the ability to match at the level of specific words. Of course this is not of much utility unless the SWB and SW models can also perform well in terms of retrieving relevant documents (not just documents containing the query words), which we investigate next. For the AP and FR document sets, 3 types of query sets were constructed from TREC Topics 1-150, based on the Title (short), Desc (sentence-length) and Concepts (long list of keywords) fields. Queries that have no relevance judgments for a collection were removed from the query set for that collection.
The score for a document d relative to a query q for the SW and standard topic models can be computed as the probability of q given d (known as the query-likelihood model in the IR community). For the SWB topic model, we have

p(q|d) ≈ Π_{w∈q} [ p(x = 0|d) Σ_{t=1}^{T} p(w|z = t) p(z = t|d) + p(x = 1|d) p′(w|d) + p(x = 2|d) p′′(w) ]

We compare the SW and SWB models with the standard topic model (LDA), LSI and TF-IDF. The TF-IDF score for a word w in a document d is computed as

TF-IDF(w, d) = (C^{WD}_{wd} / N_d) × log_2(D / D_w)

where D is the total number of documents and D_w is the number of documents containing w. For LSI, the TF-IDF weight matrix is reduced to a K-dimensional latent space using SVD, K = 200. A given query is first mapped into the LSI latent space or the TF-IDF space (known as query folding), and documents are scored based on their cosine distances to the mapped queries. To measure the performance of each algorithm we used 2 metrics that are widely used in IR research: the mean average precision (MAP) and the precision for the top 10 documents retrieved (pr@10d). The main difference between the AP and FR documents is that the latter documents are considerably longer on average and there are fewer queries for the FR data set. Figure 5 summarizes the results, broken down by algorithm, query type, document set, and metric. The maximum score for each query experiment is shown in bold: in all cases (query-type/data set/metric) the SW or SWB model produced the highest scores. To determine statistical significance, we performed a t-test at the 0.05 level between the scores of each of the SW and SWB models and the scores of the LDA model (as LDA has the best scores overall among TF-IDF, LSI and LDA). Differences between SW and SWB are not significant. In Figure 5, we use the symbol * to indicate scores where the SW and SWB models showed a statistically significant difference (always an improvement) relative to the LDA model.
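The TF-IDF score above can be sketched directly. The toy documents are invented for the example; note that words absent from every document are not scored, since D_w would be zero:

```python
import math

def tf_idf(w, d, docs):
    """TF-IDF(w, d) = (C_wd / N_d) * log2(D / D_w): within-document term
    frequency times log inverse document frequency."""
    tf = docs[d].count(w) / len(docs[d])
    d_w = sum(1 for doc in docs if w in doc)   # document frequency D_w
    return tf * math.log2(len(docs) / d_w)

docs = [["election", "campaign", "camejo", "campaign"],
        ["election", "results"],
        ["weather", "report"]]
# "camejo": tf = 1/4 in doc 0, appears in 1 of 3 docs, so idf = log2(3).
print(round(tf_idf("camejo", 0, docs), 4))  # → 0.3962
```

The rare word "camejo" outscores the common word "election" at equal term frequency, which is exactly the over-specific behavior the paper contrasts with latent-space methods.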
The differences for the “non-starred” query and metric scores of SW and SWB are not statistically significant but nonetheless always favor SW and SWB over LDA. 5 Discussion and Conclusions Wei and Croft (2006) have recently proposed an ad hoc LDA approach that models p(q|d) as a weighted combination of a multinomial over the entire corpus (the background model), a multinomial over the document, and an LDA model. Wei and Croft showed that this combination provides excellent retrieval performance compared to other state-of-the-art IR methods. In a number of experiments (not shown) comparing the SWB and ad hoc LDA models we found that the two techniques produced comparable precision performance, with small but systematic performance gains being achieved by an ad hoc combination where the standard LDA model in ad hoc LDA was replaced with the SWB model. An interesting direction for future work is to investigate fully generative models that can achieve the performance of ad hoc approaches. In conclusion, we have proposed a new probabilistic model that accounts for both general and specific aspects of documents or individual behavior. The model extends existing latent variable probabilistic approaches such as LDA by allowing these models to take into account specific aspects of documents (or individuals) that are exceptions to the broader structure of the data. This allows, for example, documents to be modeled as a mixture of words generated by general topics and words generated in a manner specific to that document. Experimental results on information retrieval tasks indicate that the SWB topic model does not suffer from the weakness of techniques such as LSI and LDA when faced with very specific query words, nor does it suffer the limitations of TF-IDF in terms of its ability to generalize. Acknowledgements We thank Tom Griffiths for useful initial discussions about the special words model. 
This material is based upon work supported by the National Science Foundation under grant IIS-0083489. We acknowledge use of the computer clusters supported by NIH grant LM-07443-01 and NSF grant EIA-0321390 to Pierre Baldi and the Institute of Genomics and Bioinformatics.

References
Blei, D. M., Ng, A. Y., and Jordan, M. I. (2003) Latent Dirichlet allocation. Journal of Machine Learning Research, 3: 993-1022.
Buntine, W., Löfström, J., Perttu, S., and Valtonen, K. (2005) Topic-specific scoring of documents for relevant retrieval. Workshop on Learning in Web Search: 22nd International Conference on Machine Learning, pp. 34-41, Bonn, Germany.
Canny, J. (2004) GaP: a factor model for discrete data. Proceedings of the 27th Annual SIGIR Conference, pp. 122-129.
Daumé III, H., and Marcu, D. (2006) Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26: 101-126.
Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., and Harshman, R. (1990) Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6): 391-407.
Griffiths, T. L., and Steyvers, M. (2004) Finding scientific topics. Proceedings of the National Academy of Sciences, pp. 5228-5235.
Hofmann, T. (1999) Probabilistic latent semantic indexing. Proceedings of the 22nd Annual SIGIR Conference, pp. 50-57.
Vogt, C., and Cottrell, G. (1999) Fusion via a linear combination of scores. Information Retrieval, 1(3): 151-173.
Wei, X., and Croft, W. B. (2006) LDA-based document models for ad-hoc retrieval. Proceedings of the 29th SIGIR Conference, pp. 178-185.
Emergence of conjunctive visual features by quadratic independent component analysis J.T. Lindgren Department of Computer Science University of Helsinki Finland jtlindgr@cs.helsinki.fi Aapo Hyvärinen HIIT Basic Research Unit University of Helsinki Finland aapo.hyvarinen@cs.helsinki.fi Abstract In previous studies, quadratic modelling of natural images has resulted in cell models that react strongly to edges and bars. Here we apply quadratic Independent Component Analysis to natural image patches, and show that up to a small approximation error, the estimated components are computing conjunctions of two linear features. These conjunctive features appear to represent not only edges and bars, but also inherently two-dimensional stimuli, such as corners. In addition, we show that for many of the components, the underlying linear features have essentially V1 simple cell receptive field characteristics. Our results indicate that the development of the V2 cells preferring angles and corners may be partly explainable by the principle of unsupervised sparse coding of natural images. 1 Introduction Sparse coding of natural images has led to models that resemble the receptive fields in the primate primary visual cortex area V1 (see e.g. [1, 2, 3]). An ongoing research effort is in trying to understand and model the computational principles in visual areas following V1, commonly thought to provide representations for more complicated stimuli. For example, it has recently been shown that in the Macaque monkey, the V2 area following V1 contains neurons responding favourably to angles and corners, but not necessarily to their constituent edges if presented alone [4, 5]. This behaviour cannot be easily attained with linear models [6]. In this paper we estimate quadratic models for natural images using Independent Component Analysis (ICA). The quadratic functions used are a natural extension of linear functions (i.e.
lᵀx), and give the value of a single feature or component as

s = xᵀHx + lᵀx,   (1)

where the matrix H specifies weights for second-order interactions between the input variables in stimulus x. This class of functions is equivalent to second-order polynomials of the input, and can compute linear combinations of squared responses of linear models (see e.g. [7]). Another well-known interpretation of components in a quadratic model is as outputs of two-layer neural networks, which is based on an eigenvalue decomposition and will be discussed below. Estimating a quadratic model for natural images with ICA, we report here the emergence of receptive field models that respond strongly only if the stimulus contains two features that are in a correct spatial arrangement. With a heavy dimensionality reduction, the conjuncted features are mostly collinear (i.e. prefer edges or bars), but with a smaller reduction, additional components emerge that appear to prefer more complex stimuli such as angles or corners. We show that in both cases, the emerging components approximately operate by computing products between the outputs of two linear submodels that have V1 simple cell characteristics. The rest of this paper is organized as follows. In section 2 we describe the quadratic ICA in detail. Section 3 outlines the dataset and the preprocessing we used, and section 4 describes the results. Finally, section 5 concludes with discussion and future work. 2 Quadratic ICA Let x ∈ Rⁿ be a vectorized grayscale input image patch. A basic form of linear ICA assumes that each data point is generated as

x = As,   (2)

where A is a linear mixing matrix and s the vector of unknown source signals or independent components. The dimension of s is assumed to be equal to the dimension of x, possibly after the x have been reduced by PCA to a smaller dimension. ICA estimation tries to recover s and the parameter matrix W = A⁻¹.
If the independent components are sparse, this is equivalent to performing sparse coding (for an account of ICA, see e.g. [8]). It has been proposed that ICA for quadratic models can be performed by first making a quadratic basis expansion on each x and then applying standard linear ICA [9]. Let the new data vectors z ∈ R^{n(n+1)/2+n} in quadratic space be

z = Φ([x1, x2, ..., xn]) = [x1², x1x2, ..., x2², x2x3, ..., xn², x1, x2, ..., xn],   (3)

that is, Φ(x) generates all the monomials for a second-order polynomial of x, except for the constant term. Such a dimension expansion is also implicit in kernel methods, where a second-order polynomial kernel would be used instead of Φ. Here we work with the more traditional input transformation for simplicity. From now on, assume that ICA has been performed on the transformed data z. Then the columns wi of Wᵀ make up the quadratic components (cell models, polynomial filters) of the model. As the coefficients in wi are weights for a second-order polynomial, it is straightforward to decompose the response si of each quadratic component to x as

si = wiᵀz = xᵀHix + liᵀx,   (4)

where Hi is a symmetric square matrix corresponding to the weights given to all the cross-terms and li weights the first-order monomials. It is well known that Hi can be represented in another form by eigenvalue decomposition, leading to the expression

si = Σ_{j=1}^{n} αj (vjᵀx)² + liᵀx,   (5)

where the αj are the decreasingly sorted eigenvalues of Hi and the vj the corresponding eigenvectors. In some cases the representation of eq. 5 can help to understand the model, since the individual eigenvectors can be interpreted as linear receptive fields. A quadratic function in this form is illustrated on the left in figure 1. However, for our model estimated with quadratic ICA, many of the eigenvectors vj did not resemble V1 simple cell receptive fields.
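To make the expansion concrete, the maps of eq. 3 and eq. 4 can be sketched as follows (the monomial ordering below is one possible choice; the paper does not pin one down):

```python
import numpy as np

def phi(x):
    """Quadratic basis expansion of eq. 3: all degree-2 monomials x_i * x_j
    (i <= j) followed by the linear terms."""
    n = len(x)
    quad = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.array(quad + list(x))

def unpack_component(w, n):
    """Rebuild the symmetric matrix H and the linear weights l of eq. 4 from
    a single component w estimated in the expanded space."""
    H = np.zeros((n, n))
    k = 0
    for i in range(n):
        for j in range(i, n):
            if i == j:
                H[i, i] = w[k]
            else:
                H[i, j] = H[j, i] = 0.5 * w[k]  # split each cross-term weight
            k += 1
    l = np.asarray(w[k:k + n])
    return H, l
```

By construction, w @ phi(x) equals the quadratic-form response xᵀHx + lᵀx for any x.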
Here we propose another decomposition which leads to a simple network computation based on linear receptive fields similar to those of V1 simple cells. Assume that the two eigenvalues of Hi which are largest in absolute value have opposite signs; we will refer to these as the dominant eigenvalues and denote them by α1 and αn. This assumption will turn out to hold empirically for our estimated models. Now, including just the two corresponding dominant eigenvectors and ignoring the linear term, we denote

v+ = √|α1| v1 + √|αn| vn,   v− = √|α1| v1 − √|αn| vn,   (6)

Figure 1: Quadratic components as networks. Using the eigenvalue decomposition, quadratic forms can be interpreted as networks. Left, the computation of a single component w, where the vi are the eigenvectors and the αi the eigenvalues of the matrix H. Right, its product approximation, which is possible if the variance is concentrated on just two eigenvectors with eigenvalues of opposite signs. This turns out to be the case for natural images.

and by using simple arithmetic we obtain a product approximation for eq. 5 as

ŝi = α1(v1ᵀx)² + αn(vnᵀx)² = (v+ᵀx)(v−ᵀx).   (7)

This approximation is shown as a network on the right in figure 1, and will be justified later by its relatively small empirical error for our models. Providing that the approximation is good, the intuition is that the component is essentially computing the product of the responses of two linear filters, analogous to a logical AND operation, or a conjunction. We will empirically show that the vectors v+ and v− have resemblance to V1 simple cell receptive fields for our model even if the respective two dominant eigenvectors have more complicated shapes.
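A quick numerical check of eq. 6 and eq. 7 on a synthetic quadratic form (the dimensions and eigenvalues below are illustrative, not from the estimated model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# a symmetric H whose spectrum is dominated by two opposite-sign eigenvalues
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))    # random orthonormal basis
eigvals = np.array([3.0] + [0.05] * (n - 2) + [-2.5])
H = Q @ np.diag(eigvals) @ Q.T

a, V = np.linalg.eigh(H)                        # eigenvalues in ascending order
v1, alpha1 = V[:, -1], a[-1]                    # dominant positive pair
vn, alphan = V[:, 0], a[0]                      # dominant negative pair
v_plus = np.sqrt(alpha1) * v1 + np.sqrt(-alphan) * vn   # eq. 6
v_minus = np.sqrt(alpha1) * v1 - np.sqrt(-alphan) * vn

x = rng.normal(size=n)
s_quad = x @ H @ x                              # full quadratic part of the response
s_hat = (v_plus @ x) * (v_minus @ x)            # product approximation, eq. 7
```

The product matches the two-eigenvector part of eq. 5 exactly; what is dropped is only the contribution of the small remaining eigenvalues.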
3 Materials and methods In our experiments we used the natural image dataset provided by van Hateren and van der Schaaf [2]. This dataset contains over 4000 grayscale images representing natural scenes, each image having a resolution of 1024×1536. The intensity distribution over this image set has a very long right tail related to variation in the overall image contrast and intensity [2, 10]. In addition, it is known that the high frequencies in natural images contain sampling artifacts due to rectangular sampling, and that the spectral characteristics are not uniform across frequencies, causing difficulties for gradient-based estimation methods [1]. To alleviate these problems for ICA, we adopt a two-phase preprocessing for the raw images, following [1, 2]. This processing can be considered a very simple model of the physiological pathway containing the retina and the lateral geniculate nucleus (LGN). First, we address the problem of heavy contrast variation and the long-tailed intensity distribution by taking a natural logarithm of the input images, effectively compressing their dynamic range. This preprocessing is similar to what happens in the first stages of natural visual systems, and has been previously suggested for the current dataset [2]. Next, to correct for the spectral imbalances in the data, we use the whitening filter proposed by Olshausen and Field [1]. This whitening filter cuts the highest frequencies, and balances the frequency distribution otherwise by dampening the dominant low frequencies. We use the filter with the same parameters as in [1]. The whitening filter has bandpass characteristics and hence resembles the center-surround behaviour of LGN cells. In practice, the filtering approximately decorrelates the data. After preprocessing each image as a whole, we sampled 300,000 small image patches from the images, each patch having a resolution of 9×9. Then we subtracted the local DC-component (mean intensity) from each patch.
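The patch-extraction part of this pipeline can be sketched as follows (a simplified sketch: the Olshausen-Field whitening filter is omitted, and only log compression, patch sampling, and DC removal are shown; the function name and interface are illustrative):

```python
import numpy as np

def sample_patches(images, n_patches, size=9, rng=None):
    """Preprocessing sketch: log-compress intensities, cut random patches,
    and remove each patch's DC component. The whitening step used in the
    paper is omitted here for brevity."""
    rng = rng or np.random.default_rng(0)
    patches = np.empty((n_patches, size * size))
    for k in range(n_patches):
        img = np.log(images[rng.integers(len(images))] + 1.0)  # compress range
        r = rng.integers(img.shape[0] - size + 1)
        c = rng.integers(img.shape[1] - size + 1)
        p = img[r:r + size, c:c + size].ravel()
        patches[k] = p - p.mean()                # subtract local DC component
    return patches
```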
These patches then formed the data we used to estimate the quadratic ICA model. The model fitting was done by transforming the data to the quadratic space using eq. 3, followed by linear ICA. For ICA, we used the FastICA algorithm [11] with tanh nonlinearity and symmetric estimation of the components. The input dimension in the quadratic space was 81·(81 + 1)/2 + 81 = 3402.

Figure 2: The quadratic ICA components when the model size is very small (81 components). Each quadruple displays the two dominant eigenvectors v1 and vn (top row), and the corresponding vectors v+ and v− (bottom row). Light and dark areas correspond to positive and negative weights, respectively. The components have been sorted by collinearity of the conjuncted features.

We used PCA to drop the dimension by selecting the 400 most dominant principal axes, covering approximately 50% of the summed eigenvalues. This resulted in estimation of 400 independent components (or second-order polynomial filters). We also performed additional experiments with 81 and 1024 dominant principal axes, corresponding to 18% and 80% coverage. Due to space constraints, we are unable to discuss the 1024-component model, other than to briefly mention that it conformed to the main results presented in this paper. To ensure replicable research, the source codes performing the experiments described in this paper have been made publicly available¹. 4 Results In general, interpreting quadratic models can be difficult, and several strategies have been proposed in the literature (see e.g. [12]). However, in the current work the estimated components turned out to be fairly simple (up to a small approximation error, as shown later), and as discussed in section 2, it will be illustrative to display the estimated components in terms of their two dominant eigenvectors v1 and vn of H, and the respective vectors v+ and v− (see eq. 6).
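For reference, the symmetric FastICA iteration with the tanh nonlinearity can be written in a few lines (a sketch of the algorithm of [11], not the implementation used in the paper; the input is assumed to be whitened):

```python
import numpy as np

def fastica_symmetric(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA with the tanh nonlinearity.
    X: samples x dims, assumed already whitened."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = np.linalg.qr(rng.normal(size=(d, d)))[0]   # random orthogonal start
    for _ in range(n_iter):
        Y = X @ W.T                                # current source estimates
        G, Gp = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
        # fixed-point update w <- E[z g(w'z)] - E[g'(w'z)] w, all rows at once
        W = (G.T @ X) / len(X) - np.diag(Gp.mean(axis=0)) @ W
        # symmetric decorrelation: W <- (W W^T)^{-1/2} W
        vals, vecs = np.linalg.eigh(W @ W.T)
        W = vecs @ np.diag(vals ** -0.5) @ vecs.T @ W
    return W
```

On whitened mixtures of sparse (e.g. Laplacian) sources this recovers the sources up to sign and permutation.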
Since either pair of the two vectors can be used to compute the approximate component response to any stimuli using eq. 7, the analysis of the components can be based on the vectors v+ and v− if preferred. Figure 2 shows the quadratic ICA components when a small model was estimated with only 81 components. If we ignore the linear term as in eq. 7, the dominant eigenvectors shown at the top row of each quadruple are equal to the two unit-norm stimuli that the component reacts most highly to (e.g. [12]). Note that the reaction to the eigenvector vn (top right) will be highly negative. On the other hand, both vectors v+ and v− must respond to a stimulus with a non-zero value if the component is to respond strongly. In the case of this small model size, many of the conjuncted features v+ and v− are collinear, and respond strongly to edge- or bar-like stimuli. The feature conjunctions that are not collinear remain more unstructured and appear to react highly to blob-like stimuli. However, both component types are quite different from ordinary linear detectors for edges, bars and blobs, since their conjunctive nature makes them much more selective. In the following, we will limit the discussion to larger models consisting of 400 components (unless mentioned otherwise).

¹ http://www.cs.helsinki.fi/u/jtlindgr/stuff/

Figure 3: Quadratic ICA components picked from 10 bootstrap iterations with 400 components estimated on each run. All 4000 components were ordered by collinearity of the conjuncted features, and a small sample of each tail is shown. The presentation is the same as in Figure 2. Top, some components that prefer conjunctions of two collinear features. Bottom, components that conjunct two highly orthogonal features. The latter components become more apparent if the model size is large. Clear Gabor-like V1 characteristics can be seen in both cases in the vectors v+ and v−, even if the corresponding eigenvectors are more complex.
With the higher dimensionality allowed, the diversity of the emerging components increased. Figure 3 shows quadratic ICA components picked from 10 experiments repeated with different subsets of the input patches and different random seeds. Now, in addition to collinear conjunctions (on the top in the image), we also get components that conjunct more orthogonal stimuli (on the bottom). The latter components appear to respond favourably to intuitive visual concepts such as angles and corners. In this case, the benefits of the decomposition into vectors v+ and v− become more apparent, as many of the receptive field models retain their resemblance to Gabor-like filters (as in e.g. [1, 2, 8]) even if the corresponding eigenvectors become more complicated. Next we will validate the above characterization by showing that the approximation of eq. 7 holds up to a generally small error. First, it turns out that the eigenvalue distributions decay fast for the quadratic forms Hi of the estimated components. This is illustrated on the left in figure 4, which shows the mean sorted eigenvalues for the 400 components (for a model of 81 components, the figure was similar). Since all the eigenvectors have equal norms, the eigenvalues imply the magnitude of the contribution of the respective eigenvector to the component output value. Due to the fast decay of the eigenvalues, the two dominant eigenvectors are largely responsible for the component output, providing that the linear term l is insignificant (for some discussion on the linear term in quadratic models, see [12]).

Figure 4: The conjunctive nature of the components is due to the eigenvalues of the quadratic forms Hi typically emerging as heavily dominated by just two eigenvectors with opposite-sign eigenvalues. This conjunctiveness is further confirmed by the relatively small approximation error caused by ignoring the non-dominant eigenvectors and the linear term.
Left, sorted eigenvalues of Hi averaged over all 400 components for both quadratic ICA and quadratic PCA. It can be seen that the ICA-based eigenvalue distributions tend to decay faster. Right, the relative mean square error of the product approximation for the 400 quadratic ICA components. The components have been sorted by the error of approximation when Gabor functions have been used to model v+ and v−.

Here the quadratic part tended to dominate the component responses, which may be because the (co)variances were much larger for the quadratic dimensions of the space than for the linear ones. The above reasoning is supported by analysis of the prediction error of the product approximation. We examined this by sampling 100,000 new image patches (not used in the training), and computing the mean square error of the approximation divided by the variance of the component response, i.e. err(ŝ) = E[(s − ŝ)²] / Var(s). This error is shown on the right in figure 4 for all the 400 components. On average, this relative error was 12% of the respective component variance, ranging from 2% to 57%. Hence, the product approximation appears to capture the behaviour of the components rather well. The plot also shows the effect of approximating the vectors v+ and v− with Gabor functions, which are commonly used to model V1 receptive fields. Using Gabor functions, the approximation error increased, ranging from 8% to 93%, with a mean of 34%. To better understand the obtained error rates, we also fitted linear models to approximate the estimated quadratic cells using least-squares regression. This revealed the quadratic components to have highly nonlinear behaviour. For all components, the error of the linear approximator was over 90%, coming close to the baseline 100% error attained if the empirical mean is used as a (constant) predictor.
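The error measure can be transcribed directly (a small helper; the function name is ours):

```python
import numpy as np

def relative_error(s, s_hat):
    """Relative mean-square error err(s_hat) = E[(s - s_hat)^2] / Var(s):
    0 for a perfect predictor, about 1 for the constant mean predictor
    (the 100% baseline mentioned in the text)."""
    s, s_hat = np.asarray(s, dtype=float), np.asarray(s_hat, dtype=float)
    return float(np.mean((s - s_hat) ** 2) / np.var(s))
```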
Since the product approximation only covers the two dominant eigenvectors, it is possible that the rest of the eigenvectors might code for interesting phenomena through further excitatory and inhibitory effects. However, the quick decay of the eigenvalues in our estimated model should make any such effects rather minor. Following the ideas and methods of [12], we explored the possibility that the nondominant eigenvectors coded for invariances of the component. The only strong invariance we found was insensitivity to (possibly local) input sign changes, which is at least partly a structural property of the model, originating from taking squares in eq. 5. In particular, we observed no shift-invariance, consistent with some recent findings in the V2 area of the Macaque monkey [5]. We leave more in-depth exploration of the role of the nondominant eigenvectors as future work. Finally, we performed some experiments to examine to what extent the method of quadratic ICA on the one hand, and the natural image input data on the other, are responsible for the reported results. For example, it could be argued that the quadratic ICA components might be very similar to the quadratic PCA components. Figure 5 illustrates that this is not trivially so by showing 16 PCA components with large eigenvalues. It can be seen that the PCA components quickly lose resemblance to Gabor-like filters as the eigenvalues decrease. Also, the conjunctive nature of the estimated features is not as clear for the PCA-based components. This is shown on the left in figure 4, demonstrating that on the average, the eigenvalues of the quadratic forms decay slower for the PCA components.

Figure 5: Left, the two top rows show the vectors v+ and v− for the first 8 quadratic PCA components. The two bottom rows display the PCA components 41-48. It can be seen that the PCA components quickly lose any resemblance to Gabor-like receptive fields of V1 simple cells. Right, some typical quadratic ICA components in terms of the vectors v+ and v− when the model was estimated on white noise. The circular shapes are likely artifacts due to the whitening filter.

If the whole set of PCA components is studied (not shown), it can be seen that the components appear to change from low-pass filters to high-pass filters as the eigenvalues decrease. Comparing figure 5 to figure 3, both outputs seem characteristic of the method applied, the differences resembling those observed when linear ICA and linear PCA are used to code natural image data. To verify that the emerging component structures are not artifacts of the modelling methodology, we generated a dataset of artificial images of white noise, having the luminance distribution of the original dataset, but with no spatial or spectral structure. By repeating the model estimation (including preprocessing) on this new dataset, the resulting components did not respond favourably to the same stimuli as before, and they were no longer clearly conjunctive: the eigenvalue distributions decayed fast, but tended to have only one dominant eigenvector. Based on these vectors, the components could be roughly categorized into two classes. The first class responded to spatial forms of center-surround filters, possibly caused by the use of the whitening filter. The second class preferred apparently random configurations of inhibitory and excitatory effects. Some of the components estimated on random data are displayed on the right in figure 5 in terms of vectors v+ and v−. 5 Discussion In this paper, we specified a quadratic model for natural images and estimated its parameters with independent component analysis. We reported the emergence of cell models exhibiting strongly nonlinear behaviour. In particular, we demonstrated that the estimated cells were essentially computing products between outputs of two linear filters that had V1 simple cell characteristics.
Many of these feature conjunctions preferred two collinear features, while others corresponded to combinations of more orthogonal stimuli, reacting strongly to angles and corners. Our results indicate that sparse coding of natural images might partially explain the development of angle- or corner-preferring cells in V2. There has been some previous work describing quadratic models of natural image data (i.e. [13, 7, 9]). Of these, the ICA-based approaches [13, 9] resemble ours the most. Bartsch and Obermayer [13] report curvature-detecting cells, but the patch size used and the number of components estimated were very small, making the results inconclusive. Hashimoto [7] sought to replicate complex cell properties with an algorithm based on Kullback-Leibler divergences, and did not report conjunctive features or cells with preferences for angles or corners. Instead, most of the estimated quadratic forms on static image data had only one dominant eigenvector. Our work extends the previous research by reporting the emergence of conjunctive components that combine responses of V1-like linear filters. The differences between our work and the previous research can be due to various reasons. The number of estimated components (i.e., the number of principal components retained) was seen to affect the feature diversity, and with only 81 components, the conjuncted features were mostly collinear, producing highly selective edge or bar detectors. Even larger differences to previous work are likely due to different input preprocessing: it is known that unprocessed image data can cause difficulties for statistical estimation of linear models [1, 10] and that both the preprocessing and the size of the used image patches can affect the estimated features [10].
In quadratic modelling, taking products between the dimensions of the input data can cause additional problems for any methods relying on non-robust estimation (such as covariance-based PCA), since the quadratic transform has the strongest boosting effect on outliers and tails of the marginal distributions. It is worthwhile to note that despite differences to previous work [13, 7, 9], invariances resembling complex cell behaviour did not emerge with our method either, although the class of quadratic models contains the classic energy-detector models of complex cells (fitted in e.g. [3]). It could be that static images and optimization of sparsity alone may not work towards the emergence of invariances, or equivalently, behaviour resembling a logical OR operation, unless the model is further constrained (for example as in [3]). Optimizing model likelihood can also be preferable to optimizing output sparseness, but for quadratic ICA it is not clear how to construct a proper probabilistic model. Finally, an important open question regarding the current work is to what extent the obtained conjunctive features reflect real structures present in the image data. At the time of writing, we have not been able to either prove or disprove the possibility that the pairings are an algorithmic artifact in the following sense: it could be that after the effects of quadratic-space PCA have been accounted for, the quadratic ICA components are only combinations of two rather randomly selected sparse components (v+ᵀx and v−ᵀx) which are as independent as possible. We are currently investigating this issue. Acknowledgments The authors wish to thank Jarmo Hurri and the anonymous reviewers for helpful comments. This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.

References
[1] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311-3325, 1997.
[2] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359-366, 1998.
[3] A. Hyvärinen and P. O. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705-1720, 2000.
[4] J. Hegdé and D. C. van Essen. Selectivity for complex shapes in primate visual area V2. The Journal of Neuroscience, 20(5):RC61-66, 2000.
[5] M. Ito and H. Komatsu. Representation of angles embedded within contour stimuli in area V2 of macaque monkeys. The Journal of Neuroscience, 24(13):3313-3324, 2004.
[6] G. Krieger and C. Zetzsche. Nonlinear image operators for the evaluation of local intrinsic dimensionality. IEEE Transactions on Image Processing, 5(6):1026-1042, 1996.
[7] W. Hashimoto. Quadratic forms in natural images. Network: Computation in Neural Systems, 14(4):765-788, 2003.
[8] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley, 2001.
[9] F. Theis and W. Nakamura. Quadratic independent component analysis. IEICE Trans. Fundamentals, E87-A(9):2355-2363, 2004.
[10] B. Willmore, P. A. Watters, and D. J. Tolhurst. A comparison of natural-image-based models of simple-cell coding. Perception, 29:1017-1040, 2000.
[11] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626-634, 1999.
[12] P. Berkes and L. Wiskott. On the analysis and interpretation of inhomogeneous quadratic forms as receptive fields. Neural Computation, accepted, 2006.
[13] H. Bartsch and K. Obermayer. Second-order statistics of natural images. Neurocomputing, 52-54:467-472, 2003.
Nonlinear physically-based models for decoding motor-cortical population activity Gregory Shakhnarovich Sung-Phil Kim Michael J. Black Department of Computer Science Brown University Providence, RI 02912 {gregory,spkim,black}@cs.brown.edu Abstract Neural motor prostheses (NMPs) require the accurate decoding of motor cortical population activity for the control of an artificial motor system. Previous work on cortical decoding for NMPs has focused on the recovery of hand kinematics. Human NMPs however may require the control of computer cursors or robotic devices with very different physical and dynamical properties. Here we show that the firing rates of cells in the primary motor cortex of non-human primates can be used to control the parameters of an artificial physical system exhibiting realistic dynamics. The model represents 2D hand motion in terms of a point mass connected to a system of idealized springs. The nonlinear spring coefficients are estimated from the firing rates of neurons in the motor cortex. We evaluate linear and nonlinear decoding algorithms using neural recordings from two monkeys performing two different tasks. We found that the decoded spring coefficients produced accurate hand trajectories compared with state-of-the-art methods for direct decoding of hand kinematics. Furthermore, using a physically-based system produced decoded movements that were more “natural” in that their frequency spectrum more closely matched that of natural hand movements. 1 Introduction Neural motor prostheses (NMPs) aim to restore lost motor function to people with intact cerebral motor areas who, through disease or injury, have lost the ability to control their limbs. Central to these devices is a method for decoding the firing activity of motor cortical neurons to produce a voluntary control signal.
A number of groups have recently demonstrated the real-time neural control of 2D or 3D computer cursors or simple robotic limbs in monkeys [1, 13, 18, 20, 22] and humans [6]. Previous work on decoding motor cortical signals, however, has focused on modeling the relationship between neural firing rates and simple hand kinematics, including hand direction, speed, position, velocity, or acceleration [2, 4, 8, 10]. While the relationship between neural firing rates and hand kinematics is well established in able-bodied monkeys, the situation of a human NMP is quite different. For a paralyzed human, the NMP represents an artificial motor system with different physical properties than the intact human motor system. In particular, a human NMP may involve the control of devices as different as computer cursors or robotic wheelchairs. It remains an open question whether motor cortical neurons can successfully control such varied systems with dynamics that are quite different from those of human limbs. Here we propose a model that makes a first step toward neural control of novel artificial motor systems. We show that motor cortical firing rates can be nonlinearly related to the parameters of an idealized physical system. This provides an important proof-of-concept for human NMPs. Our model decodes the dynamics of hand movement directly from the neural activity. Ultimately, such a model should reflect the actuator being controlled. For a biological actuator this means the activation of individual muscles; for a robotic one, the forces and torques produced by the motors in the system. A model incorporating direct cortical control of dynamics has been proposed in [19]. There are two major distinctions between that work and ours. First, we consider the task of controlling an artificial system, rather than the subject's real limb.
Second, applying the model in [19] in practice would require constructing a very complex biomechanical model and controlling its many degrees of freedom with a limited neural bandwidth. Here we propose a much simpler approach that does not attempt to accurately model the musculoskeletal structure of the arm. Instead, it provides a computationally effective framework to model the dynamics of the limb moving in a two-dimensional plane. Our approach is inspired by the recent work of Hinton and Nair [5], which suggested a generative model for images of handwritten digits. In that work, observed images were assumed to have been generated by a pen connected to a set of springs, the trajectory of the pen controlled by varying the stiffness of the springs according to a digit-specific "motor program". The goal was to infer the motor program from an observed image, in order to classify the digit. In the context of neural decoding, the image observation is replaced with the recorded neural signal, from which we need to recover the "motor program", and thus the intended movement. This is where the parallels between our work and [5] end. One particularly important difference is that the neural decoder may be learned in a supervised procedure, where the ground truth for the movement associated with a given neural signal is known. An advantage of this spring-based model (SBM) over previous kinematics-based decoding methods is that the realistic dynamics of the model produce smoother recovered movement. We show that the motions are more natural in that they better match the power spectrum of true hand movements. This suggests that the control of a physical system (even an artificial one) may prove more natural for a human NMP. The experimental setup we consider in this paper involves an electrode array implanted in the arm/hand area of the MI cortex of a behaving monkey [17].
The animals are trained to control the cursor by moving the endpoint of a two-link manipulandum constrained to a plane, much like a human would use a computer mouse [11, 13]. Neural data and hand kinematics were recorded from two monkeys performing two different tasks. The data was separated into training and testing segments, and we quantitatively compared a variety of popular linear and nonlinear algorithms for decoding hand kinematics and the spring coefficients of our SBM. As expected, nonlinear methods tend to outperform linear ones. Moreover, movement reconstructed with the SBM has a power spectrum significantly closer to that of natural movement. These results suggest that the control of idealized physical systems with real-time nonlinear decoding algorithms may form the basis for a practical human NMP.

Figure 1: Sketch of the spring-based model. Four springs A, B, C, D attach the hand to the rails bounding the work area $[-L, L] \times [-L, L]$; the annotated accelerations are $a_x = k_A x_2 - k_B x_1 - \beta v_x$ and $a_y = k_C y_1 - k_D y_2 - \beta v_y$, with $a = [a_x, a_y]$. The outer endpoints of the springs are assumed to slide without friction, so that A and B are always orthogonal to C and D. The rest length is assumed to be zero for all springs. Movement is controlled by varying the stiffness coefficients $k_A$, $k_B$, $k_C$ and $k_D$.

2 The spring-based model

Decoding neural activity in N cells involves estimating the values of a hidden state X(t) at time t given an observed sequence of firing rates Z(0) . . . Z(t) up to time t, with each Z(i) being a 1 × N vector. The state is typically taken to be hand position, velocity, etc. Methods described in the literature can be roughly divided into two classes. Generative methods formulate the likelihood of the observed firing rates conditioned on the state and use Bayesian inference methods such as the Kalman filter [21] or particle filter [3] to estimate the system state from observations. In contrast, direct (or discriminative) methods learn a function that maps firing rates over some preceding temporal window into hand kinematics.
Various methods have been explored, including linear regression [1, 13], support-vector regression [15] and neural network algorithms [12, 20]. All these previous methods have focused on direct decoding of kinematic properties of the hand movement and have ignored the arm dynamics.

2.1 Parametrization of dynamics

Our approach to incorporating dynamics into the decoding process is inspired by the model of [5], sketched in Figure 1. Without loss of generality, let the work area (which fully contains the movement range) be an axis-aligned square $[-L, L] \times [-L, L]$. The endpoint of the limb (wrist) is assumed to be connected to one end of four imaginary springs, the other end of which slides with no friction along rails forming the boundaries of the work area. Thus, at every time instant each spring is parallel to one of the axes. The analysis of the dynamics can therefore be decomposed into x and y components; below we focus on the x component. All four springs are assumed to have a rest length of zero. Suppose that the position of the wrist at time t is $[x(t), y(t)]$. Then the springs A and B apply forces determined by Hooke's law, namely $k_A(t)\,(L - x(t))$ and $-k_B(t)\,(L + x(t))$, where $k_A(t)$ and $k_B(t)$ are the stiffness coefficients of A and B at time t. To reflect physical constraints on movement in the real world, the model presumes a point mass m at the center of the wrist (i.e. at the cursor location). Furthermore, it is assumed that the movement is damped by a viscous force proportional to the instantaneous velocity, $-\beta v_x(t)$. The viscosity is meant to represent both the resistance of the medium and the elasticity of the muscles. In summary, according to Newton's second law the acceleration of the hand at time t is given by

$$m \cdot a_x(t) = k_A(t) \cdot (L - x(t)) - k_B(t) \cdot (L + x(t)) - \beta \cdot v_x(t), \qquad (1)$$

where $v_x(t)$ is the instantaneous velocity of the wrist at time t along the x axis.
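As a numerical sanity check, the dynamics in (1) and the coefficient recovery described below can be sketched in code. This is a minimal sketch, not the authors' implementation: the parameter values m, β, L, κ and the time step are hypothetical, and the total-stiffness constraint $k_A + k_B = \kappa$ introduced below as Eq. (2) is assumed.

```python
import numpy as np

# Hypothetical parameter values (not from the paper): mass, viscosity,
# half-width of the work area, total stiffness, and integration step.
M, BETA, L, KAPPA, DT = 1.0, 0.5, 15.0, 4.0, 0.1

def simulate_x(kA, kB, x0=0.0, v0=0.0):
    """Integrate the x-component of Eq. (1),
    m*a = kA*(L - x) - kB*(L + x) - beta*v, with semi-implicit Euler."""
    x, v, xs = x0, v0, []
    for ka, kb in zip(kA, kB):
        a = (ka * (L - x) - kb * (L + x) - BETA * v) / M
        v += a * DT
        x += v * DT
        xs.append(x)
    return np.array(xs)

def recover_kA(x):
    """Invert Eq. (1) for kA under the constraint kA + kB = kappa,
    using finite-difference velocity and acceleration estimates."""
    v = np.diff(x) / DT            # v_hat(t)
    a = np.diff(v) / DT            # a_hat(t)
    xt, vt = x[:len(a)], v[:len(a)]
    return (M * a + BETA * vt + KAPPA * (L + xt)) / (2 * L)

# Equal, constant stiffness acts as a damped restoring force toward x = 0.
traj = simulate_x(np.full(300, 2.0), np.full(300, 2.0), x0=5.0)
```

By construction, the recovered kA together with kB = κ - kA reproduces the finite-difference accelerations exactly, whatever the input trajectory.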
Control of movement in this model is realized through varying the stiffness coefficients of the springs: given the current position of the wrist x, the desired acceleration a is achieved by setting $k_A(t)$ and $k_B(t)$ so as to solve (1). This solution is not unique, in general. We note, however, that the physiological meaning of the k's requires them to be non-negative, since the muscles cannot "push". This motivates us to introduce the total stiffness constraint

$$k_A + k_B = \kappa, \qquad (2)$$

where $\kappa$ is a constant chosen so that no feasible acceleration would yield negative $k_A$ or $k_B$. We can now recover the underlying parameters $K = [k_A, k_B, k_C, k_D]$ for the observed movement by applying (1) at each time step as follows. First we estimate the velocities $\hat v_x(t) = x(t+1) - x(t)$ and accelerations $\hat a_x(t) = \hat v_x(t+1) - \hat v_x(t)$. Then, we substitute (2) into (1), yielding

$$\hat k_A(t) = \frac{m \cdot \hat a_x(t) + \beta \cdot \hat v_x(t) + \kappa \cdot (L + x(t))}{2L}. \qquad (3)$$

The value of $k_B(t)$ is then uniquely determined from (2). Repeating these calculations for the y-axis produces the coefficients for springs C and D.

2.2 Decoding neural activity

We now turn to our main goal: inferring the desired movement from a recorded neural signal. We treat this as a supervised learning task. In the training stage, we take a data set in which we have both the recorded neural signal Z(t) and the observed trajectory of hand positions X(t) associated with that signal. From this, we can learn a mapping g from the neural signal to the desired representation of movement. For direct kinematic decoding this means inferring $g: Z(t) \to X(t)$. For decoding with the SBM, this means inferring the spring coefficients, $g: Z(t) \to K(t)$, followed by the calculation $K(t) \to X(t)$ as described above. The SBM formulation also requires a preprocessing step for the training data: we need to convert the observed position trajectory X to the coefficient trajectory K, according to (3). We have focused on two ways of constructing g, described below. Linear filter.
The linear filter (LF) approach [13] models the mapping from firing rates to movement by a linear transformation W applied to a concatenated firing rate vector with a fixed history depth l:

$$X(t) = x_0 + W \tilde Z(t), \qquad (4)$$

where $x_0$ is a constant (bias) term and

$$\tilde Z(t) = \left[ Z^T(t-l), \ldots, Z^T(t) \right]^T. \qquad (5)$$

The dimension of $\tilde Z(t)$, for a recording from N channels, is $1 \times lN$. The transformation W is fit to the training data by solving a least squares problem, and is then used at the decoding stage to predict values of X. Application of the LF to the SBM is straightforward: the target of the mapping is the space of coefficients K, rather than positions X.

Support vector regression. Support Vector Machines (SVMs) are a widely used learning architecture that relies on two key ideas: mapping the data into a (possibly infinite-dimensional) feature space using a kernel function, and optimizing a bound on the generalization error. In the context of regression [16] this means using an ϵ-insensitive loss function, which does not penalize training errors up to ϵ, to fit a linear function in the feature space. SVMs also aim at reducing model complexity by penalizing the norm of the resulting function in the objective. The solution is finally expressed in terms of kernel functions involving a subset of the training examples (the support vectors). The key parameters that affect the performance of SVMs are the value of ϵ, the tradeoff c that governs the penalty on training errors, and the parameters of the kernel function. SVMs have been widely successful in many applications of machine learning. However, their application to the task of neural decoding has so far been limited to the directional center-out task [15]. Here we evaluate SV regression as a method for decoding more general 2D movement. Again, the SVM formulation is readily extended to the SBM (with the target functions being the components of K).

Alternative decoders.
A variety of other decoding approaches have been proposed in the literature. We conducted experiments with three additional algorithms: the Kalman filter [21], multilayer perceptrons [20] and echo-state networks, a recurrent neural network architecture [7]. The Kalman filter uses a linear model of the mapping of neural signals to movement, while the models underlying the other two methods are nonlinear. Our findings can be summarized as follows, for both kinematic decoding and decoding with the spring-based model. First, nonlinear methods perform significantly better than linear ones. Second, there was a trend for the Kalman filter to perform better than the linear filter. Third, among the nonlinear methods SVM tended to perform better than the two neural network architectures. However, these latter differences could not be established with significance. In the following section, we focus on experiments with the linear filter (the de-facto standard decoding method today) and SVM, which achieved the overall best results in our experiments.

3 Experiments

We evaluated the performance of the proposed approach on data sets obtained from two behaving monkeys (Macaca mulatta). The neural signal was obtained with a Cyberkinetics microelectrode array [9] (96 electrodes) implanted in the arm/hand area of MI cortex. The experimental animals performed the tasks described below.

Table 1: Details of experiments. Units: number of distinct units identified after spike sorting. Train, test: length of train and test sequences in seconds.

Session          Units   Train   Test
CL-sequential     49      623     140
LA-continuous     96      244     165
CL-continuous     55      448     140

Sequential reaching movement, described in [13]. Reach targets and a hand position feedback cursor were presented on a video screen in front of the monkey. When a reach target was presented, the animal's task was to move a manipulandum so that the feedback cursor moved into the target and remained in the target for 500 ms, at which time that target was extinguished and a new reach target was presented in a different location. Target locations were drawn i.i.d. from the uniform distribution over the screen surface. This was repeated for up to 10 targets per trial. Upon successful completion of a trial the animal received a juice reward. Hand kinematics and neural activity were simultaneously recorded while the animal performed the task.

Continuous tracking, described in [14]. The monkey was viewing a computer screen on which a visual target appeared in a random, but smooth, sequence of locations. The monkey was trained to follow the target's position with a cursor, using a manipulandum, and received a reward for each successful trial (i.e. when the cursor remained within the target for a duration drawn randomly for each target between 3 and 10 seconds). The recorded neural activity was converted to spike trains by computer-assisted spike-sorting software, and the spike counts were calculated in non-overlapping 70 ms windows. The hand kinematics (obtained by recording the 2D position of the manipulandum) were averaged within each window, to produce an aligned representation.

3.1 Evaluation protocol

In each of the data sets, we selected a segment of the recording to train all the decoders, and a subsequent segment to test the decoding accuracy. Tuning of parameters (the kernel parameters of the SVM or the mass and viscosity of the spring model) was done on a held-out portion of the training segment. We built the firing rate history matrix by concatenating, for each time step, the firing rates for 15 bins. For instance, for monkey CL, continuous tracking, the dimension of the neural signal representation was 825 (55 channels × 15 history bins).
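The history stacking of Eq. (5) and the least-squares fit of the linear filter (4) can be sketched as below. This is a minimal sketch on synthetic data, not the authors' code; the stacking here spans l+1 bins ending at t (the paper's lN dimension suggests l bins, an off-by-one that is immaterial for the sketch), and the bias x0 is absorbed into a column of ones.

```python
import numpy as np

def stack_history(Z, l):
    """Z_tilde(t) = [Z(t-l), ..., Z(t)], one row per valid t (cf. Eq. 5)."""
    T = Z.shape[0]
    return np.hstack([Z[t0:T - l + t0] for t0 in range(l + 1)])

def fit_linear_filter(Z, X, l):
    """Least-squares fit of X(t) = x0 + W Z_tilde(t) (cf. Eq. 4)."""
    A = np.hstack([np.ones((Z.shape[0] - l, 1)), stack_history(Z, l)])
    W, *_ = np.linalg.lstsq(A, X[l:], rcond=None)
    return W

def predict_linear_filter(W, Z, l):
    """Apply a fitted filter to a firing-rate matrix Z (T x N)."""
    A = np.hstack([np.ones((Z.shape[0] - l, 1)), stack_history(Z, l)])
    return A @ W
```

Applying the LF to the SBM amounts to passing the coefficient trajectory K in place of the position matrix X.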
These firing rates were then normalized so that all values would be within [-1, 1]. Basic statistics of the data used in the experiments are given in Table 1. We considered three evaluation criteria:

Correlation coefficient (CC): between the estimated and true value for each of the two spatial coordinates over the entire trajectory:

$$\mathrm{CC} = \frac{\sum_t (x_t - \bar{x})(\hat{x}_t - \bar{\hat{x}})}{\sqrt{\sum_t (x_t - \bar{x})^2 \sum_t (\hat{x}_t - \bar{\hat{x}})^2}}.$$

Mean absolute error (MAE): in the estimated position versus the ground truth:

$$\mathrm{MAE} = \frac{1}{N} \sum_{t=1}^{N} \| X(t) - \hat{X}(t) \|.$$

Power spectrum reconstruction: One of the objectives of a practical decoding algorithm, especially in the context of assistive technology, is to produce movement that appears "natural". As a criterion for evaluating the degree of "naturalness" we use the similarity between the power spectral densities of the true movement and the reconstructed one. Specifically, we calculated the L1 norm between the energy distributions over normalized angular frequencies, taken in the log domain (see Figure 2 for an illustration).

3.2 Results

The reported results for SVM were obtained with the quadratic kernel $k(x, y) = (x \cdot y + 1)^2$; the tradeoff term c was fixed to 100, and the insensitivity parameter ϵ was set to 5 for the spring coefficients and 2 for direct position decoding. The number of support vectors was between 20% and 65% of the training set size.

Table 2: Summary of results on the three datasets. MAE is given in cm, over a workspace of roughly 30 × 30 cm.

                    CL/sequential       LA/continuous       CL/continuous
Decoder             MAE   CCx   CCy     MAE   CCx   CCy     MAE   CCx   CCy
Linear-kinematics   5.3   0.69  0.79    5.03  0.5   0.75    6.66  0.80  0.83
Linear-SBM          5.7   0.64  0.74    5.26  0.46  0.72    6.82  0.77  0.81
SVM-kinematics      4.45  0.80  0.85    4.44  0.60  0.82    3.82  0.86  0.86
SVM-SBM             4.91  0.76  0.81    4.69  0.55  0.80    4.05  0.83  0.84

Table 2 summarizes the MAE and CC measured on the test segment for each method. One observation is that SVM tends to outperform the linear filter, in line with previous observations [12, 15].
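As an aside, the three evaluation criteria defined above can be sketched in code. This is a minimal sketch, not the authors' implementation: the paper estimates spectra with Burg's method (pburg in Matlab, order 4), for which a plain FFT periodogram is substituted here, and the small eps guards the log of near-zero spectral bins.

```python
import numpy as np

def correlation_coefficient(x, xhat):
    """CC between a true and a decoded coordinate trajectory."""
    x, xhat = x - x.mean(), xhat - xhat.mean()
    return (x @ xhat) / np.sqrt((x @ x) * (xhat @ xhat))

def mean_absolute_error(X, Xhat):
    """MAE = (1/N) sum_t ||X(t) - Xhat(t)|| over 2D positions."""
    return np.mean(np.linalg.norm(X - Xhat, axis=1))

def psd_l1_distance(x, xhat, eps=1e-12):
    """L1 norm between log-domain power spectra (periodogram
    stand-in for the Burg estimates used in the paper)."""
    p = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    q = np.abs(np.fft.rfft(xhat)) ** 2 / len(xhat)
    return np.sum(np.abs(np.log10(p + eps) - np.log10(q + eps)))
```

correlation_coefficient is applied separately to the x and y coordinates, matching the CCx/CCy columns of Table 2.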
We believe that this is due to inherent nonlinearity in the underlying relationship, which is better captured by the SVM. Moreover, it is apparent that the decoding accuracy of the SBM is on par with that of conventional kinematic decoding (the observed differences were not significant at the 0.05 level, measured over the per-bin position errors).

Figure 2: Example of power spectrum densities (power per frequency unit in dB/sample, over normalized frequency in π rad/sample) for the true hand trajectory (dotted black), the reconstruction with SVM-kinematics (dashed blue), and the reconstruction with SVM-SBM (solid red). Estimated using Burg's algorithm (pburg in Matlab, order 4). Data from the x coordinate, LA-continuous.

Figure 3: A 1.5 second path segment, true (circles) and reconstructed (squares). Left: SVM on kinematics; right: SVM with SBM. Markers show position averaged in each 70 ms bin. Note the ragged form of the SVM-kinematics trajectory.

Results in Table 2, however, tell only a part of the story. Figure 3 shows, for a segment of 4.2 sec, a typical example of the movement reconstructed with SVM on kinematics versus SVM on spring coefficients. The accuracy in terms of deviation from ground truth is similar; however, the estimate produced by the direct kinematic decoding is significantly more "ragged". Such a discrepancy is not necessarily reflected in standard measures of accuracy such as CC or MAE. Quantitatively, it can be assessed by calculating the L1 norm between the power spectrum densities of the true and reconstructed hand trajectories. The estimated values of this quantity in our experiments are shown in Table 3. These results reflect the relationship shown in Figure 2 (a typical case).

Table 3: Estimated L1 norm between the power spectrum densities of the true and reconstructed trajectories.
                   CL-sequential     LA-continuous     CL-continuous
Decoder              x       y         x       y         x       y
Linear-kinematics  147.41  154.80    199.24  206.61     49.68   44.37
Linear-SBM          71.58   68.24     72.99   80.37     35.68   43.72
SVM-kinematics     143.78  151.35    188.65  196.14     33.96   28.44
SVM-SBM             51.45   52.31     53.05   66.20     20.83   21.15

4 Discussion

The spring-based model proposed in this paper represents a first attempt to directly incorporate realistic physical constraints into a neural decoding model. Our experiments illustrate that the coefficients of an idealized physical system can be decoded from motor cortical firing rates, without statistically significant loss of decoding accuracy compared to more standard direct decoding of kinematics. An advantage of such an approach is that the physical properties of the system damp high-frequency motions, resulting in decoded movements that inherently have the properties of natural movement, with no ad-hoc smoothing. Future work should consider more sophisticated physical models such as a simulated robotic arm and a biophysically motivated musculoskeletal system. With the current state of the art in neural recording and decoding, recovering the parameters of such models may be challenging. In contrast, the approach presented here "summarizes" the effect of a more complicated system with just a few idealized muscle-like elements. Additional experiments are also warranted. In particular, using a robotic feedback device we can simulate the physical system of springs presented here, such that the monkeys control a device with the properties of our model. We hypothesize that the accuracy of decoding spring coefficients from motor cortical activity in this condition will improve. This would suggest that matching the decoding model to the physical system being controlled will improve decoding accuracy. Finally, the real test of physically-based models will come in human NMP experiments. We plan to test human cursor control with kinematic and physically-based decoders.
We hypothesize that the dynamics of the physically-based model will make it easier to control accurately (and perhaps provide a more satisfying experience for the user). This could be a first step toward the neural control of mechanical actuators in the physical world.

Acknowledgments

This work is partially supported by NIH-NINDS R01 NS 50867-01 as part of the NSF/NIH Collaborative Research in Computational Neuroscience Program, by the Office of Naval Research (award N0014-04-1-082), and by the European Neurobotics Program FP6-IST-001917. We thank Matthew Fellows and John Donoghue for providing data, and Reza Shadmehr for helpful conversations.

References

[1] J. M. Carmena, M. A. Lebedev, R. E. Crist, J. E. O'Doherty, D. M. Santucci, D. F. Dimitrov, P. G. Patil, C. S. Henriquez, and M. A. L. Nicolelis. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biology, 1(2):001–016, 2003.
[2] D. Flament and J. Hore. Relations of motor cortex neural discharge to kinematics of passive and active elbow movements in the monkey. Journal of Neurophysiology, 60(4):1268–1284, 1988.
[3] Y. Gao, M. J. Black, E. Bienenstock, S. Shoham, and J. P. Donoghue. Probabilistic inference of hand motion from neural activity in motor cortex. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 213–220, Cambridge, MA, 2002. MIT Press.
[4] A. Georgopoulos, A. Schwartz, and R. Kettner. Neural population coding of movement direction. Science, 233:1416–1419, 1986.
[5] G. E. Hinton and V. Nair. Inferring motor programs from images of handwritten digits. In Advances in Neural Information Processing Systems, 2005.
[6] L. R. Hochberg, J. A. Mukand, G. I. Polykoff, G. M. Friehs, and J. P. Donoghue. Braingate neuromotor prosthesis: Nature and use of neural control signals. In Society for Neuroscience Abst., Program No. 520.17, Online, 2005.
[7] H. Jaeger.
The "echo state" approach to analyzing and training recurrent neural networks. Technical Report GMD Report 148, German National Research Institute for Computer Science, 2001.
[8] R. Kettner, A. Schwartz, and A. Georgopoulos. Primary motor cortex and free arm movements to visual targets in three-dimensional space. III. Positional gradients and population coding of movement direction from various movement origins. Journal of Neuroscience, 8(8):2938–2947, 1988.
[9] E. Maynard, C. Nordhausen, and R. Normann. The Utah intracortical electrode array: A recording structure for potential brain-computer interfaces. Electroencephalography and Clinical Neurophysiology, 102:228–239, 1997.
[10] D. Moran and A. Schwartz. Motor cortical representation of speed and direction during reaching. Journal of Neurophysiology, 82(5):2676–2692, 1999.
[11] L. Paninski, M. Fellows, N. Hatsopoulos, and J. P. Donoghue. Spatiotemporal tuning of motor cortical neurons for hand position and velocity. Journal of Neurophysiology, 91:515–532, 2004.
[12] Y. N. Rao, S.-P. Kim, J. Sanchez, D. Erdogmus, J. Principe, J. Carmena, M. Lebedev, and M. Nicolelis. Learning mappings in brain-machine interfaces with echo state networks. In IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, volume 5, pages 233–236, March 2005.
[13] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue. Brain-machine interface: Instant neural control of a movement signal. Nature, 416:141–142, 2002.
[14] S. Shoham, L. Paninski, M. R. Fellows, N. G. Hatsopoulos, J. P. Donoghue, and R. A. Normann. Statistical encoding model for a primary motor cortical brain-machine interface. IEEE Transactions on Biomedical Engineering, 52(7):1312–1322, 2005.
[15] L. Shpigelman, K. Crammer, R. Paz, E. Vaadia, and Y. Singer. A temporal kernel-based model for tracking hand-movements from neural activities. In Advances in Neural Information Processing Systems, Vancouver, BC, December 2005.
[16] A. J. Smola and B. Schölkopf.
A tutorial on support vector regression. Statistics and Computing, 14:199–222, 2004.
[17] S. Suner, M. R. Fellows, C. Vargas-Irwin, G. K. Nakata, and J. P. Donoghue. Reliability of signals from a chronically implanted, silicon-based electrode array in non-human primate primary motor cortex. IEEE Trans. on Neural Systems and Rehab. Eng., 13(4):524–541, 2005.
[18] D. Taylor, S. Helms Tillery, and A. Schwartz. Direct cortical control of 3D neuroprosthetic devices. Science, 296(5574):1829–1832, 2002.
[19] E. Todorov. Direct cortical control of muscle activation in voluntary arm movements: a model. Nature Neuroscience, 3(4):391–398, April 2000.
[20] J. Wessberg, C. Stambaugh, J. Kralik, P. Beck, M. Laubach, J. Chapin, J. Kim, S. Biggs, M. Srinivasan, and M. Nicolelis. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature, 408:361–365, 2000.
[21] W. Wu, Y. Gao, E. Bienenstock, J. P. Donoghue, and M. J. Black. Bayesian population decoding of motor cortical activity using a Kalman filter. Neural Computation, 18(1):80–118, 2006.
[22] W. Wu, A. Shaikhouni, J. P. Donoghue, and M. J. Black. Closed-loop neural control of cursor motion using a Kalman filter. In Proc. IEEE Engineering in Medicine and Biology Society, pages 4126–4129, Sep 2004.
Conditional mean field

Peter Carbonetto and Nando de Freitas
Department of Computer Science, University of British Columbia, Vancouver, BC, Canada V6T 1Z4
pcarbo@cs.ubc.ca, nando@cs.ubc.ca

Abstract

Despite all the attention paid to variational methods based on sum-product message passing (loopy belief propagation, tree-reweighted sum-product), these methods are still bound to inference on a small set of probabilistic models. Mean field approximations have been applied to a broader set of problems, but the solutions are often poor. We propose a new class of conditionally-specified variational approximations based on mean field theory. While not usable on their own, combined with sequential Monte Carlo they produce guaranteed improvements over conventional mean field. Moreover, experiments on a well-studied problem, inferring the stable configurations of the Ising spin glass, show that the solutions can be significantly better than those obtained using sum-product-based methods.

1 Introduction

Behind all variational methods for inference in probabilistic models lies a basic principle: treat the quantities of interest, which amount to moments of the random variables, as the solution to an optimization problem obtained via convex duality. Since optimizing the dual is rarely easier than the original inference problem, various strategies have arisen out of statistical physics and machine learning for making principled (and unprincipled) approximations to the objective. One such class of techniques, mean field theory, requires that the solution define a distribution that factorizes in such a way that the statistics of interest are easily derived. Mean field remains a popular tool for statistical inference, mainly because it applies to a wide range of problems.
As remarked by Yedidia in [17], however, mean field theory often imposes unrealistic or questionable factorizations, leading to poor solutions. Advances have been made in improving the quality of mean field approximations [17, 22, 26], but their applicability remains limited to specific models. Bethe-Kikuchi approximations overcome some of the severe restrictions on factorizability by decomposing the entropy according to a junction graph [1], for which it is well established that generalized belief propagation updates converge to the stationary points of the resulting optimization problem (provided they converge at all). Related variational approximations based on convex combinations of tree-structured distributions [24] have the added advantage that they possess a unique global optimum (by contrast, we can only hope to discover a local minimum of the Bethe-Kikuchi and mean field objectives). However, both these methods rely on tractable sum-product messages, and hence are limited to Gaussian Markov random fields or discrete random variables. Expectation propagation projections and Monte Carlo approximations to the sum-product messages get around these limitations, but can be unsuitable for dense graphs or can introduce extraordinary computational costs [5, 23]. Thus, there still exist factorized probabilistic models, such as sigmoid belief networks [21] and latent Dirichlet allocation [5], for which mean field remains to date the tractable approximation of choice. Several Monte Carlo methods have been proposed to correct for the discrepancy between the factorized variational approximations and the target distribution. These methods include importance sampling [8, 14] and adaptive Markov chain Monte Carlo (MCMC) [6]. However, none of these techniques scale well to general, high-dimensional state spaces because the variational approximations tend to be too restrictive when used as a proposal distribution.
This is corroborated by experimental results in those papers as well as theoretical results [20]. We propose an entirely new approach that overcomes the problems of the aforementioned methods by constructing a sequence of variational approximations that converges to the target distribution. To accomplish this, we derive a new class of conditionally-specified mean field approximations, and use sequential Monte Carlo (SMC) [7] to obtain samples from them. SMC acts as a mechanism to migrate particles from an easy-to-sample distribution (naive mean field) to a difficult-to-sample one (the distribution of interest), through a sequence of artificial distributions. Each artificial distribution is a conditional mean field approximation, designed in such a way that it is at least as sensible as its predecessor because it recovers dependencies left out by mean field. Sec. 4 explains these ideas thoroughly. The idea of constructing a sequence of distributions has a strong tradition in the literature, dating back to work on simulating the behaviour of polymer chains [19] and counting and integration problems [12]. Recent advances in stochastic simulation have allowed practitioners to extend these ideas to general probabilistic inference [7, 11, 15]. However, very little is known as to how to come up with a good sequence of distributions. Tempering is perhaps the most widely used strategy, due to its ease of implementation and intuitive appeal. At early stages, high global temperatures smooth the modes and allow easy exploration of the state space. Afterward, the temperature is progressively cooled until the original distribution is recovered. The problem is that the variance of the importance weights tends to degenerate around a system’s critical range of temperatures, as observed in [9]. An entirely different approach is to remove constraints (or factors) from the original model, then incrementally reintroduce them. 
This has been a fruitful approach for approximate counting [12], simulation of protein folding, and inference in the Ising model [9]. If, however, a reintroduced constraint has a large effect on the distribution, the particles may again rapidly deteriorate. We limit our study to the Ising spin glass model [16]. Ernst Ising developed his model in order to explain the phenomenon of "spontaneous magnetization" in magnets. Here, we use it as a test bed to investigate the viability of our proposed algorithm. Our intent is not to design an algorithm tuned to sampling the states of the Ising model, but rather to tackle factorized graphical models with arbitrary potentials. Conditional mean field raises many questions, and since we can only hope to answer some in this study, the Ising model represents a respectable first step. We hint at how our ideas might generalize in Sec. 6. The next two sections serve as background for the presentation of our main contribution in Sec. 4.

2 Mean field theory

In this study, we restrict our attention to random vectors X = (X_1, \ldots, X_n)^T, with possible configurations x = (x_1, \ldots, x_n)^T \in \Omega, that admit a distribution belonging to the standard exponential family [25]. A member of this family has a probability density of the form

p(x; \theta) = \exp\{ \theta^T \phi(x) - \Psi(\theta) \},   (1)

where \theta is the canonical vector of parameters, and \phi(x) is the vector of sufficient statistics [25]. The log-partition function \Psi(\theta) ensures that p(x; \theta) defines a valid probability density, and is given by \Psi(\theta) = \log \int \exp\{ \theta^T \phi(x) \}\, dx. Denoting E_\pi\{f(X)\} to be the expected value of a function f(x) with respect to distribution \pi, Jensen's inequality states that f(E_\pi\{X\}) \le E_\pi\{f(X)\} for any convex function f(x) and distribution \pi on X. Using the fact that -\log(x) is convex, we obtain the variational lower bound

\Psi(\theta) = \log E_{p(\cdot; \alpha)}\left\{ \frac{\exp(\theta^T \phi(X))}{p(X; \alpha)} \right\} \ge \theta^T \mu(\alpha) - \int p(x; \alpha) \log p(x; \alpha)\, dx,   (2)

where the mean statistics are defined by \mu(\alpha) \equiv E_{p(\cdot; \alpha)}\{\phi(X)\}.
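The bound (2) can be checked numerically by exhaustive enumeration on a tiny model. This is a sketch with illustrative values (a three-spin model with field terms only, and an arbitrary factorized α); for any α, the right-hand side of (2) should never exceed Ψ(θ).

```python
import itertools, math

# Tiny exponential-family model on n = 3 spins, field terms only (illustrative).
theta = [0.4, -0.3, 0.1]

def log_p_unnorm(x, params):
    """theta^T phi(x) with phi(x) = x (no pairwise terms in this toy)."""
    return sum(t * xi for t, xi in zip(params, x))

configs = list(itertools.product([-1, +1], repeat=3))
Psi = math.log(sum(math.exp(log_p_unnorm(x, theta)) for x in configs))

# Evaluate the lower bound (2) at an arbitrary factorized p(x; alpha).
alpha = [0.2, 0.0, -0.1]
Z_alpha = sum(math.exp(log_p_unnorm(x, alpha)) for x in configs)
bound = 0.0
for x in configs:
    q = math.exp(log_p_unnorm(x, alpha)) / Z_alpha
    bound += q * (log_p_unnorm(x, theta) - math.log(q))  # theta^T mu + H

assert bound <= Psi + 1e-12  # Jensen's inequality: holds for any alpha
```

Varying `alpha` changes the tightness of the bound but never violates it, which is exactly the variational principle stated above.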
The second term on the right-hand side of (2) is the Boltzmann-Shannon entropy of p(x; \alpha), which we denote by H(\alpha). Clearly, some lower bounds of the form (2) are better than others, so the optimization problem is to find a set of parameters \alpha that leads to the tightest bound on the log-partition function. This defines the variational principle. We emphasize that this lower bound holds for any choice of \alpha. A more rigorous treatment follows from analyzing the conjugate of the convex, differentiable function \Psi(\theta) [25]. As it is presented here, the variational principle is of little practical use because no tractable expressions exist for the entropy and mean statistics. There do, however, exist particular choices of the variational parameters \alpha where it is possible to compute them both. We shall examine one particular set of choices, naive mean field, in the context of the Ising spin glass model. At each site i \in \{1, \ldots, n\}, the random variable X_i is defined to be x_i = +1 if the magnetic dipole is in the "up" spin position, or x_i = -1 if it is "down". Each scalar \theta_{ij} defines the interaction between sites i and j. Setting \theta_{ij} > 0 causes attraction between spins, and \theta_{ij} < 0 induces repulsion. Scalars \theta_i define the effect of the external magnetic field on the energy of the system. We use the undirected labelled graph G = (V, E), where V = \{1, \ldots, n\}, to represent the conditional independence structure of the probability measure (there is no edge between i and j if and only if X_i and X_j are conditionally independent given values at all other points of the graph). Associating singleton factors with nodes of G and pairwise factors with its edges, and setting the entries of the sufficient statistics vector to be x_i, \forall i \in V and x_i x_j, \forall (i, j) \in E, we can write the probability density as

p(x; \theta) = \exp\{ \textstyle\sum_{i \in V} \theta_i x_i + \sum_{(i,j) \in E} \theta_{ij} x_i x_j - \Psi(\theta) \}.   (3)
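A minimal numerical sketch of naive mean field for a small instance of this model. It uses the classical fixed-point update µ_i ← tanh(θ_i + Σ_j θ_ij µ_j), which is the form the coordinate ascent updates take for the Ising case; the parameter values below are illustrative (they happen to match the worked example later in the paper, but this sketch is not the authors' implementation).

```python
import math

# Small Ising model: fields theta_i and couplings theta_ij on edges (i, j), i < j.
theta = [0.4, 0.3, -0.5, -0.2]
coupling = {(0, 2): 0.5, (1, 3): 0.5, (2, 3): 0.5, (0, 1): -0.5}

n = len(theta)
neighbors = {i: [] for i in range(n)}
for (i, j), w in coupling.items():
    neighbors[i].append((j, w))
    neighbors[j].append((i, w))

# Damped-free Gauss-Seidel sweeps of mu_i <- tanh(theta_i + sum_j theta_ij mu_j).
mu = [0.0] * n
for _ in range(200):
    for i in range(n):
        field = theta[i] + sum(w * mu[j] for j, w in neighbors[i])
        mu[i] = math.tanh(field)
```

At a fixed point, the variational parameters are α_i = atanh(µ_i) = θ_i + Σ_j θ_ij µ_j, with all pairwise α_ij clamped to zero, exactly the naive mean field restriction described in the text.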
The corresponding variational lower bound on the log-partition function \Psi(\theta) then decomposes as

F(\alpha) \equiv \textstyle\sum_{i \in V} \theta_i \mu_i(\alpha) + \sum_{(i,j) \in E} \theta_{ij} \mu_{ij}(\alpha) + H(\alpha),   (4)

where \mu_i(\alpha) and \mu_{ij}(\alpha) are the expectations of single spins i and pairs of spins (i, j), respectively. Naive mean field restricts the variational parameters \alpha to belong to \{\alpha \mid \forall (i,j) \in E,\ \alpha_{ij} = 0\}. We can compute the lower bound (4) for any \alpha belonging to this subset because we have tractable expressions for the mean statistics and entropy. For the Ising spin glass, the mean statistics are

\mu_i(\alpha) \equiv \textstyle\int x_i\, p(x; \alpha)\, dx = \tanh(\alpha_i)   (5)
\mu_{ij}(\alpha) \equiv \textstyle\int x_i x_j\, p(x; \alpha)\, dx = \mu_i(\alpha)\, \mu_j(\alpha),   (6)

and the entropy is derived to be

H(\alpha) = - \sum_{i \in V} \frac{1 - \mu_i(\alpha)}{2} \log \frac{1 - \mu_i(\alpha)}{2} - \sum_{i \in V} \frac{1 + \mu_i(\alpha)}{2} \log \frac{1 + \mu_i(\alpha)}{2}.   (7)

The standard way to proceed [17, 25] is to derive coordinate ascent updates by equating the derivatives \partial F / \partial \mu_i to zero and solving for \mu_i. Since the variables \mu_i must be valid mean statistics, they are constrained to lie within an envelope known as the marginal polytope [25]. Alternatively, one can solve the optimization problem with respect to the unconstrained variational parameters \alpha. Since it is not possible to obtain the fixed-point equations by isolating each \alpha_i, instead one can easily derive expressions for the gradient \nabla F(\alpha) and Hessian \nabla^2 F(\alpha) and run a nonlinear optimization routine. This approach, as we will see, is necessary for optimizing the conditional mean field objective.

3 Sequential Monte Carlo

Consider a sequence of two distributions, \pi(x) and \pi^\star(x), where the second represents the target. Assuming familiarity with importance sampling, this will be sufficient to explain key concepts underlying SMC, and does not overwhelm the reader with subscripts. See [7] for a detailed description. In the first step, samples x^{(s)} \in \Omega are drawn from some proposal density q(x) and assigned importance weights w(x) = \pi(x)/q(x).
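A toy numerical aside (not from the paper) illustrating why the choice of proposal matters in this first step: when the proposal has lighter tails than the density it is weighting against, the ratio weights are unbounded, so a handful of tail samples can dominate. All densities here are illustrative unnormalized stand-ins.

```python
import math, random

random.seed(0)

# pi plays the role of a light-tailed "mean field"-style density,
# pi_star a heavier-tailed one; both unnormalized.
pi = lambda x: math.exp(-x * x)              # proportional to N(0, 1/2)
pi_star = lambda x: math.exp(-x * x / 4.0)   # proportional to N(0, 2)

# The ratio pi_star(x)/pi(x) = exp(3*x^2/4) grows without bound in |x|,
# so rare draws far from the origin receive enormous weight.
xs = [random.gauss(0.0, math.sqrt(0.5)) for _ in range(5000)]
ws = [pi_star(x) / pi(x) for x in xs]
```

This is the "failure to dominate" phenomenon discussed in the choice of backward-in-time kernel below: a weight evaluated at x = 4 is already more than four orders of magnitude larger than one evaluated at the mode.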
In the second step, a Markov transition kernel K^\star(x' \mid x) shifts each sample towards the target, and the importance weights \tilde w(x, x') compensate for any failure to do so. In effect, the second step consists of extending the path of each particle onto the joint space \Omega \times \Omega. The unbiased importance weights on the joint space are given by

\tilde w(x, x') = \frac{\tilde\pi(x, x')}{\tilde q(x, x')} = \frac{L(x \mid x')\, \pi^\star(x')}{K^\star(x' \mid x)\, \pi(x)} \times w(x),   (8)

where \tilde\pi(x, x') = L(x \mid x')\, \pi^\star(x') is the artificial distribution over the joint space, \tilde q(x, x') = K^\star(x' \mid x)\, q(x) is the corresponding importance distribution, and the "backward-in-time" kernel L(x \mid x') is designed so that it admits \pi(x) as its invariant distribution. Our expectation is that K^\star(x' \mid x) has invariant distribution \pi^\star(x), though it is not required. To prevent potential particle degeneracy in the marginal space, we adopt the standard stratified resampling algorithm [13].

Choice of backward-in-time kernel. Mean field tends to be overconfident in its estimates (although not necessarily so). Loosely speaking, this means that if \pi(x) were to be a mean field approximation, then it would likely have lighter tails than the target distribution \pi^\star(x). If we were to use a suboptimal backward kernel [7, Sec. 3.3.2.3], the importance weights would simplify to

\tilde w(x, x') = \frac{\pi^\star(x)}{\pi(x)} \times w(x).   (9)

Implicitly, this is the choice of backward kernel made in earlier sequential frameworks [11, 15]. Since the mean field approximation \pi(x) might very well fail to "dominate" the target \pi^\star(x), the expression (9) risks having unbounded variance. This is a problem because the weights may change abruptly from one iteration to the next, or give too much importance to too few values x [18]. Instead, Del Moral et al. suggest approximating the optimal backward-in-time kernel [7, Sec. 3.3.2.1] by

L(x \mid x') = \frac{K^\star(x' \mid x)\, \pi(x)}{\int K^\star(x' \mid x)\, \pi(x)\, dx}.   (10)

It offers some hope because the resulting importance weights on the joint space, following (8), are

\tilde w(x, x') = \frac{\pi^\star(x')}{\int K^\star(x' \mid x)\, \pi(x)\, dx} \times w(x).   (11)

If the transition kernel increases the mass of the proposal in regions where \pi(x) is weak relative to \pi^\star(x), the backward kernel (10) will rectify the problems caused by an overconfident proposal.

Choice of Markov transition kernel. The drawback of the backward kernel (10) is that it limits the choice of transition kernel K^\star(x' \mid x), a crucial ingredient to a successful SMC simulation. For instance, we can't use the Metropolis-Hastings algorithm because its transition kernel involves an integral that does not admit a closed form [18]. One transition kernel which fits our requirements and is widely applicable is a mixture of kernels based on the random-scan Gibbs sampler [18]. Denoting \delta_y(x) to be the Dirac measure at location y, the transition kernel with invariant distribution \pi^\star(x) is

K^\star(x' \mid x) = \textstyle\sum_k \rho_k\, \pi^\star(x'_k \mid x_{-k})\, \delta_{x_{-k}}(x'_{-k}),   (12)

where \pi^\star(x_k \mid x_{-k}) is the conditional density of x_k given values at all other sites, and \rho_k is the probability of shifting the samples according to the Gibbs kernel at site k. Following (11) and the identity for conditional probability, we arrive at the expression for the importance weights,

\tilde w(x, x') = \frac{\pi^\star(x')}{\pi(x')} \left[ \sum_k \rho_k\, \frac{\pi^\star(x'_k \mid x'_{-k})}{\pi(x'_k \mid x'_{-k})} \right]^{-1} \times w(x).   (13)

Normalized estimator. For almost all problems in Bayesian analysis (and certainly the one considered in this paper), the densities are only known up to a normalizing constant. That is, only f(x) and f^\star(x) are known pointwise, where \pi(x) = f(x)/Z and \pi^\star(x) = f^\star(x)/Z^\star. The normalized importance sampling estimator [18] yields (asymptotically unbiased) importance weights \tilde w(x, x') \propto \hat w(x, x'), where the unnormalized importance weights \hat w(x, x') in the joint space remain the same as (13), except that we substitute f(x) for \pi(x), and f^\star(x) for \pi^\star(x).
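A sketch of the random-scan Gibbs mixture kernel (12) for an Ising-type model. The helper names and data layout are illustrative, not the authors' code; the single-site conditional P(x_k = +1 | x_{-k}) = 1/(1 + e^{-2 field_k}) with field_k = θ_k + Σ_j θ_kj x_j follows from the density (3).

```python
import math, random

random.seed(1)

def gibbs_conditional(x, k, theta, coupling_neighbors):
    """P(x_k = +1 | x_-k) for an Ising model; coupling_neighbors[k] lists
    pairs (j, theta_kj) for neighbours j of site k."""
    field = theta[k] + sum(w * x[j] for j, w in coupling_neighbors[k])
    return 1.0 / (1.0 + math.exp(-2.0 * field))

def random_scan_gibbs_move(x, theta, coupling_neighbors, rho):
    """One draw from the mixture kernel (12): pick a site k with probability
    rho[k], then resample x_k from its full conditional."""
    k = random.choices(range(len(x)), weights=rho)[0]
    p_up = gibbs_conditional(x, k, theta, coupling_neighbors)
    x = list(x)
    x[k] = +1 if random.random() < p_up else -1
    return x, k

# Illustrative single move on a 3-spin chain.
x_new, k = random_scan_gibbs_move(
    [+1, -1, +1], theta=[0.1, -0.2, 0.3],
    coupling_neighbors={0: [(1, 0.5)], 1: [(0, 0.5), (2, 0.5)], 2: [(1, 0.5)]},
    rho=[1 / 3, 1 / 3, 1 / 3])
```

Because only one coordinate changes per move, the sum in the bracket of (13) has one nonzero-correction term per site, which is what makes the weight update tractable.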
The normalized estimator can recover a Monte Carlo estimate of the normalizing constant Z^\star via the recursion

Z^\star \approx Z \times \textstyle\sum_s \hat w^{(s)},   (14)

provided we already have a good estimate of Z [7].

4 Conditional mean field

We start with a partition R (equivalence relation) of the set of vertices V. Elements of R, which we denote with the capital letters A and B, are disjoint subsets of V. Our strategy is to come up with a good naive mean field approximation to the conditional density p(x_A \mid x_{-A}; \theta) for every equivalence class A \in R, and then again for every configuration x_{-A}. Here, we denote x_A to be the configuration x restricted to set A \subseteq V, and x_{-A} to be the restriction of x to V \setminus A. The crux of the matter is that for any point \alpha, the functions p(x_A \mid x_{-A}; \alpha) only represent valid conditional densities if they correspond to some unique joint, as discussed in [2]. Fortunately, under the Ising model the terms p(x_A \mid x_{-A}; \alpha) represent valid conditionals for any \alpha. What we have is a slight generalization of the auto-logistic model [3], for which the joint is always known. As noted by Besag, "although this is derived classically from thermodynamic principles, it is remarkable that the Ising model follows necessarily as the very simplest non-trivial binary Markov random field" [4]. Conditional mean field forces each conditional p(x_A \mid x_{-A}; \alpha) to decompose as a product of marginals p(x_i \mid x_{-A}; \alpha), for all i \in A. As a result, \alpha_{ij} must be zero for every edge (i, j) \in E(A), where we define E(A) \equiv \{(i, j) \mid i \in A,\ j \in A\} to be the set of edges contained by the vertices in subset A. Notice that we have a set of free variational parameters \alpha_{ij} defined on the edges (i, j) that straddle subsets of the partition. Formally, these are the edges that belong to C_R \equiv \{(i, j) \mid \forall A \in R,\ (i, j) \notin E(A)\}. We call C_R the set of "connecting edges". Our variational formulation consists of competing objectives, since the conditionals p(x_A \mid x_{-A}; \alpha) share a common set of parameters.
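The bookkeeping for E(A) and the connecting edges C_R is simple to make concrete. The sketch below uses an illustrative 2×2 grid and partition, not anything from the paper's experiments.

```python
# Within-subset edges E(A) and connecting edges C_R for a partition R of V.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]  # a 2x2 grid (illustrative)
R = [{0, 1}, {2, 3}]                      # a partition of V = {0, 1, 2, 3}

def within_edges(A, edges):
    """E(A): edges with both endpoints inside subset A."""
    return [(i, j) for (i, j) in edges if i in A and j in A]

# C_R: edges contained in no subset of the partition, i.e. those that
# straddle two subsets. These carry the free parameters alpha_ij.
connecting = [e for e in edges
              if all(e not in within_edges(A, edges) for A in R)]

assert connecting == [(0, 2), (1, 3)]
```

For this partition, the within-subset edges (0,1) and (2,3) have α_ij clamped to zero, while the two connecting edges retain free (or, later in the paper, fixed to θ_ij) pairwise parameters.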
We formulate the final objective function as a linear combination of conditional objectives. A conditional mean field optimization problem with respect to graph partition R and linear weights \lambda is of the form

maximize   F_{R,\lambda}(\alpha) \equiv \sum_{A \in R} \sum_{x_{N(A)}} \lambda_A(x_{N(A)})\, F_A(\alpha, x_{N(A)})
subject to   \alpha_{ij} = 0, for all (i, j) \in E \setminus C_R.   (15)

We extend the notion of neighbours to sets, so that N(A) is the Markov blanket of A. The nonnegative scalars \lambda_A(x_{N(A)}) are defined for every equivalence class A \in R and configuration x_{N(A)}. Each conditional objective F_A(\alpha, x_{N(A)}) represents a naive mean field lower bound to the log-partition function of the conditional density p(x_A \mid x_{-A}; \theta) = p(x_A \mid x_{N(A)}; \theta). For the Ising model, F_A(\alpha, x_{N(A)}) follows from the exact same steps used in the derivation of the naive mean field lower bound in Sec. 2, except that we replace the joint by a conditional. We obtain the expression

F_A(\alpha, x_{N(A)}) = \sum_{i \in A} \theta_i \mu_i(\alpha, x_{N(A)}) + \sum_{(i,j) \in E(A)} \theta_{ij} \mu_{ij}(\alpha, x_{N(A)}) + \sum_{i \in A} \sum_{j \in N(i) \cap N(A)} \theta_{ij} x_j \mu_i(\alpha, x_{N(A)}) + H_A(\alpha, x_{N(A)}),   (16)

with the conditional mean statistics for i \in A, j \in A given by

\mu_i(\alpha, x_{N(A)}) \equiv \textstyle\int x_i\, p(x_A \mid x_{N(A)}; \alpha)\, dx = \tanh\big( \alpha_i + \sum_{j \in N(i) \cap N(A)} \alpha_{ij} x_j \big)   (17)
\mu_{ij}(\alpha, x_{N(A)}) \equiv \textstyle\int x_i x_j\, p(x_A \mid x_{N(A)}; \alpha)\, dx = \mu_i(\alpha, x_{N(A)})\, \mu_j(\alpha, x_{N(A)}).   (18)

The entropy is identical to (7), with the mean statistics replaced with their conditional counterparts. Notice the appearance of the new terms in (16). These terms account for the interaction between the random variables on the border of the partition. We can no longer optimize \mu following the standard approach; we cannot treat the \mu_i(\alpha, x_{N(A)}) as independent variables for all x_{N(A)}, as the solution would no longer define an Ising model (or even a valid probability density, as we discussed). Instead, we optimize with respect to the parameters \alpha, taking derivatives \nabla F_{R,\lambda}(\alpha) and \nabla^2 F_{R,\lambda}(\alpha). We have yet to address the question: how to select the scalars \lambda?
It stands to reason that we should place greater emphasis on those conditionals that are realised more often, and set \lambda_A(x_{N(A)}) \propto p(x_{N(A)}; \theta). Of course, these probabilities aren't available! Equally problematic is the fact that (15) may involve nearly as many terms as there are possible worlds, hence offering little improvement over the naive solution. As it turns out, a greedy choice resolves both issues. Supposing that we are at some intermediate stage in the SMC algorithm (see Sec. 4.1), a greedy but not unreasonable choice is to set \lambda_A(x_{N(A)}) to be the current Monte Carlo estimate of the marginal p(x_{N(A)}; \theta),

\lambda_A(x_{N(A)}) = \textstyle\sum_s w^{(s)}\, \delta_{x^{(s)}_{N(A)}}(x_{N(A)}).   (19)

Happily, the number of terms in (15) is now on the order of the number of particles. Unlike standard naive mean field, conditional mean field optimizes over the pairwise interactions \alpha_{ij} defined on the connecting edges (i, j) \in C_R. In our study, we fix these parameters to \alpha_{ij} = \theta_{ij}. This choice is convenient for two reasons. First, the objective is separable on the subsets of the partition. Second, the conditional objective of a singleton subset has a unique maximum at \alpha_i = \theta_i, so any solution to (15) is guaranteed to recover the original distribution when |R| = n.

4.1 The conditional mean field algorithm

We propose an SMC algorithm that produces progressively refined particle estimates of the mean statistics, in which conditional mean field acts in a supporting role. The initial SMC distribution is obtained by solving (15) for R = \{V\}, which amounts to the mean field approximation derived in Sec. 2. In subsequent steps, we iteratively solve (15), update the estimates of the mean statistics by reweighting (see (20)) and occasionally resampling the particles, then we split the partition until we cannot split it anymore, at which point |R| = n and we recover the target p(x; \theta).
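The greedy choice (19) is just a weighted empirical frequency of boundary configurations among the particles. Here is a minimal sketch (function and variable names are illustrative):

```python
from collections import defaultdict

def estimate_lambda(particles, weights, boundary_sites):
    """Estimate lambda_A(x_N(A)) per (19): the total normalized particle
    weight landing on each configuration of the boundary sites N(A)."""
    counts = defaultdict(float)
    for x, w in zip(particles, weights):
        key = tuple(x[j] for j in boundary_sites)
        counts[key] += w
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Three toy particles over three spins, with normalized weights.
particles = [(+1, -1, +1), (+1, +1, +1), (-1, -1, -1)]
weights = [0.5, 0.3, 0.2]
lam = estimate_lambda(particles, weights, boundary_sites=[0, 2])
```

Only boundary configurations actually visited by particles receive nonzero λ, which is why the number of terms in (15) collapses to the order of the number of particles.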
[Figure 1: The graphs on the left depict the Markov properties of the conditional mean field approximations in steps 1 to 4. Graph #4 recovers the target. In the right plot, the solid line is the evolution of the estimate of the log-partition function in SMC steps 1 to 4. The dashed line is the true value.]

It is easy to draw samples from the initial fully-factorized distribution. It is also easy to compute its log-partition function, as \Psi(\alpha) = \sum_{i \in V} \log(2 \cosh(\alpha_i)). Note that this estimate is not a variational lower bound. Let's now suppose we are at some intermediate step in the algorithm. We currently have a particle estimate of the R-partition conditional mean field approximation p(x; \alpha) with samples x^{(s)} and marginal importance weights w^{(s)}. To construct the next artificial distribution p(x; \alpha^\star) in the sequence, we choose a finer partitioning of the graph, R^\star, set the weights \lambda^\star according to (19), and use a nonlinear solver to find a local maximum \alpha^\star of (15). The solver is initialized to \alpha^\star_i = \theta_i. We require that the new graph partition be a refinement of the old one: for every B \in R^\star, B \subseteq A for some A \in R. In this manner, we ensure that the sequence is progressing toward the target (provided R \ne R^\star), and that it is always possible to evaluate the importance weights. It is not understood how to tractably choose a good sequence of partitions, so we select them in an arbitrary manner. Next, we use the random-scan Gibbs sampler (12) to shift the particles toward the new distribution, where the Gibbs sites k correspond to the subsets B \in R^\star. We set the mixture probabilities of the Markov transition kernel to \rho_B = |B|/n. Following (13), the expression for the unnormalized importance weights is

\hat w(x, x') = \frac{\exp\{ \sum_i \alpha^\star_i x'_i + \sum_{(i,j)} \alpha^\star_{ij} x'_i x'_j \}}{\exp\{ \sum_i \alpha_i x'_i + \sum_{(i,j)} \alpha_{ij} x'_i x'_j \}} \left[ \sum_{B \in R^\star} \rho_B \prod_{i \in B} \frac{\pi(x'_i \mid x'_{N(B)}; \alpha^\star)}{\pi(x'_i \mid x'_{N(A)}; \alpha)} \right]^{-1} \times w(x),   (20)

where the single-site conditionals are \pi(x_i \mid x_{N(A)}; \alpha) = (1 + x_i \mu_i(\alpha, x_{N(A)}))/2 and A \in R is the unique subset containing B \in R^\star.
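The refinement requirement on the partition sequence (every new subset must be contained in an old one) can be sketched as follows. The halving rule used here is arbitrary, consistent with the text's remark that the partitions are selected in an arbitrary manner.

```python
def split_partition(R):
    """Refine a partition by halving every non-singleton subset (arbitrary rule)."""
    out = []
    for A in R:
        A = sorted(A)
        if len(A) == 1:
            out.append(set(A))
        else:
            h = len(A) // 2
            out.extend([set(A[:h]), set(A[h:])])
    return out

V = set(range(8))
R = [V]                    # start from the naive mean field partition R = {V}
schedule = [R]
while any(len(A) > 1 for A in R):
    R = split_partition(R)
    schedule.append(R)

# Each step is a refinement: every new subset lies inside some old subset,
# and the final partition consists of singletons (|R| = n, the target).
for prev, cur in zip(schedule, schedule[1:]):
    assert all(any(B <= A for A in prev) for B in cur)
```

For eight vertices this produces the schedule {V} → two subsets of four → four pairs → eight singletons, i.e. four SMC stages in total.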
The new SMC estimate of the log-partition function is \Psi(\alpha^\star) \approx \Psi(\alpha) + \log \sum_s \hat w^{(s)}. To obtain the particle estimate of the new distribution, we normalize the weights \tilde w^{(s)} \propto \hat w^{(s)}, assign the marginal importance weights w^{(s)} \leftarrow \tilde w^{(s)}, and set x^{(s)} \leftarrow (x')^{(s)}. We are now ready to move to the next iteration. Let's look at a small example to see how this works.

Example. Consider an Ising model with n = 4 and parameters \theta_{1:4} = \frac{1}{10}(4, 3, -5, -2), \theta_{13} = \theta_{24} = \theta_{34} = +\frac{1}{2} and \theta_{12} = -\frac{1}{2}. We assume we have enough particles to recover the distributions almost perfectly. Setting R = \{\{1, 2, 3, 4\}\}, the first artificial distribution is the naive mean field solution \alpha_{1:4} = (0.09, 0.03, -0.68, -0.48) with \Psi(\alpha) = 3.10. Knowing that the true mean statistics are \mu_{1:4} = (0.11, 0.07, -0.40, -0.27), and Var(X_i) = 1 - \mu_i^2, it is easy to see that naive mean field largely underestimates the variance of the spins. In step 2, we split the partition into R = \{\{1, 2\}, \{3, 4\}\}, and the new conditional mean field approximation is given by \alpha_{1:4} = (0.39, 0.27, -0.66, -0.43), with potentials \alpha_{13} = \theta_{13}, \alpha_{24} = \theta_{24} on the connecting edges C_R. The second distribution recovers the two dependencies between the subsets, as depicted in Fig. 1. Step 3 then splits subset \{1, 2\}, and we get \alpha = (0.40, 0.30, -0.64, -0.42) by setting \lambda according to the weighted samples from step 2. Notice that \alpha_1 = \theta_1, \alpha_2 = \theta_2. Step 4 recovers the original distribution, at which point the estimate of the log-partition function comes close to the exact solution, as shown in Fig. 1. In this example, \Psi(\alpha) happens to underestimate \Psi(\theta), but in other examples we may get overestimates. The random-scan Gibbs sampler can mix poorly, especially on a fine graph partition. Gradually changing the parameters with tempered artificial distributions [7, Sec. 2.3.1] p(x; \alpha)^{1-\gamma} p(x; \alpha^\star)^{\gamma} gives the transition kernel more opportunity to correctly migrate the samples to the next distribution.
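Because this example has only 2^4 = 16 configurations, the exact quantities it quotes can be recomputed by brute-force enumeration. The sketch below does exactly that (it is a check, not a reproduction of the SMC run; spin indices are shifted to 0-based).

```python
import itertools, math

# Parameters of the 4-spin example (0-based indices).
theta = [0.4, 0.3, -0.5, -0.2]
coupling = {(0, 2): 0.5, (1, 3): 0.5, (2, 3): 0.5, (0, 1): -0.5}

def energy(x):
    """theta^T phi(x): fields plus pairwise couplings."""
    return (sum(t * xi for t, xi in zip(theta, x))
            + sum(w * x[i] * x[j] for (i, j), w in coupling.items()))

configs = list(itertools.product([-1, +1], repeat=4))
Z = sum(math.exp(energy(x)) for x in configs)
log_partition = math.log(Z)

# Exact mean statistics mu_i = E[x_i].
mu = [sum(x[i] * math.exp(energy(x)) for x in configs) / Z for i in range(4)]
```

Against these exact values one can check, as the text notes, that the naive mean field means tanh(α_i) concentrate too much probability mass, i.e. they underestimate Var(X_i) = 1 − µ_i².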
To optimize (15), we used a stable modification to Newton's method that maintains a quadratic approximation to the objective with a positive definite Hessian. In light of our experiences, a better choice might have been to sacrifice the quadratic convergence rate for a limited-memory Hessian approximation or conjugate gradient; the optimization routine was the computational bottleneck on dense graphs. Even though the solver is executed at every iteration of SMC, the separability of the objective (15) means that the computational expense decreases significantly at every iteration. To our knowledge, this is the only SMC implementation in which the next distribution in the sequence is constructed dynamically according to the particle approximation from the previous step.

[Figure 2: (a) Estimate of the 12×12 grid log-partition function for each iteration of SMC. (c) Same, for the fully-connected graph with 26 nodes. We omitted the tree-reweighted upper bound because it is way off the map. Note that these plots will vary slightly for each simulation. (b) Average error of the mean statistics according to the hot coupling (HC), conditional mean field algorithm (CMF), Bethe-Kikuchi variational approximation (B-K), and tree-reweighted upper bound (TRW) estimates. The maximum possible average error is 2. For the HC and CMF algorithms, 95% of the estimates fall within the shaded regions according to a sample of 10 simulations.]

5 Experiments

We conduct experiments on two Ising models, one defined on a 12×12 grid, and the other on a fully-connected graph with 26 nodes. The model sizes approach the limit of what we can compute exactly for the purposes of evaluation. The magnetic fields are generated by drawing each \theta_i uniformly from [-1, 1], and the couplings by drawing each \theta_{ij} uniformly from \{-1/2, +1/2\}. Both models exhibit strong and conflicting pairwise interactions, so it is expected that rudimentary MCMC methods such as Gibbs sampling will get "stuck" in local modes [9].
Our algorithm settings are as follows. We use 1000 particles (as with most particle methods, the running time is proportional to the number of particles), and we temper across successive distributions with a linear inverse temperature schedule of length 100. The particles are resampled when the effective sample size [18] drops below 1/2. We compare our results with the "hot coupling" SMC algorithm described in [9] (appropriately, using the same algorithm settings), and with two sum-product methods based on Bethe-Kikuchi approximations [1] and tree-reweighted upper bounds [24]. We adopt the simplest formulation of both methods, in which the regions (or junction graph nodes) are defined as the edges E. Since loopy belief propagation failed to converge for the complete graph, we implemented the convergent double-loop algorithm of [10]. The results of the experiments are summarized in Fig. 2. The plots on the left and right show that the estimate of the log-partition function, for the most part, moves to the exact solution as the graph is partitioned into smaller and smaller pieces. Both Bethe-Kikuchi approximations and tree-reweighted upper bounds provide good approximations to the grid model. Indeed, the former recovers the log-partition function almost perfectly. However, these approximations break down as soon as they encounter a dense, frustrated model. This is consistent with the results observed in other experiments [9, 24]. The SMC algorithms proposed here and in [9], by contrast, produce significantly improved estimates of the mean statistics. It is surprising that we achieve similar performance to hot coupling [9], given that we do not exploit the tractability of sum-product messages in the Ising model (which would offer guaranteed improvements due to the Rao-Blackwell theorem).
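The two particle-maintenance routines used here, the effective sample size criterion and stratified resampling [13], are standard; a minimal implementation sketch (with normalized weights, ESS lies in [1, N], and the threshold quoted above corresponds to ESS/N < 1/2):

```python
import random

def effective_sample_size(weights):
    """ESS = 1 / sum(w_s^2) for normalized weights; equals N when the
    weights are uniform and 1 when a single particle carries all the mass."""
    return 1.0 / sum(w * w for w in weights)

def stratified_resample(weights, rng=random):
    """Stratified resampling: one uniform draw per stratum [k/N, (k+1)/N),
    matched against the cumulative normalized weights."""
    n = len(weights)
    positions = [(k + rng.random()) / n for k in range(n)]
    indices, cum, i = [], weights[0], 0
    for u in positions:  # positions are increasing, so one pass suffices
        while u > cum and i < n - 1:
            i += 1
            cum += weights[i]
        indices.append(i)
    return indices
```

Resampling only when ESS/N falls below the threshold avoids needlessly discarding diversity in the particle set while still curbing weight degeneracy.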
6 Conclusions and discussion

We presented a sequential Monte Carlo algorithm in which each artificial distribution is the solution to a conditionally-specified mean field optimization problem. We believe that the extra expense of nonlinear optimization at each step may be warranted in the long run, as our method holds promise in solving more difficult inference problems, problems where Monte Carlo and variational methods alone perform poorly. We hypothesize that our approach is superior to methods that "prune" constraints (or factors), but further exploration in other problems is needed to verify this theory.

Beyond mean field. As noted in [22], naive mean field implies complete factorizability, which is not necessary under the Ising model. A number of refinements are possible. However, this is not a research direction we will pursue. Bethe-Kikuchi approximations based on junction graphs have many merits, but they cannot be considered candidates for our framework because they produce estimates of local mean statistics without defining a joint distribution. Tree-reweighted upper bounds are appealing because they tend to be underconfident, but again we have the same difficulty.

Extending to other members of the exponential family. In general, the joint is not available in analytic form given expressions for the conditionals, but there are still some encouraging signs. For one, we can use Brook's lemma [3, Sec. 2] to derive an expression for the importance weights that does not involve the joint. Furthermore, conditions for guaranteeing the validity of conditional densities have been extensively studied in multivariate [2] and spatial statistics [3].

Acknowledgments We are indebted to Arnaud Doucet and Firas Hamze for invaluable discussions, to Martin Wainwright for providing his code, and to the Natural Sciences and Engineering Research Council of Canada for their support.

References [1] S. M. Aji and R. J. McEliece.
The Generalized distributive law and free energy minimization. In Proceedings of the 39th Allerton Conference, pages 672–681, 2001. [2] B. Arnold, E. Castillo, and J.-M. Sarabia. Conditional Specification of Statistical Models. Springer, 1999. [3] J. Besag. Spatial interaction and the statistical analysis of lattice systems. J. Roy. Statist. Soc., Ser. B, 36:192–236, 1974. [4] J. Besag. Comment to “Conditionally specified distributions”. Statist. Sci., 16:265–267, 2001. [5] W. Buntine and A. Jakulin. Applying discrete PCA in data analysis. In Uncertainty in Artificial Intelligence, volume 20, pages 59–66, 2004. [6] N. de Freitas, P. Højen-Sørensen, M. I. Jordan, and S. Russell. Variational MCMC. In Uncertainty in Artificial Intelligence, volume 17, pages 120–127, 2001. [7] P. del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. J. Roy. Statist. Soc., Ser. B, 68:411–436, 2006. [8] Z. Ghahramani and M. J. Beal. Variational inference for Bayesian mixtures of factor analysers. In Advances in Neural Information Processing Systems, volume 12, pages 449–455, 1999. [9] F. Hamze and N. de Freitas. Hot Coupling: a particle approach to inference and normalization on pairwise undirected graphs. Advances in Neural Information Processing Systems, 18:491–498, 2005. [10] T. Heskes, K. Albers, and B. Kappen. Approximate inference and constrained optimization. In Uncertainty in Artificial Intelligence, volume 19, pages 313–320, 2003. [11] C. Jarzynski. Nonequilibrium equality for free energy differences. Phys. Rev. Lett., 78:2690–2693, 1997. [12] M. Jerrum and A. Sinclair. The Markov chain Monte Carlo method: an approach to approximate counting and integration. In Approximation Algorithms for NP-hard Problems, pages 482–520. PWS Pubs., 1996. [13] G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. J. Comput. Graph. Statist., 5:1–25, 1996. [14] P. Muyan and N. de Freitas. 
A blessing of dimensionality: measure concentration and probabilistic inference. In Proceedings of the 19th Workshop on Artificial Intelligence and Statistics, 2003. [15] R. M. Neal. Annealed importance sampling. Statist. and Comput., 11:125–139, 2001. [16] M. Newman and G. Barkema. Monte Carlo Methods in Statistical Physics. Oxford Univ. Press, 1999. [17] M. Opper and D. Saad, editors. Advanced Mean Field Methods, Theory and Practice. MIT Press, 2001. [18] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer, 2nd edition, 2004. [19] M. N. Rosenbluth and A. W. Rosenbluth. Monte Carlo calculation of the average extension of molecular chains. J. Chem. Phys., 23:356–359, 1955. [20] J. S. Sadowsky and J. A. Bucklew. On large deviations theory and asymptotically efficient Monte Carlo estimation. IEEE Trans. Inform. Theory, 36:579–588, 1990. [21] L. K. Saul, T. Jaakkola, and M. I. Jordan. Mean field theory for sigmoid belief networks. J. Artificial Intelligence Res., 4:61–76, 1996. [22] L. K. Saul and M. I. Jordan. Exploiting tractable structures in intractable networks. In Advances in Neural Information Processing Systems, volume 8, pages 486–492, 1995. [23] E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In Computer Vision and Pattern Recognition, volume I, pages 605–612, 2003. [24] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. IEEE Trans. Inform. Theory, 51:2313–2335, 2005. [25] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Technical report, EECS Dept., University of California, Berkeley, 2003. [26] W. Wiegerinck. Variational approximations between mean field theory and the junction tree algorithm. In Uncertainty in Artificial Intelligence, volume 16, pages 626–633, 2000.
2006
Hidden Markov Dirichlet Process: Modeling Genetic Recombination in Open Ancestral Space

Kyung-Ah Sohn, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, ksohn@cs.cmu.edu
Eric P. Xing, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, epxing@cs.cmu.edu

Abstract We present a new statistical framework called hidden Markov Dirichlet process (HMDP) to jointly model the genetic recombinations among a possibly infinite number of founders and the coalescence-with-mutation events in the resulting genealogies. The HMDP posits that a haplotype of genetic markers is generated by a sequence of recombination events that select an ancestor for each locus from an unbounded set of founders according to a first-order Markov transition process. Conjoining this process with a mutation model, our method accommodates both between-lineage recombination and within-lineage sequence variations, and leads to a compact and natural interpretation of the population structure and inheritance process underlying haplotype data. We have developed an efficient sampling algorithm for HMDP based on a two-level nested Pólya urn scheme. On both simulated and real SNP haplotype data, our method performs competitively or significantly better than extant methods in uncovering the recombination hotspots along chromosomal loci; in addition, it also infers the ancestral genetic patterns and offers a highly accurate map of ancestral compositions of modern populations.

1 Introduction

Recombinations between ancestral chromosomes during meiosis play a key role in shaping the patterns of linkage disequilibrium (LD)—the non-random association of alleles at different loci—in a population.
When a recombination occurs between two loci, it tends to decouple the alleles carried at those loci in its descendants and thus reduce LD; uneven occurrence of recombination events along chromosomal regions during genetic history can lead to "block structures" in molecular genetic polymorphisms such that within each block only a low level of diversity is present in a population. The problem of inferring chromosomal recombination hotspots is essential for understanding the origin and characteristics of genome variations; several combinatorial and statistical approaches have been developed for uncovering optimum block boundaries from single nucleotide polymorphism (SNP) haplotypes [Daly et al., 2001; Anderson and Novembre, 2003; Patil et al., 2001; Zhang et al., 2002], and these advances have important applications in genetic analysis of disease propensities and other complex traits. The deluge of SNP data also fuels the long-standing interest in analyzing patterns of genetic variations to reconstruct the evolutionary history and ancestral structures of human populations, using, for example, variants of admixture models on genetic polymorphisms [Rosenberg et al., 2002]. These advances notwithstanding, the statistical methodologies developed so far mostly deal with LD analysis and ancestral inference separately, using specialized models that do not capture the close statistical and genetic relationships of these two problems. Moreover, most of these approaches ignore the inherent uncertainty in the genetic complexity (e.g., the number of genetic founders of a population) of the data and rely on inflexible models built on a pre-fixed, closed genetic space. Recently, Xing et al.
[2004; 2006] have developed a nonparametric Bayesian framework for modeling genetic polymorphisms based on Dirichlet process mixtures and extensions, which allows more flexible control over the number of genetic founders than the statistical methods proposed thus far. In this paper, we build on this approach and present a unified framework to model the complex genetic inheritance process, allowing recombinations among a possibly infinite number of founding alleles and coalescence-with-mutation events in the resulting genealogies.

Figure 1: An illustration of a hidden Markov Dirichlet process for haplotype recombination and inheritance.

We assume that individual chromosomes in a modern population originated from an unknown number of ancestral haplotypes via biased random recombinations and mutations (Fig 1). The recombinations between the ancestors follow a state-transition process we refer to as a hidden Markov Dirichlet process (originating from the infinite HMM by Beal et al. [2001]), which travels in an open ancestor space, with nonstationary recombination rates depending on the genetic distances between SNP loci. Our model draws inspiration from the HMM proposed in [Greenspan and Geiger, 2003], but we employ a two-level Pólya urn scheme akin to the hierarchical DP [Teh et al., 2004] to accommodate an open ancestor space, and allow full posterior inference of the recombination sites, mutation rates, haplotype origins, ancestor patterns, etc., conditioning on phased SNP data, rather than estimating them using information-theoretic or maximum likelihood principles. On both simulated and real genetic data, our model and algorithm show competitive or superior performance on a number of genetic inference tasks over state-of-the-art parametric methods.
2 Hidden Markov Dirichlet Process for Recombination

Sequentially choosing recombination targets from a set of ancestral chromosomes can be modeled as a hidden Markov process [Niu et al., 2002; Greenspan and Geiger, 2003], in which the hidden states correspond to the indices of the candidate chromosomes, the transition probabilities correspond to the recombination rates between the recombining chromosome pairs, and the emission model corresponds to a mutation process that passes the chosen chromosome region in the ancestors to the descendants. When the number of ancestral chromosomes is not known, it is natural to consider an HMM whose state space is countably infinite [Beal et al., 2001; Teh et al., 2004]. In this section, we describe such an infinite HMM formalism, which we call a hidden Markov Dirichlet process, for modeling recombination in an open ancestral space.

2.1 Dirichlet Process Mixtures

For self-containedness, we begin with a brief recap of the basic Dirichlet process mixture model proposed in Xing et al. [2004] for haplotype inheritance without recombination. A haplotype refers to the joint allele configuration of a contiguous list of SNPs located on a single chromosome (Fig 1). Under a well-known genetic model known as coalescence-with-mutation (but without recombination), one can treat a haplotype from a modern individual as a descendant of an unknown ancestor haplotype (i.e., a founder) via random mutations that alter the allelic states of some SNPs. It can be shown that such a coalescent process in an infinite population leads to a partition of the population that can be succinctly captured by the following Pólya urn scheme. Consider an urn that at the outset contains a ball of a single color. At each step we either draw a ball from the urn and replace it with two balls of the same color, or we are given a ball of a new color which we place in the urn.
One can see that such a scheme leads to a partition of the balls according to their color. Letting the parameter τ define the probabilities of the two types of draws, and viewing each (distinct) color as a sample from Q0, and each ball as a sample from Q, Blackwell and MacQueen [1973] showed that this Pólya urn model yields samples whose distributions are those of the marginal probabilities under the Dirichlet process. One can associate mixture components with colors in the Pólya urn model, and thereby define a “clustering” of the data. The resulting model is known as a DP mixture. Note that a DP mixture requires no prior specification of the number of components. Back to haplotype modeling, following Xing et al. [2004; 2006], let Hi = [Hi,1, . . . , Hi,T] denote a haplotype over T SNPs from chromosome i;¹ let Ak = [Ak,1, . . . , Ak,T] denote an ancestor haplotype (indexed by k) and θk denote the mutation rate of ancestor k; and let Ci denote an inheritance variable that specifies the ancestor of haplotype Hi. As described in Xing et al. [2006], under a DP mixture, we have the following Pólya urn scheme for sampling modern haplotypes:

• Draw the first haplotype:
  a1 | DP(τ, Q0) ∼ Q0(·), sample the 1st founder;
  h1 ∼ Ph(·|a1, θ1), sample the 1st haplotype from an inheritance model defined on the 1st founder.
• For subsequent haplotypes:
  – sample the founder indicator for the ith haplotype, ci | DP(τ, Q0), with
    p(ci = cj for some j < i | c1, . . . , ci−1) = ncj / (i − 1 + τ),
    p(ci ≠ cj for all j < i | c1, . . . , ci−1) = τ / (i − 1 + τ),
    where nci is the occupancy number of class ci—the number of previous samples belonging to class ci;
  – sample the founder of haplotype i (indexed by ci):
    φci | DP(τ, Q0) = {acj, θcj} if ci = cj for some j < i (i.e., ci refers to an inherited founder),
    φci | DP(τ, Q0) ∼ Q0(a, θ) if ci ≠ cj for all j < i (i.e., ci refers to a new founder);
  – sample the haplotype according to its founder: hi | ci ∼ Ph(·|aci, θci).

¹We ignore the parental origin index of haplotypes as used in Xing et al. [2004], and assume that the paternal and maternal haplotypes of each individual are given unambiguously (i.e., phased, as known in genetics), as is the case in many LD and haplotype-block analyses. It is noteworthy, however, that our model generalizes straightforwardly to unphased genotype data by incorporating a simple genotype model as in Xing et al. [2004].

Notice that the above generative process assumes each modern haplotype originated from a single ancestor; this is plausible only for haplotypes spanning a short region on a chromosome. We now consider long haplotypes possibly bearing multiple ancestors due to recombinations between an unknown number of founders.

2.2 Hidden Markov Dirichlet Process (HMDP)

In a standard HMM, state-transitions across a discrete time- or space-interval take place in a fixed-dimensional state space; thus it can be fully parameterized by, say, a K-dimensional initial-state probability vector and a K × K state-transition probability matrix. As first proposed in Beal et al. [2001], and later discussed in Teh et al. [2004], one can “open” the state space of an HMM by treating the now infinite number of discrete states of the HMM as the support of a DP, and the transition probabilities to these states from some source as the masses associated with these states. In particular, for each source state, the possible transitions to the target states need to be modeled by a unique DP. Since all possible source states and target states are taken from the same infinite state space, overall we need an open set of DPs with different mass distributions on the same support (to capture the fact that different source states can have different transition probabilities to any target state). In the sequel, we describe such a nonparametric Bayesian HMM using an intuitive hierarchical Pólya urn construction. We call this model a hidden Markov Dirichlet process.
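As a concrete illustration of the founder-indicator draw above, the following sketch (hypothetical helper names, not from the paper) samples c_i with probability n_k/(i − 1 + τ) for an existing founder k and τ/(i − 1 + τ) for a new one:

```python
import random

def sample_founder_indicator(counts, tau, rng):
    """Sample c_i given occupancy counts n_k of the existing founders.

    P(c_i = k)   = n_k / (i - 1 + tau) for an existing founder k,
    P(c_i = new) = tau / (i - 1 + tau),
    where i - 1 = sum of all occupancy counts so far."""
    total = sum(counts.values())          # i - 1 previous samples
    u = rng.random() * (total + tau)
    for k, n_k in counts.items():
        if u < n_k:
            return k                      # reuse founder k
        u -= n_k
    return max(counts, default=0) + 1     # instantiate a new founder

# Draw a sequence of indicators, updating the occupancy counts as we go.
rng = random.Random(0)
counts, tau = {}, 1.0
for _ in range(10):
    c = sample_founder_indicator(counts, tau, rng)
    counts[c] = counts.get(c, 0) + 1
```

Note the rich-get-richer effect: founders with large n_k are preferentially reused, which is what makes the number of components self-regulating.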
In an HMDP, both the columns and rows of the transition matrix are infinite dimensional. To construct such a stochastic matrix, we exploit the fact that in practice only a finite number of states (although we don't know in advance which ones) will be visited from each source state, and we only need to keep track of these states. The following sampling scheme based on a hierarchical Pólya urn scheme captures this spirit and yields a constructive definition of the HMDP. We set up a single “stock” urn at the top level, which contains balls of colors that are represented by at least one ball in one or multiple urns at the bottom level. At the bottom level, we have a set of distinct urns which are used to define the initial and transition probabilities of the HMDP model (and are therefore referred to as HMM urns). Specifically, one of the HMM urns, u0, is set aside to hold colored balls to be drawn at the onset of the HMM state-transition sequence. Each of the remaining HMM urns is painted with a color represented by at least one ball in the stock urn, and is used to hold balls to be drawn during the execution of a Markov chain of state-transitions. Now let's suppose that at time t the stock urn contains n balls of K distinct colors indexed by an integer set C = {1, 2, . . . , K}; the number of balls of color k in this urn is denoted by nk, k ∈ C. For urn u0 and urns u1, . . . , uK, let mj,k denote the number of balls of color k in urn uj, and mj = Σk∈C mj,k denote the total number of balls in urn uj. Suppose that at time t − 1, we had drawn a ball with color k′. Then at time t, we either draw a ball randomly from urn uk′ and place back two balls of that color; or, with probability τ/(mk′ + τ), we turn to the top level.
From the stock urn, we either draw a ball randomly and put back two balls of that color to the stock urn and one to uk′, or obtain a ball of a new color K + 1 with probability γ/(n + γ) and put back a ball of this color to both the stock urn and urn uk′ at the lower level. Essentially, we have a master DP (the stock urn) that serves as a base measure for an infinite number of child DPs (the HMM urns). As pointed out in Teh et al. [2004], this model can be viewed as an instance of the hierarchical Dirichlet process mixture model. As discussed in Xing et al. [2006], associating each color k with an ancestor configuration φk = {ak, θk} whose values are drawn from the base measure F ≡ Beta(θ)p(a), and conditioning on the Dirichlet process underlying the stock urn, the samples in the jth bottom-level urn are also distributed as marginals under a Dirichlet measure:

φmj | φ−mj ∼ Σk=1..K [(mj,k + τ · nk/(n − 1 + γ)) / (mj − 1 + τ)] δφ*k(φmj) + [τ/(mj − 1 + τ)] · [γ/(n − 1 + γ)] F(φmj)
           = Σk=1..K πj,k δφ*k(φmj) + πj,K+1 F(φmj),   (1)

where πj,k ≡ (mj,k + τ · nk/(n − 1 + γ)) / (mj − 1 + τ) and πj,K+1 ≡ [τ/(mj − 1 + τ)] · [γ/(n − 1 + γ)]. Letting πj ≡ [πj,1, πj,2, . . .], we now have an infinite-dimensional Bayesian HMM that, given F, γ, τ, and all initial states and transitions sampled so far, follows an initial-state distribution parameterized by π0 and a transition matrix Π whose rows are defined by {πj : j > 0}. As in Xing et al. [2006], we also introduce vague inverse Gamma priors for the concentration parameters γ and τ.

2.3 HMDP Model for Recombination and Inheritance

Now we describe a stochastic model, based on an HMDP, for generating individual haplotypes in a modern population from a hypothetical pool of ancestral haplotypes via recombination and mutations (i.e., random mating with neutral selection). For each modern chromosome i, let Ci = [Ci,1, . . . , Ci,T] denote the sequence of inheritance variables specifying the index of the ancestral chromosome at each SNP locus.
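The two-level urn procedure above can be sketched directly as a sampler (a minimal sketch with hypothetical names; the real algorithm also tracks urn u0 and the ancestor parameters φk):

```python
import random

def sample_transition(src, stock, hmm_urns, tau, gamma, rng):
    """Draw the next state from source state `src` under the two-level
    Polya urn scheme: with prob m_{src,k}/(m_src + tau) reuse a ball
    from the HMM urn of `src`; otherwise escalate to the stock urn,
    where an existing color k is drawn with prob n_k/(n + gamma) and a
    brand-new color with prob gamma/(n + gamma). Counts are updated
    in place."""
    urn = hmm_urns.setdefault(src, {})
    m = sum(urn.values())
    if rng.random() * (m + tau) < m:      # bottom-level (HMM urn) draw
        k = rng.choices(list(urn), weights=list(urn.values()))[0]
    else:                                 # escalate to the stock urn
        n = sum(stock.values())
        if rng.random() * (n + gamma) < n:
            k = rng.choices(list(stock), weights=list(stock.values()))[0]
        else:
            k = max(stock, default=0) + 1  # new color, i.e., new ancestor
        stock[k] = stock.get(k, 0) + 1
    urn[k] = urn.get(k, 0) + 1
    return k

# Run a short state-transition chain starting from color 1.
rng = random.Random(1)
stock, urns = {1: 1}, {}
state, path = 1, [1]
for _ in range(20):
    state = sample_transition(state, stock, urns, tau=1.0, gamma=1.0, rng=rng)
    path.append(state)
```

Sharing the stock urn across all HMM urns is what couples the rows of the infinite transition matrix: a color popular in the stock urn is likely to be reachable from every source state.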
When no recombination takes place during the inheritance process that produces haplotype Hi (say, from ancestor k), then Ci,t = k, ∀t. When a recombination occurs, say, between loci t and t + 1, we have Ci,t ≠ Ci,t+1. We can introduce a Poisson point process to control the duration of non-recombinant inheritance. That is, given that Ci,t = k, then with probability e^(−dr) + (1 − e^(−dr))πkk, where d is the physical distance between the two loci, r reflects the rate of recombination per unit distance, and πkk is the self-transition probability of ancestor k defined by the HMDP, we have Ci,t+1 = Ci,t; otherwise, the source state (i.e., ancestor chromosome k) pairs with a target state (e.g., ancestor chromosome k′) between loci t and t + 1, with probability (1 − e^(−dr))πkk′. Hence, each haplotype Hi is a mosaic of segments of multiple ancestral chromosomes from the ancestral pool {Ak,· : k = 1, 2, . . .}. Essentially, the model we have described so far is a time-inhomogeneous infinite HMM. When the physical distance information between loci is not available, we can simply set r to infinity so that we are back to a standard stationary HMDP model. The emission process of the HMDP corresponds to an inheritance model from an ancestor to the matching descendant. For simplicity, we adopt the single-locus mutation model in Xing et al. [2004]:

p(ht | at, θ) = θ^I(ht = at) · ((1 − θ)/(|B| − 1))^I(ht ≠ at),   (2)

where ht and at denote the alleles at locus t of an individual haplotype and its corresponding ancestor, respectively; θ indicates the ancestor-specific mutation rate; and |B| denotes the number of possible alleles. As discussed in Liu et al. [2001], this model corresponds to a star genealogy resulting from infrequent mutations over a shared ancestor, and is widely used in statistical genetics as an approximation to a full coalescent genealogy. Following Xing et al.
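The distance-dependent transition rule and the mutation emission model of Eq. (2) can be written out as two small functions (a sketch with hypothetical names; `pi` stands for the HMDP transition vectors over a finite set of instantiated ancestors):

```python
import math

def transition_prob(k, k_prime, pi, d, r):
    """P(C_{t+1} = k' | C_t = k): stay with prob e^{-dr} + (1 - e^{-dr}) * pi[k][k],
    recombine to k' != k with prob (1 - e^{-dr}) * pi[k][k']."""
    lam = 1.0 - math.exp(-d * r)          # total recombination probability
    stay = math.exp(-d * r)               # prob of no crossover in (t, t+1)
    return stay * (1.0 if k == k_prime else 0.0) + lam * pi[k][k_prime]

def emission_prob(h_t, a_t, theta, B=2):
    """Single-locus mutation model, Eq. (2): the ancestral allele is passed
    faithfully with prob theta, and mutated uniformly over the other
    |B| - 1 alleles otherwise."""
    return theta if h_t == a_t else (1.0 - theta) / (B - 1)

# Two instantiated ancestors with row-stochastic transition vectors.
pi = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_stay = transition_prob(0, 0, pi, d=0.5, r=0.1)
p_move = transition_prob(0, 1, pi, d=0.5, r=0.1)
```

Since each row of `pi` sums to one, `p_stay + p_move` is exactly one, matching the decomposition e^(−dr) + (1 − e^(−dr)) of the total mass between the two loci.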
[2004], assuming that the mutation rate θ admits a Beta prior, the marginal conditional likelihood of a haplotype given its matching ancestor can be computed by integrating out θ under Bayes' rule.

3 Posterior Inference

We now describe a Gibbs sampling algorithm for posterior inference under the HMDP. The variables of interest include {Ci,t}, the inheritance variables specifying the origins of the SNP alleles at all loci on each haplotype; and {Ak,t}, the founding alleles at all loci of each ancestral haplotype. The Gibbs sampler alternates between two sampling stages. First it samples the inheritance variables {ci,t}, conditioning on all given individual haplotypes h = {h1, . . . , h2N} and the most recently sampled configuration of the ancestor pool a = {a1, . . . , aK}; then, given h and the current values of the ci,t's, it samples every ancestor ak. To improve the mixing rate, we sample the inheritance variables one block at a time. That is, each time we sample δ consecutive states ct+1, . . . , ct+δ starting at a randomly chosen locus t + 1 along a haplotype. (For simplicity we omit the haplotype index i here and in the forthcoming exposition when it is clear from context that the statements or formulas apply to all individual haplotypes.) Let c− denote the set of previously sampled inheritance variables. Let n denote the totality of occupancy records of the top-level DP (i.e., the “stock urn”)—{n} ∪ {nk : ∀k}; and let m denote the totality of the occupancy records of the lower-level DPs (i.e., the urns corresponding to the recombination choices by each ancestor)—{mk : ∀k} ∪ {mk,k′ : ∀k, k′}. And let lk denote the sufficient statistics associated with all haplotype instances originating from ancestor k. The predictive distribution of a δ-block of inheritance variables can be written as:

p(ct+1:t+δ | c−, h, a) ∝ p(ct+1:t+δ | ct, ct+δ+1, m, n) · p(ht+1:t+δ | act+1,t+1, . . . , act+δ,t+δ)
                       ∝ Πj=t..t+δ p(cj+1 | cj, m, n) · Πj=t+1..t+δ p(hj | acj,j, lcj).   (3)
This expression is simply Bayes' theorem, with p(ht+1:t+δ | act+1,t+1, . . . , act+δ,t+δ) playing the role of the likelihood and p(ct+1:t+δ | ct, ct+δ+1, m, n) playing the role of the prior. One should be careful that the sufficient statistics n, m, and l employed here must exclude the contributions of the samples associated with the δ-block to be sampled. Note that, naively, the sampling space of an inheritance block of length δ is |A|^δ, where |A| represents the cardinality of the ancestor pool. However, if we assume that the recombination rate is low and the block length is not too big, then the probability of having two or more recombination events within a δ-block is very small and can be ignored. This approximation reduces the sampling space of the δ-block to O(|A| · δ), i.e., |A| possible recombination targets times δ possible recombination locations. Accordingly, Eq. (3) reduces to:

p(ct+1:t+δ | c−, h, a) ∝ p(ct′ | ct′−1 = ct, m, n) · p(ct+δ+1 | ct+δ = ct′, m, n) · Πj=t′..t+δ p(hj | act′,j, lct′)

for some t′ ∈ [t + 1, t + δ]. Recall that in an HMDP model for recombination, given that the total recombination probability between two loci d units apart is λ ≡ 1 − e^(−dr) ≈ dr (assuming d and r are both very small), the transition probability from state k to state k′ is:

p(ct′ = k′ | ct′−1 = k, m, n, r, d) = λπk,k′ + (1 − λ)δ(k, k′)  for k′ ∈ {1, . . . , K}, i.e., transition to an existing ancestor,
                                    = λπk,K+1                   for k′ = K + 1, i.e., transition to a new ancestor,   (4)

where πk represents the transition probability vector for ancestor k under the HMDP, as defined in Eq. (1). Note that when a new ancestor aK+1 is instantiated, we need to immediately instantiate a new DP under F to model the transition probabilities from this ancestor to all instantiated ancestors (including itself). Since the occupancy record of this DP, mK+1 := {mK+1} ∪ {mK+1,k : k = 1, . . . , K + 1}, is not yet defined at the onset, with probability 1 we turn to the top-level DP when departing from state K + 1 for the first time.
Specifically, we define p(· | ct′ = K + 1) according to the occupancy record of the ancestors in the stock urn. For example, at the distal border of the δ-block, since ct+δ+1 always indexes a previously inherited ancestor (and therefore must be present in the stock urn), we have:

p(ct+δ+1 | ct+δ = K + 1, m, n) = λ · nct+δ+1/(n + γ).   (5)

Now we can substitute the relevant terms in Eq. (3) with Eqs. (4) and (5). The marginal likelihood term in Eq. (3) can be readily computed based on Eq. (2), by integrating out the mutation rate θ under a Beta prior (and also the ancestor a under a uniform prior if ct′ refers to an ancestor to be newly instantiated) [Xing et al., 2004]. Putting everything together, we have the proposal distribution for a block of inheritance variables. After sampling each ct, we update the sufficient statistics n, m, and {lk} as follows. First, before drawing the sample, we erase the contribution of ct to these sufficient statistics. In particular, if an ancestor is left with no occupancy in either the stock or the HMM urns, we remove it from our repository. Then, after drawing a new ct, we increment the relevant counts accordingly. In particular, if ct = K + 1 (i.e., a new ancestor is to be drawn), we update n = n + 1, set nK+1 = 1, mct = mct + 1, mct,K+1 = 1, and set up a new (empty) HMM urn with color K + 1 (i.e., instantiating mK+1 with all elements equal to zero). We now move on to sampling the founders {ak,t}, following the same proposal given in Xing et al. [2006], which is adapted below for completeness:

p(ak,t | c, h) ∝ Πi,t|ci,t=k p(hi,t | ak,t) = [Γ(αh + lk,t) Γ(βh + l′k,t)] / [Γ(αh + βh + lk,t + l′k,t) · (|B| − 1)^l′k,t] · R(αh, βh),   (6)

where lk,t is the number of allelic instances originating from ancestor k at locus t that are identical to the ancestor, when the ancestor has the pattern ak,t; and l′k,t = Σi I(ci,t = k | ak,t) − lk,t represents the complement. If k was not represented previously, we simply set both lk,t and l′k,t to zero.
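The single-recombination δ-block proposal can be sketched by enumerating the O(|A| · δ) candidate (target ancestor, breakpoint) pairs and scoring each by transition terms times per-locus likelihoods. This is a hedged illustration, not the paper's implementation: `trans` and `lik` are hypothetical stand-ins for Eqs. (4) and (2), and we also include the likelihood of the pre-breakpoint loci under the current ancestor so that weights for different breakpoints are comparable:

```python
def block_proposal(c_t, c_after, h_block, ancestors, trans, lik):
    """Unnormalised weights over single-recombination configurations of a
    delta-block: the block is c_t, ..., c_t, k', ..., k' with breakpoint
    tp, weighted by trans(c_t -> k') * trans(k' -> c_after) times the
    per-locus likelihoods."""
    delta = len(h_block)
    weights = {}
    for k in ancestors:
        for tp in range(delta):            # breakpoint within the block
            w = trans(c_t, k) * trans(k, c_after)
            for j in range(tp):
                w *= lik(h_block[j], c_t, j)   # loci before tp keep c_t
            for j in range(tp, delta):
                w *= lik(h_block[j], k, j)     # loci from tp on use k
            weights[(k, tp)] = w
    return weights

# Toy probabilities: sticky transitions, emissions matching the ancestor.
def trans(a, b): return 0.9 if a == b else 0.1
def lik(h, k, j): return 0.8 if h == k else 0.2

# Block of three loci that all look like ancestor 1, flanked by
# c_t = 0 on the left and c_after = 1 on the right.
weights = block_proposal(0, 1, [1, 1, 1], ancestors=[0, 1], trans=trans, lik=lik)
```

With these toy numbers the best candidate switches to ancestor 1 at the first locus of the block, which is the intuitive answer; in the sampler one would draw the block from these weights after normalisation.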
Note that when sampling a new ancestor, we can only condition on a small segment of an individual haplotype. To instantiate a complete ancestor, after sampling the alleles in the ancestor corresponding to the segment according to Eq. (6), we first fill in the rest of the loci with random alleles. When another segment of an individual haplotype needs a new ancestor, we do not naively create a new full-length ancestor; rather, we use the empty slots (those with random alleles) of one of the previously instantiated ancestors, if any, so that the number of ancestors does not grow unnecessarily.

4 Experiments

We applied the HMDP model to both simulated and real haplotype data. Our analyses focus on the following three popular problems in statistical genetics: 1) Ancestral Inference: estimating the number of founders in a population and reconstructing the ancestor haplotypes; 2) LD-block Analysis: inferring the recombination sites in each individual haplotype and uncovering population-level recombination hotspots in the chromosomal region; 3) Population Structural Analysis: mapping the genetic origins of all loci of each individual haplotype in a population.

Figure 2: Analysis of simulated haplotype populations. (a) A comparison of ancestor reconstruction errors for the five ancestors (indexed along the x-axis). The vertical lines show ±1 standard deviation over 30 populations. (b) A plot of the empirical recombination rates along 100 SNP loci in one of the 30 populations. The dotted lines show the prespecified recombination hotspots. (c) The true (panel 1) and estimated (panel 2 for HMDP, and panels 3–5 for 3 HMMs) population maps of ancestral compositions in a simulated population. Figures were generated using the software distruct from Rosenberg et al. [2002].
4.1 Analyzing Simulated Haplotype Populations

To simulate a population of individual haplotypes, we started with a fixed number, Ks (unknown to the HMDP model), of randomly generated ancestor haplotypes, on each of which a set of recombination hotspots was (randomly) pre-specified. We then applied a hand-specified recombination process, defined by a Ks-dimensional HMM, to the ancestor haplotypes to generate Ns individual haplotypes, sequentially recombining segments of different ancestors according to the simulated HMM states at each locus, and mutating certain ancestor SNP alleles according to the emission model. At the hotspots, we set the recombination rate to 0.05; otherwise it is 0.00001. Each individual was forced to have at least one recombination. Overall, 30 datasets, each containing 100 individuals (i.e., 200 haplotypes) with 100 SNPs, were generated from Ks = 5 ancestor haplotypes. As baseline models, we also implemented 3 standard fixed-dimensional HMMs, with 3, 5 (the true number of ancestors for the simulated data), and 10 hidden states, respectively.

Ancestral Inference. Using HMDP, we successfully recovered the correct number (i.e., K = 5) of ancestors in 21 out of 30 simulated populations; for the remaining 9 populations, we inferred 6 ancestors. From samples of the ancestor states {ak,t}, we reconstructed the ancestral haplotypes under the HMDP model. For comparison, we also inferred the ancestors under the 3 standard HMMs using an EM algorithm. We define the ancestor reconstruction error ϵa for each ancestor as the ratio of incorrectly recovered loci over all the chromosomal sites. The average ϵa over the 30 simulated populations under the 4 different models is shown in Fig 2a. In particular, the average reconstruction errors of HMDP for each of the five ancestors are 0.026, 0.078, 0.116, 0.168, and 0.335, respectively. There is a good correlation between the reconstruction quality and the population frequency of each ancestor.
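The reconstruction error ϵa defined above is a simple per-locus disagreement ratio; as a minimal sketch (hypothetical function name, binary-allele toy input):

```python
def reconstruction_error(true_anc, est_anc):
    """epsilon_a: fraction of loci where the reconstructed ancestor
    haplotype disagrees with the true one."""
    assert len(true_anc) == len(est_anc)
    return sum(a != b for a, b in zip(true_anc, est_anc)) / len(true_anc)

# One mismatch out of four loci.
err = reconstruction_error([0, 1, 1, 0], [0, 1, 0, 0])
```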
Specifically, the average (over all simulated populations) fraction of SNP loci originating from each ancestor among all loci in the population is 0.472, 0.258, 0.167, 0.068, and 0.034, respectively. As one would expect, the higher an ancestor's population frequency, the better its reconstruction accuracy. Interestingly, under the fixed-dimensional HMM, even when we use the correct number of ancestor states, i.e., K = 5, the reconstruction error is still very high (Fig 2), typically 2.5 times the error of HMDP or higher. We conjecture that this is because the nonparametric Bayesian treatment of the transition rates and ancestor configurations under the HMDP model leads to a desirable adaptive smoothing effect and fewer constraints on the model parameters, which allows them to be more accurately estimated. Under a parametric setting, by contrast, parameter estimation can easily become suboptimal due to lack of appropriate smoothing or prior constraints, or to deficiencies of the learning algorithm (e.g., the local optimality of EM).

Figure 3: Analysis of the Daly data. (a) A plot of λe estimated via HMDP, and the haplotype block boundaries according to HMDP (black solid line), HMM [Daly et al., 2001] (red dotted line), and MDL [Anderson and Novembre, 2003] (blue dashed line). (b) IT scores for haplotype blocks from each method.

LD-block Analysis. From samples of the inheritance variables {ci,t} under HMDP, we can infer the recombination status of each locus of each haplotype. We define the empirical recombination rate λe at each locus as the ratio of individuals who had recombinations at that locus over the total number of haploids in the population. Fig 2b shows a plot of λe in one of the 30 simulated populations. We can identify the recombination hotspots directly from such a plot based on an empirical threshold λt (i.e., λt = 0.05).
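Computing λe from sampled inheritance sequences and thresholding it is a one-liner per inter-locus position (a sketch with hypothetical names; in practice one would average over Gibbs samples rather than use a single draw):

```python
def empirical_recomb_rates(C):
    """lambda_e at each inter-locus position: the fraction of haplotypes
    whose inheritance variable changes between locus t and t + 1.
    C is a list of inheritance sequences, one per haplotype."""
    n, T = len(C), len(C[0])
    return [sum(c[t] != c[t + 1] for c in C) / n for t in range(T - 1)]

# Three haplotypes over four loci; ancestor switches mark recombinations.
C = [[1, 1, 2, 2], [1, 1, 1, 2], [3, 3, 3, 3]]
rates = empirical_recomb_rates(C)
hotspots = [t for t, lam in enumerate(rates) if lam > 0.05]  # threshold lambda_t
```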
For comparison, we also show the true recombination hotspots (depicted as dotted vertical lines) chosen in the ancestors for simulating the recombinant population. The inferred hotspots (i.e., the λe peaks) show reasonable agreement with the reference.

Population Structural Analysis. Finally, from samples of the inheritance variables {ci,t}, we can also uncover the genetic origins of all loci of each individual haplotype in a population. For each individual, we define an empirical ancestor composition vector ηe, which records the fraction of every ancestor among all the ci,t's of that individual. Fig 2c displays a population map constructed from the ηe's of all individuals. In the population map, each individual is represented by a thin vertical line which is partitioned into colored segments in proportion to the ancestral fractions recorded by ηe. Five population maps, corresponding to (1) the true ancestor compositions, (2) the ancestor compositions inferred by HMDP, and (3–5) the ancestor compositions inferred by HMMs with 3, 5, and 10 states, respectively, are shown in Fig 2c. To assess the accuracy of our estimation, we calculated the distance between the true ancestor compositions and the estimated ones as the mean squared distance between the true and estimated ηe, first over all individuals in a population, and then over all 30 simulated populations. We found that the distance between the HMDP-derived population map and the true map is 0.190, whereas the distance between the HMM-derived map (K = 5) and the true map is 0.319, significantly worse than that of HMDP even though the HMM is set to have the true number of ancestral states (i.e., K = 5). Because of dimensionality incompatibility and apparent dissimilarity to the true map for the other HMMs (i.e., K = 3 and 10), we forgo the above quantitative comparison for these two cases.
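The composition vector ηe and the map distance used for this comparison can be sketched as follows (hypothetical function names; the true and estimated maps are assumed to share the same ancestor ordering):

```python
def ancestor_composition(c, ancestors):
    """eta_e: fraction of the loci of one haplotype originating from
    each ancestor, in a fixed ancestor order."""
    T = len(c)
    return [sum(ct == k for ct in c) / T for k in ancestors]

def map_distance(true_etas, est_etas):
    """Mean squared distance between the true and estimated composition
    vectors, averaged over individuals (as used for the population maps)."""
    per_ind = [
        sum((a - b) ** 2 for a, b in zip(t, e))
        for t, e in zip(true_etas, est_etas)
    ]
    return sum(per_ind) / len(per_ind)

# A haplotype inheriting half its loci from ancestor 1 and half from 2.
eta = ancestor_composition([1, 1, 2, 2], ancestors=[1, 2, 3])
```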
4.2 Analyzing Two Real Haplotype Datasets

We applied HMDP to two real haplotype datasets: the single-population Daly data [Daly et al., 2001], and the two-population (CEPH: Utah residents with northern/western European ancestry; YRI: Yoruba in Ibadan, Nigeria) HapMap data [Thorisson et al., 2005]. These data consist of trios of genotypes, so most of the true haplotypes can be directly inferred from the genotype data. We first analyzed the 256 individuals from the Daly data. We compared the recovered recombination hotspots with those reported in Daly et al. [2001] (based on an HMM employing different numbers of states at different chromosome segments) and in Anderson and Novembre [2003] (based on a minimum description length (MDL) principle). Fig. 3a shows the plot of empirical recombination rates estimated under HMDP, side by side with the reported recombination hotspots. There is no ground truth to judge which one is correct; hence we computed information-theoretic (IT) scores based on the estimated within-block haplotype frequencies and the between-block transition probabilities under each model for comparison. The left panel of Fig 3b shows the total pairwise mutual information between adjacent haplotype blocks segmented by the recombination hotspots uncovered by the three methods. The right panel shows the average entropies of haplotypes within each block. The number above each bar denotes the total number of blocks. The pairwise mutual information score of the HMDP block structure is similar to that of the Daly structure, but smaller than that of MDL. Similar tendencies are observed for the average entropies. Note that the Daly and MDL methods allow the number of haplotype founders to vary across blocks to get the most compact local ancestor constructions.
Thus their reported scores might underestimate the true global score, because certain segments of an ancestor haplotype that are not or are rarely inherited are not counted in the score. The low IT scores achieved by HMDP therefore suggest that HMDP can effectively avoid inferring spurious global and local ancestor patterns. This is confirmed by the population map shown in Fig 4a, which shows that HMDP recovered 6 ancestors, among which the 3 dominant ancestors account for 98% of all the modern haplotypes in the population.

Figure 4: The estimated population maps: (a) Daly data; (b) HapMap data.

The HapMap data contain 60 individuals from CEPH and 60 from YRI. We applied HMDP to the union of the populations, with a random individual order. The two-population structure is clearly retrieved from the population map constructed from the population composition vectors ηe for every individual. As seen in Fig. 4b, the left half of the map clearly represents the CEPH population and the right half the YRI population. We found that the two dominant haplotypes covered over 85% of the CEPH population (the overall breakup among all four ancestors is 0.5618, 0.3036, 0.0827, 0.0518). On the other hand, the frequencies of each ancestor in the YRI population are 0.2141, 0.1784, 0.3209, 0.1622, 0.1215, and 0.0029, showing that the YRI population is much more diverse than CEPH. Due to space limits, we omit the recombination map of this dataset.

5 Conclusion

We have proposed a new Bayesian approach for jointly modeling genetic recombinations among a possibly infinite number of founding alleles and coalescence-with-mutation events in the resulting genealogies.
By incorporating a hierarchical DP prior for the stochastic matrix underlying an HMM, which facilitates a well-defined transition process over an infinite ancestor space, our proposed method can efficiently infer a number of important genetic variables, such as recombination hotspots, mutation rates, haplotype origins, and ancestor patterns, jointly under a unified statistical framework. Empirically, on both simulated and real data, our approach compares favorably to its parametric counterpart—a fixed-dimensional HMM (even when the number of its hidden states, i.e., ancestors, is correctly specified)—and a few other specialized methods, on ancestral inference, haplotype-block uncovering, and population structural analysis. We are interested in further investigating the behavior of an alternative scheme based on reversible-jump MCMC over Bayesian HMMs with different numbers of latent states in comparison with HMDP; and we intend to apply our methods to genome-scale LD and demographic analysis using the full HapMap data. While our current model employs only phased haplotype data, it is straightforward to generalize it to unphased genotype data as provided by the HapMap project. HMDP can also be easily adapted to many engineering and information retrieval contexts, such as object and theme tracking in open space. Due to space limits, we have left out some details of the algorithms and additional experimental results, which are available in the full version of this paper [Xing and Sohn, 2006].

References

[Anderson and Novembre, 2003] E. C. Anderson and J. Novembre. Finding haplotype block boundaries by using the minimum-description-length principle. Am J Hum Genet, 73:336–354, 2003.
[Beal et al., 2001] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems 13, 2001.
[Blackwell and MacQueen, 1973] D. Blackwell and J. B. MacQueen. Ferguson distributions via Pólya urn schemes. Annals of Statistics, 1:353–355, 1973.
[Daly et al., 2001] M. J. Daly, J. D. Rioux, S. F. Schaffner, T. J. Hudson, and E. S. Lander. High-resolution haplotype structure in the human genome. Nature Genetics, 29(2):229–232, 2001.
[Greenspan and Geiger, 2003] G. Greenspan and D. Geiger. Model-based inference of haplotype block variation. In Proceedings of RECOMB 2003, 2003.
[Liu et al., 2001] J. S. Liu, C. Sabatti, J. Teng, B. J. B. Keats, and N. Risch. Bayesian analysis of haplotypes for linkage disequilibrium mapping. Genome Res., 11:1716–1724, 2001.
[Niu et al., 2002] T. Niu, S. Qin, X. Xu, and J. Liu. Bayesian haplotype inference for multiple linked single nucleotide polymorphisms. American Journal of Human Genetics, 70:157–169, 2002.
[Patil et al., 2001] N. Patil, A. J. Berno, D. A. Hinds, et al. Blocks of limited haplotype diversity revealed by high-resolution scanning of human chromosome 21. Science, 294:1719–1723, 2001.
[Rosenberg et al., 2002] N. A. Rosenberg, J. K. Pritchard, J. L. Weber, H. M. Cann, K. K. Kidd, L. A. Zhivotovsky, and M. W. Feldman. Genetic structure of human populations. Science, 298:2381–2385, 2002.
[Teh et al., 2004] Y. Teh, M. I. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. Technical Report 653, Department of Statistics, University of California, Berkeley, 2004.
[Thorisson et al., 2005] G. A. Thorisson, A. V. Smith, L. Krishnan, and L. D. Stein. The International HapMap Project web site. Genome Research, 15:1591–1593, 2005.
[Xing et al., 2004] E. P. Xing, R. Sharan, and M. I. Jordan. Bayesian haplotype inference via the Dirichlet process. In Proceedings of the 21st International Conference on Machine Learning, 2004.
[Xing et al., 2006] E. P. Xing, K.-A. Sohn, M. I. Jordan, and Y. W. Teh. Bayesian multi-population haplotype inference via a hierarchical Dirichlet process mixture. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[Xing and Sohn, 2006] E. P. Xing and K.-A. Sohn.
Hidden Markov Dirichlet Process: Modeling Genetic Recombination in Open Ancestral Space. Bayesian Analysis, to appear, 2007.
[Zhang et al., 2002] K. Zhang, M. Deng, T. Chen, M. Waterman, and F. Sun. A dynamic programming algorithm for haplotype block partitioning. Proc. Natl. Acad. Sci. USA, 99(11):7335–7339, 2002.
2006
Unsupervised Regression with Applications to Nonlinear System Identification Ali Rahimi Intel Research Seattle Seattle, WA 98105 ali.rahimi@intel.com Ben Recht California Institute of Technology Pasadena, CA 91125 brecht@ist.caltech.edu Abstract We derive a cost functional for estimating the relationship between high-dimensional observations and the low-dimensional process that generated them, with no input-output examples. Limiting our search to invertible observation functions confers numerous benefits, including a compact representation and no suboptimal local minima. Our approximation algorithms for optimizing this cost functional are fast and give diagnostic bounds on the quality of their solution. Our method can be viewed as a manifold learning algorithm that utilizes a prior on the low-dimensional manifold coordinates. The benefits of exploiting such priors in manifold learning, and of searching for the inverse observation function in system identification, are demonstrated empirically by learning to track moving targets from raw measurements in a sensor network setting and in an RFID tracking experiment. 1 Introduction Measurements from sensor systems typically serve as a proxy for latent variables of interest. To recover these latent variables, the parameters of the sensor system must first be determined. When pairs of measurements and their corresponding latent variables are available, fully supervised regression techniques may be applied to learn a mapping between latent states and measurements. In many applications, however, latent states cannot be observed and only a diffuse prior on them is available. In such cases, marginalizing over the latent variables and searching for the model parameters using Expectation Maximization (EM) has become a popular approach [3,9,19]. Unfortunately, such algorithms are prone to local minima and require very careful initialization in practice.
Using a simple change-of-variable model, we derive an approximation algorithm for the Unsupervised Regression problem: estimating the nonlinear relationship between latent states and their observations when no example pairs are available, when the observation function is invertible, and when the measurement noise is small. Our method is not susceptible to local minima and provides a guarantee on the quality of the recovered observation function. We identify conditions under which our estimate of the mapping is asymptotically consistent, and empirically evaluate the quality of our solutions and their stability under variations of the prior. Because our algorithm takes advantage of an explicit prior on the latent variables, it recovers latent variables more accurately than manifold learning algorithms when applied to similar tasks. Our method may be applied to estimate the observation function in nonlinear dynamical systems by enforcing a Markovian dynamics prior over the latent states. We demonstrate this approach to nonlinear system identification by learning to track a moving object in a field of completely uncalibrated sensor nodes whose measurement functions are unknown. Given that the object moves smoothly over time, our algorithm learns a function that maps the raw measurements from the sensor network to the target's location. In another experiment, we learn to track Radio Frequency ID (RFID) tags given a sequence of voltage measurements induced by the tag in a set of antennae. Given only these measurements and the assumption that the tag moves smoothly over time, we can recover a mapping from the voltages to the position of the tag. These results are surprising because no parametric sensor model is available in either scenario. We are able to recover the measurement model up to an affine transform given only raw measurement sequences and a diffuse prior on the state sequence.
2 A diffeomorphic warping model for unsupervised regression

We assume that the set $X = \{x_i\}_{i=1}^{N}$ of latent variables is drawn (not necessarily iid) from a known distribution, $p_X(X) = p_X(x_1, \ldots, x_N)$. The set of measurements $Y = \{y_i\}_{i=1}^{N}$ is the output of an unknown invertible nonlinearity applied to each latent variable, $y_i = f_0(x_i)$. We assume that observations $y_i \in \mathbb{R}^D$ are higher dimensional than latent variables $x_i \in \mathbb{R}^d$. Computing a MAP estimate of $f_0$ requires marginalizing over $X$ and maximizing over $f$. EM, or some other form of coordinate ascent on a Jensen bound of the likelihood, is a common way of estimating the parameters of this model, but such methods suffer from local minima. Because we have assumed that $f_0$ is invertible and that there is no observation noise, this process describes a change of variables. The true distribution $p_Y(Y)$ over $Y$ can be computed in closed form using a generalization of the standard change of variables formula (see [14, thm 9.3.1] and [7, chap 11]):

$$p_Y(Y) = p_Y(Y; f_0) = p_X\left(f_0^{-1}(y_1), \ldots, f_0^{-1}(y_N)\right) \prod_{i=1}^{N} \det\left(\nabla f\!\left(f_0^{-1}(y_i)\right)' \, \nabla f\!\left(f_0^{-1}(y_i)\right)\right)^{-\frac{1}{2}}. \quad (1)$$

The determinant corrects the warping of each infinitesimal volume element around $f_0^{-1}(y_i)$ by accounting for the stretching induced by the nonlinearity. The change of variables formula immediately yields a likelihood over $f$, circumventing the need to integrate over the latent variables. We assume $f_0$ diffeomorphically maps a ball in $\mathbb{R}^d$ containing the data onto its image. In this case, there exists a function $g$ defined on an open set containing the image of $f$ such that $g(f(x)) = x$ and $\nabla g \nabla f = I$ for all $x$ in the open set [5]. Consequently, we can substitute $g$ for $f^{-1}$ in (1) and, taking advantage of the identity $\det(\nabla f' \nabla f)^{-1} = \det(\nabla g \nabla g')$, write its log likelihood as

$$l_Y(Y; g) = \log p_Y(Y; g) = \log p_X\left(g(y_1), \ldots, g(y_N)\right) + \frac{1}{2} \sum_{i=1}^{N} \log \det\left(\nabla g(y_i) \nabla g(y_i)'\right). \quad (2)$$

For many common priors $p_X$, the maximum likelihood $g$ yields an asymptotically consistent estimate of the true distribution $p_Y$. When certain conditions on $p_X$ are met (including stationarity, ergodicity, and $k$th-order Markov approximability), a generalized version of the Shannon-McMillan-Breiman theorem [1] guarantees that $\log p_Y(Y; g)$ asymptotically converges to the relative entropy rate between the true $p_Y(Y)$ and $p_Y(Y; g)$. This quantity is maximized when these two distributions are equal. Therefore, if the true $p_Y$ follows the change of variable model (1), the recovered $g$ converges to the true $f_0^{-1}$ in the sense that they both describe a change of variable from the prior distribution $p_X$ to the distribution $p_Y$. Note that although our generative model assumes no observation noise, some noise in $Y$ can be tolerated if we constrain our search over smooth functions $g$. This way, small perturbations in $y$ due to observation noise produce small perturbations in $g(y)$.

3 Approximation algorithms for finding the inverse mapping

We constrain our search for $g$ to a subset of smooth functions by requiring that $g$ have a finite representation as a weighted sum of positive definite kernels $k$ centered on the observed data, $g(y) = \sum_{i=1}^{N} c_i k(y, y_i)$, with weight vectors $c_i \in \mathbb{R}^d$. Accordingly, applying $g$ to the set of observations gives $g(Y) = CK$, where $C = [c_1 \cdots c_N]$ and $K$ is the kernel matrix with $K_{ij} = k(y_i, y_j)$. In addition, $\nabla g(y) = C\Delta(y)$, where $\Delta(y)$ is an $N \times D$ matrix whose $i$th row is $\partial k(y_i, y)/\partial y$. We tune the smoothness of $g$ by regularizing (2) with the RKHS norm [17] of $g$. This norm has the form $\|g\|_k^2 = \mathrm{tr}\, CKC'$, and the regularization parameter is set to $\lambda/2$. For simplicity, we require $p_X$ to be a Gaussian with mean zero and inverse covariance $\Omega_X$, but we note our methods can be extended to any log-concave distribution.
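As a concrete, purely illustrative sketch, the log likelihood of Eq. (2) can be evaluated directly for an RBF-parameterized g and a zero-mean Gaussian prior. All names below (`rbf_kernel`, `log_likelihood`, the toy dimensions) are our own, not the authors' code.

```python
import numpy as np

def rbf_kernel(Y, Yc, bw=1.0):
    # Gaussian kernel matrix between rows of Y and rows of Yc.
    d2 = ((Y[:, None, :] - Yc[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

def log_likelihood(Y, C, Omega_X, bw=1.0):
    """Evaluate l_Y(Y; g) of Eq. (2) for g(y) = sum_i c_i k(y, y_i).

    Y: (N, D) observations; C: (d, N) kernel weights;
    Omega_X: (N*d, N*d) inverse covariance of the Gaussian prior p_X."""
    N, D = Y.shape
    K = rbf_kernel(Y, Y, bw)              # (N, N)
    X = K @ C.T                           # g(Y): row i is g(y_i), shape (N, d)
    x = X.T.reshape(-1)                   # vec(KC'), stacked dimension by dimension
    log_prior = -0.5 * x @ Omega_X @ x    # log p_X, up to an additive constant
    log_jac = 0.0
    for i in range(N):
        # Row j of Delta(y_i) is dk(y_j, y)/dy evaluated at y = y_i.
        Delta = (Y - Y[i]) * K[:, i:i + 1] / bw ** 2   # (N, D)
        J = C @ Delta                                   # grad g(y_i), (d, D)
        sign, logdet = np.linalg.slogdet(J @ J.T)
        log_jac += 0.5 * logdet
    return log_prior + log_jac

# Tiny example: 5 observations in R^3, latent dimension d = 2.
rng = np.random.default_rng(0)
Y = rng.normal(size=(5, 3))
C = rng.normal(size=(2, 5)) * 0.1
Omega_X = np.eye(10)                      # white-Gaussian prior, for illustration only
ll = log_likelihood(Y, C, Omega_X)
```

The loop over the N Jacobian terms is exactly the non-concave part of the objective that the two approximation algorithms below are designed to avoid.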
Substituting into (2) and adding the smoothness penalty on $g$, we obtain:

$$\max_{C} \; -\mathrm{vec}(KC')'\,\Omega_X\,\mathrm{vec}(KC') - \lambda\,\mathrm{tr}\,CKC' + \sum_{i=1}^{N} \log \det\, C\Delta(y_i)\Delta(y_i)'C', \quad (3)$$

where the $\mathrm{vec}(\cdot)$ operator stacks up the columns of its matrix argument into a column vector. Equation (3) is not concave in $C$ and is likely to be hard to maximize exactly. This is because $\log\det(A'A)$ is not concave for $A \in \mathbb{R}^{d \times D}$. Since the cost is non-concave, gradient descent methods may converge to local minima. Such local minima, in addition to burdensome time and storage requirements, rule out descent strategies for optimizing (3). Our first algorithm for approximately solving this optimization problem constructs a semidefinite relaxation using a standard approach that replaces outer products of vectors with positive definite matrices. Rewrite (3) as

$$\max_{C} \; -\mathrm{tr}\left(M\,\mathrm{vec}(C')\,\mathrm{vec}(C')'\right) + \sum_{i=1}^{N} \log \det\left(\mathrm{tr}\, J_i^{kl}\,\mathrm{vec}(C')\,\mathrm{vec}(C')'\right), \quad (4)$$

$$M = (I_d \otimes K)\,\Omega_X\,(I_d \otimes K) + \lambda (I_d \otimes K), \qquad J_i^{kl} = E_{lk} \otimes \Delta(y_i)\Delta(y_i)', \quad (5)$$

where the $kl$th entry of the matrix argument of the logdet is as specified, and the matrix $E_{ij}$ is zero everywhere except for a 1 in its $ij$th entry. This optimization is equivalent to

$$\max_{Z \succeq 0} \; -\mathrm{tr}(MZ) + \sum_{i=1}^{N} \log \det\left(\mathrm{tr}\, J_i^{kl} Z\right), \quad (6)$$

subject to the additional constraint that $\mathrm{rank}(Z) = 1$. Dropping the rank constraint yields a concave relaxation of (3). Standard interior point methods [20] or subgradient methods [2] can efficiently compute the optimal $Z$ for this relaxed problem. A set of coefficients $C$ can then be extracted from the top eigenvectors of the optimal $Z$, yielding an approximate solution to (3). Since (6) without the rank constraint is a relaxation of (3), the optimum of (6) is an upper bound on that of (3). Thus we can bound the difference between the value of the extracted solution and the global maximum of (3). As we will see in the following section, this method produces high quality solutions for a diverse set of learning problems.
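The rounding step can be sketched in a few lines: given a relaxed solution Z approximating vec(C') vec(C')', the scaled top eigenvector of Z is the closest rank-one factor in Frobenius norm. The function name and the synthetic check below are ours.

```python
import numpy as np

def round_sdp_solution(Z, d, N):
    """Extract weights C from the (relaxed) SDP variable Z ~ vec(C') vec(C')'.

    For a rank-one Z the recovery is exact up to sign; otherwise the top
    eigenvector gives the best rank-one approximation in Frobenius norm."""
    w, V = np.linalg.eigh(Z)                  # eigenvalues in ascending order
    v = np.sqrt(max(w[-1], 0.0)) * V[:, -1]   # scaled top eigenvector ~ vec(C')
    return v.reshape(d, N)                    # unstack into C (d x N)

# Sanity check on a synthetic rank-one Z built from known weights.
rng = np.random.default_rng(1)
d, N = 2, 6
C_true = rng.normal(size=(d, N))
z = C_true.reshape(-1)                        # vec(C_true')
Z = np.outer(z, z)
C_hat = round_sdp_solution(Z, d, N)           # equals C_true up to a global sign
```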
In practice, standard algorithms for (6) run slowly on large data sets, so we have developed an intuitive algorithm that also provides good approximations and runs much more quickly. The non-concave logdet term serves to prevent the optimal solution of (2) from collapsing to $g(y) = 0$, since $X = 0$ is the most likely setting under the zero-mean Gaussian prior $p_X$. To circumvent the non-concavity of the logdet term, we replace it with constraints requiring that the sample mean and covariance of $g(Y)$ match the expected mean and covariance of the random variables $X$. These moment constraints prevent the optimal solution from collapsing to zero while remaining in the typical set of $p_X$. The expected covariance of $X$, denoted by $\bar{\Lambda}_X$, can be computed by averaging the block diagonals of $\Omega_X^{-1}$. However, the particular choice of $\bar{\Lambda}_X$ only influences the final solution up to a scaling and rotation of $g$, so in practice, we set it to the identity matrix. We thus obtain the following optimization problem:

$$\min_{C} \; \mathrm{vec}(KC')'\,\Omega_X\,\mathrm{vec}(KC') + \lambda\,\mathrm{tr}\,CKC' \quad (7)$$
$$\text{s.t.} \quad \tfrac{1}{N}\, CK(CK)' = \bar{\Lambda}_X \quad (8)$$
$$\qquad \tfrac{1}{N}\, CK\mathbf{1} = 0, \quad (9)$$

where $\mathbf{1}$ is a column vector of 1s. This optimization problem searches for a $g$ that transforms observations into variables that are given high probability by $p_X$ and match its stationary statistics. This is a quadratic minimization with a single quadratic constraint and, after eliminating the linear constraints with a change of variables, can be solved as a generalized eigenvalue problem [4].

4 Related Work

Manifold learning algorithms and unsupervised nonlinear system identification algorithms solve variants of the unsupervised regression problem considered here. Our method provides a statistical model that augments manifold learning algorithms with a prior on latent variables. Our spectral algorithm from Section 3 reduces to a variant of KPCA [15] when $X$ is drawn iid from a spherical Gaussian.
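A rough sketch of the resulting spectral procedure, under the simplifying assumption that the prior factors identically across the d latent dimensions (so `Omega` is the N x N inverse covariance of a single chain). The function name, the jitter term, and the bandwidth heuristic are our own choices, not the authors' implementation:

```python
import numpy as np

def spectral_unsupervised_regression(Y, Omega, d=2, lam=1e-3, bw=None):
    """Sketch of Eqs. (7)-(9) when the prior factors over latent dimensions.
    Each row of C then solves the same generalized eigenproblem
        (K Omega K + lam K) c = mu (K H K / N) c,
    where H = I - 11'/N centers g(Y), enforcing the zero-mean constraint (9);
    we keep the d eigenvectors with the smallest eigenvalues."""
    N = Y.shape[0]
    D2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    if bw is None:
        bw = np.sqrt(np.median(D2[D2 > 0]))   # median-distance heuristic (ours)
    K = np.exp(-D2 / (2 * bw ** 2))
    H = np.eye(N) - np.ones((N, N)) / N
    A = K @ Omega @ K + lam * K
    B = K @ H @ K / N + 1e-8 * np.eye(N)      # jitter keeps B positive definite
    # Solve the symmetric generalized eigenproblem A c = mu B c via B^(-1/2).
    wB, VB = np.linalg.eigh(B)
    B_isqrt = VB @ np.diag(wB ** -0.5) @ VB.T
    mu, U = np.linalg.eigh(B_isqrt @ A @ B_isqrt)
    C = (B_isqrt @ U[:, :d]).T                # d smallest eigenpairs, as rows
    X_hat = K @ C.T                           # recovered latent coordinates
    return X_hat - X_hat.mean(0), C

# Example: a smooth 2D trajectory lifted into R^3 by an invertible map.
rng = np.random.default_rng(0)
N = 80
t = np.linspace(0, 2 * np.pi, N)
x_true = np.c_[np.cos(t), 0.5 * np.sin(2 * t)]
Y = np.c_[x_true[:, 0],
          x_true[:, 1] * np.cos(2 * x_true[:, 1]),
          x_true[:, 1] * np.sin(2 * x_true[:, 1])]
L = np.diff(np.eye(N), 2, axis=0)             # second-difference operator
Omega = L.T @ L                               # smoothness prior on each chain
X_hat, C = spectral_unsupervised_regression(Y, Omega, d=2)
```

As in the paper, the coordinates come back only up to a linear transformation, since the moment constraints pin down scale and mean but not rotation or reflection.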
By adopting a nearest-neighbors form for $g$ instead of the RBF form, we obtain an algorithm that is similar to the embedding step of LLE [12, chap 5]. In addition to our use of dynamics, a notable difference between our method and principal manifold methods [16] is that instead of learning a mapping from states to observations, we learn mappings from observations to states. This reduces the storage and computational requirements when processing high-dimensional data. As far as we are aware, in the manifold learning literature only Jenkins and Mataric [6] explicitly take temporal coherency into account, by increasing the affinity of temporally adjacent points and applying Isomap [18]. State-of-the-art nonlinear system identification techniques seek to recover all the parameters of a continuous hidden Markov chain with nonlinear state transitions and observation functions given noisy observations [3,8,9,19]. Because these models are so rich and have so many unknowns, these algorithms resort to coordinate ascent (for example, via EM), making them susceptible to local minima. In addition, each iteration of coordinate ascent requires some form of nonlinear smoothing over the latent variables, which is itself computationally costly and becomes prone to local minima when the estimated observation function becomes non-invertible during the iterations. Further, because mappings from low-dimensional to high-dimensional vectors require many parameters to represent, existing approaches tend to be unsuitable for large-scale sensor network or image analysis problems. Our algorithms do not have local minima and represent the more compact inverse observation function, in which high-dimensional observations appear only in pairwise kernel evaluations. Comparisons with a semi-supervised variant of these algorithms [13] show that weak priors on the latent variables are extremely informative and that additional labeled data is often only necessary to fix the coordinate system.
5 Experiments

The following experiments show that latent states and observation functions can be accurately and efficiently recovered, up to a linear coordinate transformation, given only raw measurements and a generic prior over the latent variables. We compare against various manifold learning and nonlinear system identification algorithms. We also show that our algorithm is robust to variations in the choice of the prior. As a measure of quality, we report the affine registration error, the average residual per data point after registering the recovered latent variables with their ground truth values using an affine transformation:

$$\mathrm{err} = \min_{A,b} \frac{1}{N} \sqrt{\sum_{t=1}^{N} \left\|A x_t - x_t^0 + b\right\|_2^2},$$

where $x_t^0$ is the ground truth setting for $x_t$. All of our experiments use a spherical Gaussian kernel. To define the Gaussian prior $p_X$, we start with a linear Gaussian Markov chain $s_t = A s_{t-1} + \omega_t$, where $A$ and the covariance of $\omega$ are block diagonal and define $d$ Markov chains that evolve independently of each other according to Newtonian motion. The latent variables $x_t$ extract the position components of $s_t$. The inverse covariance matrix corresponding to this process can be obtained in closed form. More details and additional experiments can be found in [12]. We begin with a low-dimensional data set to simplify visualization and comparison to systems that do not scale well with the dimensionality of the observations. Figure 1(b) shows the embedding of a 1500-step 2D random walk, shown in Figure 1(a), into $\mathbb{R}^3$ by the function $f(x, y) = (x,\, y\cos(2y),\, y\sin(2y))$. Note that the 2D walk was not generated by a linear Gaussian model, as it bounces off the edges of its bounding box. The lifted points were passed to our algorithm, which returned the 2D variables shown in Figure 1(c). The true 2D coordinates are recovered up to a scale, a flip, and some shrinking in the lower left corner. Therefore the recovered $g$ is close to the inverse of the original mapping, up to a linear transform.
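The affine registration error reduces to an ordinary least-squares problem once the offset b is absorbed into homogeneous coordinates; a small self-contained sketch (function and variable names are ours):

```python
import numpy as np

def affine_registration_error(X, X0):
    """err = min_{A,b} (1/N) * sqrt(sum_t ||A x_t - x_t^0 + b||_2^2),
    computed by ordinary least squares in homogeneous coordinates."""
    N = X.shape[0]
    Xh = np.c_[X, np.ones(N)]                 # append a 1 to absorb the offset b
    W, *_ = np.linalg.lstsq(Xh, X0, rcond=None)
    R = Xh @ W - X0                           # residual after the best affine map
    return np.sqrt((R ** 2).sum()) / N

# A copy of the ground truth distorted by a known affine map registers exactly.
rng = np.random.default_rng(0)
X0 = rng.normal(size=(50, 2))
A = np.array([[0.5, -1.0], [2.0, 0.3]])
X = X0 @ A.T + np.array([1.0, -2.0])
err = affine_registration_error(X, X0)        # ~0 up to floating-point error
```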
Figure 1(d) shows states recovered by the algorithm of Roweis and Ghahramani [3]. Smoothing with the recovered function simply projects the observations without unrolling the roll.

Figure 1: (a) 2D ground truth trajectory. Brighter colors indicate greater distance to the origin. (b) Embedding of the trajectory into $\mathbb{R}^3$. (c) Latent variables are recovered up to a linear transformation and minor distortion. Roweis-Ghahramani (d), Isomap (e), and Isomap+temporal coherence (f) recovered low-dimensional coordinates that exhibit folding and other artifacts that cannot be corrected by a linear transformation.

The joint-max version of this algorithm took about an hour to converge on a 1 GHz Pentium III, and converges only when started at solutions that are sufficiently close to the true solution. Our spectral algorithm took about 10 seconds. Isomap (Figure 1(e)) performs poorly on this data set due to the low sampling rate on the manifold and the fact that the true mapping $f$ is not isometric. Including temporal neighbors in Isomap's neighborhood structure (as in ST-Isomap) creates some folding, and the true underlying walk is not recovered (Figure 1(f)). KPCA (not shown) chooses a linear projection that simply eliminates the first coordinate. We found the optimal parameter settings for Isomap, KPCA, and ST-Isomap by a fine grid search over the parameter space of each algorithm. The upper bound on the log-likelihood returned by the relaxation (6) serves as a diagnostic on the quality of our approximations. This bound was $-3.9 \times 10^{-3}$ for this experiment. Rounding the result of the relaxation returned a $g$ with log likelihood $-5.5 \times 10^{-3}$. The spectral approximation (7) also returned a solution with log likelihood $-5.5 \times 10^{-3}$, confirming our experience that these algorithms usually return similar solutions.
For comparison, the log-likelihood of KPCA's solution was $-1.69 \times 10^{-2}$, significantly less likely than our solutions or the upper bound.

5.1 Learning to track in an uncalibrated sensor network

We consider an artificial distributed sensor network scenario where many sensor nodes are deployed randomly in a field in order to track a moving target (Figure 2(a)). The locations of the sensor nodes are unknown, and the sensors are uncalibrated, so it is not known how the position of the target maps to the reported measurements. This situation arises when it is not feasible to calibrate each sensor prior to deployment or when variations in environmental conditions affect each sensor differently. Given only the raw measurements produced by the network from watching a smoothly moving target, we wish to learn a mapping from these measurements to the location of the target, even though no functional form for the measurement model is available. A similar problem was considered by [11], who sought to recover the location of sensor nodes using off-the-shelf manifold learning algorithms. Each latent state $x_t$ is the unknown position of the target at time $t$. The unknown function $f(x_t)$ gives the set of measurements $y_t$ reported by the sensor network at time $t$. Figure 2(b) shows the time series of measurements from observing the target. In this case, measurements were generated by having each sensor $s$ report its true distance $d_t^s$ to the target at time $t$, passed through a random nonlinearity of the form $\alpha_s \exp(-\beta_s d_t^s)$. Note that only $f$, not the measurement function of each sensor, needs to be invertible. This is equivalent to requiring that a memoryless mapping from measurements to positions exist.
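This measurement model is easy to simulate; the following sketch (parameter ranges, trajectory, and names are illustrative, not the paper's setup) produces a raw measurement time series of the kind fed to the algorithm:

```python
import numpy as np

def sensor_measurements(traj, sensors, alpha, beta):
    """Simulate the uncalibrated network: sensor s reports
    alpha_s * exp(-beta_s * d_t^s), where d_t^s is the distance from
    sensor s to the target at time t."""
    # traj: (T, 2) target positions; sensors: (S, 2) node positions.
    d = np.linalg.norm(traj[:, None, :] - sensors[None, :, :], axis=-1)  # (T, S)
    return alpha * np.exp(-beta * d)

rng = np.random.default_rng(0)
S, T = 100, 500
sensors = rng.uniform(-1, 1, size=(S, 2))     # unknown node locations
alpha = rng.uniform(0.5, 2.0, size=S)         # random per-sensor gains
beta = rng.uniform(1.0, 5.0, size=S)          # random per-sensor decay rates
t = np.linspace(0, 2 * np.pi, T)
traj = 0.8 * np.c_[np.cos(t), np.sin(3 * t)]  # smooth target motion
Y = sensor_measurements(traj, sensors, alpha, beta)  # (T, S) raw time series
```

Only the smooth trajectory `traj` and the raw matrix `Y` would be analogous to what the learning algorithm sees; the sensor positions and per-sensor parameters stay hidden.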
Figure 2: (a) A target follows a smooth trajectory (dotted line) in a field of 100 randomly placed uncalibrated sensors (circles) with random and unknown observation functions. (b) Time series of measurements produced by the sensor network in response to the target's motion. (c) The recovered trajectory, given only raw sensor measurements and no information about the observation function (other than smoothness and invertibility). It is recovered up to scaling and a rotation. (d) To test the recovered mapping further, the target was made to follow a zigzag pattern. (e) Output of $g$ on the resulting measurements. The resulting trajectory is again similar to the ground truth zigzag, up to minor distortion. (f) The mapping obtained by KPCA cannot recover the zigzag, because KPCA does not utilize the prior on latent states.

Assuming only that the target vaguely follows linear-Gaussian dynamics, and given only the time series of raw measurements from the sensor network, our learning algorithm finds a transformation that maps observations from the sensor network to the position of the target, up to a linear coordinate transform (Figure 2(c)). The recovered function $g$ implicitly performs all the triangulation necessary for recovering the position of the target, even though the positions and characteristics of the sensors were not known a priori.
The bottom row of Figure 2 tests the recovered $g$ by applying it to a new measurement set. To show that this sensor network problem is not trivial, the figure also shows the output of the mapping obtained by KPCA.

5.2 Learning to Track with the Sensetable

The Sensetable is a hardware platform for tracking the position of radio frequency identification (RFID) tags. It consists of 10 antennae woven into a flat surface of 30 × 30 cm. As an RFID tag moves along the flat surface, the strength of the RF signal induced by the tag in each antenna is reported, producing a time series of 10 numbers. We wish to learn a mapping from these 10 voltage measurements to the 2D position of the RFID tag. Previously, such a mapping was recovered by hand, by meticulous physical modeling of this system, followed by trial-and-error refinement of the mappings; a process that took about 3 months in total [10]. We show that it is possible to recover this mapping automatically, up to an affine transformation, given only the raw time series of measurements generated by moving the RFID tag by hand on the Sensetable for about 5 minutes. This is a challenging task because the relationship between the tag's position and the observed measurements is highly oscillatory (Figure 3(a)). Once the mapping is learned, we can use it to track RFID tags. This experiment serves as a real-world instantiation of the sensor network setup of the previous section, in that each antenna effectively acts as an uncalibrated sensor node with an unknown and highly oscillatory measurement function. Figure 3(b) shows the ground truth trajectory of the RFID tag in this data set. Given only the 5-minute-long time series of raw voltage measurements, our algorithm recovered the trajectory shown in Figure 3(c). These recovered coordinates are scaled down and flipped about both axes as compared to the ground truth coordinates.
There is also some additional shrinkage in the upper right corner, but the coordinates are otherwise recovered accurately, with an affine registration error of 1.8 cm per pixel. Figure 4 shows the results of LLE, KPCA, Isomap, and ST-Isomap on this data set under their best parameter settings (again found by a grid search over each algorithm's search space). None of these algorithms recover low-dimensional coordinates that resemble the ground truth. LLE, in addition to collapsing the coordinates to one dimension, exhibits severe folding, obtaining an affine registration error of 8.5 cm. KPCA also exhibited folding and large holes, with an affine registration error of 7.2 cm. Of these, Isomap performed the best, with an affine registration error of 3.4 cm, though it exhibited some folding and a large hole in the center. Isomap with temporal coherency performed similarly, with a best affine registration error of 3.1 cm. Smoothing the output of these algorithms using the prior sometimes improves their accuracy by a few millimeters, but more often diminishes their accuracy by causing overshoots.

Figure 3: (a) The output of the Sensetable over a six-second period, while moving the tag from the left edge of the table to the right edge. The observation function is highly complex and oscillatory. (b) The ground truth trajectory of the tag. Brighter points have greater ground truth y-value. (c) The trajectory recovered by our spectral algorithm is correct up to flips about both axes, a scale change, and some shrinkage along the edge.

Figure 4: From left to right, the trajectories recovered by LLE (k=15), KPCA, Isomap (k=7), and ST-Isomap. All of these trajectories exhibit folding and severe distortions.
To further test the mapping recovered by our algorithm, we traced various trajectories with an RFID tag and passed the resulting voltages through the recovered $g$. Figure 5 plots the results (after a flip about the y-axis). These shapes resemble the trajectories we traced; noise in the recovered coordinates is due to measurement noise. The algorithm is robust to perturbations in $p_X$. To demonstrate this, we generated 2000 random perturbations of the parameters of the inverse covariance of $X$ used to generate the Sensetable results, and evaluated the resulting affine registration error. The random perturbations were generated by scaling the components of $A$ and the diagonal elements of the covariance of $\omega$ over four orders of magnitude using a log-uniform scaling. The affine registration error was below 3.6 cm for 38% of these 2000 perturbations. Typically, only the parameters of the kernel need to be tuned. In practice, we simply choose the kernel bandwidth parameter so that the minimum entry in $K$ is approximately 0.1.

Figure 5: Tracking RFID tags using the recovered mapping.
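For a Gaussian kernel this bandwidth heuristic has a closed form: the minimum entry of K is attained at the largest pairwise distance, so targeting a minimum entry of 0.1 fixes the bandwidth directly. A small sketch (function name ours):

```python
import numpy as np

def bandwidth_for_min_entry(Y, target=0.1):
    """Pick the Gaussian-kernel bandwidth so the smallest kernel entry
    equals `target`. Since k(y_i, y_j) = exp(-||y_i - y_j||^2 / (2 sigma^2))
    is decreasing in distance, the minimum entry occurs at the maximum
    pairwise distance d_max, giving sigma = d_max / sqrt(2 ln(1/target))."""
    D2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    d_max = np.sqrt(D2.max())
    return d_max / np.sqrt(2 * np.log(1.0 / target))

rng = np.random.default_rng(0)
Y = rng.normal(size=(200, 10))
sigma = bandwidth_for_min_entry(Y)
K = np.exp(-((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
```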
6 Conclusions and Future Work

We have shown how to recover the latent variables in a dynamical system given an approximate prior on the dynamics of these variables and observations of these states through an unknown invertible nonlinearity. The requirement that the observation function be invertible is similar to the requirement in manifold learning algorithms that the manifold not intersect itself. Our algorithm enhances manifold learning algorithms by leveraging a prior on the latent variables. Because we search for a mapping from observations to unknown states (as opposed to from states to observations), we can devise algorithms that are stable and avoid local minima. We applied this methodology to learning to track objects given only raw measurements from sensors, with no constraints on the observation model other than invertibility and smoothness. We are currently evaluating various ways to relax the invertibility requirement on the observation function by allowing invertibility up to a linear subspace. We are also exploring different prior models, and experimenting with ways to jointly optimize over $g$ and the parameters of $p_X$.

References

[1] P. H. Algoet and T. M. Cover. A sandwich proof of the Shannon-McMillan-Breiman theorem. The Annals of Probability, 16:899–909, 1988.
[2] A. Ben-Tal and A. Nemirovski. Non-Euclidean restricted memory level method for large-scale convex optimization. Mathematical Programming, 102:407–456, 2005.
[3] Z. Ghahramani and S. Roweis. Learning nonlinear dynamical systems using an EM algorithm. In Advances in Neural Information Processing Systems (NIPS), 1998.
[4] G. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, 1989.
[5] V. Guillemin and A. Pollack. Differential Topology. Prentice Hall, Englewood Cliffs, New Jersey, 1974.
[6] O. Jenkins and M. Mataric. A spatio-temporal extension to Isomap nonlinear dimension reduction. In International Conference on Machine Learning (ICML), 2004.
[7] F.
Jones. Advanced Calculus. http://www.owlnet.rice.edu/~fjones, unpublished.
[8] A. Juditsky, H. Hjalmarsson, A. Benveniste, B. Delyon, L. Ljung, J. Sjöberg, and Q. Zhang. Nonlinear black-box models in system identification: Mathematical foundations. Automatica, 31(12):1725–1750, 1995.
[9] N. D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. In Advances in Neural Information Processing Systems (NIPS), 2004.
[10] J. Patten, H. Ishii, J. Hines, and G. Pangaro. Sensetable: A wireless object tracking platform for tangible user interfaces. In CHI, 2001.
[11] N. Patwari and A. O. Hero. Manifold learning algorithms for localization in wireless sensor networks. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2004.
[12] A. Rahimi. Learning to Transform Time Series with a Few Examples. PhD thesis, Massachusetts Institute of Technology, Computer Science and AI Lab, Cambridge, Massachusetts, USA, 2005.
[13] A. Rahimi, B. Recht, and T. Darrell. Learning appearance manifolds from video. In Computer Vision and Pattern Recognition (CVPR), 2005.
[14] I. K. Rana. An Introduction to Measure and Integration. AMS, second edition, 2002.
[15] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[16] A. Smola, S. Mika, B. Schölkopf, and R. C. Williamson. Regularized principal manifolds. Journal of Machine Learning Research, 1:179–209, 2001.
[17] T. Evgeniou, M. Pontil, and T. Poggio. Regularization networks and support vector machines. Advances in Computational Mathematics, 2000.
[18] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[19] H. Valpola and J. Karhunen. An unsupervised ensemble learning method for nonlinear dynamic state-space models. Neural Computation, 14(11):2647–2692, 2002.
[20] Lieven Vandenberghe, Stephen Boyd, and Shao-Po Wu. Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications, 19(2):499–533, 1998.
2006
Large Margin Hidden Markov Models for Automatic Speech Recognition Fei Sha Computer Science Division University of California Berkeley, CA 94720-1776 feisha@cs.berkeley.edu Lawrence K. Saul Department of Computer Science and Engineering University of California (San Diego) La Jolla, CA 92093-0404 saul@cs.ucsd.edu Abstract We study the problem of parameter estimation in continuous density hidden Markov models (CD-HMMs) for automatic speech recognition (ASR). As in support vector machines, we propose a learning algorithm based on the goal of margin maximization. Unlike earlier work on max-margin Markov networks, our approach is specifically geared to the modeling of real-valued observations (such as acoustic feature vectors) using Gaussian mixture models. Unlike previous discriminative frameworks for ASR, such as maximum mutual information and minimum classification error, our framework leads to a convex optimization, without any spurious local minima. The objective function for large margin training of CD-HMMs is defined over a parameter space of positive semidefinite matrices. Its optimization can be performed efficiently with simple gradient-based methods that scale well to large problems. We obtain competitive results for phonetic recognition on the TIMIT speech corpus. 1 Introduction As a result of many years of widespread use, continuous density hidden Markov models (CD-HMMs) are very well matched to current front and back ends for automatic speech recognition (ASR) [21]. Typical front ends compute real-valued feature vectors from the short-time power spectra of speech signals. The distributions of these acoustic feature vectors are modeled by Gaussian mixture models (GMMs), which in turn appear as observation models in CD-HMMs. Viterbi decoding is used to solve the problem of sequential classification in ASR, namely, the mapping of sequences of acoustic feature vectors to sequences of phonemes and/or words, which are modeled by state transitions in CD-HMMs.
The simplest method for parameter estimation in CD-HMMs is the Expectation-Maximization (EM) algorithm. The EM algorithm is based on maximizing the joint likelihood of observed feature vectors and label sequences. It is widely used due to its simplicity and scalability to large data sets, which are common in ASR. A weakness of this approach, however, is that the model parameters of CD-HMMs are not optimized for sequential classification: in general, maximizing the joint likelihood does not minimize the phoneme or word error rates, which are more relevant metrics for ASR. Noting this weakness, many researchers in ASR have studied alternative frameworks for parameter estimation based on conditional maximum likelihood [11], minimum classification error [4] and maximum mutual information [20]. The learning algorithms in these frameworks optimize discriminative criteria that more closely track actual error rates, as opposed to the EM algorithm for maximum likelihood estimation. These algorithms do not enjoy the simple update rules and relatively fast convergence of EM, but, carefully and skillfully implemented, they lead to lower error rates [13, 20]. Recently, in a new approach to discriminative acoustic modeling, we proposed the use of "large margin GMMs" for multiway classification [15]. Inspired by support vector machines (SVMs), the learning algorithm in large margin GMMs is designed to maximize the distance between labeled examples and the decision boundaries that separate different classes [19]. Under mild assumptions, the required optimization is convex, without any spurious local minima. In contrast to SVMs, however, large margin GMMs are very naturally suited to problems in multiway (as opposed to binary) classification; also, they do not require the kernel trick for nonlinear decision boundaries. We showed how to train large margin GMMs as segment-based phonetic classifiers, yielding significantly lower error rates than maximum likelihood GMMs [15].
The integrated large margin training of GMMs and transition probabilities in CD-HMMs, however, was left as an open problem. We address that problem in this paper, showing how to train large margin CD-HMMs in the more general setting of sequential (as opposed to multiway) classification. In this setting, the GMMs appear as acoustic models whose likelihoods are integrated over time by Viterbi decoding. Experimentally, we find that large margin training of HMMs for sequential classification leads to significant improvement beyond the frame-based and segment-based discriminative training in [15]. Our framework for large margin training of CD-HMMs builds on ideas from many previous studies in machine learning and ASR. It has similar motivation to recent frameworks for sequential classification in the machine learning community [1, 6, 17], but differs in its focus on the real-valued acoustic feature representations used in ASR. It has similar motivation to other discriminative paradigms in ASR [3, 4, 5, 11, 13, 20], but differs in its goal of margin maximization and its formulation of the learning problem as a convex optimization over positive semidefinite matrices. The recent margin-based approach of [10] is closest in terms of its goals, but entirely different in its mechanics; moreover, its learning is limited to the mean parameters in GMMs. 2 Large margin GMMs for multiway classification Before developing large margin HMMs for ASR, we briefly review large margin GMMs for multiway classification [15]. The problem of multiway classification is to map inputs x ∈ ℜ^d to labels y ∈ {1, 2, ..., C}, where C is the number of classes. Large margin GMMs are trained from a set of labeled examples {(x_n, y_n)}_{n=1}^N. They have many parallels to SVMs, including the goal of margin maximization and the use of a convex surrogate to the zero-one loss [19]. Unlike SVMs, where classes are modeled by half-spaces, in large margin GMMs the classes are modeled by collections of ellipsoids.
For this reason, they are more naturally suited to problems in multiway as opposed to binary classification. Sections 2.1–2.3 review the basic framework for large margin GMMs: first, the simplest setting in which each class is modeled by a single ellipsoid; second, the formulation of the learning problem as a convex optimization; third, the general setting in which each class is modeled by two or more ellipsoids. Section 2.4 presents results on handwritten digit recognition. 2.1 Parameterization of the decision rule The simplest large margin GMMs model each class by a single ellipsoid in the input space. The ellipsoid for class c is parameterized by a centroid vector µ_c ∈ ℜ^d and a positive semidefinite matrix Ψ_c ∈ ℜ^{d×d} that determines its orientation. Also associated with each class is a nonnegative scalar offset θ_c ≥ 0. The decision rule labels an example x ∈ ℜ^d by the class whose centroid yields the smallest Mahalanobis distance:

    y = argmin_c { (x − µ_c)^T Ψ_c (x − µ_c) + θ_c }.   (1)

The decision rule in eq. (1) is merely an alternative way of parameterizing the maximum a posteriori (MAP) label in traditional GMMs with mean vectors µ_c, covariance matrices Ψ_c^{-1}, and prior class probabilities p_c, given by y = argmax_c { p_c N(x; µ_c, Ψ_c^{-1}) }. The argument on the right hand side of the decision rule in eq. (1) is nonlinear in the ellipsoid parameters µ_c and Ψ_c. As shown in [15], however, a useful reparameterization yields a simpler expression. For each class c, the reparameterization collects the parameters {µ_c, Ψ_c, θ_c} in a single enlarged matrix Φ_c ∈ ℜ^{(d+1)×(d+1)}:

    Φ_c = [ Ψ_c          −Ψ_c µ_c
            −µ_c^T Ψ_c   µ_c^T Ψ_c µ_c + θ_c ].   (2)

Note that Φ_c is positive semidefinite. Furthermore, if Φ_c is strictly positive definite, the parameters {µ_c, Ψ_c, θ_c} can be uniquely recovered from Φ_c. With this reparameterization, the decision rule in eq. (1) simplifies to:

    y = argmin_c { z^T Φ_c z },  where z = [x; 1].   (3)

The argument on the right hand side of the decision rule in eq. (3) is linear in the parameters Φ_c.
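The reparameterization in eq. (2) is easy to verify numerically. The sketch below (our own illustration, not the authors' code; the function name is ours) builds the augmented matrix Φ_c from (µ_c, Ψ_c, θ_c) and checks that z^T Φ_c z reproduces the Mahalanobis score of eq. (1):

```python
import numpy as np

def augmented_matrix(mu, Psi, theta):
    """Collect (mu, Psi, theta) into the (d+1)x(d+1) matrix Phi of eq. (2)."""
    d = len(mu)
    Phi = np.zeros((d + 1, d + 1))
    Phi[:d, :d] = Psi
    Phi[:d, d] = -Psi @ mu
    Phi[d, :d] = -mu @ Psi
    Phi[d, d] = mu @ Psi @ mu + theta
    return Phi

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d))
Psi = A @ A.T                    # positive semidefinite orientation matrix
mu = rng.standard_normal(d)
theta = 0.5
x = rng.standard_normal(d)

z = np.append(x, 1.0)            # augmented vector z = [x; 1]
lhs = z @ augmented_matrix(mu, Psi, theta) @ z
rhs = (x - mu) @ Psi @ (x - mu) + theta
assert np.isclose(lhs, rhs)
```

Expanding z^T Φ_c z term by term gives x^T Ψ_c x − 2 µ_c^T Ψ_c x + µ_c^T Ψ_c µ_c + θ_c, which is exactly the quadratic in eq. (1); the assertion confirms this for random parameters.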
In what follows, we will adopt the representation in eq. (3), implicitly constructing the "augmented" vector z for each input vector x. Note that eq. (3) still yields nonlinear (piecewise quadratic) decision boundaries in the vector z. 2.2 Margin maximization Analogous to learning in SVMs, we find the parameters {Φ_c} that minimize the empirical risk on the training data, i.e., parameters that not only classify the training data correctly, but also place the decision boundaries as far away as possible. The margin of a labeled example is defined as its distance to the nearest decision boundary. If possible, each labeled example is constrained to lie at least one unit distance away from the decision boundary to each competing class:

    ∀c ≠ y_n:  z_n^T (Φ_c − Φ_{y_n}) z_n ≥ 1.   (4)

Fig. 1 illustrates this idea. Note that in the "realizable" setting where these constraints can be simultaneously satisfied, they do not uniquely determine the parameters {Φ_c}, which can be scaled to yield arbitrarily large margins. Therefore, as in SVMs, we propose a convex optimization that selects the "smallest" parameters that satisfy the large margin constraints in eq. (4). In this case, the optimization is an instance of semidefinite programming [18]:

    min  Σ_c trace(Ψ_c)
    s.t. 1 + z_n^T (Φ_{y_n} − Φ_c) z_n ≤ 0,  ∀c ≠ y_n,  n = 1, 2, ..., N
         Φ_c ⪰ 0,  c = 1, 2, ..., C   (5)

Note that the trace of the matrix Ψ_c appears in the above objective function, as opposed to the trace of the matrix Φ_c, as defined in eq. (2); minimizing the former imposes the scale regularization only on the inverse covariance matrices of the GMM, while the latter would improperly regularize the mean vectors as well. The constraints Φ_c ⪰ 0 restrict the matrices to be positive semidefinite. The objective function must be modified for training data that lead to infeasible constraints in eq. (5). As in SVMs, we introduce nonnegative slack variables ξ_{nc} to monitor the amount by which the margin constraints in eq. (4) are violated [15].
The objective function in this setting balances the margin violations versus the scale regularization:

    min  Σ_{nc} ξ_{nc} + γ Σ_c trace(Ψ_c)
    s.t. 1 + z_n^T (Φ_{y_n} − Φ_c) z_n ≤ ξ_{nc},  ξ_{nc} ≥ 0,  ∀c ≠ y_n,  n = 1, 2, ..., N
         Φ_c ⪰ 0,  c = 1, 2, ..., C   (6)

where the balancing hyperparameter γ > 0 is set by some form of cross-validation. This optimization is also an instance of semidefinite programming. 2.3 Softmax margin maximization for multiple mixture components Lastly we review the extension to mixture modeling where each class is represented by multiple ellipsoids [15]. Let Φ_{cm} denote the matrix for the mth ellipsoid (or mixture component) in class c. We imagine that each example x_n has not only a class label y_n, but also a mixture component label m_n. Such labels are not provided a priori in the training data, but we can generate "proxy" labels by fitting GMMs to the examples in each class by maximum likelihood estimation, then, for each example, computing the mixture component with the highest posterior probability. In the setting where each class is represented by multiple ellipsoids, the goal of learning is to ensure that each example is closer to its "target" ellipsoid than to the ellipsoids from all other classes. Specifically, for a labeled example (x_n, y_n, m_n), the constraint in eq. (4) is replaced by the M constraints:

    ∀c ≠ y_n, ∀m:  z_n^T (Φ_{cm} − Φ_{y_n m_n}) z_n ≥ 1,   (7)

where M is the number of mixture components (assumed, for simplicity, to be the same for each class). We fold these multiple constraints into a single one by appealing to the "softmax" inequality: min_m a_m ≥ −log Σ_m e^{−a_m}.

Figure 1: Decision boundary in a large margin GMM: labeled examples lie at least one unit of distance away.

Table 1: Test error rates on MNIST digit recognition: maximum likelihood versus large margin GMMs.

    mixture   EM     margin
    1         4.2%   1.4%
    2         3.4%   1.4%
    4         3.0%   1.2%
    8         3.3%   1.5%
Specifically, using the inequality to derive a lower bound on min_m z_n^T Φ_{cm} z_n, we replace the M constraints in eq. (7) by the stricter constraint:

    ∀c ≠ y_n:  −log Σ_m e^{−z_n^T Φ_{cm} z_n} − z_n^T Φ_{y_n m_n} z_n ≥ 1.   (8)

We will use a similar technique in section 3 to handle the exponentially many constraints that arise in sequential classification. Note that the inequality in eq. (8) implies the inequality of eq. (7) but not vice versa. Also, though nonlinear in the matrices {Φ_{cm}}, the constraint in eq. (8) is still convex. The objective function in eq. (6) extends straightforwardly to this setting. It balances a regularizing term that sums over ellipsoids versus a penalty term that sums over slack variables, one for each constraint in eq. (8). The optimization is given by:

    min  Σ_{nc} ξ_{nc} + γ Σ_{cm} trace(Ψ_{cm})
    s.t. 1 + z_n^T Φ_{y_n m_n} z_n + log Σ_m e^{−z_n^T Φ_{cm} z_n} ≤ ξ_{nc},  ξ_{nc} ≥ 0,  ∀c ≠ y_n,  n = 1, 2, ..., N
         Φ_{cm} ⪰ 0,  c = 1, 2, ..., C,  m = 1, 2, ..., M   (9)

This optimization is not an instance of semidefinite programming, but it is convex. We discuss how to perform the optimization efficiently for large data sets in appendix A. 2.4 Handwritten digit recognition We trained large margin GMMs for multiway classification of MNIST handwritten digits [8]. The MNIST data set has 60000 training examples and 10000 test examples. Table 1 shows that the large margin GMMs yielded significantly lower test error rates than GMMs trained by maximum likelihood estimation. Our best results are comparable to the best SVM results (1.0–1.4%) on deskewed images [8] that do not make use of prior knowledge. For our best model, with four mixture components per digit class, the core training optimization over all training examples took five minutes on a PC. (Multiple runs of this optimization on smaller validation sets, however, were also required to set two hyperparameters: the regularizer for model complexity, and the termination criterion for early stopping.)
3 Large margin HMMs for sequential classification In this section, we extend the framework in the previous section from multiway classification to sequential classification. In particular, we have in mind the application to ASR, where GMMs are used to parameterize the emission densities of CD-HMMs. Strictly speaking, the GMMs in our framework cannot be interpreted as emission densities because their parameters are not constrained to represent normalized distributions. Such an interpretation, however, is not necessary for their use as discriminative models. In sequential classification by CD-HMMs, the goal is to infer the correct hidden state sequence y = [y_1, y_2, ..., y_T] given the observation sequence X = [x_1, x_2, ..., x_T]. In the application to ASR, the hidden states correspond to phoneme labels, and the observations are acoustic feature vectors. Note that if an observation sequence has length T and each label can belong to C classes, then the number of incorrect state sequences grows as O(C^T). This combinatorial explosion presents the main challenge for large margin methods in sequential classification: how to separate the correct hidden state sequence from the exponentially large number of incorrect ones. The section is organized as follows. Section 3.1 explains the way that margins are computed for sequential classification. Section 3.2 describes our algorithm for large margin training of CD-HMMs. Details are given only for the simple case where the observations in each hidden state are modeled by a single ellipsoid. The extension to multiple mixture components closely follows the approach in section 2.3 and can be found in [14, 16]. Margin-based learning of transition probabilities is likewise straightforward but omitted for brevity. Both these extensions were implemented, however, for the experiments on phonetic recognition in section 3.3.
3.1 Margin constraints for sequential classification We start by defining a discriminant function over state (label) sequences of the CD-HMM. Let a(i, j) denote the transition probabilities of the CD-HMM, and let Φ_s denote the ellipsoid parameters of state s. The discriminant function D(X, s) computes the score of the state sequence s = [s_1, s_2, ..., s_T] on an observation sequence X = [x_1, x_2, ..., x_T] as:

    D(X, s) = Σ_t log a(s_{t−1}, s_t) − Σ_{t=1}^T z_t^T Φ_{s_t} z_t.   (10)

This score has the same form as the log-probability log P(X, s) in a CD-HMM with Gaussian emission densities. The first term accumulates the log-transition probabilities along the state sequence, while the second term accumulates "acoustic scores" computed as the Mahalanobis distances to each state's centroid. In the setting where each state is modeled by multiple mixture components, the acoustic scores from individual Mahalanobis distances are replaced with "softmax" distances of the form log Σ_{m=1}^M e^{−z_t^T Φ_{s_t m} z_t}, as described in section 2.3 and [14, 16]. We introduce margin constraints in terms of the above discriminant function. Let H(s, y) denote the Hamming distance (i.e., the number of mismatched labels) between an arbitrary state sequence s and the target state sequence y. Earlier, in section 2 on multiway classification, we constrained each labeled example to lie at least one unit distance from the decision boundary to each competing class; see eq. (4). Here, by extension, we constrain the score of each target sequence to exceed that of each competing sequence by an amount equal to or greater than the Hamming distance:

    ∀s ≠ y:  D(X, y) − D(X, s) ≥ H(s, y).   (11)

Intuitively, eq. (11) requires that the (log-likelihood) gap between the score of an incorrect sequence s and the target sequence y should grow in proportion to the number of individual label errors. The appropriateness of such proportional constraints for sequential classification was first noted by [17].
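A toy version of the discriminant function in eq. (10) may make the scoring concrete. This sketch is our own illustration (the model values are made up, and only the single-ellipsoid-per-state case is shown); it accumulates transition log-scores and subtracts the Mahalanobis acoustic distances:

```python
import numpy as np

def discriminant(X, s, log_a, Phi):
    """Score D(X, s) of state sequence s on observations X, per eq. (10):
    summed transition log-probabilities minus Mahalanobis acoustic distances."""
    score = 0.0
    for t, x in enumerate(X):
        if t > 0:
            score += log_a[s[t - 1], s[t]]
        z = np.append(x, 1.0)            # augmented vector z_t = [x_t; 1]
        score -= z @ Phi[s[t]] @ z
    return score

# Toy 2-state model: uniform transitions, identity ellipsoid matrices.
log_a = np.log(np.full((2, 2), 0.5))
Phi = [np.eye(3), np.eye(3)]
X = [np.zeros(2), np.zeros(2), np.zeros(2)]
score = discriminant(X, [0, 1, 0], log_a, Phi)
# Each frame contributes an acoustic penalty of z^T I z = 1, and the two
# transitions each contribute log(0.5).
assert np.isclose(score, 2 * np.log(0.5) - 3.0)
```

The margin constraint of eq. (11) would then compare such scores: D(X, y) − D(X, s) must exceed the Hamming distance between y and s.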
3.2 Softmax margin maximization for sequential classification The challenge of large margin sequence classification lies in the exponentially large number of constraints, one for each incorrect sequence s, embodied by eq. (11). We will use the same softmax inequality, previously introduced in section 2.3, to fold these multiple constraints into one, thus considerably simplifying the optimization required for parameter estimation. We first rewrite the constraint in eq. (11) as:

    −D(X, y) + max_{s≠y} { H(s, y) + D(X, s) } ≤ 0.   (12)

We obtain a more manageable constraint by substituting a softmax upper bound for the max term and requiring that the inequality still hold:

    −D(X, y) + log Σ_{s≠y} e^{H(s,y)+D(X,s)} ≤ 0.   (13)

Note that eq. (13) implies eq. (12) but not vice versa. As in the setting for multiway classification, the objective function for sequential classification balances two terms: one regularizing the scale of the GMM parameters, the other penalizing margin violations. Denoting the training sequences by {X_n, y_n}_{n=1}^N and the slack variables (one for each training sequence) by ξ_n ≥ 0, we obtain the following convex optimization:

    min  Σ_n ξ_n + γ Σ_{cm} trace(Ψ_{cm})
    s.t. −D(X_n, y_n) + log Σ_{s≠y_n} e^{H(s,y_n)+D(X_n,s)} ≤ ξ_n,  ξ_n ≥ 0,  n = 1, 2, ..., N
         Φ_{cm} ⪰ 0,  c = 1, 2, ..., C,  m = 1, 2, ..., M   (14)

It is worth emphasizing several crucial differences between this optimization and previous ones [4, 11, 20] for discriminative training of CD-HMMs for ASR. First, the softmax large margin constraint in eq. (13) is a differentiable function of the model parameters, as opposed to the "hard" maximum in eq. (12) and the number of classification errors in the MCE training criteria [4]. The constraint and its gradients with respect to GMM parameters Φ_{cm} and transition parameters a(·, ·) can be computed efficiently using dynamic programming, by a variant of the standard forward-backward procedure in HMMs [14]. Second, due to the reparameterization in eq.
(2), the discriminant function D(X_n, y_n) and the softmax function are convex in the model parameters. Therefore, the optimization in eq. (14) can be cast as a convex optimization, avoiding spurious local minima [14]. Third, the optimization not only increases the log-likelihood gap between correct and incorrect state sequences, but also drives the gap to grow in proportion to the number of individually incorrect labels (which we believe leads to more robust generalization). Finally, compared to the large margin framework in [17], the softmax handling of the exponentially large number of margin constraints makes it possible to train on larger data sets. We discuss how to perform the optimization efficiently in appendix A. 3.3 Phoneme recognition We used the TIMIT speech corpus [7, 9, 12] to perform experiments in phonetic recognition. We followed standard practices in preparing the training, development, and test data. Our signal processing front-end computed 39-dimensional acoustic feature vectors from 13 mel-frequency cepstral coefficients and their first and second temporal derivatives. In total, the training utterances gave rise to roughly 1.2 million frames, all of which were used in training. We trained baseline maximum likelihood recognizers and two different types of large margin recognizers. The large margin recognizers in the first group were "low-cost" discriminative CD-HMMs whose GMMs were merely trained for frame-based classification. In particular, these GMMs were estimated by solving the optimization in eq. (8), then substituted into first-order CD-HMMs for sequence decoding. The large margin recognizers in the second group were fully trained for sequential classification. In particular, their CD-HMMs were estimated by solving the optimization in eq. (14), generalized to multiple mixture components and adaptive transition parameters [14, 16].
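The substitution that takes eq. (12) to eq. (13) relies on the log-sum-exp upper bound on the maximum: max_s b_s ≤ log Σ_s e^{b_s}. A quick numerical illustration (our own, with arbitrary values):

```python
import numpy as np

# Verify max_s b_s <= log sum_s exp(b_s) on random scores, so requiring
# the softmax expression in eq. (13) to be <= 0 is stricter than the
# original max constraint in eq. (12).
rng = np.random.default_rng(2)
b = rng.standard_normal(8)
logsumexp = np.log(np.sum(np.exp(b)))
assert b.max() <= logsumexp

# With n equal entries the gap is exactly log(n):
assert np.isclose(np.log(np.sum(np.exp(np.zeros(4)))), np.log(4.0))
```

The bound holds because Σ_s e^{b_s} ≥ e^{max_s b_s}; in training, the forward-backward recursion evaluates this log-sum over the exponentially many sequences without enumerating them.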
In all the recognizers, the acoustic feature vectors were labeled by 48 phonetic classes, each represented by one state in a first-order CD-HMM. For each recognizer, we compared the phonetic state sequences obtained by Viterbi decoding to the "ground-truth" phonetic transcriptions provided by the TIMIT corpus. For the purpose of computing error rates, we followed standard conventions in mapping the 48 phonetic state labels down to 39 broader phone categories. We computed two different types of phone error rates, one based on Hamming distance, the other based on edit distance. The former was computed simply from the percentage of mismatches at the level of individual frames. The latter was computed by aligning the Viterbi and ground truth transcriptions using dynamic programming [9] and summing the substitution, deletion, and insertion error rates from the alignment process. The "frame-based" phone error rate computed from Hamming distances is more closely tracked by our objective function for large margin training, while the "string-based" phone error rate computed from edit distances provides a more relevant metric for ASR. Tables 2 and 3 show the results of our experiments. For both types of error rates, and across all model sizes, the best performance was consistently obtained by large margin CD-HMMs trained for sequential classification. Moreover, among the two different types of large margin recognizers, utterance-based training generally yielded significant improvement over frame-based training.

Table 2: Frame-based phone error rates, from Hamming distance, of different recognizers. See text for details.

    mixture (per state)   baseline (EM)   margin (frame)   margin (utterance)
    1                     45%             37%              30%
    2                     45%             36%              29%
    4                     42%             35%              28%
    8                     41%             34%              27%

Table 3: String-based phone error rates, from edit distance, of different recognizers. See text for details.

    mixture (per state)   baseline (EM)   margin (frame)   margin (utterance)
    1                     40.1%           36.3%            31.2%
    2                     36.5%           33.5%            30.8%
    4                     34.7%           32.6%            29.8%
    8                     32.7%           31.0%            28.2%

Discriminative learning of CD-HMMs is an active research area in ASR. Two types of algorithms have been widely used: maximum mutual information (MMI) [20] and minimum classification error [4]. In [16], we compare the large margin training proposed in this paper to both MMI and MCE systems for phoneme recognition trained on the exact same acoustic features. There we find that the large margin approach leads to lower error rates, owing perhaps to the absence of local minima in the objective function and/or the use of margin constraints based on Hamming distances. 4 Discussion Discriminative learning of sequential models is an active area of research in both ASR [10, 13, 20] and machine learning [1, 6, 17]. This paper makes contributions to lines of work in both communities. First, in distinction to previous work in ASR, we have proposed a convex, margin-based cost function that penalizes incorrect decodings in proportion to their Hamming distance from the desired transcription. The use of the Hamming distance in this context is a crucial insight from the work of [17] in the machine learning community, and it differs profoundly from merely penalizing the log-likelihood gap between incorrect and correct transcriptions, as commonly done in ASR. Second, in distinction to previous work in machine learning, we have proposed a framework for sequential classification that naturally integrates with the infrastructure of modern speech recognizers. Using the softmax function, we have also proposed a novel way to monitor the exponentially many margin constraints that arise in sequential classification. For real-valued observation sequences, we have shown how to train large margin HMMs via convex optimizations over their parameter space of positive semidefinite matrices.
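The two error metrics above can be contrasted on a toy example. This sketch is our own illustration (the helper names and sequences are made up): frame-based error counts positional mismatches, while string-based error comes from a dynamic-programming alignment:

```python
def hamming_error(ref, hyp):
    """Fraction of frames whose labels disagree (equal-length sequences)."""
    return sum(r != h for r, h in zip(ref, hyp)) / len(ref)

def edit_distance(ref, hyp):
    """Standard Levenshtein distance via dynamic programming: minimum number
    of substitutions, deletions, and insertions to turn ref into hyp."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))
    return d[len(ref)][len(hyp)]

ref = list("aabbbcc")
hyp = list("aabbccc")      # one frame mislabeled: b -> c
fe = hamming_error(ref, hyp)
se = edit_distance(ref, hyp)
assert abs(fe - 1 / 7) < 1e-12   # one of seven frames wrong
assert se == 1                   # a single substitution in the alignment
```

A single mislabeled frame costs one substitution under the edit metric, but a misplaced phone boundary can cost several frames under the Hamming metric while costing nothing under the edit metric, which is why the paper reports both.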
Finally, we have demonstrated that these learning algorithms lead to improved sequential classification on data sets with over one million training examples (i.e., phonetically labeled frames of speech). In ongoing work, we are applying our approach to large vocabulary ASR and other tasks such as speaker identification and visual object recognition. A Solver The optimizations in eqs. (5), (6), (9) and (14) are convex: specifically, in terms of the matrices that parameterize large margin GMMs and HMMs, the objective functions are linear, while the constraints define convex sets. Despite being convex, however, these optimizations cannot be managed by off-the-shelf numerical optimization solvers or generic interior point methods for problems as large as the ones in this paper. We devised our own special-purpose solver for these purposes. For simplicity, we describe our solver for the optimization of eq. (6), noting that it is easily extended to eqs. (9) and (14). To begin, we eliminate the slack variables and rewrite the objective function in terms of the hinge loss function hinge(z) = max(0, z). This yields the objective function:

    L = Σ_{n, c≠y_n} hinge(1 + z_n^T (Φ_{y_n} − Φ_c) z_n) + γ Σ_c trace(Ψ_c),   (15)

which is convex in terms of the positive semidefinite matrices Φ_c. We minimize L using a projected subgradient method [2], taking steps along the subgradient of L, then projecting the matrices {Φ_c} back onto the set of positive semidefinite matrices after each update. This method is guaranteed to converge to the global minimum, though it typically converges very slowly. For faster convergence, we precede this method with an unconstrained conjugate gradient optimization in the square-root matrices {Ω_c}, where Φ_c = Ω_c Ω_c^T. The latter optimization is not convex, but in practice it rapidly converges to an excellent starting point for the projected subgradient method. Acknowledgment This work was supported by the National Science Foundation under grant number 0238323. We thank F.
Pereira, K. Crammer, and S. Roweis for useful discussions and correspondence. Part of this work was conducted while both authors were affiliated with the University of Pennsylvania.

References
[1] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In T. Fawcett and N. Mishra, editors, Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003), pages 3–10, Washington, DC, USA, 2003. AAAI Press.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 2nd edition, 1999.
[3] P. S. Gopalakrishnan, D. Kanevsky, A. Nádas, and D. Nahamoo. An inequality for rational functions with applications to some statistical estimation problems. IEEE Trans. Info. Theory, 37(1):107–113, 1991.
[4] B.-H. Juang and S. Katagiri. Discriminative learning for minimum error classification. IEEE Trans. Sig. Proc., 40(12):3043–3054, 1992.
[5] S. Kapadia, V. Valtchev, and S. Young. MMI training for continuous phoneme recognition on the TIMIT database. In Proc. of ICASSP 93, volume 2, pages 491–494, Minneapolis, MN, 1993.
[6] J. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. 18th International Conf. on Machine Learning (ICML 2001), pages 282–289. Morgan Kaufmann, San Francisco, CA, 2001.
[7] L. F. Lamel, R. H. Kassel, and S. Seneff. Speech database development: design and analysis of the acoustic-phonetic corpus. In L. S. Baumann, editor, Proceedings of the DARPA Speech Recognition Workshop, pages 100–109, 1986.
[8] Y. LeCun, L. Jackel, L. Bottou, A. Brunot, C. Cortes, J. Denker, H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard, and V. Vapnik. Comparison of learning algorithms for handwritten digit recognition. In F. Fogelman and P. Gallinari, editors, Proceedings of the International Conference on Artificial Neural Networks, pages 53–60, 1995.
[9] K. F. Lee and H. W. Hon. Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(11):1641–1648, 1988.
[10] X. Li, H. Jiang, and C. Liu. Large margin HMMs for speech recognition. In Proceedings of ICASSP 2005, pages 513–516, Philadelphia, 2005.
[11] A. Nádas. A decision-theoretic formulation of a training problem in speech recognition and a comparison of training by unconditional versus conditional maximum likelihood. IEEE Transactions on Acoustics, Speech and Signal Processing, 31(4):814–817, 1983.
[12] T. Robinson. An application of recurrent nets to phone probability estimation. IEEE Transactions on Neural Networks, 5(2):298–305, 1994.
[13] J. L. Roux and E. McDermott. Optimization methods for discriminative training. In Proceedings of the Ninth European Conference on Speech Communication and Technology (EuroSpeech 2005), pages 3341–3344, Lisbon, Portugal, 2005.
[14] F. Sha. Large margin training of acoustic models for speech recognition. PhD thesis, University of Pennsylvania, 2007.
[15] F. Sha and L. K. Saul. Large margin Gaussian mixture modeling for phonetic classification and recognition. In Proceedings of ICASSP 2006, pages 265–268, Toulouse, France, 2006.
[16] F. Sha and L. K. Saul. Comparison of large margin training to other discriminative methods for phonetic recognition by hidden Markov models. In Proceedings of ICASSP 2007, Hawaii, 2007.
[17] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems (NIPS 16). MIT Press, Cambridge, MA, 2004.
[18] L. Vandenberghe and S. P. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, March 1996.
[19] V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[20] P. C. Woodland and D. Povey. Large scale discriminative training of hidden Markov models for speech recognition. Computer Speech and Language, 16:25–47, 2002.
[21] S. J. Young. Acoustic modelling for large vocabulary continuous speech recognition. In K. Ponting, editor, Computational Models of Speech Pattern Processing, pages 18–39. Springer, 1999.
2006
No-regret Algorithms for Online Convex Programs Geoffrey J. Gordon Department of Machine Learning Carnegie Mellon University Pittsburgh, PA 15213 ggordon@cs.cmu.edu Abstract Online convex programming has recently emerged as a powerful primitive for designing machine learning algorithms. For example, OCP can be used for learning a linear classifier, dynamically rebalancing a binary search tree, finding the shortest path in a graph with unknown edge lengths, solving a structured classification problem, or finding a good strategy in an extensive-form game. Several researchers have designed no-regret algorithms for OCP. But, compared to algorithms for special cases of OCP such as learning from expert advice, these algorithms are not very numerous or flexible. In learning from expert advice, one tool which has proved particularly valuable is the correspondence between no-regret algorithms and convex potential functions: by reasoning about these potential functions, researchers have designed algorithms with a wide variety of useful guarantees such as good performance when the target hypothesis is sparse. Until now, there has been no such recipe for the more general OCP problem, and therefore no ability to tune OCP algorithms to take advantage of properties of the problem or data. In this paper we derive a new class of no-regret learning algorithms for OCP. These Lagrangian Hedging algorithms are based on a general class of potential functions, and are a direct generalization of known learning rules like weighted majority and external-regret matching. In addition to proving regret bounds, we demonstrate our algorithms learning to play one-card poker. 1 Introduction In a sequence of trials we must pick hypotheses y_1, y_2, ... ∈ Y. After we choose y_t, the correct answer is revealed as a convex loss function ℓ_t(y_t).¹ Just before seeing the tth example, our total loss is therefore L_t = Σ_{i=1}^{t−1} ℓ_i(y_i).
If we had predicted using some fixed hypothesis y instead, then our loss would have been Σ_{i=1}^{t−1} ℓ_i(y). Our total regret at time t is the difference between these two losses, with positive regret meaning that we would have preferred y to our actual plays:

ρ_t(y) = L_t − Σ_{i=1}^{t−1} ℓ_i(y)    ρ_t = sup_{y∈Y} ρ_t(y)

We assume that Y is a compact convex subset of ℝ^d that has at least two elements. In classical no-regret algorithms such as weighted majority, Y is a simplex: the corners of Y represent pure actions, the interior points of Y represent probability distributions over pure actions, and the number of corners n is the same as the number of dimensions d. In a more general OCP, Y may have many more extreme points than dimensions, n ≫ d. For example, Y could be a convex set like {y | Ay = b, y ≥ 0} for some matrix A and vector b, or it could even be a sphere. The shape of Y captures the structure in our prediction problem. Each point in Y is a separate hypothesis, but the losses of different hypotheses are related to each other because they are all embedded in the common representation space ℝ^d. While we could run a standard no-regret algorithm such as weighted majority on a structured Y by giving it hypotheses corresponding to the extreme points c_1, . . . , c_n of Y, this transformation would lose the connections among hypotheses (with a corresponding loss in runtime and generalization ability). Our algorithms below are stated in terms of linear loss functions, ℓ_t(y) = c_t · y. If ℓ_t is nonlinear but convex, we can substitute the derivative at the current prediction, ∂ℓ_t(y_t), for c_t, and our regret bounds will still hold (see [1, p. 53]). We will write C for the set of possible gradient vectors c_t.

¹Many problems use loss functions of the form ℓ_t(y_t) = ℓ(y_t, y_t^true), where ℓ is a fixed function such as squared error and y_t^true is a target output. The more general notation allows for problems where there may be more than one correct prediction.
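To make this setup concrete, the sketch below runs projected online gradient descent on linear losses ℓ_t(y) = c_t · y over the probability simplex and reports the regret against the best fixed hypothesis in hindsight. This is only an illustration of the problem statement, not an algorithm from this paper; the helper names (`project_simplex`, `ogd_regret`) and the learning rate are our own choices.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def ogd_regret(costs, eta=0.1):
    """Projected online gradient descent on linear losses l_t(y) = c_t . y
    over the simplex Y; returns the regret versus the best fixed hypothesis."""
    T, d = costs.shape
    y = np.ones(d) / d               # start at the center of Y
    loss = 0.0
    for c in costs:
        loss += c @ y                # suffer l_t(y_t)
        y = project_simplex(y - eta * c)
    best = costs.sum(axis=0).min()   # best corner of Y in hindsight
    return loss - best

rng = np.random.default_rng(0)
r = ogd_regret(rng.uniform(0.0, 1.0, size=(500, 3)))
print(r)   # grows sublinearly in the number of trials
```

With costs in [0, 1]³ and this fixed learning rate, the standard analysis bounds the regret by diam(Y)²/(2η) + (η/2)·Σ_t ∥c_t∥², which here is far below the number of trials.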
2 Related Work

A large number of researchers have studied online prediction in general and OCP in particular. The OCP problem dates back to Hannan in 1957 [2]. The name "online convex programming" is due to Zinkevich [3], who gave a clever gradient-descent algorithm. A similar algorithm and a weaker bound were presented somewhat earlier in [1]: that paper's GGD algorithm, using potential function ℓ_0(w) = k∥w∥_2^2, is equivalent to Zinkevich's "lazy projection" with a fixed learning rate. Another clever algorithm for OCP was presented by Kalai and Vempala [4]. Compared to the above papers, the most important contribution of the current paper is its generality: no previous family of OCP algorithms can use as flexible a class of potential functions. As an illustration of the importance of this generality, consider the problem of learning from expert advice. Well-known regret bounds for this problem are logarithmic in the number of experts (e.g., [5]); no previous bounds for general OCP algorithms are sublinear in the number of experts, but logarithmic bounds follow directly as a special case of our results [6, sec. 8.1.2]. Despite this generality, our core result, Thm. 4 below, takes only half a dozen short equations to prove. From the online prediction literature, the closest related work is that of Cesa-Bianchi and Lugosi [7], which follows in the tradition of an algorithm and proof by Blackwell [8]. Cesa-Bianchi and Lugosi consider choosing predictions from an essentially-arbitrary decision space and receiving outcomes from an essentially-arbitrary outcome space. Together a decision and an outcome determine how a marker R_t ∈ ℝ^d will move. Given a potential function G, they present algorithms which keep G(R_t) from growing too quickly. This result is similar in flavor to our Thm. 4, and both Thm. 4 and the results of Cesa-Bianchi and Lugosi are based on Blackwell-like conditions. In fact, our Thm.
4 can be thought of as the first generalization of well-known online learning results such as Cesa-Bianchi and Lugosi's to online convex programming. The main differences between the Cesa-Bianchi–Lugosi results and ours are the restrictions on their potential functions. They write their potential function as G(u) = f(Φ(u)); they require Φ to be additive (that is, Φ(u) = Σ_i φ_i(u_i) for one-dimensional functions φ_i), nonnegative, and twice differentiable, and they require f : ℝ₊ → ℝ₊ to be increasing, concave, and twice differentiable. These restrictions rule out many of the potential functions used here, and in fact they rule out most online convex programming problems. The most restrictive requirement is additivity; for example, when defining potentials for OCPs via Eq. (7) below, unless the set Ȳ can be factored as Ȳ_1 × Ȳ_2 × · · · × Ȳ_N the potentials are generally not expressible as f(Φ(u)) for additive Φ. During the preparation of this manuscript, we became aware of the recent work of Shalev-Shwartz and Singer [9]. This work generalizes some of the theorems in [6] and provides a very simple and elegant proof technique for algorithms based on convex potential functions. However, it does not consider the problem of defining appropriate potential functions for the feasible regions of OCPs (as discussed in Sec. 5 below and in more detail in [6]); finding such functions is an important requirement for applying potential-based algorithms to OCPs. In addition to the general papers above, there are many no-regret algorithms for specific OCPs, such as predicting as well as the best pruning of a decision tree [10], reorganizing a binary search tree so that frequent items are near the root [4], and picking paths in a graph with unknown edge costs [11].

Figure 1: A set Y = {y_1 + y_2 = 1, y ≥ 0} (thick dark line) and its safe set S (light shaded region).

s_1 ← 0
for t ← 1, 2, . . .
  ȳ_t ← f(s_t)    (*)
  if ȳ_t · u > 0 then y_t ← ȳ_t/(ȳ_t · u) else y_t ← arbitrary element of Y fi
  Observe c_t, compute s_{t+1} from (1)
end

Figure 2: The gradient form of the Lagrangian Hedging algorithm.

3 Regret Vectors

Lagrangian Hedging algorithms maintain their state in a regret vector, s_t, defined by the recursion

s_{t+1} = s_t + (y_t · c_t)u − c_t    (1)

with the base case s_1 = 0. Here u is an arbitrary vector which satisfies y · u = 1 for all y ∈ Y. (If necessary we can append a constant element to each y so that such a u exists.) The regret vector contains information about our actual losses and the gradients of our loss functions: from s_t we can find our regret versus any y as follows. (This property justifies the name "regret vector.")

y · s_t = Σ_{i=1}^{t−1} (y_i · c_i)(y · u) − Σ_{i=1}^{t−1} y · c_i = L_t − Σ_{i=1}^{t−1} y · c_i = ρ_t(y)

We can define a safe set, in which our regret is guaranteed to be nonpositive:

S = {s | (∀y ∈ Y) y · s ≤ 0}    (2)

The goal of the Lagrangian Hedging algorithm is to keep its regret vector s_t near the safe set S. S is a convex cone: it is closed under positive linear combinations of its elements. And, it is polar [12] to the cone of unnormalized hypotheses:

S^⊥ = Ȳ ≡ {λy | y ∈ Y, λ ≥ 0}    (3)

4 The Main Algorithm

We will present the general LH algorithm first, then (in Sec. 5) a specialization which is often easier to implement. The two versions are called the gradient form and the optimization form. The gradient form is shown in Fig. 2. At each step it chooses its play based on the current regret vector s_t (Eq. (1)) and a closed convex potential function F(s) : ℝ^d → ℝ with subgradient f(s) : ℝ^d → ℝ^d. This potential function is what distinguishes one instance of the LH algorithm from another. F(s) should be small when s is in the safe set, and large when s is far from the safe set. For example, suppose that Y is the probability simplex in ℝ^d, so that S is the negative orthant in ℝ^d.
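For this simplex case (where we may take u = (1, . . . , 1)), the gradient form of Fig. 2 can be sketched directly. This is an illustrative implementation of ours, not code from the paper; we plug in two subgradients, a softmax map (which yields Hedge/weighted majority) and the positive-part map f(s) = [s]_+ (which yields external-regret matching), both discussed below.

```python
import numpy as np

def lh_gradient_form(f, cost_fn, d, T):
    """Sketch of the gradient form of the LH algorithm (Fig. 2) for the
    simplex case, where we may take u = (1, ..., 1)."""
    u = np.ones(d)
    s = np.zeros(d)                  # regret vector, s_1 = 0
    plays = []
    for t in range(T):
        ybar = f(s)                  # line (*): subgradient of the potential
        y = ybar / (ybar @ u) if ybar @ u > 0 else u / d  # else: arbitrary y in Y
        c = cost_fn(t)               # observe c_t
        s = s + (y @ c) * u - c      # regret-vector update, Eq. (1)
        plays.append(y)
    return np.array(plays)

def f_softmax(s, eta=0.5):           # gradient of a log-sum-exp potential (Hedge)
    w = np.exp(eta * (s - s.max()))
    return eta * w / w.sum()

def f_pospart(s):                    # gradient of a positive-part quadratic
    return np.maximum(s, 0.0)        # potential (external-regret matching)

cost = lambda t: np.array([0.0, 1.0, 1.0])   # action 0 is always cheapest
finals = [lh_gradient_form(f, cost, 3, 300)[-1] for f in (f_softmax, f_pospart)]
print(finals)   # both rules concentrate all mass on action 0
```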
(This choice of Y would be appropriate for playing a matrix game or predicting from expert advice.) For this Y, two possible potential functions are

F_1(s) = ln(Σ_i e^{η s_i}) − ln d    F_2(s) = Σ_i [s_i]_+^2 / 2

where η > 0 is a learning rate and [s]_+ = max(s, 0). The potential F_1 leads to the Hedge [5] and weighted majority [13] algorithms, while the potential F_2 results in external-regret matching [14, Theorem B]. For more examples of useful potential functions, see [6]. To ensure the LH algorithm chooses legal hypotheses y_t ∈ Y, we require the following (note the constant 0 is arbitrary; any other k would work as well)

F(s) ≤ 0  ∀s ∈ S    (4)

Theorem 1 The LH algorithm is well-defined: define S as in (2) and fix a finite convex potential function F. If F(s) ≤ 0 for all s ∈ S, then the LH algorithm picks hypotheses y_t ∈ Y for all t. (Omitted proofs are given in [6].)

We can also define a version of the LH algorithm with an adjustable learning rate: replacing F(s) with F(ηs) is equivalent to updating s_t with learning rate η. Adjustable learning rates will help us obtain regret bounds for some classes of potentials.

5 The Optimization Form

Even if we have a convenient representation of our hypothesis space Y, it may not be easy to work directly with the safe set S. In particular, it may be difficult to define, evaluate, and differentiate a potential function F which has the necessary properties. To avoid these difficulties, we can work with an alternate form of the LH algorithm. This form, called the optimization form, defines F in terms of a simpler function W which we will call the hedging function. It uses the same pseudocode as the gradient form (Fig. 2), but on each step it computes F and ∂F by solving an optimization problem involving W and the hypothesis set Y (Eq. (8) below).
For example, two possible hedging functions are

W_1(ȳ) = Σ_i ȳ_i ln ȳ_i + ln d  if ȳ ≥ 0 and Σ_i ȳ_i = 1, and W_1(ȳ) = ∞ otherwise    (5)

W_2(ȳ) = Σ_i ȳ_i^2 / 2    (6)

If Y is the probability simplex in ℝ^d, it will turn out that W_1(ȳ/η) and W_2(ȳ) correspond to the potentials F_1 and F_2 from Section 4 above. So, these hedging functions result in the weighted majority and external-regret matching algorithms. For an example where the hedging function is easy to write analytically but the potential function is much more complicated, see Sec. 8 or [6]. The optimization form of the LH algorithm using hedging function W is defined to be equivalent to the gradient form using

F(s) = sup_{ȳ∈Ȳ} (s · ȳ − W(ȳ))    (7)

Here Ȳ is defined as in (3).² To implement the LH algorithm using the F of Eq. (7), we need an efficient way to compute ∂F. As Thm. 2 below shows, there is always a ȳ which satisfies

ȳ ∈ arg max_{ȳ∈Ȳ} (s · ȳ − W(ȳ))    (8)

and any such ȳ is an element of ∂F. So, the optimization form of the LH algorithm uses the same pseudocode as the gradient form (Fig. 2), but uses Eq. (8) with s = s_t to compute ȳ_t in line (∗). To gain an intuition for Eqs. (7–8), consider the example of external-regret matching. Since Y is the unit simplex in ℝ^d, Ȳ is the positive orthant in ℝ^d. So, with W_2(ȳ) = ∥ȳ∥_2^2 / 2, the optimization problem (8) will be equivalent to

ȳ = arg min_{ȳ∈ℝ^d_+} (1/2)∥s − ȳ∥_2^2

That is, ȳ is the projection of s onto ℝ^d_+ by minimum Euclidean distance. It is not hard to verify that this projection replaces the negative elements of s with zeros, ȳ = [s]_+. Substituting this value for ȳ back into (7) and using the fact that s · [s]_+ = [s]_+ · [s]_+, the resulting potential function is

F_2(s) = s · [s]_+ − Σ_i [s_i]_+^2 / 2 = Σ_i [s_i]_+^2 / 2

as claimed above. This potential function is the standard one for analyzing external-regret matching.

Theorem 2 Let W be convex, dom W ∩ Ȳ be nonempty, and W(ȳ) ≥ 0 for all ȳ. Suppose the sets {ȳ | W(ȳ) + s · ȳ ≤ k} are compact for all s and k.
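The projection claim and the substitution back into Eq. (7) are easy to check numerically. The snippet below is a quick sanity check of ours, not part of the paper: it confirms that clipping at zero beats randomly sampled feasible points, and that the substituted value matches Σ_i [s_i]_+²/2.

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.normal(size=5)
ybar = np.maximum(s, 0.0)   # claimed solution of min_{ybar >= 0} ||s - ybar||^2 / 2

# the clipped point should beat any other feasible point in R^d_+
gap = min(np.sum((s - np.abs(rng.normal(size=5)))**2) - np.sum((s - ybar)**2)
          for _ in range(1000))

# substituting ybar = [s]_+ into Eq. (7): F2(s) = s.[s]_+ - ||[s]_+||^2 / 2
F = s @ ybar - np.sum(ybar**2) / 2.0
print(gap, F, np.sum(np.maximum(s, 0.0)**2) / 2.0)
```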
Define F as in (7). Then F is finite and F(s) ≤ 0 for all s ∈ S. And, the optimization form of the LH algorithm using the hedging function W is equivalent to the gradient form of the LH algorithm with potential function F.

²Eq. (7) is similar to the definition of the convex dual W*, but the supremum is over ȳ ∈ Ȳ instead of over all ȳ. As a result, F and W* can be very different functions. As discussed in [6], F can be expressed as the dual of a function related to W.

6 Theoretical Results

Our main theoretical results are regret bounds for the LH algorithm. The bounds depend on the curvature of our potential F, the size of the hypothesis set Y, and the possible slopes C of our loss functions. Intuitively, F must be neither too curved nor too flat on the scale of the updates to s_t from Eq. (1): if F is too curved then ∂F will change too quickly and our hypothesis y_t will jump around a lot, while if F is too flat then we will not react quickly enough to changes in regret. We will state our results for the gradient form of the LH algorithm. For the optimization form, essentially the same results hold, but the constants are defined in terms of the hedging function instead. Therefore, we never need to work with (or even be able to write down) the corresponding potential function. For more details, see [6]. One result which is slightly tricky to carry over is tuning learning rates. The choice of learning rate below and the resulting bound are the same as for the gradient form, but the implementation is slightly different: to set a learning rate η > 0, we replace W(ȳ) with W(ȳ/η). We will need upper and lower bounds on F. We will assume

F(s + Δ) ≤ F(s) + Δ · f(s) + C∥Δ∥²    (9)

for all regret vectors s and increments Δ, and

[F(s) + A]_+ ≥ inf_{s′∈S} B∥s − s′∥^p    (10)

for all s. Here ∥·∥ is an arbitrary finite norm, and A ≥ 0, B > 0, C > 0, and 1 ≤ p ≤ 2 are constants. Eq.
(9), together with the convexity of F, implies that F is differentiable and f is its gradient; the LH algorithm is applicable if F is not differentiable, but its regret bounds are weaker. We will bound the size of Y by assuming that

∥y∥◦ ≤ M    (11)

for all y in Y. Here, ∥·∥◦ is the dual of the norm used in Eq. (9) [12]. The size of our update to s_t (in Eq. (1)) depends on the hypothesis set Y, the cost vector set C, and the vector u. We have already bounded Y; rather than bounding C and u separately, we will assume that there is a constant D so that

E(∥s_{t+1} − s_t∥² | s_t) ≤ D    (12)

Here the expectation is taken with respect to our choice of hypothesis, so the inequality must hold for all possible values of c_t. (The expectation is only necessary if we randomize our choice of hypothesis, as would happen if Y is the convex hull of some non-convex set. If interior points of Y are valid plays, we need not randomize, so we can drop the expectation in (12) and below.) Our theorem then bounds our regret in terms of the above constants. Since the bounds are sublinear in t, they show that Lagrangian Hedging is a no-regret algorithm when we choose an appropriate potential F.

Theorem 3 Suppose the potential function F is convex and satisfies Eqs. (4), (9) and (10). Suppose that the problem definition is bounded according to (11) and (12). Then the LH algorithm (Fig. 2) achieves expected regret

E(ρ_{t+1}(y)) ≤ M((tCD + A)/B)^{1/p} = O(t^{1/p})

versus any hypothesis y ∈ Y. If p = 1 the above bound is O(t). But, suppose that we know ahead of time the number of trials t we will see. Define G(s) = F(ηs), where

η = √(A/(tCD))

Then the LH algorithm with potential G achieves regret

E(ρ_{t+1}(y)) ≤ (2M/B)√(tACD) = O(√t)

for any hypothesis y ∈ Y. The full proof of Thm. 3 appears in [6]; here, we sketch the proof of one of the most important intermediate results. Thm. 4 shows that, if we can guarantee E((s_{t+1} − s_t) · ∂F(s_t)) ≤ 0, then F(s_t) cannot grow too quickly.
This result is analogous to Blackwell's approachability theorem: since the level sets of F are related to S, we will be able to show s_t/t → S, implying no regret.

Theorem 4 (Gradient descent) Let F(s) and f(s) satisfy Equation (9) with seminorm ∥·∥ and constant C. Let x_0, x_1, . . . be a sequence of random vectors. Write s_t = Σ_{i=0}^{t−1} x_i, and let D be a constant so that E(∥x_t∥² | s_t) ≤ D. Suppose that, for all t, E(x_t · f(s_t) | s_t) ≤ 0. Then for all t,

E(F(s_{t+1}) | s_1) − F(s_1) ≤ tCD

PROOF: The proof is by induction: for t ≥ 2, assume E(F(s_t) | s_1) ≤ F(s_1) + (t − 1)CD. (It is obvious that the base case holds for t = 1.) Then:

F(s_{t+1}) = F(s_t + x_t) ≤ F(s_t) + x_t · f(s_t) + C∥x_t∥²
E(F(s_{t+1}) | s_t) ≤ F(s_t) + CD
E(F(s_{t+1}) | s_1) ≤ E(F(s_t) | s_1) + CD
E(F(s_{t+1}) | s_1) ≤ F(s_1) + (t − 1)CD + CD

which is the desired result. □

7 Examples

The classical applications of no-regret algorithms are learning from expert advice and learning to play a repeated matrix game. These two tasks are essentially equivalent, since they both use the probability simplex Y = {y | y ≥ 0, Σ_i y_i = 1} for their hypothesis set. This choice of Y simplifies the required algorithms greatly; with appropriate choices of potential functions, it can be shown that standard no-regret algorithms such as Freund and Schapire's Hedge [5], Littlestone and Warmuth's weighted majority [13], and Hart and Mas-Colell's external-regret matching [14, Theorem B] are all special cases of the LH algorithm. A large variety of other online prediction problems can also be cast in our framework. These problems include path planning when costs are chosen by an adversary [11], planning in a Markov decision process when costs are chosen by an adversary [15], online pruning of a decision tree [16], and online balancing of a binary search tree [4]. More uses of online convex programming are given in [1, 3, 4].
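The O(√t) scaling of Theorem 3 is easy to observe empirically for external-regret matching (potential F_2, for which one can check that Eq. (9) holds with C = 1/2 under the Euclidean norm). The sketch below is our own illustration rather than the paper's code: it runs the rule on random cost vectors in [0, 1]² and compares the realized regret to a crude 2√T reference curve.

```python
import numpy as np

def regret_matching_regret(costs):
    """External-regret matching on the simplex (potential F2); returns the
    regret versus the best fixed action in hindsight."""
    T, d = costs.shape
    s = np.zeros(d)
    loss = 0.0
    for c in costs:
        p = np.maximum(s, 0.0)
        y = p / p.sum() if p.sum() > 0 else np.ones(d) / d
        loss += y @ c
        s += (y @ c) - c        # regret-vector update with u = (1, ..., 1)
    return loss - costs.sum(axis=0).min()

rng = np.random.default_rng(0)
results = [(T, regret_matching_regret(rng.uniform(0.0, 1.0, size=(T, 2))))
           for T in (400, 6400)]
for T, r in results:
    print(T, r, 2.0 * np.sqrt(T))   # regret stays under the sqrt(T)-scale curve
```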
In each case the bounds for the LH algorithm will be polynomial or better in the dimensionality of the appropriate hypothesis set and sublinear in the number of trials. 8 Experiments To demonstrate that our theoretical bounds translate to good practical performance, we implemented the LH algorithm with the potential function W2 from (6) and used it to learn policies for the game of one-card poker. (The hypothesis space for this learning problem is the set of sequence weight vectors, which is convex because one-card poker is an extensive-form game [17].) In one-card poker, two players (called the gambler and the dealer) each ante $1 and receive one card from a 13-card deck. The gambler bets first, adding either $0 or $1 to the pot. Then the dealer gets a chance to bet, again either $0 or $1. Finally, if the gambler bet $0 and the dealer bet $1, the gambler gets a second chance to bring her bet up to $1. If either player bets $0 when the other has already bet $1, that player folds and loses her ante. If neither player folds, the higher card wins the pot, resulting in a net gain of either $1 or $2 (equal to the other player’s ante plus the bet of $0 or $1). In contrast to the usual practice in poker we assume that the payoff vector ct is observable after each hand; the partially-observable extension is beyond the scope of this paper. One-card poker is a simple game; nonetheless it has many of the elements of more complicated games, including incomplete information, chance events, and multiple stages. And, optimal play requires behaviors like randomization and bluffing. 
The biggest strategic difference between one-card poker and larger variants such as draw, stud, or hold-em is the idea of hand potential: while 45679 and 24679 are almost equally strong hands in a showdown (they are both 9-high), holding 45679 early in the game is much more valuable because replacing the 9 with either a 3 or an 8 turns it into a straight.

Figure 3: Performance in self-play (left) and against a fixed opponent (right). Each panel plots the gambler bound, the dealer bound, the average payoff, and the minimax value.

Fig. 3 shows the results of two typical runs: in both panels the dealer is using our no-regret algorithm. In the left panel the gambler is also using our no-regret algorithm, while in the right panel the gambler is playing a fixed policy. The x-axis shows number of hands played; the y-axis shows the average payoff per hand from the dealer to the gambler. The value of the game, −$0.064, is indicated with a dotted line. The middle solid curve shows the actual performance of the dealer (who is trying to minimize the payoff). The upper curve measures the progress of the dealer's learning: after every fifth hand we extracted a strategy y_t^avg by taking the average of our algorithm's predictions so far. We then plotted the worst-case value of y_t^avg. That is, we plotted the payoff for playing y_t^avg against an opponent which knows y_t^avg and is optimized to maximize the dealer's losses. Similarly, the lower curve measures the progress of the gambler's learning. In the right panel, the dealer quickly learns to win against the non-adaptive gambler. The dealer never plays a minimax strategy, as shown by the fact that the upper curve does not approach the value of the game. Instead, she plays to take advantage of the gambler's weaknesses.
In the left panel, the gambler adapts and forces the dealer to play more conservatively; in this case, the limiting strategies for both players are minimax. The curves in the left panel of Fig. 3 show an interesting effect: the small, damping oscillations result from the dealer and the gambler “chasing” each other around a minimax strategy. One player will learn to exploit a weakness in the other, but in doing so will open up a weakness in her own play; then the second player will adapt to try to take advantage of the first, and the cycle will repeat. Each weakness will be smaller than the last, so the sequence of strategies will converge to a minimax equilibrium. This cycling behavior is a common phenomenon when two learning players play against each other. Many learning algorithms will cycle so strongly that they fail to achieve the value of the game, but our regret bounds eliminate this possibility. 9 Discussion We have presented the Lagrangian Hedging algorithms, a family of no-regret algorithms for OCP based on general potential functions. We have proved regret bounds for LH algorithms and demonstrated experimentally that these bounds lead to good predictive performance in practice. The regret bounds for LH algorithms have low-order dependences on d, the number of dimensions in the hypothesis set Y. This low-order dependence means that the LH algorithms can learn well in prediction problems with complicated hypothesis sets; these problems would otherwise require an impractical amount of training data and computation time. Our work builds on previous work in online learning and online convex programming. Our contributions include a new, deterministic algorithm; a simple, general proof; the ability to build algorithms from a more general class of potential functions; and a new way of building good potential functions from simpler hedging functions, which allows us to construct potential functions for arbitrary convex hypothesis sets. 
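One-card poker is too large to reproduce here, but the same cycling-and-converging behavior appears whenever two external-regret-matching learners face each other in a zero-sum game. The sketch below is our illustration, not the paper's experiment: self-play on rock-paper-scissors, where the average strategies approach the minimax strategy (1/3, 1/3, 1/3).

```python
import numpy as np

def rps_selfplay(T=50000):
    """Two external-regret-matching learners in self-play on rock-paper-
    scissors (row player minimizes p1.A.p2); returns the average strategies."""
    A = np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])
    s1 = np.array([1.0, 0.0, 0.0])   # small nudge toward rock to break symmetry
    s2 = np.zeros(3)
    avg1, avg2 = np.zeros(3), np.zeros(3)
    for _ in range(T):
        q1, q2 = np.maximum(s1, 0.0), np.maximum(s2, 0.0)
        p1 = q1 / q1.sum() if q1.sum() > 0 else np.ones(3) / 3
        p2 = q2 / q2.sum() if q2.sum() > 0 else np.ones(3) / 3
        c1 = A @ p2               # row player's expected cost per action
        c2 = -A.T @ p1            # column player's expected cost per action
        s1 += (p1 @ c1) - c1      # regret-vector updates, Eq. (1) with u = 1
        s2 += (p2 @ c2) - c2
        avg1 += p1
        avg2 += p2
    return avg1 / T, avg2 / T

p, q = rps_selfplay()
print(p, q)   # both averages approach (1/3, 1/3, 1/3)
```

The instantaneous strategies chase each other around the equilibrium, exactly the damped cycling described above, while the time averages settle down because each player's regret grows only sublinearly.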
Future work includes a no-internal-regret version of the LH algorithm, as well as a bandit-style version. The former will guarantee convergence to a correlated equilibrium in nonzero-sum games, while the latter will allow us to work from incomplete observations of the cost vector (e.g., as might happen in an extensive-form game such as poker).

Acknowledgments

Thanks to Amy Greenwald, Martin Zinkevich, and Sebastian Thrun, as well as Yoav Shoham and his research group. This work was supported by NSF grant EF-0331657 and DARPA contracts F30602-01-C-0219, NBCH-1020014, and HR0011-06-0023. The opinions and conclusions are the author's and do not reflect those of the US government or its agencies.

References

[1] Geoffrey J. Gordon. Approximate Solutions to Markov Decision Processes. PhD thesis, Carnegie Mellon University, 1999.
[2] James F. Hannan. Approximation to Bayes risk in repeated play. In M. Dresher, A. Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, volume 3, pages 97–139. Princeton University Press, 1957.
[3] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning. AAAI Press, 2003.
[4] Adam Kalai and Santosh Vempala. Geometric algorithms for online optimization. Technical Report MIT-LCS-TR-861, Massachusetts Institute of Technology, 2002.
[5] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT 95, pages 23–37. Springer-Verlag, 1995.
[6] Geoffrey J. Gordon. No-regret algorithms for structured prediction problems. Technical Report CMU-CALD-05-112, Carnegie Mellon University, 2005.
[7] Nicolò Cesa-Bianchi and Gábor Lugosi. Potential-based algorithms in on-line prediction and game theory. Machine Learning, 51:239–261, 2003.
[8] David Blackwell. An analogue of the minimax theorem for vector payoffs. Pacific Journal of Mathematics, 6(1):1–8, 1956.
[9] Shai Shalev-Shwartz and Yoram Singer. Convex repeated games and Fenchel duality. In B. Schölkopf, J. C. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems, volume 19, Cambridge, MA, 2007. MIT Press.
[10] David P. Helmbold and Robert E. Schapire. Predicting nearly as well as the best pruning of a decision tree. In Proceedings of COLT, pages 61–68, 1995.
[11] Eiji Takimoto and Manfred Warmuth. Path kernels and multiplicative updates. In COLT, 2002.
[12] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, New Jersey, 1970.
[13] Nick Littlestone and Manfred Warmuth. The weighted majority algorithm. Technical Report UCSC-CRL-91-28, University of California Santa Cruz, 1992.
[14] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, 2000.
[15] H. Brendan McMahan, Geoffrey J. Gordon, and Avrim Blum. Planning in the presence of cost functions controlled by an adversary. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[16] David P. Helmbold and Robert E. Schapire. Predicting nearly as well as the best pruning of a decision tree. In COLT, 1995.
[17] D. Koller, N. Megiddo, and B. von Stengel. Efficient computation of equilibria for extensive two-person games. Games and Economic Behavior, 14(2), 1996.
A PAC-Bayes Risk Bound for General Loss Functions

Pascal Germain, Département IFT-GLO, Université Laval, Québec, Canada. Pascal.Germain.1@ulaval.ca
Alexandre Lacasse, Département IFT-GLO, Université Laval, Québec, Canada. Alexandre.Lacasse@ift.ulaval.ca
François Laviolette, Département IFT-GLO, Université Laval, Québec, Canada. Francois.Laviolette@ift.ulaval.ca
Mario Marchand, Département IFT-GLO, Université Laval, Québec, Canada. Mario.Marchand@ift.ulaval.ca

Abstract

We provide a PAC-Bayesian bound for the expected loss of convex combinations of classifiers under a wide class of loss functions (which includes the exponential loss and the logistic loss). Our numerical experiments with Adaboost indicate that the proposed upper bound, computed on the training set, behaves very similarly to the true loss estimated on the testing set.

1 Introduction

The PAC-Bayes approach [1, 2, 3, 4, 5] has been very effective at providing tight risk bounds for large-margin classifiers such as the SVM [4, 6]. Within this approach, we consider a prior distribution P over a space of classifiers that characterizes our prior belief about good classifiers (before the observation of the data) and a posterior distribution Q (over the same space of classifiers) that takes into account the additional information provided by the training data. A remarkable result that came out from this line of research, known as the "PAC-Bayes theorem", provides a tight upper bound on the risk of a stochastic classifier (defined on the posterior Q) called the Gibbs classifier. In the context of binary classification, the Q-weighted majority vote classifier (related to this stochastic classifier) labels any input instance with the label output by the stochastic classifier with probability more than half.
Since at least half of the Q measure of the classifiers err on an example incorrectly classified by the majority vote, it follows that the error rate of the majority vote is at most twice the error rate of the Gibbs classifier. Therefore, given enough training data, the PAC-Bayes theorem will give a small risk bound on the majority vote classifier only when the risk of the Gibbs classifier is small. While the Gibbs classifiers related to the large-margin SVM classifiers indeed have a low risk [6, 4], this is clearly not the case for the majority vote classifiers produced by bagging [7] and boosting [8], where the risk of the associated Gibbs classifier is normally close to 1/2. Consequently, the PAC-Bayes theorem is currently not able to recognize the predictive power of the majority vote in these circumstances. In an attempt to progress towards a theory giving small risk bounds for low-risk majority votes having a large risk for the associated Gibbs classifier, we provide here a risk bound for convex combinations of classifiers under quite arbitrary loss functions, including those normally used for boosting (like the exponential loss) and those that can give a tighter upper bound to the zero-one loss of weighted majority vote classifiers (like the sigmoid loss). Our numerical experiments with Adaboost [8] indicate that the proposed upper bound for the exponential loss and the sigmoid loss, computed on the training set, behaves very similarly to the true loss estimated on the testing set.

2 Basic Definitions and Motivation

We consider binary classification problems where the input space X consists of an arbitrary subset of ℝⁿ and the output space Y = {−1, +1}. An example is an input-output (x, y) pair where x ∈ X and y ∈ Y. Throughout the paper, we adopt the PAC setting where each example (x, y) is drawn according to a fixed, but unknown, probability distribution D on X × Y.
We consider learning algorithms that work in a fixed hypothesis space H of binary classifiers and produce a convex combination f_Q of binary classifiers taken from H. Each binary classifier h ∈ H contributes to f_Q with a weight Q(h) ≥ 0. For any input example x ∈ X, the real-valued output f_Q(x) is given by

f_Q(x) = Σ_{h∈H} Q(h) h(x),

where h(x) ∈ {−1, +1}, f_Q(x) ∈ [−1, +1], and Σ_{h∈H} Q(h) = 1. Consequently, Q(h) will be called the posterior distribution¹. Since f_Q(x) is also the expected class label returned by a binary classifier randomly chosen according to Q, the margin y f_Q(x) of f_Q on example (x, y) is related to the fraction W_Q(x, y) of binary classifiers that err on (x, y) under measure Q as follows. Let I(a) = 1 when predicate a is true and I(a) = 0 otherwise. We then have:

W_Q(x, y) − 1/2 = E_{h∼Q}[I(h(x) ≠ y) − 1/2] = E_{h∼Q}[−y h(x)/2] = −(1/2) Σ_{h∈H} Q(h) y h(x) = −(1/2) y f_Q(x).

Since E_{(x,y)∼D} W_Q(x, y) is the Gibbs error rate (by definition), we see that the expected margin is just one minus twice the Gibbs error rate. In contrast, the error for the Q-weighted majority vote is given by

E_{(x,y)∼D} I(W_Q(x, y) > 1/2) = E_{(x,y)∼D} lim_{β→∞} [(1/2) tanh(β [2W_Q(x, y) − 1]) + 1/2]
  ≤ E_{(x,y)∼D} [tanh(β [2W_Q(x, y) − 1]) + 1]    (∀β > 0)
  ≤ E_{(x,y)∼D} exp(β [2W_Q(x, y) − 1])    (∀β > 0).

Hence, for large enough β, the sigmoid loss (or tanh loss) of f_Q should be very close to the error rate of the Q-weighted majority vote. Moreover, the error rate of the majority vote is always upper bounded by twice that sigmoid loss for any β > 0. The sigmoid loss is, in turn, upper bounded by the exponential loss (which is used, for example, in Adaboost [9]). More generally, we will provide tight risk bounds for any loss function that can be expanded by a Taylor series around W_Q(x, y) = 1/2.
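These identities and bounds are straightforward to verify numerically. The snippet below is our own sanity check with randomly generated voters (not part of the paper); it confirms that W_Q(x, y) − 1/2 = −y f_Q(x)/2 and that the majority-vote error never exceeds the exponential-loss bound.

```python
import numpy as np

rng = np.random.default_rng(3)
n_voters, n_examples = 25, 200
H = rng.choice([-1, 1], size=(n_voters, n_examples))  # h(x) for random voters
Q = rng.dirichlet(np.ones(n_voters))                  # a random posterior Q
y = rng.choice([-1, 1], size=n_examples)              # true labels

fQ = Q @ H                       # f_Q(x): Q-weighted combination of votes
WQ = Q @ (H != y)                # W_Q(x, y): Q-fraction of erring voters

mv_error = np.mean(WQ > 0.5)                          # majority-vote error rate
beta = 2.0
exp_bound = np.mean(np.exp(beta * (2.0 * WQ - 1.0)))  # exponential-loss bound
print(mv_error, exp_bound)       # the error never exceeds the bound
```

The exponential bound holds pointwise, since I(W > 1/2) ≤ exp(β(2W − 1)) for every β > 0, so it also holds in the mean.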
Hence we consider any loss function ζ_Q(x, y) that can be written as

ζ_Q(x, y) def= 1/2 + (1/2) Σ_{k=1}^∞ g(k) (2W_Q(x, y) − 1)^k    (1)
           = 1/2 + (1/2) Σ_{k=1}^∞ g(k) (E_{h∼Q} −y h(x))^k,    (2)

and our task is to provide tight bounds for the expected loss ζ_Q that depend on the empirical loss ζ̂_Q measured on a training sequence S = ⟨(x_1, y_1), . . . , (x_m, y_m)⟩ of m examples, where

ζ_Q def= E_{(x,y)∼D} ζ_Q(x, y);    ζ̂_Q def= (1/m) Σ_{i=1}^m ζ_Q(x_i, y_i).    (3)

Note that by upper bounding ζ_Q, we are taking into account all moments of W_Q. In contrast, the PAC-Bayes theorem [2, 3, 4, 5] currently only upper bounds the first moment E_{(x,y)∼D} W_Q(x, y).

¹When H is a continuous set, Q(h) denotes a density and the summations over h are replaced by integrals.

3 A PAC-Bayes Risk Bound for Convex Combinations of Classifiers

The PAC-Bayes theorem [2, 3, 4, 5] is a statement about the expected zero-one loss of a Gibbs classifier. Given any distribution over a space of classifiers, the Gibbs classifier labels any example x ∈ X according to a classifier randomly drawn from that distribution. Hence, to obtain a PAC-Bayesian bound for the expected general loss ζ_Q of a convex combination of classifiers, let us relate ζ_Q to the zero-one loss of a Gibbs classifier. For this task, let us first write

E_{(x,y)∼D} (E_{h∼Q} −y h(x))^k = E_{h_1∼Q} E_{h_2∼Q} · · · E_{h_k∼Q} E_{(x,y)} (−y)^k h_1(x) h_2(x) · · · h_k(x).

Note that the product h_1(x) h_2(x) · · · h_k(x) defines another binary classifier that we denote as h_{1−k}(x). We now define the error rate R(h_{1−k}) of h_{1−k} as

R(h_{1−k}) def= E_{(x,y)∼D} I((−y)^k h_{1−k}(x) = sgn(g(k)))    (4)
            = 1/2 + (1/2) sgn(g(k)) E_{(x,y)∼D} (−y)^k h_{1−k}(x),

where sgn(g) = +1 if g > 0 and −1 otherwise. If we now use E_{h_{1−k}∼Q^k} to denote E_{h_1∼Q} E_{h_2∼Q} · · · E_{h_k∼Q}, Equation 2 now becomes

ζ_Q = 1/2 + (1/2) Σ_{k=1}^∞ g(k) E_{(x,y)∼D} (E_{h∼Q} −y h(x))^k
    = 1/2 + (1/2) Σ_{k=1}^∞ |g(k)| · sgn(g(k)) E_{h_{1−k}∼Q^k} E_{(x,y)∼D} (−y)^k h_{1−k}(x)
    = 1/2 + Σ_{k=1}^∞ |g(k)| E_{h_{1−k}∼Q^k} (R(h_{1−k}) − 1/2).
Apart from constant factors, Equation 5 relates $\overline{\zeta}_Q$ to the zero-one loss of a new type of Gibbs classifier. Indeed, if we define
$$c \;\stackrel{\text{def}}{=}\; \sum_{k=1}^{\infty} |g(k)|, \qquad (6)$$
Equation 5 can be rewritten as
$$\frac{1}{c}\left(\overline{\zeta}_Q - \frac{1}{2}\right) + \frac{1}{2} \;=\; \frac{1}{c}\sum_{k=1}^{\infty} |g(k)|\, \mathbb{E}_{h_{1\text{-}k} \sim Q^k} R(h_{1\text{-}k}) \;\stackrel{\text{def}}{=}\; R(G_{\overline{Q}}). \qquad (7)$$
The new type of Gibbs classifier is denoted above by $G_{\overline{Q}}$, where $\overline{Q}$ is a distribution over the product classifiers $h_{1\text{-}k}$ with variable length $k$. More precisely, given an example $x$ to be labelled by $G_{\overline{Q}}$, we first choose at random a number $k \in \mathbb{N}^+$ according to the discrete probability distribution given by $|g(k)|/c$, and then we choose $h_{1\text{-}k}$ randomly according to $Q^k$ to classify $x$ with $h_{1\text{-}k}(x)$. The risk $R(G_{\overline{Q}})$ of this new Gibbs classifier is then given by Equation 7. We will present a tight PAC-Bayesian bound for $R(G_{\overline{Q}})$ which will automatically translate into a bound for $\overline{\zeta}_Q$ via Equation 7. This bound will depend on the empirical risk $R_S(G_{\overline{Q}})$, which relates to the empirical loss $\widehat{\zeta}_Q$ (measured on the training sequence $S$ of $m$ examples) through the equation
$$\frac{1}{c}\left(\widehat{\zeta}_Q - \frac{1}{2}\right) + \frac{1}{2} \;=\; \frac{1}{c}\sum_{k=1}^{\infty} |g(k)|\, \mathbb{E}_{h_{1\text{-}k} \sim Q^k} R_S(h_{1\text{-}k}) \;\stackrel{\text{def}}{=}\; R_S(G_{\overline{Q}}), \qquad (8)$$
where
$$R_S(h_{1\text{-}k}) \;\stackrel{\text{def}}{=}\; \frac{1}{m}\sum_{i=1}^{m} I\Bigl((-y_i)^k h_{1\text{-}k}(x_i) = \operatorname{sgn}(g(k))\Bigr).$$
Note that Equations 7 and 8 imply that
$$\overline{\zeta}_Q - \widehat{\zeta}_Q \;=\; c \cdot \Bigl[R(G_{\overline{Q}}) - R_S(G_{\overline{Q}})\Bigr].$$
Hence, any looseness in the bound for $R(G_{\overline{Q}})$ will be amplified by the scaling factor $c$ in the bound for $\overline{\zeta}_Q$. Therefore, within this approach, the bound for $\overline{\zeta}_Q$ can be tight only for small values of $c$. Note, however, that loss functions having a small value of $c$ are commonly used in practice. Indeed, learning algorithms for feed-forward neural networks, and other approaches that construct a real-valued function $f_Q(x) \in [-1,+1]$ from binary classification data, typically use a loss function of the form $|f_Q(x) - y|^r/2$, for $r \in \{1, 2\}$. In these cases we have
$$\frac{1}{2}\,|f_Q(x) - y|^r \;=\; \frac{1}{2}\left|\,\mathbb{E}_{h \sim Q}\, y h(x) - 1\right|^r \;=\; 2^{r-1}\,|W_Q(x,y)|^r,$$
which gives $c = 1$ for $r = 1$, and $c = 3$ for $r = 2$.
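The relation of Equation 7 can be verified numerically. For i.i.d. draws $h_1, \ldots, h_k \sim Q$, $\mathbb{E}\,\prod_j h_j(x) = f_Q(x)^k$, so the $Q^k$-averaged risk of the product classifiers reduces to moments of $2W_Q - 1$. The sketch below uses made-up $W_Q$ values and the coefficients $g(k) = \beta^k/k!$ of the exponential loss used later in the paper (all positive, so $\operatorname{sgn}(g(k)) = +1$); it is an illustration, not part of the paper:

```python
import math

# Toy "distribution": a few examples, each with its Gibbs error W_Q(x, y)
# under a fixed posterior Q (values are made up for illustration).
W = [0.1, 0.25, 0.4, 0.45]
beta = 0.5
K = 60  # series truncation; the terms decay like beta^k / k!

# Direct expected exponential loss: E[ 1/2 * exp(beta * (2 W - 1)) ]
zeta = sum(0.5 * math.exp(beta * (2 * w - 1)) for w in W) / len(W)

# Via the product classifiers (Equation 5):
# E_{h_{1-k} ~ Q^k} E (-y)^k h_{1-k}(x) = E[(-y f_Q)^k] = E[(2 W_Q - 1)^k].
g = [beta ** k / math.factorial(k) for k in range(1, K)]
moments = [sum((2 * w - 1) ** k for w in W) / len(W) for k in range(1, K)]
zeta_series = 0.5 + 0.5 * sum(gk * mk for gk, mk in zip(g, moments))

c = math.exp(beta) - 1                 # Equation 6 for the exponential loss
R_GQ = (zeta_series - 0.5) / c + 0.5   # Equation 7: risk of the Gibbs classifier
assert abs(zeta - zeta_series) < 1e-9
print(zeta, R_GQ)
```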
Given a set $\mathcal{H}$ of classifiers, a prior distribution $P$ on $\mathcal{H}$, and a training sequence $S$ of $m$ examples, the learner will output a posterior distribution $Q$ on $\mathcal{H}$ which, in turn, gives a convex combination $f_Q$ that suffers the expected loss $\overline{\zeta}_Q$. Although Equation 7 holds only for a distribution $\overline{Q}$ defined by the absolute values of the Taylor coefficients $g(k)$ and the product distribution $Q^k$, the PAC-Bayesian theorem will hold for any prior $\overline{P}$ and posterior $\overline{Q}$ defined on
$$\mathcal{H}^* \;\stackrel{\text{def}}{=}\; \bigcup_{k \in \mathbb{N}^+} \mathcal{H}^k, \qquad (9)$$
and for any zero-one valued loss function $\ell(h(x), y)$ defined $\forall h \in \mathcal{H}^*$ and $\forall (x,y) \in \mathcal{X} \times \mathcal{Y}$ (not just the one defined by Equation 4). This PAC-Bayesian theorem upper-bounds the value of $\mathrm{kl}\bigl(R_S(G_{\overline{Q}}) \,\big\|\, R(G_{\overline{Q}})\bigr)$, where
$$\mathrm{kl}(q \,\|\, p) \;\stackrel{\text{def}}{=}\; q \ln\frac{q}{p} + (1 - q)\ln\frac{1 - q}{1 - p}$$
denotes the Kullback-Leibler divergence between the Bernoulli distributions with probability of success $q$ and probability of success $p$. Note that an upper bound on $\mathrm{kl}\bigl(R_S(G_{\overline{Q}}) \,\big\|\, R(G_{\overline{Q}})\bigr)$ provides both an upper and a lower bound on $R(G_{\overline{Q}})$. The upper bound on $\mathrm{kl}\bigl(R_S(G_{\overline{Q}}) \,\big\|\, R(G_{\overline{Q}})\bigr)$ depends on the value of $\mathrm{KL}(\overline{Q} \,\|\, \overline{P})$, where
$$\mathrm{KL}(\overline{Q} \,\|\, \overline{P}) \;\stackrel{\text{def}}{=}\; \mathop{\mathbb{E}}_{h \sim \overline{Q}} \ln\frac{\overline{Q}(h)}{\overline{P}(h)}$$
denotes the Kullback-Leibler divergence between distributions $\overline{Q}$ and $\overline{P}$ defined on $\mathcal{H}^*$. In our case, since we want a bound on $R(G_{\overline{Q}})$ that translates into a bound for $\overline{\zeta}_Q$, we need a $\overline{Q}$ that satisfies Equation 7. To minimize the value of $\mathrm{KL}(\overline{Q} \,\|\, \overline{P})$, it is desirable to choose a prior $\overline{P}$ having properties similar to those of $\overline{Q}$. Namely, the probabilities assigned by $\overline{P}$ to the possible values of $k$ will also be given by $|g(k)|/c$. Moreover, we will restrict ourselves to the case where the $k$ classifiers from $\mathcal{H}$ are chosen independently, each according to the prior $P$ on $\mathcal{H}$ (however, other choices for $\overline{P}$ are clearly possible). In this case we have
$$\mathrm{KL}(\overline{Q} \,\|\, \overline{P}) \;=\; \frac{1}{c}\sum_{k=1}^{\infty} |g(k)|\, \mathbb{E}_{h_{1\text{-}k} \sim Q^k} \ln\frac{\frac{|g(k)|}{c} \cdot Q^k(h_{1\text{-}k})}{\frac{|g(k)|}{c} \cdot P^k(h_{1\text{-}k})} \;=\; \frac{1}{c}\sum_{k=1}^{\infty} |g(k)|\, \mathbb{E}_{h_1 \sim Q}\cdots\mathbb{E}_{h_k \sim Q} \sum_{i=1}^{k} \ln\frac{Q(h_i)}{P(h_i)}$$
$$=\; \frac{1}{c}\sum_{k=1}^{\infty} |g(k)| \cdot k\, \mathop{\mathbb{E}}_{h \sim Q} \ln\frac{Q(h)}{P(h)} \;=\; \overline{k} \cdot \mathrm{KL}(Q \,\|\, P), \qquad (10)$$
where
$$\overline{k} \;\stackrel{\text{def}}{=}\; \frac{1}{c}\sum_{k=1}^{\infty} |g(k)| \cdot k. \qquad (11)$$
We then have the following theorem.

Theorem 1 For any set $\mathcal{H}$ of binary classifiers, any prior distribution $\overline{P}$ on $\mathcal{H}^*$, and any $\delta \in (0, 1]$, we have
$$\Pr_{S \sim D^m}\left(\forall \overline{Q} \text{ on } \mathcal{H}^*: \ \mathrm{kl}\bigl(R_S(G_{\overline{Q}}) \,\big\|\, R(G_{\overline{Q}})\bigr) \le \frac{1}{m}\left[\mathrm{KL}(\overline{Q} \,\|\, \overline{P}) + \ln\frac{m+1}{\delta}\right]\right) \ge 1 - \delta.$$

Proof The proof directly follows from the fact that we can apply the PAC-Bayes theorem of [4] to priors and posteriors defined on the space $\mathcal{H}^*$ of binary classifiers with any zero-one valued loss function.

Note that Theorem 1 directly provides upper and lower bounds on $\overline{\zeta}_Q$ when we use Equations 7 and 8 to relate $R(G_{\overline{Q}})$ and $R_S(G_{\overline{Q}})$ to $\overline{\zeta}_Q$ and $\widehat{\zeta}_Q$, and when we use Equation 10 for $\mathrm{KL}(\overline{Q} \,\|\, \overline{P})$. Consequently, we have the following theorem.

Theorem 2 Consider any loss function $\zeta_Q(x,y)$ defined by Equation 1. Let $\overline{\zeta}_Q$ and $\widehat{\zeta}_Q$ be, respectively, the expected loss and its empirical estimate (on a sample of $m$ examples) as defined by Equation 3. Let $c$ and $\overline{k}$ be defined by Equations 6 and 11, respectively. Then for any set $\mathcal{H}$ of binary classifiers, any prior distribution $P$ on $\mathcal{H}$, and any $\delta \in (0, 1]$, we have
$$\Pr_{S \sim D^m}\left(\forall Q \text{ on } \mathcal{H}: \ \mathrm{kl}\left(\frac{1}{c}\left[\widehat{\zeta}_Q - \frac{1}{2}\right] + \frac{1}{2} \,\middle\|\, \frac{1}{c}\left[\overline{\zeta}_Q - \frac{1}{2}\right] + \frac{1}{2}\right) \le \frac{1}{m}\left[\overline{k} \cdot \mathrm{KL}(Q \,\|\, P) + \ln\frac{m+1}{\delta}\right]\right) \ge 1 - \delta.$$

4 Bound Behavior During Adaboost

We have decided to examine the behavior of the proposed bounds during Adaboost since this learning algorithm generally produces a weighted majority vote having a large Gibbs risk $\mathbb{E}_{(x,y)} W_Q(x,y)$ (i.e., a small expected margin) and a small $\mathrm{Var}_{(x,y)} W_Q(x,y)$ (i.e., a small variance of the margin). Indeed, recall that one of our main motivations was to find a tight risk bound for the majority vote precisely under these circumstances. We have used the "symmetric" version of Adaboost [10, 9] where, at each boosting round $t$, the weak learning algorithm produces a classifier $h_t$ with the smallest empirical error
$$\epsilon_t = \sum_{i=1}^{m} D_t(i)\, I[h_t(x_i) \ne y_i]$$
with respect to the boosting distribution $D_t(i)$ on the indices $i \in \{1, \ldots, m\}$ of the training examples.
After each boosting round $t$, this distribution is updated according to
$$D_{t+1}(i) = \frac{1}{Z_t}\, D_t(i)\, \exp(-y_i \alpha_t h_t(x_i)),$$
where $Z_t$ is the normalization constant required for $D_{t+1}$ to be a distribution, and where
$$\alpha_t = \frac{1}{2}\ln\left(\frac{1 - \epsilon_t}{\epsilon_t}\right).$$
Since our task is not to obtain the majority vote with the smallest possible risk but to investigate the tightness of the proposed bounds, we have used the standard "decision stumps" for the set $\mathcal{H}$ of classifiers that can be chosen by the weak learner. Each decision stump is a threshold classifier that depends on a single attribute: it outputs $+y$ when the tested attribute exceeds the threshold and predicts $-y$ otherwise, where $y \in \{-1, +1\}$. For each decision stump $h \in \mathcal{H}$, its boolean complement is also in $\mathcal{H}$. Hence, we have $2[k(i) - 1]$ possible decision stumps on an attribute $i$ having $k(i)$ possible discrete values. Hence, for data sets having $n$ attributes, we have exactly $|\mathcal{H}| = \sum_{i=1}^{n} 2[k(i) - 1]$ classifiers. Data sets having continuous-valued attributes have been discretized in our numerical experiments.

From Theorem 2 and Equation 10, the bound on $\overline{\zeta}_Q$ depends on $\mathrm{KL}(Q \,\|\, P)$. We have chosen a uniform prior $P(h) = 1/|\mathcal{H}|\ \forall h \in \mathcal{H}$. We therefore have
$$\mathrm{KL}(Q \,\|\, P) = \sum_{h \in \mathcal{H}} Q(h) \ln\frac{Q(h)}{P(h)} = \sum_{h \in \mathcal{H}} Q(h) \ln Q(h) + \ln|\mathcal{H}| \;\stackrel{\text{def}}{=}\; -H(Q) + \ln|\mathcal{H}|.$$
At boosting round $t$, Adaboost changes the distribution from $D_t$ to $D_{t+1}$ by putting more weight on the examples that are incorrectly classified by $h_t$. This strategy is supported by the proposed bound on $\overline{\zeta}_Q$ since it has the effect of increasing the entropy $H(Q)$ as a function of $t$. Indeed, apart from tiny fluctuations, the entropy was seen to be nondecreasing as a function of $t$ in all of our boosting experiments. We have focused our attention on two different loss functions: the exponential loss and the sigmoid loss.
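The symmetric Adaboost update above can be sketched in a few lines. The decision-stump weak learner and the tiny one-attribute data set are illustrative only, not the discretized UCI setup used in the experiments:

```python
import math

def stump(attr, thresh, polarity):
    """Threshold classifier on a single attribute: polarity * sign(x[attr] > thresh)."""
    return lambda x: polarity * (1 if x[attr] > thresh else -1)

def adaboost(X, y, stumps, T):
    m = len(X)
    D = [1.0 / m] * m            # boosting distribution D_t over examples
    alpha, chosen = [], []
    for _ in range(T):
        # Weak learner: stump with the smallest weighted error under D_t.
        errs = [sum(d for d, xi, yi in zip(D, X, y) if h(xi) != yi) for h in stumps]
        t = min(range(len(stumps)), key=lambda j: errs[j])
        eps = max(min(errs[t], 1 - 1e-12), 1e-12)   # clamp to avoid log(0)
        a = 0.5 * math.log((1 - eps) / eps)
        alpha.append(a)
        chosen.append(stumps[t])
        # Reweight: up-weight misclassified examples, then normalize by Z_t.
        D = [d * math.exp(-yi * a * stumps[t](xi)) for d, xi, yi in zip(D, X, y)]
        Z = sum(D)
        D = [d / Z for d in D]
    return chosen, alpha

# Tiny data set, separable by a single threshold.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [-1, -1, 1, 1]
H = [stump(0, th, p) for th in (0.5, 1.5, 2.5) for p in (+1, -1)]
hs, al = adaboost(X, y, H, T=5)
f = lambda x: sum(a * h(x) for a, h in zip(al, hs))   # the (unnormalized) vote
assert all((1 if f(xi) > 0 else -1) == yi for xi, yi in zip(X, y))
```

Normalizing `al` to sum to one turns the positive linear combination into the convex combination $f_Q$ analyzed by the bounds.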
4.1 Results for the Exponential Loss

The exponential loss $E_Q(x,y)$ is the obvious choice for boosting since the typical analysis [8, 10, 9] shows that the empirical estimate of the exponential loss is decreasing at each boosting round.² More precisely, we have chosen
$$E_Q(x,y) \;\stackrel{\text{def}}{=}\; \frac{1}{2}\exp\bigl(\beta\,[2 W_Q(x,y) - 1]\bigr). \qquad (12)$$
For this loss function, we have
$$c = e^{\beta} - 1\,; \qquad \overline{k} = \frac{\beta}{1 - e^{-\beta}}.$$
Since $c$ increases exponentially rapidly with $\beta$, so will the risk upper-bound for $\overline{E}_Q$. Hence, unfortunately, we can obtain a tight upper-bound only for small values of $\beta$.

All the data sets used were obtained from the UCI repository. Each data set was randomly split into two halves of the same size: one for the training set and the other for the testing set. Figure 1 illustrates the typical behavior of the exponential loss bound on the Mushroom and Sonar data sets, containing 8124 examples and 208 examples respectively. We first note that, although the test error of the majority vote (generally) decreases as a function of the number $T$ of boosting rounds, the risk of the Gibbs classifier, $\mathbb{E}_{(x,y)} W_Q(x,y)$, increases as a function of $T$ but its variance $\mathrm{Var}_{(x,y)} W_Q(x,y)$ decreases dramatically. Another striking feature is the fact that the exponential loss bound curve, computed on the training set, is essentially parallel to the true exponential loss curve computed on the testing set. This same parallelism was observed for all the UCI data sets we have examined so far.³ Unfortunately, as we can see in Figure 2, the risk bound increases rapidly as a function of $\beta$. Interestingly, however, the risk bound curves remain parallel to the true risk curves.

4.2 Results for the Sigmoid Loss

We have also investigated the sigmoid loss $T_Q(x,y)$ defined by
$$T_Q(x,y) \;\stackrel{\text{def}}{=}\; \frac{1}{2} + \frac{1}{2}\tanh\bigl(\beta\,[2 W_Q(x,y) - 1]\bigr). \qquad (13)$$

²In fact, this is true only for the positive linear combination produced by Adaboost. The empirical exponential risk of the convex combination $f_Q$ is not always decreasing, as we shall see.
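The closed forms for $c$ and $\overline{k}$ (Equations 6 and 11 applied to the exponential loss above, and to the sigmoid loss of Equation 13, whose Taylor coefficients come from the $\tanh$ expansion) can be checked numerically by summing truncated series. This is a sanity-check sketch, not part of the paper's experiments; the tanh coefficients are generated from the ODE $t' = 1 - t^2$:

```python
import math

def tanh_coeffs(N):
    """Taylor coefficients of tanh(x) at 0, via the recursion from t' = 1 - t^2."""
    c = [0.0] * N
    for n in range(N - 1):
        conv = sum(c[i] * c[n - i] for i in range(n + 1))
        c[n + 1] = ((1.0 if n == 0 else 0.0) - conv) / (n + 1)
    return c

beta, K = 0.6, 60
a = tanh_coeffs(K)

# Exponential loss: g(k) = beta^k / k!  =>  c = e^beta - 1, kbar = beta/(1 - e^-beta)
g_exp = [beta ** k / math.factorial(k) for k in range(1, K)]
c_exp = sum(g_exp)
kbar_exp = sum(k * gk for k, gk in zip(range(1, K), g_exp)) / c_exp
assert abs(c_exp - (math.exp(beta) - 1)) < 1e-10
assert abs(kbar_exp - beta / (1 - math.exp(-beta))) < 1e-10

# Sigmoid loss: g(k) = a_k * beta^k, so |g(k)| sums to tan(beta)
# (the tan series has the absolute values of the tanh coefficients), and
# kbar = beta * sec^2(beta) / tan(beta) = beta / (sin(beta) cos(beta)).
g_sig = [abs(a[k]) * beta ** k for k in range(1, K)]
c_sig = sum(g_sig)
kbar_sig = sum(k * gk for k, gk in zip(range(1, K), g_sig)) / c_sig
assert abs(c_sig - math.tan(beta)) < 1e-10
assert abs(kbar_sig - beta / (math.sin(beta) * math.cos(beta))) < 1e-10
```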
³These include the following data sets: Wisconsin-breast, breast cancer, German credit, ionosphere, kr-vs-kp, USvotes, mushroom, and sonar.

Figure 1: Behavior of the exponential risk bound ($E_Q$ bound), the true exponential risk ($E_Q$ on test), the Gibbs risk ($\mathbb{E}(W_Q)$ on test), its variance ($\mathrm{Var}(W_Q)$ on test), and the test error of the majority vote (MV error on test) as a function of the boosting round $T$ for the Mushroom (left) and the Sonar (right) data sets. The risk bound and the true risk were computed for $\beta = \ln 2$.

Figure 2: Behavior of the true exponential risk (left) and the exponential risk bound (right) for different values of $\beta$ on the Mushroom data set.

Since the Taylor series expansion of $\tanh(x)$ about $x = 0$ converges only for $|x| < \pi/2$, we are limited to $\beta \le \pi/2$. Under these circumstances, we have
$$c = \tan(\beta)\,; \qquad \overline{k} = \frac{\beta}{\cos(\beta)\sin(\beta)}.$$
Similarly to Figure 1, we see in Figure 3 that the sigmoid loss bound curve, computed on the training set, is essentially parallel to the true sigmoid loss curve computed on the testing set. Moreover, the bound appears to be as tight as the one for the exponential risk in Figure 1.

5 Conclusion

By trying to obtain a tight PAC-Bayesian risk bound for the majority vote, we have obtained a PAC-Bayesian risk bound for any loss function $\zeta_Q$ that has a convergent Taylor expansion around $W_Q = 1/2$ (such as the exponential loss and the sigmoid loss).
Figure 3: Behavior of the sigmoid risk bound ($T_Q$ bound), the true sigmoid risk ($T_Q$ on test), the Gibbs risk ($\mathbb{E}(W_Q)$ on test), its variance ($\mathrm{Var}(W_Q)$ on test), and the test error of the majority vote (MV error on test) as a function of the boosting round $T$ for the Mushroom (left) and the Sonar (right) data sets. The risk bound and the true risk were computed for $\beta = \ln 2$.

Unfortunately, the proposed risk bound is tight only for small values of the scaling factor $c$ involved in the relation between the expected loss $\overline{\zeta}_Q$ of a convex combination of binary classifiers and the zero-one loss of the related Gibbs classifier $G_{\overline{Q}}$. However, it is quite encouraging to notice, in our numerical experiments with Adaboost, that the proposed loss bounds (for the exponential loss and the sigmoid loss) behave very similarly to the true losses.

Acknowledgments

Work supported by NSERC Discovery grants 262067 and 122405.

References

[1] David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37:355–363, 1999.

[2] Matthias Seeger. PAC-Bayesian generalization bounds for Gaussian processes. Journal of Machine Learning Research, 3:233–269, 2002.

[3] David McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51:5–21, 2003.

[4] John Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6:273–306, 2005.

[5] François Laviolette and Mario Marchand. PAC-Bayes risk bounds for sample-compressed Gibbs classifiers. Proceedings of the 22nd International Conference on Machine Learning (ICML 2005), pages 481–488, 2005.

[6] John Langford and John Shawe-Taylor. PAC-Bayes & margins. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 423–430.
MIT Press, Cambridge, MA, 2003.

[7] Leo Breiman. Bagging predictors. Machine Learning, 24:123–140, 1996.

[8] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119–139, 1997.

[9] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:297–336, 1999.

[10] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26:1651–1686, 1998.
2006
Distributed Inference in Dynamical Systems

Stanislav Funiak, Carlos Guestrin (Carnegie Mellon University), Mark Paskin (Google), Rahul Sukthankar (Intel Research)

Abstract

We present a robust distributed algorithm for approximate probabilistic inference in dynamical systems, such as sensor networks and teams of mobile robots. Using assumed density filtering, the network nodes maintain a tractable representation of the belief state in a distributed fashion. At each time step, the nodes coordinate to condition this distribution on the observations made throughout the network, and to advance this estimate to the next time step. In addition, we identify a significant challenge for probabilistic inference in dynamical systems: message losses or network partitions can cause nodes to have inconsistent beliefs about the current state of the system. We address this problem by developing distributed algorithms that guarantee that nodes will reach an informative consistent distribution when communication is re-established. We present a suite of experimental results on real-world sensor data for two real sensor network deployments: one with 25 cameras and another with 54 temperature sensors.

1 Introduction

Large-scale networks of sensing devices have become increasingly pervasive, with applications ranging from sensor networks and mobile robot teams to emergency response systems. Often, nodes in these networks need to perform probabilistic dynamic inference to combine a sequence of local, noisy observations into a global, joint estimate of the system state. For example, robots in a team may combine local laser range scans, collected over time, to obtain a global map of the environment; nodes in a camera network may combine a set of image sequences to recognize moving objects in a heavily cluttered scene. A simple approach to probabilistic dynamic inference is to collect the data at a central location, where the processing is performed.
Yet, collecting all the observations is often impractical in large networks, especially if the nodes have a limited supply of energy and communicate over a wireless network. Instead, the nodes need to collaborate to solve the inference task in a distributed manner. Such distributed inference techniques are also necessary in online control applications, where nodes of the network need estimates of the state in order to make decisions. Probabilistic dynamic inference can often be efficiently solved when all the processing is performed centrally. For example, in linear systems with Gaussian noise, the inference tasks can be solved in closed form with a Kalman filter [3]; for large systems, assumed density filtering can often be used to approximate the filtered estimate with a tractable distribution (cf. [2]). Unfortunately, distributed dynamic inference is substantially more challenging. Since the observations are distributed across the network, nodes must coordinate to incorporate each others' observations and propagate their estimates from one time step to the next. Online operation requires the algorithm to degrade gracefully when nodes run out of processing time before the observations propagate throughout the network. Furthermore, the algorithm needs to robustly address node failures and interference that may partition the communication network into several disconnected components. We present an efficient distributed algorithm for dynamic inference that works on a large family of processes modeled by dynamic Bayesian networks. In our algorithm, each node maintains a (possibly approximate) marginal distribution over a subset of state variables, conditioned on the measurements made by the nodes in the network. At each time step, the nodes condition on the observations, using a modification of the robust (static) distributed inference algorithm [7], and then advance their estimates to the next time step locally.
The algorithm guarantees that, with sufficient communication at each time step, the nodes obtain the same solution as the corresponding centralized algorithm [2]. Before convergence, the algorithm introduces principled approximations in the form of independence assertions in the node estimates and in the transition model. In the presence of unreliable communication or high latency, the nodes may not be able to condition their estimates on all the observations in the network, e.g., when interference causes a network partition, or when high latency prevents messages from reaching every node. Once the estimates are advanced to the next time step, it is difficult to condition on the observations made in the past [10]. Hence, the beliefs at the nodes may be conditioned on different evidence and no longer form a consistent global probability distribution over the state space. We show that such inconsistencies can lead to poor results when nodes attempt to combine their estimates. Nevertheless, it is often possible to use the inconsistent estimates to form an informative globally consistent distribution; we refer to this task as alignment. We propose an online algorithm, optimized conditional alignment (OCA), that obtains the global distribution as a product of conditionals from local estimates and optimizes over different orderings to select a global distribution of minimal entropy. We also propose an alternative, more global optimization approach that minimizes a KL divergence-based criterion and provides accurate solutions even when the communication network is highly fragmented. We present experimental results on real-world sensor data, covering sensor calibration [7] and distributed camera localization [5]. These results demonstrate the convergence properties of the algorithm, its robustness to message loss and network partitions, and the effectiveness of our method at recovering from inconsistencies. 
Distributed dynamic inference has received some attention in the literature. For example, particle filtering (PF) techniques have been applied to these settings: Zhao et al. [11] use (mostly) independent PFs to track moving objects, and Rosencrantz et al. [10] run PFs in parallel, sharing measurements as appropriate. Pfeffer and Tai [9] use loopy belief propagation to approximate the estimation step in a continuous-time Bayesian network. When compared to these techniques, our approach addresses several additional challenges: we do not assume point-to-point communication between nodes, we provide robustness guarantees under node failures and network partitions, and we identify and address the belief inconsistency problem that arises in distributed systems.

2 The distributed dynamic inference problem

Following [7], we assume a network model where each node can perform local computations and communicate with other nodes over some channel. The nodes of the network may change over time: existing nodes can fail, and new nodes may be introduced. We assume a message-level error model: messages are either received without error, or they are not received at all. The likelihoods of successful transmissions (link qualities) are unknown and can change over time, and the link qualities of several node pairs may be correlated. We model the system as a dynamic Bayesian network (DBN). A DBN consists of a set of state processes, $X = \{X_1, \ldots, X_L\}$, and a set of observed measurement processes, $Z = \{Z_1, \ldots, Z_K\}$; each measurement process $Z_k$ corresponds to one of the sensors on one of the nodes. State processes are not associated with unique nodes. A DBN defines a joint probability model over time steps $1 \ldots T$ as
$$p(X^{(1:T)}, Z^{(1:T)}) \;=\; \underbrace{p(X^{(1)})}_{\text{initial prior}} \;\times\; \underbrace{\prod_{t=2}^{T} p(X^{(t)} \mid X^{(t-1)})}_{\text{transition model}} \;\times\; \underbrace{\prod_{t=1}^{T} p(Z^{(t)} \mid X^{(t)})}_{\text{measurement model}}.$$
The initial prior is given by a factorized probability model $p(X^{(1)}) \propto \prod_h \psi(A_h^{(1)})$, where each $A_h \subseteq X$ is a subset of the state processes. The transition model factors as $\prod_{i=1}^{L} p(X_i^{(t)} \mid \mathrm{Pa}[X_i^{(t)}])$, where $\mathrm{Pa}[X_i^{(t)}]$ are the parents of $X_i^{(t)}$ in the previous time step. The measurement model factors as $\prod_{k=1}^{K} p(Z_k^{(t)} \mid \mathrm{Pa}[Z_k^{(t)}])$, where $\mathrm{Pa}[Z_k^{(t)}] \subseteq X^{(t)}$ are the parents of $Z_k^{(t)}$ in the current time step.

In the distributed dynamic inference problem, each node $n$ is associated with a set of processes $Q_n \subseteq X$; these are the processes about which node $n$ wishes to reason. The nodes need to collaborate so that each node can obtain (an approximation to) the posterior distribution over $Q_n^{(t)}$ given all measurements made in the network up to the current time step $t$: $p(Q_n^{(t)} \mid z^{(1:t)})$. We assume that node clocks are synchronized, so that transitions to the next time step are simultaneous.

3 Filtering in dynamical systems

The goal of (centralized) filtering is to compute the posterior distribution $p(X^{(t)} \mid z^{(1:t)})$ for $t = 1, 2, \ldots$ as the observations $z^{(1)}, z^{(2)}, \ldots$ arrive. The basic approach is to recursively compute $p(X^{(t+1)} \mid z^{(1:t)})$ from $p(X^{(t)} \mid z^{(1:t-1)})$ in three steps:

1. Estimation: $p(X^{(t)} \mid z^{(1:t)}) \propto p(X^{(t)} \mid z^{(1:t-1)}) \times p(z^{(t)} \mid X^{(t)})$;
2. Prediction: $p(X^{(t)}, X^{(t+1)} \mid z^{(1:t)}) = p(X^{(t)} \mid z^{(1:t)}) \times p(X^{(t+1)} \mid X^{(t)})$;
3. Roll-up: $p(X^{(t+1)} \mid z^{(1:t)}) = \int p(x^{(t)}, X^{(t+1)} \mid z^{(1:t)})\, dx^{(t)}$.

Exact filtering in DBNs is usually expensive or intractable because the belief state rapidly loses all conditional independence structure. An effective approach, proposed by Boyen and Koller [2], hereafter denoted "B&K98", is to periodically project the exact posterior to a distribution that satisfies independence assertions encoded in a junction tree [3].
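The three recursive steps above can be sketched for a small discrete-state system; this is an illustrative example, independent of the junction-tree machinery that follows:

```python
def filter_step(prior, trans, obs_lik):
    """One recursion of the three filtering steps, for a discrete chain.

    prior[i]    = p(X_t = i | z_{1:t-1})
    trans[i][j] = p(X_{t+1} = j | X_t = i)
    obs_lik[i]  = p(z_t | X_t = i)
    Returns p(X_{t+1} | z_{1:t}).
    """
    # 1. Estimation: condition on z_t and renormalize.
    post = [p * l for p, l in zip(prior, obs_lik)]
    Z = sum(post)
    post = [p / Z for p in post]
    # 2. Prediction + 3. Roll-up: multiply by the transition model and
    # marginalize out X_t in a single pass.
    n = len(trans[0])
    return [sum(post[i] * trans[i][j] for i in range(len(post))) for j in range(n)]

# Two-state example: a sticky transition model and a noisy sensor reading
# that favors state 0 (all numbers are made up).
prior = [0.5, 0.5]
trans = [[0.9, 0.1], [0.1, 0.9]]
obs_lik = [0.8, 0.2]
nxt = filter_step(prior, trans, obs_lik)
assert abs(sum(nxt) - 1.0) < 1e-12 and nxt[0] > nxt[1]
```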
Given a junction tree $\mathcal{T}$, with cliques $\{C_i\}$ and separators $\{S_{i,j}\}$, the projection operation amounts to computing the clique marginals, hence the filtered distribution is approximated as
$$p(X^{(t)} \mid z^{(1:t-1)}) \;\approx\; \tilde{p}(X^{(t)} \mid z^{(1:t-1)}) \;=\; \frac{\prod_{i \in N_{\mathcal{T}}} \tilde{p}(C_i^{(t)} \mid z^{(1:t-1)})}{\prod_{\{i,j\} \in E_{\mathcal{T}}} \tilde{p}(S_{i,j}^{(t)} \mid z^{(1:t-1)})}, \qquad (1)$$
where $N_{\mathcal{T}}$ and $E_{\mathcal{T}}$ are the nodes and edges of $\mathcal{T}$, respectively. With this representation, the estimation step is implemented by multiplying each observation likelihood $p(z_k^{(t)} \mid \mathrm{Pa}[Z_k^{(t)}])$ into a clique marginal; the clique and separator potentials are then recomputed with message passing, so that the posterior distribution is once again written as a ratio of clique and separator marginals:
$$\tilde{p}(X^{(t)} \mid z^{(1:t)}) \;=\; \left[\prod_{i \in N_{\mathcal{T}}} \tilde{p}(C_i^{(t)} \mid z^{(1:t)})\right] \Big/ \left[\prod_{\{i,j\} \in E_{\mathcal{T}}} \tilde{p}(S_{i,j}^{(t)} \mid z^{(1:t)})\right].$$
The prediction step is performed independently for each clique $C_i^{(t+1)}$: we multiply $\tilde{p}(X^{(t)} \mid z^{(1:t)})$ by the transition model $p(X^{(t+1)} \mid \mathrm{Pa}[X^{(t+1)}])$ for each variable $X^{(t+1)} \in C_i^{(t+1)}$ and, using variable elimination, compute the marginals over the clique at the next time step, $p(C_i^{(t+1)} \mid z^{(1:t)})$.

4 Approximate distributed filtering

In principle, the centralized filtering approach described in the previous section could be applied to a distributed system, e.g., by communicating the observations made in the network to a central location that performs all computations, and distributing the answer to every node in the network. While conceptually simple, this approach has substantial drawbacks, including the high communication bandwidth, the introduction of a single point of failure to the system, and the fact that nodes do not have valid estimates when the network is partitioned. In this section, we present a distributed filtering algorithm where each node obtains an approximation to the posterior distribution over a subset of the state variables. Our estimation step builds on the robust distributed inference algorithm of Paskin et al.
[7, 8], while the prediction, roll-up, and projection steps are performed locally at each node.

4.1 Estimation as robust distributed probabilistic inference

In the distributed inference approach of Paskin et al. [8], the nodes collaborate so that each node $n$ can obtain the posterior distribution over some set of variables $Q_n$ given all measurements made throughout the network. In our setting, $Q_n$ contains the variables in a subset $L_n$ of the cliques used in our assumed density representation. In their architecture, nodes form a distributed data structure along a routing tree in the network, where each node in this tree is associated with a cluster of variables $D_n$ that includes $Q_n$, as well as any other variables needed to preserve the flow of information between the nodes, a property equivalent to the running intersection property in junction trees [3]. We refer to this tree as the network junction tree and, for clarity, we refer to the junction tree used for the assumed density as the external junction tree. Using this architecture, Paskin and Guestrin developed a robust distributed probabilistic inference algorithm, RDPI [7], for static inference settings, in which nodes compute the posterior distribution $p(Q_n \mid z)$ over $Q_n$ given all measurements $z$ made throughout the network. RDPI provides two crucial properties: convergence (if there are no network partitions, the distributed estimates converge to the true posteriors) and smooth degradation (even before convergence, the estimates provide a principled approximation to the true posterior, which introduces additional independence assertions). In RDPI, each node $n$ maintains a current belief $\beta_n$ about $p(Q_n \mid z)$. Initially, node $n$ knows only the marginals of the prior distribution $\{p(C_i) : i \in L_n\}$ for a subset of cliques $L_n$ in the external junction tree, and its local observation model $p(z_n \mid \mathrm{Pa}[Z_n])$ for each of its sensors.
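The clique-marginal projection of Equation 1, which these distributed steps approximate, can be made concrete on a tiny discrete model. The three-variable joint below is made up; the junction tree is $\{A,B\}\text{–}\{B,C\}$ with separator $\{B\}$:

```python
from itertools import product

# An arbitrary joint over three binary variables (A, B, C).
p = {abc: w for abc, w in zip(product([0, 1], repeat=3),
                              [.10, .05, .15, .20, .05, .10, .20, .15])}

def marg(keep):
    """Marginal of p over the named subset of variables."""
    out = {}
    for (a, b, c), w in p.items():
        key = tuple(v for v, name in zip((a, b, c), 'ABC') if name in keep)
        out[key] = out.get(key, 0.0) + w
    return out

pAB, pBC, pB = marg('AB'), marg('BC'), marg('B')

# Projection onto the junction tree {A,B}-{B,C}: ratio of clique marginals
# over the separator marginal, as in Equation 1.
proj = {(a, b, c): pAB[(a, b)] * pBC[(b, c)] / pB[(b,)] for (a, b, c) in p}

# The projection preserves the clique marginals it was built from ...
pAB2 = {}
for (a, b, c), w in proj.items():
    pAB2[(a, b)] = pAB2.get((a, b), 0.0) + w
assert all(abs(pAB2[k] - pAB[k]) < 1e-12 for k in pAB)
# ... but enforces A independent of C given B, so it can differ from p.
print(max(abs(proj[k] - p[k]) for k in p))
```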
We assume that $\mathrm{Pa}[Z_n] \subseteq C_i$ for some $i \in L_n$; thus, $\beta_n$ is represented as a collection of priors over cliques of variables and of observation likelihood functions over these variables. Messages are then sent between neighboring nodes, in a fashion analogous to the sum-product algorithm for junction trees [3]. However, messages in RDPI are always represented as a collection of priors $\{\pi_i(C_i)\}$ over cliques of variables $C_i$, and of measurement likelihood functions $\{\lambda_i(C_i)\}$ over these cliques. This decomposition into prior and likelihood factors is the key to the robustness properties of the algorithm [7]. With sufficient communication, $\beta_n$ converges to $p(Q_n \mid z)$. In our setting, at each time step $t$, each prior $\pi_i(C_i^{(t)})$ is initialized to $p(C_i^{(t)} \mid z^{(1:t-1)})$. The likelihood functions are similarly initialized, to $\lambda_i(C_i^{(t)}) = p(z_i^{(t)} \mid C_i^{(t)})$ if some sensor makes an observation about these variables, or to 1 otherwise. Through message passing, $\beta_n$ converges to $\tilde{p}(Q_n^{(t)} \mid z^{(1:t)})$. An important property of RDPI that will be useful in the remainder of the paper is:

Property 1. Let $\beta_n$ be the result computed by the RDPI algorithm at convergence at node $n$. Then the cliques in $\beta_n$ form a subtree of an external junction tree that covers $Q_n$.

4.2 Prediction, roll-up and projection

The previous section shows that the estimation step can be implemented in a distributed manner, using RDPI. At convergence, each node $n$ obtains the calibrated marginals $\tilde{p}(C_i^{(t)} \mid z^{(1:t)})$, for $i \in L_n$. In order to advance to the next time step, each node must perform prediction and roll-up, obtaining the marginals $\tilde{p}(C_i^{(t+1)} \mid z^{(1:t)})$. Recall from Section 3 that, in order to compute a marginal $\tilde{p}(C_i^{(t+1)} \mid z^{(1:t)})$, the node needs $\tilde{p}(X^{(t)} \mid z^{(1:t)})$. Due to the conditional independencies encoded in $\tilde{p}(X^{(t)} \mid z^{(1:t)})$, it is sufficient to obtain a subtree of the external junction tree that covers the parents $\mathrm{Pa}[C_i^{(t+1)}]$ of all variables in the clique.
The next-time-step marginal $\tilde{p}(C_i^{(t+1)} \mid z^{(1:t)})$ can then be computed by multiplying this subtree with the transition model $p(X^{(t+1)} \mid \mathrm{Pa}[X^{(t+1)}])$ for each $X^{(t+1)} \in C_i^{(t+1)}$ and eliminating all variables but $C_i^{(t+1)}$ (recall that $\mathrm{Pa}[X^{(t+1)}] \subseteq X^{(t)}$). This procedure suggests the following distributed implementation of prediction, roll-up, and projection: after completing the estimation step, each node selects a subtree of the (global) external junction tree that covers $\mathrm{Pa}[C_i^{(t+1)}]$ and collects the marginals of this tree from other nodes in the network. Unfortunately, it is unclear how to allocate the running time between estimation and collection of marginals in time-critical applications, when the estimation step may not run to completion. Instead, we propose a simple approach that performs both steps at once: run the distributed inference algorithm described in the previous section to obtain the posterior distribution over the parents of each clique maintained at the node. This task can be accomplished by ensuring that these parent variables are included in the query variables of node $n$: $\mathrm{Pa}[C_i^{(t+1)}] \subseteq Q_n,\ \forall i \in L_n$.

When the estimation step cannot be run to convergence within the allotted time, the variables $\mathrm{Scope}[\beta_n]$ covered by the distribution $\beta_n$ that node $n$ obtains may not cover the entire parent set $\mathrm{Pa}[C_i^{(t+1)}]$. In this case, multiplying in the standard transition model is equivalent to assuming a uniform prior for the missing variables, which can lead to very poor solutions in practice. When the transition model is learned from data, $p(X^{(t+1)} \mid \mathrm{Pa}[X^{(t+1)}])$ is usually computed from the empirical distribution $\hat{p}(X^{(t+1)}, \mathrm{Pa}[X^{(t+1)}])$, e.g., $p_{\mathrm{MLE}}(X^{(t+1)} \mid \mathrm{Pa}[X^{(t+1)}]) = \hat{p}(X^{(t+1)}, \mathrm{Pa}[X^{(t+1)}]) / \hat{p}(\mathrm{Pa}[X^{(t+1)}])$. Building on these empirical distributions, we can obtain an improved solution for the prediction and roll-up steps when we do not have a distribution over the entire parent set $\mathrm{Pa}[C_i^{(t+1)}]$.
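One way to realize this improved solution, marginalizing the empirical joint down to the parents that are actually available, can be sketched on a discrete example. The empirical distribution and the variable names are illustrative, not from the paper's deployments:

```python
from itertools import product

# Empirical joint over (X_next, A, B), where A and B are the parents of X_next.
# The eight weights are made-up normalized counts.
emp = {k: v for k, v in zip(product([0, 1], repeat=3),
                            [.12, .08, .10, .20, .15, .05, .10, .20])}

def cond_given(parents):
    """p(X_next | parents) from the empirical joint, summing out the rest.

    parents is a subset of ('A', 'B'); dropping a parent is exactly the
    extra independence assertion described in the text.
    """
    joint, norm = {}, {}
    for (x, a, b), w in emp.items():
        key = tuple(v for v, name in zip((a, b), 'AB') if name in parents)
        joint[(x,) + key] = joint.get((x,) + key, 0.0) + w
        norm[key] = norm.get(key, 0.0) + w
    return {k: w / norm[k[1:]] for k, w in joint.items()}

full = cond_given('AB')   # p(X' | A, B): the usual MLE transition model
reduced = cond_given('A') # p(X' | A): used when B is missing from the belief
assert abs(sum(v for k, v in reduced.items() if k[1:] == (0,)) - 1.0) < 1e-12
```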
Specifically, we obtain a valid approximate transition model $\tilde{p}(X^{(t+1)} \mid W^{(t)})$, where $W^{(t)} = \mathrm{Scope}[\beta_n] \cap \mathrm{Pa}[X^{(t+1)}]$, online, by simply marginalizing the empirical distribution $\hat{p}(X^{(t+1)}, \mathrm{Pa}[X^{(t+1)}])$ down to $\hat{p}(X^{(t+1)}, W^{(t)})$. This procedure is equivalent to introducing an additional independence assertion into the model: at time step $t+1$, $X^{(t+1)}$ is independent of $\mathrm{Pa}[X^{(t+1)}] - W^{(t)}$, given $W^{(t)}$.

4.3 Summary of the algorithm

Our distributed approximate filtering algorithm can be summarized as follows:

• Using the architecture in [8], construct a network junction tree such that the query variables $Q_n$ at each node $n$ cover $\bigl(\bigcup_{i \in L_n} C_i^{(t)}\bigr) \cup \bigl(\bigcup_{i \in L_n} \mathrm{Pa}[C_i^{(t+1)}]\bigr)$.
• For $t = 1, 2, \ldots$, at each node $n$,
  – run RDPI [7] until the end of step $t$, obtaining a (possibly approximate) belief $\beta_n$;
  – for each $X^{(t+1)} \in C_i^{(t+1)}$, $i \in L_n$, compute an approximate transition model $\tilde{p}(X^{(t+1)} \mid W_X^{(t)})$, where $W_X^{(t)} = \mathrm{Scope}[\beta_n] \cap \mathrm{Pa}[X^{(t+1)}]$;
  – for each clique $C_i^{(t+1)}$, $i \in L_n$, compute the clique marginal $\tilde{p}(C_i^{(t+1)} \mid z^{(1:t)})$ from $\beta_n$ and from each $\tilde{p}(X^{(t+1)} \mid W_X^{(t)})$, locally, using variable elimination.

Using the convergence properties of the RDPI algorithm, we prove that, given sufficient communication, our distributed algorithm obtains the same solution as the centralized B&K98 algorithm:

Theorem 1. For a set of nodes running our distributed filtering algorithm, if at each time step there is sufficient communication for the RDPI algorithm to converge, and the network is not partitioned, then, for each node $n$, for each clique $i \in L_n$, the distribution $\tilde{p}(C_i^{(t)} \mid z^{(1:t-1)})$ obtained by node $n$ is equal to the distribution obtained by the B&K98 algorithm with assumed density given by $\mathcal{T}$.

Figure 1: Alignment results after a partition (shown by the vertical line). Panels: (a) BK solution; (b) alignment rooted at 1; (c) alignment rooted at 4; (d) min. KL divergence. Circles represent 95% confidence intervals in the estimate of the camera location.
(a) The exact solution, computed by the BK algorithm in the absence of partitions. (b) Solution obtained when aligning from node 1. (c) Solution obtained when aligning from node 4. (d) Solution obtained by jointly optimized alignment.

5 Robust distributed filtering

In the previous section, we introduced an algorithm for distributed filtering with dynamic Bayesian networks that, with sufficient communication, converges to the centralized B&K98 algorithm. In some settings, for example when interference causes a network partition, messages may not be propagated long enough to guarantee convergence before nodes must roll up to the next time step. Consider the example, illustrated in Figure 1, in which a network of cameras localizes itself by observing a moving object. Each camera i carries a clique marginal over the location of the object M(t), its own camera pose variable Ci, and the pose of one of its neighboring cameras: π1(C1,2, M(t)), π2(C2,3, M(t)), and π3(C3,4, M(t)). Suppose communication were interrupted due to a network partition: observations would not propagate, and the marginals carried by the nodes would no longer form a consistent distribution, in the sense that π1, π2, π3 might not agree on their marginals, e.g., π1(C2, M(t)) ≠ π2(C2, M(t)). The goal of alignment is to obtain a consistent distribution ˜p(X(t) | z(1:t−1)) from marginals π1, π2, π3 that is close to the true posterior p(X(t) | z(1:t−1)) (as measured, for example, by the root-mean-square error of the estimates). For simplicity of notation, we omit time indices t and conditioning on the past evidence z(1:t−1) throughout this section.

5.1 Optimized conditional alignment

One way to define a consistent distribution ˜p is to start from a root node r, e.g., 1, and allow each clique marginal to decide the conditional density of Ci given its parent, e.g.,

˜p1(C1:4, M) = π1(C1,2, M) × π2(C3 | C2, M) × π3(C4 | C3, M).
This density ˜p1 forms a coherent distribution over C1:4, M, and we say that ˜p1 is rooted at node 1. Thus, π1 fully defines the marginal density over C1,2, M, π2 defines the conditional density of C3 given C2, M, and so on. If node 3 were the root, then node 1 would only contribute π1(C1 | C2, M), and we would obtain a different approximate distribution. In general, given a collection of marginals πi(Ci) over the cliques of a junction tree T, and a root node r ∈ NT, the distribution obtained by conditional alignment from r can be written as

˜pr(X) = πr(Cr) × ∏i∈(NT−{r}) πi(Ci − Sup(i),i | Sup(i),i), (2)

where up(i) denotes the upstream neighbor of i on the (unique) path between r and i. The choice of the root r often crucially determines how well the aligned distribution ˜pr approximates the true prior. Suppose that, in the example in Figure 1, the nodes on the left side of the partition do not observe the person while the communication is interrupted, and the prior marginals π1, π2 are uncertain about M. If we were to align the distribution from π2, multiplying π3(C4 | C3, M) into the marginal π2(C2,3, M) would result in a distribution that is uncertain in both M and C4 (Figure 1(b)), while a better choice of root could provide a much better estimate (Figure 1(c)). One possible metric to optimize when choosing the root r for the alignment is the entropy of the resulting distribution ˜pr. For example, the entropy of ˜p2 in the previous example can be written as

H˜p2(C1:4, M) = Hπ2(C2,3, M) + Hπ3(C4 | C3, M) + Hπ1(C1 | C2, M), (3)

where we use the fact that, for Gaussians, the conditional entropy of C4 given C3, M only depends on the conditional distribution ˜p2(C4 | C3, M) = π3(C4 | C3, M). A naïve algorithm for obtaining the best root would exploit this decomposition to compute the entropy of each ˜pr, and pick the root that leads to the lowest total entropy; the running time of this algorithm is O(|NT|²).
We propose a dynamic programming approach that significantly reduces the running time. Comparing Equation 3 with the entropy of the distribution rooted at the neighboring node 3, we see that they share a common term Hπ1(C1 | C2, M), and

H˜p3(C1:4, M) − H˜p2(C1:4, M) = Hπ3(S2,3) − Hπ2(S2,3) ≜ △2,3.

If △2,3 is positive, node 2 is a better root than 3; if △2,3 is negative, we have the reverse situation. Thus, when comparing neighboring nodes as root candidates, the difference in entropy of the resulting distributions is simply the difference in entropy that their local distributions assign to their separator. This property generalizes to the following dynamic programming algorithm that determines the root r with minimal H˜pr(X) in O(|NT|) time:
• For any node i ∈ NT, define the message from i to its neighbor j as

mi→j = △i,j, if mk→i < 0 ∀k ≠ j; mi→j = △i,j + maxk≠j mk→i, otherwise,

where △i,j = Hπj(Si,j) − Hπi(Si,j), and k varies over the neighbors of i in T.
• If maxk mk→i < 0 then i is the optimal root; otherwise, up(i) = argmaxk mk→i.
Intuitively, the message mi→j represents the loss (entropy) incurred with root node j, compared to the best root on i’s side of the tree. Ties between nodes, if any, can be resolved using node IDs.

5.2 Distributed optimized conditional alignment

In the absence of an additional procedure, RDPI can be viewed as performing conditional alignment. However, the alignment is applied to the local belief at each node, rather than the global distribution, and the nodes may not agree on the choice of the root r. Thus, the network is not guaranteed to reach a globally consistent, aligned distribution. In this section, we show that RDPI can be extended to incorporate the optimized conditional alignment (OCA) algorithm from the previous section. By Property 1, at convergence, the priors at each node form a subtree of an external junction tree for the assumed density.
Conceptually, if we were to apply OCA to this subtree, the node would have an aligned distribution, but nodes may not be consistent with each other. Intuitively, this happens because the optimization messages mi→j were not propagated between different nodes. In RDPI, node n’s belief βn includes a collection of (potentially inconsistent) priors {πi(Ci)}. In the standard sum-product inference algorithm, an inference message µm→n from node m to node n is computed by marginalizing out some variables from the factor µ+m→n ≜ ψm × ∏k≠n µk→m that combines the messages received from node m’s other neighbors with node m’s local belief. The inference message in RDPI involves a similar marginalization, which corresponds to pruning some cliques from µ+m→n [7]. When such pruning occurs, any likelihood information λi(Ci) associated with the pruned clique i is transferred to its neighbor j. Our distributed OCA algorithm piggy-backs on this pruning, computing an optimization message mi→j, which is stored in clique j. (To compute this message, cliques must also carry their original, unaligned priors.) At convergence, the nodes will not only have a subtree of an external tree, but also the incoming optimization messages that result from pruning of all other cliques of the external tree. In order to determine the globally optimal root, each node (locally) selects a root for its subtree. If this root is one of the initial cliques associated with n, then n, and in particular this clique, is the root of the conditional alignment. The alignment is propagated throughout the network. If the optimal root is determined to be a clique that came from a message received from a neighbor, then the neighbor (or another node upstream) is the root, and node n aligns itself with respect to the neighbor’s message.
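The entropy decomposition and the root-selection messages of Section 5.1 can be sketched end-to-end. The junction tree, clique entropies, and separator entropies below are synthetic (chosen at random, not from the paper), and the message-passing choice of root is checked against naïve enumeration of all roots:

```python
import random

def alignment_entropy(root, adj, H_clique, H_sep):
    """Entropy of the distribution conditionally aligned from `root`:
    H_root(C_root) + sum over non-root cliques i of [H_i(C_i) - H_i(S_e)],
    where e is i's edge toward the root (entropy decomposition, eq. 3)."""
    total, stack, seen = H_clique[root], [root], {root}
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                total += H_clique[v] - H_sep[(v, frozenset((u, v)))]
                stack.append(v)
    return total

def best_root(adj, H_sep):
    """Message-passing root selection:
    m(i->j) = delta_ij + max(0, max_{k != j} m(k->i)),
    delta_ij = H_j(S_ij) - H_i(S_ij);  i is optimal iff all m(k->i) < 0."""
    memo = {}
    def m(i, j):
        if (i, j) not in memo:
            e = frozenset((i, j))
            delta = H_sep[(j, e)] - H_sep[(i, e)]
            incoming = [m(k, i) for k in adj[i] if k != j]
            memo[(i, j)] = delta + max(0.0, max(incoming, default=0.0))
        return memo[(i, j)]
    roots = [n for n in adj if all(m(k, n) < 0 for k in adj[n])]
    return min(roots)  # ties, if any, resolved by node ID

# Synthetic junction tree 0-1, 1-2, 1-3, 3-4 with random entropies;
# H_sep[(i, e)] is the entropy pi_i assigns to the separator on edge e.
random.seed(0)
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]
adj = {n: [] for n in range(5)}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)
H_clique = {n: random.uniform(1.0, 5.0) for n in adj}
H_sep = {(n, frozenset(e)): random.uniform(0.1, 2.0) for e in edges for n in e}

r_msg = best_root(adj, H_sep)
r_naive = min(adj, key=lambda n: alignment_entropy(n, adj, H_clique, H_sep))
```

Both procedures select the same root; the message version touches each edge once in each direction, which is what makes the distributed piggy-backing described above possible.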
With an additional tie-breaking rule that ensures that all the nodes make consistent choices about their subtrees [4], this procedure is equivalent to running the OCA algorithm centrally:

Theorem 2. Given sufficient communication and in the absence of network partitions, nodes running distributed OCA reach a globally consistent belief based on conditional alignment, selecting the root clique that leads to the joint distribution of minimal entropy. In the presence of partitions, each partition will reach a consistent belief that minimizes the entropy within this partition.

5.3 Jointly optimized alignment

While conceptually simple, there are situations where such a rooted alignment will not provide a good aligned distribution. For example, if in the example in Figure 1, cameras 2 and 3 carry marginals π2(C2,3, M) and π2′(C2,3, M), respectively, and both observe the person, node 2 will have a better estimate of C2, while node 3’s estimate of C3 will be more accurate. If either node is chosen as the root, the aligned distribution will have a worse estimate of the pose of one of the cameras, because performing rooted alignment from either direction effectively overwrites the marginal of the other node. In this example, rather than fixing a root, we want an aligned distribution that attempts to simultaneously optimize the distance to both π2(C2,3, M) and π2′(C2,3, M).

Figure 2: (a) Testbed of 25 cameras used for the SLAT experiments. (b) Convergence results (RMS error vs. time step) for cameras 3, 7, and 10 in one experiment; horizontal lines indicate the corresponding centralized solution at the end of the experiment. (c) Convergence (RMS error vs. epochs per time step) versus amount of communication for a temperature network of 54 real sensors.
We propose the following optimization problem that minimizes the sum of reverse KL divergences from the aligned distribution to the clique marginals πi(Ci):

˜p(X) = argmin q(X), q|=T Σi∈NT D(q(Ci) ∥ πi(Ci)),

where q |= T denotes the constraint that q factorizes according to the junction tree T. This method will often provide very good aligned distributions (e.g., Figure 1(d)). For Gaussian distributions, this optimization problem corresponds to

min µCi,ΣCi Σi∈NT [−log |ΣCi| + ⟨Σi⁻¹, ΣCi⟩] + Σi∈NT (µi − µCi)ᵀ Σi⁻¹ (µi − µCi), subject to ΣCi ⪰ 0, ∀i ∈ NT, (4)

where µCi, ΣCi are the means and covariances of q over the variables Ci, and µi, Σi are the means and covariances of the marginals πi. The problem in Equation 4 consists of two independent convex optimization problems over the means and covariances of q, respectively. The former problem can be solved in a distributed manner using distributed linear regression [6], while the latter can be solved using a distributed version of an iterative method, such as conjugate gradient descent [1].

6 Experimental results

We evaluated our approach on two applications: a camera localization problem [5] (SLAT), in which a set of cameras simultaneously localizes itself by tracking a moving object, and a temperature monitoring application, analogous to the one presented in [7]. Figure 2(a) shows some of the 25 ceiling-mounted cameras used to collect the data in our camera experiments. We implemented our distributed algorithm in a network simulator that incorporates message loss and used data from these real sensors as our observations. Figure 2(b) shows the estimates obtained by three cameras in one of our experiments. Note that each camera converges to the estimate obtained by the centralized B&K98 algorithm. In Figure 2(c), we evaluate the sensitivity of the algorithm to incomplete communication.
We see that, with a modest number of rounds of communication performed in each time step, the algorithm obtains a high-quality solution and converges to the centralized solution. In the second set of experiments, we evaluate the alignment methods presented in Section 5. In Figure 3(a), the network is split into four components; in each component, the nodes communicate fully, and we evaluate the solution if the communication were to be restored after a given number of time steps. The vertical axis shows the RMS error of the estimated camera locations at the end of the experiment. For the unaligned solution, the nodes may not agree on the estimated pose of a camera, so it is not clear which node’s estimate should be used in the RMS computation; the plot shows an “omniscient envelope” of the RMS error, where, given the (unknown) true camera locations, we select the best and worst estimates available in the network for each camera’s pose. The results show that, in the absence of optimized alignment, inconsistencies can degrade the solution: observations collected after the communication is restored may not make up for the errors introduced by the partition. The third experiment evaluates the performance of the distributed algorithm in highly disconnected scenarios. Here, the sensor network is hierarchically partitioned into smaller disconnected components by selecting a random cut through the largest component. The communication is restored shortly before the end of the experiment. Figure 3(b) shows the importance of aligning from the correct node: the difference between the optimized root and an arbitrarily chosen root is significant, particularly when the network becomes more and more fractured. In our experiments, large errors often resulted from the nodes having uncertain beliefs, hence justifying the entropy-based objective function. We see that the jointly optimized alignment described in Section 5.3, min. KL, tends to provide the best aligned distribution, though often close to the optimized root, which is simpler to compute. Finally, Figure 3(c) shows the alignment results on the temperature monitoring application. Compared to SLAT, the effects of network partitions on the results for the temperature data are less severe. One contributing factor is that every node in a partition is making local temperature observations, and the approximate transition model for temperatures in each partition is quite accurate, hence all the nodes continue to adjust their estimates meaningfully while the partition is in progress.

Figure 3: Comparison of the alignment methods. (a) RMS error vs. duration of the partition. For the unaligned solution, the plot shows bounds on the error: given the (unknown) camera locations, we select the best and worst estimates available in the network for each camera’s pose. In the absence of optimized alignment, inconsistencies can degrade the quality of the solution. (b, c) RMS error vs. number of partitions. In camera localization (b), the difference between the optimized alignment and the alignment from an arbitrarily chosen fixed root is significant. For the temperature monitoring (c), the differences are less pronounced, but follow the same trend.

7 Conclusions

This paper presents a new distributed approach to approximate dynamic filtering based on a distributed representation of the assumed density in the network.
Distributed filtering is performed by first conditioning on evidence using a robust distributed inference algorithm [7], and then advancing to the next time step locally. With sufficient communication in each time step, our distributed algorithm converges to the centralized B&K98 solution. In addition, we identify a significant challenge for probabilistic inference in dynamical systems: nodes can have inconsistent beliefs about the current state of the system, and an ineffective handling of this situation can lead to very poor estimates of the global state. We address this problem by developing a distributed algorithm that obtains an informative consistent distribution, optimizing over various choices of the root node, and an alternative joint optimization approach that minimizes a KL divergence-based criterion. We demonstrate the effectiveness of our approach on a suite of experimental results on real-world sensor data.

Acknowledgments
This research was supported by grants NSF-NeTS CNS-0625518 and NSF ITR CNS-0428738. S. Funiak was supported by the Intel Research Scholar Program; C. Guestrin was partially supported by an Alfred P. Sloan Fellowship.

References
[1] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific, 1997.
[2] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proc. of UAI, 1998.
[3] R. Cowell, P. Dawid, S. Lauritzen, and D. Spiegelhalter. Probabilistic Networks and Expert Systems. Springer, New York, NY, 1999.
[4] S. Funiak, C. Guestrin, M. Paskin, and R. Sukthankar. Robust probabilistic filtering in distributed systems. Technical Report CMU-CALD-05-111, Carnegie Mellon University, 2005.
[5] S. Funiak, C. Guestrin, M. Paskin, and R. Sukthankar. Distributed localization of networked cameras. In Proc. of the Fifth International Conference on Information Processing in Sensor Networks (IPSN-06), 2006.
[6] C. Guestrin, R. Thibaux, P. Bodik, M. A. Paskin, and S. Madden. Distributed regression: an efficient framework for modeling sensor network data. In Proc. of IPSN, 2004.
[7] M. A. Paskin and C. E. Guestrin. Robust probabilistic inference in distributed systems. In Proc. of UAI, 2004.
[8] M. A. Paskin, C. E. Guestrin, and J. McFadden. A robust architecture for inference in sensor networks. In Proc. of IPSN, 2005.
[9] A. Pfeffer and T. Tai. Asynchronous dynamic Bayesian networks. In Proc. of UAI, 2005.
[10] M. Rosencrantz, G. Gordon, and S. Thrun. Decentralized sensor fusion with distributed particle filters. In Proc. of UAI, 2003.
[11] F. Zhao, J. Liu, J. Liu, L. Guibas, and J. Reich. Collaborative signal and information processing: An information directed approach. Proceedings of the IEEE, 91(8):1199–1209, 2003.
Modelling transcriptional regulation using Gaussian processes

Neil D. Lawrence, School of Computer Science, University of Manchester, U.K. neill@cs.man.ac.uk
Guido Sanguinetti, Department of Computer Science, University of Sheffield, U.K. guido@dcs.shef.ac.uk
Magnus Rattray, School of Computer Science, University of Manchester, U.K. magnus@cs.man.ac.uk

Abstract
Modelling the dynamics of transcriptional processes in the cell requires the knowledge of a number of key biological quantities. While some of them are relatively easy to measure, such as mRNA decay rates and mRNA abundance levels, it is still very hard to measure the active concentration levels of the transcription factor proteins that drive the process and the sensitivity of target genes to these concentrations. In this paper we show how these quantities for a given transcription factor can be inferred from the gene expression levels of a set of known target genes. We treat the protein concentration as a latent function with a Gaussian process prior, and include the sensitivities, mRNA decay rates and baseline expression levels as hyperparameters. We apply this procedure to a human leukemia dataset, focusing on the tumour suppressor p53 and obtaining results in good accordance with recent biological studies.

Introduction
Recent advances in molecular biology have brought about a revolution in our understanding of cellular processes. Microarray technology now allows measurement of mRNA abundance on a genome-wide scale, and techniques such as chromatin immunoprecipitation (ChIP) have largely unveiled the wiring of the cellular transcriptional regulatory network, identifying which genes are bound by which transcription factors.
However, a full quantitative description of the regulatory mechanism of transcription requires the knowledge of a number of other biological quantities: first of all the concentration levels of active transcription factor proteins, but also a number of gene-specific constants such as the baseline expression level for a gene, the rate of decay of its mRNA and the sensitivity with which target genes react to a given transcription factor protein concentration. While some of these quantities can be measured (e.g. mRNA decay rates), most of them are very hard to measure with current techniques, and have therefore to be inferred from the available data. This is often done following one of two complementary approaches. One can formulate a large scale simplified model of regulation (for example assuming a linear response to protein concentrations) and then combine network architecture data and gene expression data to infer transcription factors’ protein concentrations on a genome-wide scale. This line of research was started in [3] and then extended further to include gene-specific effects in [10, 11]. Alternatively, one can formulate a realistic model of a small subnetwork where few transcription factors regulate a small number of established target genes, trying to include the finer points of the dynamics of transcriptional regulation. In this paper we follow the second approach, focussing on the simplest subnetwork consisting of one transcription factor regulating its target genes, but using a detailed model of the interaction dynamics to infer the transcription factor concentrations and the gene specific constants. This problem was recently studied by Barenco et al. [1] and by Rogers et al. [9]. In these studies, parametric models were developed describing the rate of production of certain genes as a function of the concentration of transcription factor protein at some specified time points. 
Markov chain Monte Carlo (MCMC) methods were then used to carry out Bayesian inference of the protein concentrations, requiring substantial computational resources and limiting the inference to the discrete time-points where the data was collected. We show here how a Gaussian process model provides a simple and computationally efficient method for Bayesian inference of continuous transcription factor concentration profiles and associated model parameters. Gaussian processes have been used effectively in a number of machine learning and statistical applications [8] (see also [2, 6] for the work that is most closely related to ours). Their use in this context is novel, as far as we know, and leads to several advantages. Firstly, it allows for the inference of continuous quantities (concentration profiles) without discretization, therefore accounting naturally for the temporal structure of the data. Secondly, it avoids the use of cumbersome interpolation techniques to estimate mRNA production rates from mRNA abundance data, and it allows us to deal naturally with the noise inherent in the measurements. Finally, it greatly outstrips MCMC techniques in terms of computational efficiency, which we expect to be crucial in future extensions to more complex (and realistic) regulatory networks. The paper is organised as follows: in the first section we discuss linear response models. These are simplified models in which the mRNA production rate depends linearly on the transcription factor protein concentration. Although the linear assumption is not verified in practice, it has the advantage of giving rise to an exactly tractable inference problem. We then discuss how to extend the formalism to model cases where the dependence of mRNA production rate on transcription factor protein concentration is not linear, and propose a MAP-Laplace approach to carry out Bayesian inference. In the third section we test our model on the leukemia data set studied in [1]. 
Finally, we discuss further extensions of our work. MATLAB code to recreate the experiments is available on-line.

1 Linear Response Model

Let the data set under consideration consist of T measurements of the mRNA abundance of N genes. We consider a linear differential equation that relates a given gene j’s expression level xj(t) at time t to the concentration of the regulating transcription factor protein f(t),

dxj/dt = Bj + Sj f(t) − Dj xj(t). (1)

Here, Bj is the basal transcription rate of gene j, Sj is the sensitivity of gene j to the transcription factor and Dj is the decay rate of the mRNA. Crucially, the dependence of the mRNA transcription rate on the protein concentration (response) is linear. Assuming a linear response is a crude simplification, but it can still lead to interesting results in certain modelling situations. Equation (1) was used by Barenco et al. [1] to model a simple network consisting of the tumour suppressor transcription factor p53 and five of its target genes. We will consider more general models in section 2. The equation given in (1) can be solved to recover

xj(t) = Bj/Dj + kj exp(−Djt) + Sj exp(−Djt) ∫_0^t f(u) exp(Dju) du, (2)

where kj arises from the initial conditions, and is zero if we assume an initial baseline expression level xj(0) = Bj/Dj. We will model the protein concentration f as a latent function drawn from a Gaussian process prior distribution. It is important to notice that equation (2) involves only linear operations on the function f(t). This implies immediately that the mRNA abundance levels will also be modelled as a Gaussian process, and the covariance function of the marginal distribution p(x1, . . . , xN) can be worked out explicitly from the covariance function of the latent function f.
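Equation (2) can be verified numerically: with kj = 0 and any smooth test profile f (here f(t) = 1 + sin t, an arbitrary choice, not from the paper), a trapezoid-rule evaluation of xj(t) should satisfy the differential equation (1) up to discretisation error. A sketch:

```python
import math

# Hypothetical parameters and latent profile (not taken from the paper).
B, S, D = 0.5, 2.0, 0.8
f = lambda t: 1.0 + math.sin(t)

h = 1e-3
ts = [i * h for i in range(int(2.0 / h) + 1)]

# cumulative trapezoid-rule evaluation of the integral in equation (2)
I = [0.0]
for i in range(1, len(ts)):
    g0 = f(ts[i - 1]) * math.exp(D * ts[i - 1])
    g1 = f(ts[i]) * math.exp(D * ts[i])
    I.append(I[-1] + 0.5 * h * (g0 + g1))

# x(t) = B/D + S exp(-D t) * integral   (k_j = 0, so x(0) = B/D)
x = [B / D + S * math.exp(-D * t) * Ii for t, Ii in zip(ts, I)]

# central-difference residual of dx/dt = B + S f(t) - D x(t)
residual = max(
    abs((x[i + 1] - x[i - 1]) / (2 * h) - (B + S * f(ts[i]) - D * x[i]))
    for i in range(1, len(ts) - 1)
)
```

The residual shrinks as O(h²), confirming that (2) is the solution of (1) with the stated initial condition.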
Let us rewrite equation (2) as

xj(t) = Bj/Dj + Lj[f](t),

where we have set the initial conditions such that kj in equation (2) is equal to zero and

Lj[f](t) = Sj exp(−Djt) ∫_0^t f(u) exp(Dju) du (3)

is the linear operator relating the latent function f to the mRNA abundance of gene j, xj(t). If the covariance function associated with f(t) is given by kff(t, t′) then elementary functional analysis yields that

cov(Lj[f](t), Lk[f](t′)) = Lj ⊗ Lk [kff](t, t′).

Explicitly, this is given by the following formula

kxjxk(t, t′) = SjSk exp(−Djt − Dkt′) ∫_0^t exp(Dju) ∫_0^{t′} exp(Dku′) kff(u, u′) du′ du. (4)

If the process prior over f(t) is taken to be a squared exponential kernel,

kff(t, t′) = exp(−(t − t′)²/l²),

where l controls the width of the basis functions¹, the integrals in equation (4) can be computed analytically. The resulting covariances are obtained as

kxjxk(t, t′) = SjSk (√π l/2) [hkj(t′, t) + hjk(t, t′)], (5)

where

hkj(t′, t) = [exp(γk²)/(Dj + Dk)] { exp[−Dk(t′ − t)] [erf((t′ − t)/l − γk) + erf(t/l + γk)] − exp[−(Dkt′ + Djt)] [erf(t′/l − γk) + erf(γk)] }.

Here erf(x) = (2/√π) ∫_0^x exp(−y²) dy and γk = Dkl/2. We can therefore compute a likelihood which relates instantiations from all the observed genes, {xj(t)} for j = 1, . . . , N, through dependencies on the parameters {Bj, Sj, Dj}. The effect of f(t) has been marginalised. To infer the protein concentration levels, one also needs the “cross-covariance” terms between xj(t) and f(t′), which are obtained as

kxjf(t, t′) = Sj exp(−Djt) ∫_0^t exp(Dju) kff(u, t′) du. (6)

Again, this can be obtained explicitly for squared exponential priors on the latent function f as

kxjf(t′, t) = (√π l Sj/2) exp(γj²) exp[−Dj(t′ − t)] [erf((t′ − t)/l − γj) + erf(t/l + γj)].

Standard Gaussian process regression techniques [see e.g.
8] then yield the mean and covariance function of the posterior process on f as

⟨f⟩post = Kfx Kxx⁻¹ x, Kff^post = Kff − Kfx Kxx⁻¹ Kxf, (7)

where x denotes collectively the observed variables xj(t) and capital K denotes the matrix obtained by evaluating the covariance function of the processes on every pair of observed time points. The model parameters Bj, Dj and Sj can be estimated by type II maximum likelihood. Alternatively, they can be assigned vague gamma prior distributions and estimated a posteriori using MCMC sampling. In practice, we will allow the mRNA abundance of each gene at each time point to be corrupted by some noise, so that we can model the observations at times ti for i = 1, . . . , T as

yj(ti) = xj(ti) + ϵj(ti), (8)

with ϵj(ti) ∼ N(0, σ²ji). Estimates of the confidence levels associated with each mRNA measurement can be obtained for Affymetrix microarrays using probe-level processing techniques such as the mmgMOS model of [4]. The covariance of the noisy process is simply obtained as Kyy = Σ + Kxx, with Σ = diag(σ²11, . . . , σ²1T, . . . , σ²N1, . . . , σ²NT).

¹The scale of the process is ignored to avoid a parameterisation ambiguity with the sensitivities.

2 Non-linear Response Model

While the linear response model presents the advantage of being exactly tractable in the important squared exponential case, a realistic model of transcription should account for effects such as saturation and ultrasensitivity which cannot be captured by a linear function. Also, all the quantities in equation (1) are positive, but one cannot constrain samples from a Gaussian process to be positive. Modelling the response of the transcription rate to protein concentration using a positive nonlinear function is an elegant way to enforce this constraint.
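Before turning to the nonlinear model, the closed-form covariance of equation (5) can be sanity-checked against direct numerical quadrature of equation (4). In the sketch below the parameter values are arbitrary, and the second exponent in hkj is read as exp[−(Dkt′ + Djt)]:

```python
import math

def h_term(da, db, t_p, t, l):
    """h_{kj}(t', t), with da, db in the roles of D_j, D_k; gamma = D_k l / 2."""
    g = db * l / 2.0
    return (math.exp(g * g) / (da + db)) * (
        math.exp(-db * (t_p - t)) * (math.erf((t_p - t) / l - g) + math.erf(t / l + g))
        - math.exp(-(db * t_p + da * t)) * (math.erf(t_p / l - g) + math.erf(g))
    )

def k_closed(t, t_p, dj, dk, sj, sk, l):
    """Closed-form covariance k_{x_j x_k}(t, t') of equation (5)."""
    return sj * sk * math.sqrt(math.pi) * l / 2.0 * (
        h_term(dj, dk, t_p, t, l) + h_term(dk, dj, t, t_p, l)
    )

def k_quad(t, t_p, dj, dk, sj, sk, l, n=400):
    """Midpoint-rule quadrature of the double integral in equation (4)."""
    hu, hv = t / n, t_p / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * hu
        eu = math.exp(dj * u)
        for j in range(n):
            v = (j + 0.5) * hv
            total += eu * math.exp(dk * v) * math.exp(-((u - v) / l) ** 2)
    return sj * sk * math.exp(-dj * t - dk * t_p) * total * hu * hv

# Arbitrary settings: t = 1, t' = 1.5, D_j = 0.4, D_k = 0.7, S_j = 1.3, S_k = 0.9, l = 1.
val_c = k_closed(1.0, 1.5, 0.4, 0.7, 1.3, 0.9, 1.0)
val_q = k_quad(1.0, 1.5, 0.4, 0.7, 1.3, 0.9, 1.0)
```

The two evaluations agree to quadrature accuracy, and the closed form is exactly symmetric under swapping (t, j) with (t′, k), as a covariance must be.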
2.1 Formalism

Let the response of the mRNA transcription rate to transcription factor protein concentration levels be modelled by a nonlinear function g with a target-specific vector θj of parameters, so that

dxj/dt = Bj + g(f(t), θj) − Djxj,
xj(t) = Bj/Dj + exp(−Djt) ∫_0^t g(f(u), θj) exp(Dju) du, (9)

where we again set xj(0) = Bj/Dj and assign a Gaussian process prior distribution to f(t). In this case the induced distribution of xj(t) is no longer a Gaussian process. However, we can derive the functional gradient of the likelihood and prior, and use this to learn the Maximum a Posteriori (MAP) solution for f(t) and the parameters by (functional) gradient descent. Given noise-corrupted data yj(ti) as above, the log-likelihood of the data Y = {yj(ti)} is given by

log p(Y | f, {Bj, θj, Dj, Ξ}) = −(1/2) Σ_{i=1}^T Σ_{j=1}^N [ (xj(ti) − yj(ti))²/σ²ji + log σ²ji ] − (NT/2) log(2π), (10)

where Ξ denotes collectively the parameters of the prior covariance on f (in the squared exponential case, Ξ = l²). The functional derivative of the log-likelihood with respect to f is then obtained as

δ log p(Y | f)/δf(t) = −Σ_{i=1}^T Θ(ti − t) Σ_{j=1}^N [(xj(ti) − yj(ti))/σ²ji] g′(f(t)) e^{−Dj(ti−t)}, (11)

where Θ(x) is the Heaviside step function and we have omitted the model parameters for brevity. The negative Hessian of the log-likelihood with respect to f is given by

w(t, t′) = −δ² log p(Y | f)/δf(t)δf(t′)
= Σ_{i=1}^T Θ(ti − t) δ(t − t′) Σ_{j=1}^N [(xj(ti) − yj(ti))/σ²ji] g′′(f(t)) e^{−Dj(ti−t)}
+ Σ_{i=1}^T Θ(ti − t) Θ(ti − t′) Σ_{j=1}^N σ⁻²ji g′(f(t)) g′(f(t′)) e^{−Dj(2ti−t−t′)}, (12)

where g′(f) = ∂g/∂f and g′′(f) = ∂²g/∂f².

2.2 Implementation

We discretise in time t and compute the gradient and Hessian on a grid using approximate Riemann quadrature. In the simplest case, we choose a uniform grid [tp], p = 1, . . . , M, so that ∆ = tp − tp−1 is constant. We write f = [fp] to be the vector realisation of the function f at the grid points.
The gradient of the log-likelihood is then given by

∂ log p(Y | f)/∂fp = −∆ Σ_{i=1}^T Θ(ti − tp) Σ_{j=1}^N [(xj(ti) − yj(ti))/σ²ji] g′(fp) e^{−Dj(ti−tp)}, (13)

and the negative Hessian of the log-likelihood is

Wpq = −∂² log p(Y | f)/∂fp∂fq
= δpq ∆ Σ_{i=1}^T Θ(ti − tq) Σ_{j=1}^N [(xj(ti) − yj(ti))/σ²ji] g′′(fq) e^{−Dj(ti−tq)}
+ ∆² Σ_{i=1}^T Θ(ti − tp) Θ(ti − tq) Σ_{j=1}^N σ⁻²ji g′(fq) g′(fp) e^{−Dj(2ti−tp−tq)}, (14)

where δpq is the Kronecker delta. In these and the following formulae ti is understood to mean the index of the grid point corresponding to the ith data point, whereas tp and tq correspond to the grid points themselves. We can then compute the gradient and Hessian of the (discretised) un-normalised log posterior Ψ(f) = log p(Y | f) + log p(f) [see 8, chapter 3]:

∇Ψ(f) = ∇ log p(Y | f) − K⁻¹f, ∇∇Ψ(f) = −(W + K⁻¹), (15)

where K is the prior covariance matrix evaluated at the grid points. These can be used to find the MAP solution ˆf using Newton’s method. The Laplace approximation to the log-marginal likelihood is then (ignoring terms that do not involve model parameters)

log p(Y) ≃ log p(Y | ˆf) − (1/2) ˆfᵀK⁻¹ˆf − (1/2) log |I + KW|. (16)

We can also optimise the log-marginal with respect to the model and kernel parameters. The gradient of the log-marginal with respect to the kernel parameters is [8]

∂ log p(Y | Ξ)/∂Ξ = (1/2) ˆfᵀK⁻¹ (∂K/∂Ξ) K⁻¹ˆf − (1/2) tr[ (I + KW)⁻¹ W (∂K/∂Ξ) ] + Σp [∂ log p(Y | Ξ)/∂ˆfp] [∂ˆfp/∂Ξ], (17)

where the final term is due to the implicit dependence of ˆf on Ξ.

2.3 Example: exponential response

As an example, we consider the case in which

g(f(t), θj) = Sj exp(f(t)), (18)

which provides a useful way of constraining the protein concentration to be positive. Substituting equation (18) in equations (13) and (14) one obtains

∂ log p(Y | f)/∂fp = −∆ Σ_{i=1}^T Θ(ti − tp) Σ_{j=1}^N [(xj(ti) − yj(ti))/σ²ji] Sj e^{fp − Dj(ti−tp)},

Wpq = −δpq ∂ log p(Y | f)/∂fp + ∆² Σ_{i=1}^T Θ(ti − tp) Θ(ti − tq) Σ_{j=1}^N σ⁻²ji Sj² e^{fp + fq − Dj(2ti−tp−tq)}.
The terms required in equation (17) are

$$\frac{\partial \log p(Y \mid \Xi)}{\partial \hat f_p} = -(AW)_{pp} - \frac{1}{2}\sum_q A_{qq} W_{qp}, \qquad \frac{\partial \hat{\mathbf f}}{\partial \Xi} = A K^{-1}\frac{\partial K}{\partial \Xi}\,\nabla \log p(Y \mid \hat{\mathbf f}),$$

where $A = (W + K^{-1})^{-1}$.

3 Results

To test the efficacy of our method, we used a recently published biological data set which was studied using a linear response model by Barenco et al. [1]. This study focused on the tumour suppressor protein p53. mRNA abundance was measured at regular intervals in three independent human cell lines using Affymetrix U133A oligonucleotide microarrays. The authors then restricted their interest to five known target genes of p53: DDB2, p21, SESN1/hPA26, BIK and TNFRSF10b. They estimated the mRNA production rates by using quadratic interpolation between any three consecutive time points. They then discretised the model and used MCMC sampling (assuming a log-normal noise model) to obtain estimates of the model parameters B_j, S_j, D_j and f(t). To make the model identifiable, the value of the mRNA decay of one of the target genes, p21, was measured experimentally. Also, the scale of the sensitivities was fixed by choosing p21's sensitivity to be equal to one, and f(0) was constrained to be zero. Their predictions were then validated by making explicit protein concentration measurements and by growing mutant cell lines in which the p53 gene had been knocked out.

3.1 Linear response analysis

We first analysed the data using the simple linear response model of Barenco et al. [1]. Raw data was processed using the mmgMOS model of [4], which also provides estimates of the credibility associated with each measurement. Data from the different cell lines were treated as independent instantiations of f, sharing the model parameters {B_j, S_j, D_j, Ξ}. We used a squared exponential covariance function for the prior distribution on the latent function f. The inferred posterior mean function for f, together with 95% confidence intervals, is shown in Figure 1(a). The pointwise estimates inferred by Barenco et al.
are shown as crosses in the plot. The posterior mean function matches well the prediction obtained by Barenco et al.² Notice that the right hand tail of the inferred mean function shows oscillatory behaviour. We believe that this is an artifact caused by the squared exponential covariance: the steep rise between time zero and time two forces the length scale of the function to be small, hence giving rise to wavy functions [see page 123 in 8]. To avoid this, we repeated the experiment using the "MLP" covariance function for the prior distribution over f [12]. The posterior cannot be obtained analytically in this case, so we resorted to the MAP-Laplace approximation described in section 2. The MLP covariance arises as the limit of a neural network with an infinite number of sigmoidal hidden units and has the following form:

$$k(t, t') = \arcsin\left(\frac{w t t' + b}{\sqrt{(w t^2 + b + 1)(w t'^2 + b + 1)}}\right), \qquad (19)$$

where w and b are parameters known as the weight variance and the bias variance. The results using this covariance function are shown in Figure 1(b). The resulting profile does not show the unexpected oscillatory behaviour and has tighter credibility intervals.

Figure 2 shows the results of inference on the values of the hyperparameters B_j, S_j and D_j. The columns on the left, shaded grey, show results from our model, and the white columns are the estimates obtained in [1]. The hyperparameters were assigned a vague gamma prior distribution (a = b = 0.1, corresponding to a mean of 1 and a variance of 10). Samples from the posterior distribution were obtained using Hybrid Monte Carlo [see e.g. 7]. The results are in good accordance with those obtained by Barenco et al. Differences in the estimates of the basal transcription rates are probably due to the different methods used for probe-level processing of the microarray data.
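The MLP covariance (19) is a direct transcription into code. The default weight and bias variances below are illustrative values, not the ones fitted in the paper; note that the arcsin argument always lies strictly inside (-1, 1), so the function is well defined everywhere.

```python
import numpy as np

def mlp_covariance(t, tp, w=1.0, b=1.0):
    # Equation (19): "MLP" (infinite sigmoidal network) covariance with
    # weight variance w and bias variance b (illustrative defaults).
    num = w * t * tp + b
    den = np.sqrt((w * t ** 2 + b + 1.0) * (w * tp ** 2 + b + 1.0))
    return np.arcsin(num / den)
```

The kernel is symmetric but nonstationary, which is what lets it accommodate the steep initial rise without forcing a short length scale over the whole time course.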
3.2 Non-linear response analysis

We then used the non-linear response model of section 2 in order to constrain the inferred protein concentrations to be positive. We achieved this by using an exponential response of the transcription rate to the logged protein concentration. The inferred MAP solutions for the latent function f are plotted in Figure 3 for the squared exponential prior (a) and for the MLP prior (b).

²Barenco et al. also constrained the latent function to be zero at time zero.

Figure 1: Predicted protein concentration for p53 using a linear response model: (a) squared exponential prior on f; (b) MLP prior on f. Solid line is the mean prediction, dashed lines are 95% credibility intervals. The prediction of Barenco et al. was pointwise and is shown as crosses.

Figure 2: Results of inference on the hyperparameters for the p53 data studied in [1]. The bar charts show, for the five target genes, (a) basal transcription rates, (b) sensitivities and (c) decay rates. Grey bars are estimates obtained with our model; white bars are the estimates obtained by Barenco et al.

4 Discussion

In this paper we showed how Gaussian processes can be used effectively in modelling the dynamics of a very simple regulatory network motif. This approach has many advantages over standard parametric approaches: first of all, there is no need to restrict the inference to the observed time points, and the temporal continuity of the inferred functions is accounted for naturally. Secondly, Gaussian processes allow noise information to be accounted for in a natural way.
It is well known that biological data exhibit large variability, partly because of technical noise (due, for example, to the difficulty of measuring mRNA abundance for genes expressed at low levels) and partly because of differences between cell lines. Accounting for these sources of noise in a parametric model can be difficult (particularly when estimates of the derivatives of the measured quantities are required), while Gaussian processes can incorporate this information naturally. Finally, MCMC parameter estimation in a discretised model can be computationally expensive due to the high correlations between variables. This is a consequence of treating the protein concentrations as parameters, and means that many MCMC iterations are needed to obtain reliable samples. Parameter estimation can be achieved easily in our framework by type II maximum likelihood or by using efficient Monte Carlo sampling techniques on the model hyperparameters alone.

While the results shown in the paper are encouraging, this is still a very simple modelling situation. For example, it is well known that transcriptional delays can play a significant role in determining the dynamics of many cellular processes [5]. These effects can be introduced naturally in a Gaussian process model; however, the data must be sampled at a reasonably high frequency for delays to become identifiable in a stochastic model, which is often not the case with microarray data sets. Another natural extension of our work would be to consider more biologically meaningful nonlinearities, such as the popular Michaelis-Menten model of transcription used in [9]. Finally, networks consisting of a single transcription factor are very useful for studying small systems of particular interest, such as p53.
However, our ultimate goal would be to describe regulatory pathways consisting of more genes. These can be dealt with in the general framework described in this paper, but careful thought will be needed to overcome the greater computational difficulties.

Figure 3: Predicted protein concentration for p53 using an exponential response: (a) squared exponential prior covariance on f; (b) MLP prior covariance on f. Solid line is the mean prediction, dashed lines show 95% credibility intervals. The results shown are for exp(f), hence the asymmetry of the credibility intervals. The prediction of Barenco et al. was pointwise and is shown as crosses.

Acknowledgements

We thank Martino Barenco for useful discussions and for providing the data. We gratefully acknowledge support from BBSRC Grant No. BBS/B/0076X "Improved processing of microarray data with probabilistic models".

References

[1] M. Barenco, D. Tomescu, D. Brewer, R. Callard, J. Stark, and M. Hubank. Ranked prediction of p53 targets using hidden variable dynamic modeling. Genome Biology, 7(3):R25, 2006.
[2] T. Graepel. Solving noisy linear operator equations by Gaussian processes: Application to ordinary and partial differential equations. In T. Fawcett and N. Mishra, editors, Proceedings of the International Conference on Machine Learning, volume 20, pages 234-241. AAAI Press, 2003.
[3] J. C. Liao, R. Boscolo, Y.-L. Yang, L. M. Tran, C. Sabatti, and V. P. Roychowdhury. Network component analysis: Reconstruction of regulatory signals in biological systems. Proceedings of the National Academy of Sciences USA, 100(26):15522-15527, 2003.
[4] X. Liu, M. Milo, N. D. Lawrence, and M. Rattray. A tractable probabilistic model for Affymetrix probe-level analysis across multiple chips. Bioinformatics, 21(18):3637-3644, 2005.
[5] N. A. Monk. Unravelling nature's networks. Biochemical Society Transactions, 31:1457-1461, 2003.
[6] R. Murray-Smith and B. A. Pearlmutter. Transformations of Gaussian process priors. In J. Winkler, N. D. Lawrence, and M. Niranjan, editors, Deterministic and Statistical Methods in Machine Learning, volume 3635 of Lecture Notes in Artificial Intelligence, pages 110-123, Berlin, 2005. Springer-Verlag.
[7] R. M. Neal. Bayesian Learning for Neural Networks. Springer, 1996. Lecture Notes in Statistics 118.
[8] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2005.
[9] S. Rogers, R. Khanin, and M. Girolami. Model based identification of transcription factor activity from microarray data. In Probabilistic Modeling and Machine Learning in Structural and Systems Biology, Tuusula, Finland, 17-18th June 2006.
[10] C. Sabatti and G. M. James. Bayesian sparse hidden components analysis for transcription regulation networks. Bioinformatics, 22(6):739-746, 2006.
[11] G. Sanguinetti, M. Rattray, and N. D. Lawrence. A probabilistic dynamical model for quantitative inference of the regulatory mechanism of transcription. Bioinformatics, 22(14):1753-1759, 2006.
[12] C. K. I. Williams. Computation with infinite neural networks. Neural Computation, 10(5):1203-1216, 1998.
TrueSkill™: A Bayesian Skill Rating System

Ralf Herbrich, Microsoft Research Ltd., Cambridge, UK, rherb@microsoft.com
Tom Minka, Microsoft Research Ltd., Cambridge, UK, minka@microsoft.com
Thore Graepel, Microsoft Research Ltd., Cambridge, UK, thoreg@microsoft.com

Abstract

We present a new Bayesian skill rating system which can be viewed as a generalisation of the Elo system used in Chess. The new system tracks the uncertainty about player skills, explicitly models draws, can deal with any number of competing entities and can infer individual skills from team results. Inference is performed by approximate message passing on a factor graph representation of the model. We present experimental evidence on the increased accuracy and convergence speed of the system compared to Elo and report on our experience with the new rating system running in a large-scale commercial online gaming service under the name of TrueSkill.

1 Introduction

Skill ratings in competitive games and sports serve three main functions. First, they allow players to be matched with other players of similar skill, leading to interesting, balanced matches. Second, the ratings can be made available to the players and to the interested public and thus stimulate interest and competition. Thirdly, ratings can be used as qualification criteria for tournaments. With the advent of online gaming, interest in rating systems has increased dramatically because the quality of the online experience of millions of players each day is at stake.

In 1959, Arpad Elo developed a statistical rating system for Chess, which was adopted by the World Chess Federation FIDE in 1970 [4]. The key idea behind the Elo system [2] is to model the probability of the possible game outcomes as a function of the two players' skill ratings $s_1$ and $s_2$.
In a game, each player i exhibits a performance $p_i \sim N(p_i; s_i, \beta^2)$ normally distributed around their skill $s_i$ with fixed variance $\beta^2$. The probability that player 1 wins is given by the probability that his performance $p_1$ exceeds the opponent's performance $p_2$,

$$P(p_1 > p_2 \mid s_1, s_2) = \Phi\left(\frac{s_1 - s_2}{\sqrt{2}\,\beta}\right), \qquad (1)$$

where $\Phi$ denotes the cumulative density of a zero-mean unit-variance Gaussian. After the game, the skill ratings $s_1$ and $s_2$ are updated such that the observed game outcome becomes more likely and $s_1 + s_2 = \text{const.}$ is maintained. Let $y = +1$ if player 1 wins, $y = -1$ if player 2 wins and $y = 0$ if a draw occurs. Then the resulting (linearised) Elo update is given by $s_1 \leftarrow s_1 + y\Delta$, $s_2 \leftarrow s_2 - y\Delta$ and

$$\Delta = \underbrace{\alpha\beta\sqrt{\pi}}_{\text{K factor}}\left(\frac{y+1}{2} - \Phi\left(\frac{s_1 - s_2}{\sqrt{2}\,\beta}\right)\right),$$

where $0 < \alpha < 1$ determines the weighting of the new evidence versus the old estimate. Most currently used Elo variants use a logistic distribution instead of a Gaussian because it is argued to provide a better fit for Chess data. From the point of view of statistics, the Elo system addresses the problem of estimating skills from paired comparison data [1], with the Gaussian variant corresponding to the Thurstone Case V model and the logistic variant to the Bradley-Terry model.

In the Elo system, a player's rating is regarded as provisional as long as it is based on fewer than a fixed number of, say, 20 games. This problem was addressed by Mark Glickman's Bayesian rating system Glicko [5], which introduced the idea of modelling the belief about a player's skill as a Gaussian belief distribution characterised by a mean $\mu$ and a variance $\sigma^2$. An important new application of skill rating systems are multiplayer online games, which greatly benefit from the ability to create online matches in which the participating players have roughly even skills and hence enjoyable, fair and exciting game experiences.
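The Gaussian Elo update above can be sketched in a few lines. This is an illustrative reading, not reference code: the parameter values are arbitrary, and the update direction follows the (y+1)/2 coding of the outcome, so a loss for player 1 produces a negative Delta.

```python
import math

def phi(x):
    # Cumulative density of a zero-mean unit-variance Gaussian.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def elo_update(s1, s2, y, beta=1.0, alpha=0.07):
    # Linearised Gaussian Elo: y = +1 (player 1 wins), -1 (player 2 wins),
    # 0 (draw). Delta compares the outcome code (y+1)/2 with the predicted
    # win probability of equation (1); s1 + s2 is conserved by construction.
    p1_wins = phi((s1 - s2) / (math.sqrt(2.0) * beta))
    delta = alpha * beta * math.sqrt(math.pi) * ((y + 1) / 2.0 - p1_wins)
    return s1 + delta, s2 - delta
```

A surprising outcome (a win by the lower-rated player) yields a large Delta, while an expected outcome yields a small one, which is exactly the behaviour plotted later in Figure 2 for the factor-graph updates.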
Multiplayer online games pose the following challenges:

1. Game outcomes often refer to teams of players, yet a skill rating for individual players is needed for future matchmaking.
2. More than two players or teams compete, such that the game outcome is a permutation of teams or players rather than just a winner and a loser.

In this paper we present a new rating system, TrueSkill, that addresses both these challenges in a principled Bayesian framework. We express the model as a factor graph (Section 2) and use approximate message passing (Section 3) to infer the marginal belief distribution over the skill of each player. In Section 4 we present experimental results on real-world data generated by Bungie Studios during the beta testing of the Xbox title Halo 2, and we report on our experience with the rating system running in the Xbox Live service.

2 Factor Graphs for Ranking

From among a population of n players {1, ..., n} in a game, let k teams compete in a match. The team assignments are specified by k non-overlapping subsets $A_j \subseteq \{1, \dots, n\}$ of the player population, $A_i \cap A_j = \emptyset$ if $i \ne j$. The outcome $\mathbf r := (r_1, \dots, r_k) \in \{1, \dots, k\}^k$ is specified by a rank $r_j$ for each team j, with r = 1 indicating the winner and with the possibility of draws when $r_i = r_j$. The ranks are derived from the scoring rules of the game. We model the probability $P(\mathbf r \mid \mathbf s, A)$ of the game outcome $\mathbf r$ given the skills $\mathbf s$ of the participating players and the team assignments $A := \{A_1, \dots, A_k\}$. From Bayes' rule we obtain the posterior distribution

$$p(\mathbf s \mid \mathbf r, A) = \frac{P(\mathbf r \mid \mathbf s, A)\, p(\mathbf s)}{P(\mathbf r \mid A)}. \qquad (2)$$

We assume a factorising Gaussian prior distribution, $p(\mathbf s) := \prod_{i=1}^n N(s_i; \mu_i, \sigma_i^2)$. Each player i is assumed to exhibit a performance $p_i \sim N(p_i; s_i, \beta^2)$ in the game, centred around their skill $s_i$ with fixed variance $\beta^2$.
The performance $t_j$ of team j is modelled as the sum of the performances of its members, $t_j := \sum_{i \in A_j} p_i$. Let us reorder the teams in ascending order of rank, $r_{(1)} \le r_{(2)} \le \cdots \le r_{(k)}$. Disregarding draws, the probability of a game outcome $\mathbf r$ is modelled as

$$P(\mathbf r \mid \{t_1, \dots, t_k\}) = P\left(t_{r_{(1)}} > t_{r_{(2)}} > \cdots > t_{r_{(k)}}\right),$$

that is, the order of performances generates the order in the game outcome. If draws are permitted, the winning outcome $r_{(j)} < r_{(j+1)}$ requires $t_{r_{(j)}} > t_{r_{(j+1)}} + \varepsilon$, and the draw outcome $r_{(j)} = r_{(j+1)}$ requires $|t_{r_{(j)}} - t_{r_{(j+1)}}| \le \varepsilon$, where $\varepsilon > 0$ is a draw margin that can be calculated from the assumed probability of a draw.¹

We need to be able to report skill estimates after each game and will therefore use an online learning scheme referred to as Gaussian density filtering [8]. The posterior distribution is approximated to be Gaussian and is used as the prior distribution for the next game. If the skills are expected to vary over time, a Gaussian dynamics factor $N(s_{i,t+1}; s_{i,t}, \gamma^2)$ can be introduced, which leads to an additive variance component of $\gamma^2$ in the subsequent prior.

Let us consider a game with k = 3 teams with team assignments $A_1 = \{1\}$, $A_2 = \{2, 3\}$ and $A_3 = \{4\}$. Let us further assume that team 1 is the winner and that teams 2 and 3 draw, i.e., $\mathbf r := (1, 2, 2)$. We can represent the resulting joint distribution $p(\mathbf s, \mathbf p, \mathbf t \mid \mathbf r, A)$ by the factor graph depicted in Figure 1. A factor graph is a bipartite graph consisting of variable and factor nodes, shown in Figure 1 as grey circles and black squares, respectively. The function represented by a factor graph (in our case the joint distribution $p(\mathbf s, \mathbf p, \mathbf t \mid \mathbf r, A)$) is given by the product of all the (potential) functions associated with each factor. The structure of the factor graph gives information about the dependencies of the factors involved and is the basis of efficient inference algorithms. Returning to Bayes' rule (2), the quantities of interest are the posterior distributions $p(s_i \mid \mathbf r, A)$ over skills given game outcome $\mathbf r$ and team associations A.

¹The transitive relation "1 draws with 2" is not modelled exactly by the relation $|t_1 - t_2| \le \varepsilon$, which is non-transitive: if $|t_1 - t_2| \le \varepsilon$ and $|t_2 - t_3| \le \varepsilon$, then the model generates a draw among the three teams despite the possibility that $|t_1 - t_3| > \varepsilon$.

Figure 1: An example TrueSkill factor graph for the game described above, with factors $N(s_i; \mu_i, \sigma_i^2)$, $N(p_i; s_i, \beta^2)$, $I(t_1 = p_1)$, $I(t_2 = p_2 + p_3)$, $I(t_3 = p_4)$, $I(d_1 = t_1 - t_2)$, $I(d_2 = t_2 - t_3)$, $I(d_1 > \varepsilon)$ and $I(|d_2| \le \varepsilon)$. There are four types of variables: $s_i$ for the skills of all players, $p_i$ for the performances of all players, $t_i$ for the performances of all teams and $d_j$ for the team performance differences. The first row of factors encodes the (product) prior; the product of the remaining factors characterises the likelihood for the game outcome Team 1 > Team 2 = Team 3. The arrows indicate the optimal message passing schedule: first, all light arrow messages are updated from top to bottom. Then, the schedule over the team performance (difference) nodes is iterated in numbered order. Finally, the posterior over the skills is computed by updating all the dark arrow messages from bottom to top.
The $p(s_i \mid \mathbf r, A)$ are calculated from the joint distribution by integrating out the individual performances $\{p_i\}$ and the team performances $\{t_i\}$,

$$p(\mathbf s \mid \mathbf r, A) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} p(\mathbf s, \mathbf p, \mathbf t \mid \mathbf r, A)\, d\mathbf p\, d\mathbf t.$$

Figure 2: Update rules for the approximate marginals for different values of the draw margin ε (plotted for ε = 0.50, 1.00 and 4.00): the functions $V_f$ and $W_f$ for the win factor $f(\cdot) = I(\cdot > \varepsilon)$ (left column) and for the draw factor $f(\cdot) = I(|\cdot| \le \varepsilon)$ (right column). For a two-team game, the parameter t represents the difference of team performances between winner and loser. Hence, in the win column (left), negative values of t indicate a surprise outcome leading to a large update. In the draw column (right), any stark deviation of team performances is surprising and leads to a large update.

3 Approximate Message Passing

The sum-product algorithm in its formulation for factor graphs [7] exploits the sparse connection structure of the graph to perform efficient inference of single-variable marginals by message passing. Message passing for continuous variables is characterised by the following equations (these follow directly from the distributive law):

$$p(v_k) = \prod_{f \in F_{v_k}} m_{f \to v_k}(v_k), \qquad (3)$$

$$m_{f \to v_j}(v_j) = \int \cdots \int f(\mathbf v) \prod_{i \ne j} m_{v_i \to f}(v_i)\, d\mathbf v_{\setminus j}, \qquad (4)$$

$$m_{v_k \to f}(v_k) = \prod_{\tilde f \in F_{v_k} \setminus \{f\}} m_{\tilde f \to v_k}(v_k), \qquad (5)$$

where $F_{v_k}$ denotes the set of factors connected to variable $v_k$ and $\mathbf v_{\setminus j}$ denotes the components of the vector $\mathbf v$ except for its jth component. If the factor graph is acyclic and the messages can be calculated and represented exactly, then each message needs to be calculated only once and the marginals $p(v_k)$ can be calculated from the messages by virtue of (3).
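Equations (3)-(5) can be exercised on a toy discrete factor graph before turning to the Gaussian case. This sketch (with made-up numbers) passes a single message through a pairwise factor on a two-variable chain and checks the result against brute-force marginalisation of the joint:

```python
import numpy as np

# Chain x1 - f - x2 with binary variables; values are illustrative only.
p_x1 = np.array([0.6, 0.4])                 # prior factor on x1
f = np.array([[0.9, 0.1], [0.2, 0.8]])      # pairwise factor f(x1, x2)

m_x1_to_f = p_x1                            # (5): product of the other factor messages
m_f_to_x2 = f.T @ m_x1_to_f                 # (4): sum over x1 of f times the incoming message
p_x2 = m_f_to_x2 / m_f_to_x2.sum()          # (3), normalised marginal of x2

# Brute force: form the joint and marginalise directly.
joint = p_x1[:, None] * f
```

On an acyclic graph like this one, a single sweep of messages suffices, mirroring the "each message needs to be calculated only once" observation above.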
As can be seen from Figure 1, the TrueSkill factor graph is in fact acyclic, and the majority of messages can be represented compactly as one-dimensional Gaussians. However, (4) shows that messages 2 and 5 from the comparison factors $I(\cdot > \varepsilon)$ or $I(|\cdot| \le \varepsilon)$ to the performance differences $d_i$ in Figure 1 are non-Gaussian; in fact, the true message would be the (non-Gaussian) factor itself. Following the Expectation Propagation algorithm [8], we approximate these messages as well as possible by approximating the marginal $p(d_i)$ via moment matching, resulting in a Gaussian $\hat p(d_i)$ with the same mean and variance as $p(d_i)$. For Gaussian distributions, moment matching is known to minimise the Kullback-Leibler divergence.

Table 1: The update equations for the (cached) marginals $p(x)$ and the messages $m_{f \to x}$ for all factor types of a TrueSkill factor graph. We represent Gaussians $N(\cdot; \mu, \sigma)$ in terms of their canonical parameters: precision $\pi := \sigma^{-2}$ and precision-adjusted mean $\tau := \pi\mu$. The missing update equation for the message or the marginal follows from (6).

- Prior factor $f(x) = N(x; m, v^2)$, message $m_{f \to x}$: $\pi_x^{\text{new}} \leftarrow \pi_x + 1/v^2$, $\tau_x^{\text{new}} \leftarrow \tau_x + m/v^2$.
- Likelihood factor $f(x, y) = N(x; y, c^2)$, message $m_{f \to x}$: $\pi_{f \to x}^{\text{new}} \leftarrow a\,(\pi_y - \pi_{f \to y})$, $\tau_{f \to x}^{\text{new}} \leftarrow a\,(\tau_y - \tau_{f \to y})$ with $a := \left(1 + c^2(\pi_y - \pi_{f \to y})\right)^{-1}$. The message $m_{f \to y}$ follows from the symmetry $N(x; y, c^2) = N(y; x, c^2)$.
- Weighted sum factor $f(x, \mathbf y) = I(x = \mathbf a^{\mathsf T}\mathbf y)$, message $m_{f \to x}$: $\pi_{f \to x}^{\text{new}} \leftarrow \left(\sum_{j=1}^n \frac{a_j^2}{\pi_{y_j} - \pi_{f \to y_j}}\right)^{-1}$, $\tau_{f \to x}^{\text{new}} \leftarrow \pi_{f \to x}^{\text{new}} \left(\sum_{j=1}^n a_j\, \frac{\tau_{y_j} - \tau_{f \to y_j}}{\pi_{y_j} - \pi_{f \to y_j}}\right)$.
- The message $m_{f \to y_n}$ for $f(x, \mathbf y) = I(x = \mathbf b^{\mathsf T}\mathbf y)$ is obtained by rewriting the factor as $I(y_n = \mathbf a^{\mathsf T}[y_1, \dots, y_{n-1}, x])$ with $\mathbf a = \frac{1}{b_n}[-b_1, \dots, -b_{n-1}, +1]^{\mathsf T}$ and applying the previous rule.
- Comparison factors $f(x) = I(x > \varepsilon)$ and $f(x) = I(|x| \le \varepsilon)$, marginal update: $\pi_x^{\text{new}} \leftarrow \frac{c}{1 - W_f(d/\sqrt c,\; \varepsilon\sqrt c)}$, $\tau_x^{\text{new}} \leftarrow \frac{d + \sqrt c\, V_f(d/\sqrt c,\; \varepsilon\sqrt c)}{1 - W_f(d/\sqrt c,\; \varepsilon\sqrt c)}$, with $c := \pi_x - \pi_{f \to x}$ and $d := \tau_x - \tau_{f \to x}$.
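In canonical parameters, the Gaussian products and quotients of Table 1 and equation (6) reduce to addition and subtraction, and the bottom row's truncation update only needs the correction functions for the win factor. The following is a hedged sketch (my own helper names, not the production code), restricted to the $I(x > \varepsilon)$ case:

```python
import math

# Gaussians are stored as pairs (pi, tau) with pi = sigma^-2, tau = pi * mu.

def multiply(g1, g2):
    # Product of Gaussians: precisions and precision-adjusted means add.
    return (g1[0] + g2[0], g1[1] + g2[1])

def divide(g1, g2):
    # Quotient of Gaussians, as in eq. (6): recover the approximate
    # factor-to-variable message from the moment-matched marginal.
    return (g1[0] - g2[0], g1[1] - g2[1])

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def v_win(t, eps):
    # Additive truncation correction V for the I(. > eps) factor.
    return norm_pdf(t - eps) / norm_cdf(t - eps)

def w_win(t, eps):
    # Multiplicative truncation correction W; lies strictly in (0, 1).
    v = v_win(t, eps)
    return v * (v + t - eps)

def truncate_win(pi_x, tau_x, pi_f_to_x, tau_f_to_x, eps):
    # Bottom row of Table 1: moment-matched marginal update for I(x > eps).
    c = pi_x - pi_f_to_x
    d = tau_x - tau_f_to_x
    sqrt_c = math.sqrt(c)
    denom = 1.0 - w_win(d / sqrt_c, eps * sqrt_c)
    return c / denom, (d + sqrt_c * v_win(d / sqrt_c, eps * sqrt_c)) / denom
```

Conditioning a standard normal on x > epsilon increases the precision (the new pi exceeds c) and pushes the mean upward, which matches the qualitative behaviour of V and W plotted in Figure 2.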
Then, we exploit the fact that from (3) and (5) we have $\hat p(d_i) = \hat m_{f \to d_i}(d_i)\; m_{d_i \to f}(d_i)$, and thus

$$\hat m_{f \to d_i}(d_i) = \frac{\hat p(d_i)}{m_{d_i \to f}(d_i)}. \qquad (6)$$

Table 1 gives all the update equations necessary for performing inference in the TrueSkill factor graph. The top four rows result from standard Gaussian integrals. The bottom rule is the result of the moment matching procedure described above. The four functions are the additive and multiplicative correction terms for the mean and variance of a (doubly) truncated Gaussian and are given by (see also Figure 2):

$$V_{I(\cdot>\varepsilon)}(t, \varepsilon) := \frac{N(t-\varepsilon)}{\Phi(t-\varepsilon)}, \qquad W_{I(\cdot>\varepsilon)}(t, \varepsilon) := V_{I(\cdot>\varepsilon)}(t, \varepsilon)\left(V_{I(\cdot>\varepsilon)}(t, \varepsilon) + t - \varepsilon\right),$$

$$V_{I(|\cdot|\le\varepsilon)}(t, \varepsilon) := \frac{N(-\varepsilon-t) - N(\varepsilon-t)}{\Phi(\varepsilon-t) - \Phi(-\varepsilon-t)}, \qquad W_{I(|\cdot|\le\varepsilon)}(t, \varepsilon) := V^2_{I(|\cdot|\le\varepsilon)}(t, \varepsilon) + \frac{(\varepsilon-t)\,N(\varepsilon-t) + (\varepsilon+t)\,N(\varepsilon+t)}{\Phi(\varepsilon-t) - \Phi(-\varepsilon-t)}.$$

Since the messages 2 and 5 are approximate, we need to iterate over all messages that lie on the shortest path between any two approximate marginals $\hat p(d_i)$ until the approximate marginals no longer change. The resulting optimal message passing schedule can be found in Figure 1 (arrows and caption).

4 Experiments and Online Service

4.1 Halo 2 Beta Test

In order to assess the performance of the TrueSkill algorithm, we performed experiments on the game outcome data set generated by Bungie Studios during the beta testing of the Xbox title Halo 2.² The data set consists of thousands of game outcomes for four different types of games: 8 players against each other (Free for All), 4 players vs. 4 players (Small Teams), 1 player vs. 1 player (Head to Head), and 8 players vs. 8 players (Large Teams).
The draw margin ε for each factor node was set by counting the fraction of draws between teams (the empirical draw probability) and relating the draw margin ε to the chance of drawing via

$$\text{draw probability} = \Phi\left(\frac{\varepsilon}{\sqrt{n_1 + n_2}\,\beta}\right) - \Phi\left(\frac{-\varepsilon}{\sqrt{n_1 + n_2}\,\beta}\right) = 2\,\Phi\left(\frac{\varepsilon}{\sqrt{n_1 + n_2}\,\beta}\right) - 1,$$

where $n_1$ and $n_2$ are the numbers of players in each of the two teams compared by an $I(\cdot > \varepsilon)$ or $I(|\cdot| \le \varepsilon)$ node (see Figure 1). The performance variance $\beta^2$ and the dynamics variance $\gamma^2$ were set to the standard values (see next section).

We compared the TrueSkill algorithm to Elo with a Gaussian performance distribution (1) and α = 0.07; this corresponds to a K factor of 24 on the Elo scale, which is considered good and stable (see [4]). When we had to process a team game or a game with more than two teams, we used the so-called duelling heuristic: for each player, compute the Δ's in comparison to all other players, based on the team outcome of the player and every other player, and perform an update with the average of the Δ's. The approximate message passing algorithm described in the last section is extremely efficient; in all our experiments the run time of the ranking algorithm was within twice the run time of the simple Elo update.

Predictive Performance The table below presents the prediction error (fraction of teams that were predicted in the wrong order before the game) for both algorithms (columns 2 and 3). This measure is difficult to interpret because of the interplay of ranking and matchmaking: depending on the (unknown) true skills of all players, the smallest achievable prediction error could be as large as 50%. In order to compensate for this latent, unknown variable, we arranged a competition between Elo and TrueSkill: we let each system predict which games it considered most tightly matched and presented them to the other algorithm.
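Two quantities used in these experiments can be computed directly: the draw margin, by inverting the relation above, and the match quality criterion, referenced here as equation (7) and given in Section 4.2. This is an illustrative sketch with assumed parameter values, not the experiment code:

```python
import math
from statistics import NormalDist

def draw_margin(p_draw, beta, n1, n2):
    # Invert p_draw = 2*Phi(eps / (sqrt(n1 + n2) * beta)) - 1 for eps.
    return math.sqrt(n1 + n2) * beta * NormalDist().inv_cdf((p_draw + 1.0) / 2.0)

def match_quality(mu_i, sigma_i, mu_j, sigma_j, beta):
    # Equation (7): draw probability relative to its maximum in the limit
    # eps -> 0; equals 1 only for identical means and zero uncertainty.
    denom = 2.0 * beta ** 2 + sigma_i ** 2 + sigma_j ** 2
    return math.sqrt(2.0 * beta ** 2 / denom) * \
        math.exp(-(mu_i - mu_j) ** 2 / (2.0 * denom))
```

Pushing the recovered margin back through the forward relation reproduces the empirical draw probability, and the quality criterion decreases both with the skill gap and with the players' skill uncertainties, which is what makes it usable for matchmaking.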
The algorithm that predicts more game outcomes correctly has a better ability to identify tight matches. For TrueSkill we used the matchmaking criterion (7) and for Elo we used the difference in Elo scores, $s_1 - s_2$.

                 ELO (full)   TrueSkill (full)   ELO (challenged)   TrueSkill (challenged)
  Free for All     32.14%         30.82%             38.30%             35.64%
  Small Teams      34.92%         35.23%             42.55%             37.17%
  Head to Head     33.24%         32.44%             40.57%             30.83%
  Large Teams      39.49%         38.15%             44.12%             29.94%

It can be seen from columns 4 and 5 of this table that TrueSkill is significantly better at predicting the tight matches (the challenge set was always 20% of the total number of games in each game mode).

²Available for download at http://research.microsoft.com/mlp/apg/downloads.htm

Match Quality One of the main applications of a rating system is to be able to match players of similar skill. In order to compare the ability of Elo and TrueSkill on this task, we sorted the games by the match quality each system assigned to them. If a match was truly tight, then a draw would be very likely to be observed. Thus, we plot the fraction of draws (out of all possible draws) accumulated over the match quality order assigned by each system. [Figure: cumulative % of pairwise draws against % of games ordered by match quality, for the Small Teams, Free for All and Head to Head modes, ELO vs. TrueSkill.] In this plot we see that TrueSkill is significantly better than Elo for both the Free for All and Head to Head game modes but fails in Small Teams. This is possibly due to the violation of the additive team performance model, as most games in this mode are Capture-the-Flag games.

Win Probability The perceived quality of a rating system for players is in terms of their winning ratio: if the winning ratio is too high, then the player was erroneously assigned too weak opposition by the ranking system (and vice versa).
In a second experiment we processed the Halo 2 dataset but rejected games that did not meet a certain match quality threshold. For the games thus selected, we computed the winning ratio of each player and, depending on the minimal number of games played by each player, measured the average deviation of the winning probability from 50% (the optimal winning ratio). [Figure: average deviation of the winning probability from 50% against the minimal number of games (Halo 2, Head to Head mode), ELO vs. TrueSkill.] The resulting plot (for the Head to Head game mode) shows that with TrueSkill even players with very few games got mostly fair matches (with a winning probability within 35% to 65%).

Convergence Properties Finally, we plotted two exemplary convergence trajectories for two of the highest rated players in the Free for All game mode (solid line: TrueSkill; dashed line: Elo). As can be seen, TrueSkill automatically chooses the correct learning rate, whereas Elo only slowly converges to the target skill. In fact, TrueSkill comes close to the information theoretic limit of log(n!) ≈ n log(n) bits to encode a ranking of n players. For 8 player games, the information theoretic limit is log(n!)/log(8!) ≈ 5 games per player on average, and the observed convergence for these two players is ≈ 10 games!

4.2 TrueSkill in Xbox 360 Live

Xbox Live is Microsoft's console online gaming service. It lets players play together across the world in hundreds of different titles. As of September 2005, Xbox Live had over 2 million subscribed users who had accrued over 1.3 billion hours on the service. The new and improved Xbox 360 Live service offers automatic player rating and matchmaking using the TrueSkill algorithm. The system processes hundreds of thousands of games per day, making it one of the largest applications of Bayesian inference to date.
In Xbox Live we use a scale given by a prior $\mu_0 = 25$ and $\sigma_0^2 = (25/3)^2$, corresponding to a probability of positive skill of approximately 99%. The variance of performance is given by $\beta^2 = (\sigma_0/2)^2$ and the dynamics variance is chosen to be $\gamma^2 = (\sigma_0/100)^2$. The TrueSkill skill of a player i is currently displayed as a conservative skill estimate given by the 1% lower quantile $\mu_i - 3\sigma_i$. This choice ensures that the top of the leaderboards (a listing of all players according to $\mu - 3\sigma$) is only populated by players that are highly skilled with high certainty, having worked their way up from $0 = \mu_0 - 3\sigma_0$. Pairwise matchmaking of players is performed using a match quality criterion derived as the draw probability relative to the highest possible draw probability in the limit $\varepsilon \to 0$,

$$q_{\text{draw}}\left(\beta^2; \mu_i, \mu_j, \sigma_i, \sigma_j\right) := \sqrt{\frac{2\beta^2}{2\beta^2 + \sigma_i^2 + \sigma_j^2}}\;\exp\left(-\frac{(\mu_i - \mu_j)^2}{2\left(2\beta^2 + \sigma_i^2 + \sigma_j^2\right)}\right). \qquad (7)$$

Note that the matchmaking process can be viewed as a process of sequential experimental design [3]. Since the quality of a match is determined by the unpredictability of its outcome, the goals of matchmaking and of finding the most informative matches are aligned!

As a fascinating by-product, we have the opportunity to study TrueSkill in action with player populations of hundreds of thousands of players. While we are only just beginning to analyse the vast amount of resulting data, we have already made some interesting observations.

1. Games differ in the number of effective skill levels. Games of chance (e.g., single game Backgammon or UNO) have a narrow skill distribution, while games of skill (e.g., semi-realistic racing games) have a wide skill distribution.

2. Matchmaking and skill display result in a feedback loop back to the players, who often view their skill estimate as a reward or punishment for performance.
Some players try to protect or boost their skill rating by either stopping to play, by carefully choosing their opponents, or by cheating.

3. The total skill distribution is shifted below the prior distribution if players new to the system consistently lose their first few games. When a skill reset was initiated, we found that the effect disappeared with tighter matchmaking enforced.

5 Conclusion

TrueSkill is a globally deployed Bayesian skill rating algorithm based on approximate message passing in factor graphs. It has many theoretical and practical advantages over the Elo system and has been demonstrated to work well in practice. While we specifically focused on the TrueSkill algorithm, many more interesting models can be developed within the factor graph framework presented here. In particular, the factor graph formulation is applicable to the family of constraint classification models [6] that encompass a wide range of multiclass and ranking problems. Also, instead of ranking individual entities one can use feature vectors to build a ranking function, e.g., for web pages represented as bags-of-words. Finally, we are planning to run a full time-independent EP analysis across chess games to obtain TrueSkill ratings for chess masters of all times.

Acknowledgements

We would like to thank Patrick O'Kelley, David Shaw and Chris Butcher for interesting discussions. We also thank Bungie Studios for providing the data.

References

[1] A. H. David. The Method of Paired Comparisons. Charles Griffin and Company, London, 1969.
[2] A. E. Elo. The rating of chess players: Past and present. Arco Publishing, New York, 1978.
[3] V. V. Fedorov. Theory of optimal experiments. Academic Press, New York, 1972.
[4] M. E. Glickman. A comprehensive guide to chess ratings. Amer. Chess Journal, 3:59-102, 1995.
[5] M. E. Glickman.
Parameter estimation in large dynamic paired comparison experiments. Applied Statistics, 48:377-394, 1999.
[6] S. Har-Peled, D. Roth, and D. Zimak. Constraint classification: A new approach to multiclass classification and ranking. In NIPS 15, pages 785-792, 2002.
[7] F. R. Kschischang, B. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Trans. Inform. Theory, 47(2):498-519, 2001.
[8] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001.
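The match quality criterion of Eq. (7) above is cheap to evaluate in closed form; a minimal sketch in Python (the function name and the worked examples are illustrative, not from the paper):

```python
import math

def trueskill_match_quality(mu_i, sigma_i, mu_j, sigma_j, beta):
    """Draw-probability match quality of Eq. (7); 1 means a perfectly even,
    maximally informative pairing."""
    denom = 2 * beta**2 + sigma_i**2 + sigma_j**2
    return math.sqrt(2 * beta**2 / denom) * math.exp(
        -((mu_i - mu_j) ** 2) / (2 * denom)
    )

# Xbox Live scale from the paper: mu0 = 25, sigma0 = 25/3, beta = sigma0/2.
mu0, sigma0 = 25.0, 25.0 / 3.0
beta = sigma0 / 2.0
q_new = trueskill_match_quality(mu0, sigma0, mu0, sigma0, beta)
```

For two fresh players at the prior, the skill uncertainties dominate and the quality evaluates to √(1/5) ≈ 0.45; as the σ's shrink toward 0 for equally rated players, the quality approaches 1.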
Learnability and the Doubling Dimension

Yi Li, Genome Institute of Singapore, liy3@gis.a-star.edu.sg
Philip M. Long, Google, plong@google.com

Abstract

Given a set F of classifiers and a probability distribution over their domain, one can define a metric by taking the distance between a pair of classifiers to be the probability that they classify a random item differently. We prove bounds on the sample complexity of PAC learning in terms of the doubling dimension of this metric. These bounds imply known bounds on the sample complexity of learning halfspaces with respect to the uniform distribution that are optimal up to a constant factor. We prove a bound that holds for any algorithm that outputs a classifier with zero error whenever this is possible; this bound is in terms of the maximum of the doubling dimension and the VC-dimension of F, and strengthens the best known bound in terms of the VC-dimension alone. We show that there is no bound on the doubling dimension in terms of the VC-dimension of F (in contrast with the metric dimension).

1 Introduction

A set F of classifiers and a probability distribution D over their domain induce a metric in which the distance between classifiers is the probability that they disagree on how to classify a random object. (Let us call this metric ρ_D.) Properties of metrics like this have long been used for analyzing the generalization ability of learning algorithms [11, 32]. This paper is about bounds on the number of examples required for PAC learning in terms of the doubling dimension [4] of this metric space. The doubling dimension of a metric space is the least d such that any ball can be covered by 2^d balls of half its radius. The doubling dimension has been frequently used lately in the analysis of algorithms [13, 20, 21, 17, 29, 14, 7, 22, 28, 6]. In the PAC-learning model, an algorithm is given examples (x₁, f(x₁)), …, (x_m, f(x_m)) of the behavior of an arbitrary member f of a known class F.
The items x₁, …, x_m are chosen independently at random according to D. The algorithm must, with probability at least 1 − δ (w.r.t. the random choice of x₁, …, x_m), output a classifier whose distance from f is at most ε. We show that if (F, ρ_D) has doubling dimension d, then F can be PAC-learned with respect to D using

  O( (d + log(1/δ)) / ε )   (1)

examples. If in addition the VC-dimension of F is d, we show that any algorithm that outputs a classifier with zero training error whenever this is possible PAC-learns F w.r.t. D using

  O( (d √(log(1/ε)) + log(1/δ)) / ε )   (2)

examples. We show that if F consists of halfspaces through the origin, and D is the uniform distribution over the unit ball in Rⁿ, then the doubling dimension of (F, ρ_D) is O(n). Thus (1) generalizes the known bound of O((n + log(1/δ))/ε) for learning halfspaces with respect to the uniform distribution [25], matching a known lower bound for this problem [23] up to a constant factor. Both upper bounds improve on the O((n log(1/ε) + log(1/δ))/ε) bound that follows from the traditional analysis; (2) is the first such improvement for a polynomial-time algorithm. Some previous analyses of the sample complexity of learning have made use of the fact that the "metric dimension" [18] is at most the VC-dimension [11, 15]. Since using the doubling dimension can sometimes lead to a better bound, a natural question is whether there is also a bound on the doubling dimension in terms of the VC-dimension. We show that this is not the case: it is possible to pack (1/ε)^((1/2 − o(1))d) classifiers in a set F of VC-dimension d so that the distance between every pair is in the interval [ε, 2ε]. Our analysis was inspired by some previous work in computational geometry [19], but is simpler. Combining our upper bound analysis with established techniques (see [33, 3, 8, 31, 30]), one can perform similar analyses for the more general case in which no classifier in F has zero error.
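The induced metric ρ_D is simply a disagreement probability, so it can be estimated by plain Monte Carlo sampling; a hypothetical sketch using threshold classifiers on [0, 1] under the uniform distribution (these classifiers and the distribution are invented for illustration):

```python
import random

def rho_D(f, g, sample_D, n=100_000, seed=0):
    """Monte Carlo estimate of rho_D(f, g) = Pr_{x ~ D}(f(x) != g(x))."""
    rng = random.Random(seed)
    disagree = sum(f(x) != g(x) for x in (sample_D(rng) for _ in range(n)))
    return disagree / n

# Threshold classifiers on [0, 1]: the true distance between thresholds
# a and b under the uniform distribution is |a - b|.
f = lambda x: int(x >= 0.3)
g = lambda x: int(x >= 0.5)
d_fg = rho_D(f, g, lambda rng: rng.random())
```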
We have begun with the PAC model because it is a clean setting in which to illustrate the power of the doubling dimension for analyzing learning algorithms. The doubling dimension appears most useful when the best achievable error rate (the Bayes error) is of the same order as the inverse of the number of training examples (or smaller). Bounding the doubling dimension is useful for analyzing the sample complexity of learning because it limits the richness of a subclass of F near the classifier to be learned. For other analyses that exploit bounds on such local richness, please see [31, 30, 5, 25, 26, 34]. It could be that stronger results could be obtained by marrying the techniques of this paper with those. In any case, it appears that the doubling dimension is an intuitive yet powerful way to bound the local complexity of a collection of classifiers.

2 Preliminaries

2.1 Learning

For some domain X, an example consists of a member of X and its classification in {0, 1}. A classifier is a mapping from X to {0, 1}. A training set is a finite collection of examples. A learning algorithm takes as input a training set and outputs a classifier. Suppose D is a probability distribution over X. Then define

  ρ_D(f, g) = Pr_{x∼D}(f(x) ≠ g(x)).

A learning algorithm A PAC learns F w.r.t. D with accuracy ε and confidence δ from m examples if, for any f ∈ F, if
• domain elements x₁, …, x_m are drawn independently at random according to D, and
• (x₁, f(x₁)), …, (x_m, f(x_m)) is passed to A, which outputs h,
then Pr(ρ_D(f, h) > ε) ≤ δ. If F is a set of classifiers, a learning algorithm is a consistent hypothesis finder for F if it outputs an element of F that correctly classifies all of the training data whenever it is possible to do so.

2.2 Metrics

Suppose Φ = (Z, ρ) is a metric space. An α-cover for Φ is a set T ⊆ Z such that every element of Z has a counterpart in T that is at a distance at most α (with respect to ρ).
An α-packing for Φ is a set T ⊆ Z such that every pair of elements of T are at a distance greater than α (again, with respect to ρ). The α-ball centered at z ∈ Z consists of all t ∈ Z for which ρ(z, t) ≤ α. Denote the size of the smallest α-cover by N(α, Φ). Denote the size of the largest α-packing by M(α, Φ).

Lemma 1 ([18]) For any metric space Φ = (Z, ρ) and any α > 0, M(2α, Φ) ≤ N(α, Φ) ≤ M(α, Φ).

The doubling dimension of Φ is the least d such that, for all radii α > 0, any α-ball in Φ can be covered by at most 2^d α/2-balls. That is, for any α > 0 and any z ∈ Z, there is a C ⊆ Z such that
• |C| ≤ 2^d, and
• {t ∈ Z : ρ(z, t) ≤ α} ⊆ ⋃_{c∈C} {t ∈ Z : ρ(c, t) ≤ α/2}.

2.3 Probability

For a function ψ and a probability distribution D, let E_{x∼D}(ψ(x)) be the expectation of ψ w.r.t. D. We will shorten this to E_D(ψ), and if u = (u₁, …, u_m) ∈ X^m, then E_u(ψ) will be (1/m) Σ_{i=1}^m ψ(uᵢ). We will use Pr_{x∼D}, Pr_D, and Pr_u similarly.

3 The strongest upper bound

Theorem 2 Suppose d is the doubling dimension of (F, ρ_D). There is an algorithm A that PAC-learns F from O((d + log(1/δ))/ε) examples.

The key lemma limits the extent to which points that are separated from one another can crowd around some point in a metric space with limited doubling dimension.

Lemma 3 (see [13]) Suppose Φ = (Z, ρ) is a metric space with doubling dimension d and z ∈ Z. For β ≥ α > 0, let B(z, β) consist of the elements u ∈ Z such that ρ(u, z) ≤ β (that is, the β-ball centered at z). Then M(α, B(z, β)) ≤ (8β/α)^d. (In other words, any α-packing must have at most (8β/α)^d elements within distance β of z.)

Proof: Since Φ has doubling dimension d, the set B(z, β) can be covered by 2^d balls of radius β/2. Each of these can be covered by 2^d balls of radius β/4, and so on. Thus, B(z, β) can be covered by 2^(d ⌈log₂(2β/α)⌉) ≤ (4β/α)^d balls of radius α/2. Applying Lemma 1 completes the proof.

Now we are ready to prove Theorem 2. The proof is an application of the peeling technique [1] (see [30]).
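The proofs in this section build a maximal packing greedily, and maximality makes it automatically a cover as well; a small self-contained sketch (the 1-D point set and metric are invented for the example):

```python
def greedy_packing(points, alpha, dist):
    """Greedily build an alpha-packing: keep a point iff it is more than
    alpha away from everything kept so far.  A maximal packing is also an
    alpha-cover, since any point left uncovered could still be added."""
    chosen = []
    for p in points:
        if all(dist(p, q) > alpha for q in chosen):
            chosen.append(p)
    return chosen

# 1-D toy example with dist(x, y) = |x - y| and alpha = 0.2.
pts = [0.0, 0.05, 0.1, 0.5, 0.55, 1.0]
pack = greedy_packing(pts, 0.2, lambda x, y: abs(x - y))
```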
Proof of Theorem 2: Construct an ε/4-packing G greedily, by repeatedly adding an element of F to G for as long as this is possible. This packing is also an ε/4-cover, since otherwise we could add another member to G. Consider the algorithm that outputs the element of G with minimum error on the training set. Whatever the target, some element of G has error at most ε/4. Applying Chernoff bounds, O(log(1/δ)/ε) examples are sufficient that, with probability at least 1 − δ/2, this classifier is incorrect on at most a fraction ε/2 of the training data. Thus, the training error of the hypothesis output by A is at most ε/2 with probability at least 1 − δ/2. Choose an arbitrary function f, and let S be the random training set resulting from drawing m examples according to D and classifying them using f. Define ρ_S(g, h) to be the fraction of examples in S on which g and h disagree. We have

  Pr(∃g ∈ G: ρ_D(g, f) > ε and ρ_S(g, f) ≤ ε/2)
    ≤ Σ_{k=0}^{log(1/ε)} Pr(∃g ∈ G: 2^k ε < ρ_D(g, f) ≤ 2^{k+1} ε and ρ_S(g, f) ≤ ε/2)
    ≤ Σ_{k=0}^{log(1/ε)} 2^((k+5)d) e^(−2^k εm/8)

by Lemma 3 and the standard Chernoff bound. Each of the following steps is a straightforward manipulation:

  Σ_{k=0}^{log(1/ε)} 2^((k+5)d) e^(−2^k εm/8) = 32^d Σ_{k=0}^{log(1/ε)} 2^(kd) e^(−2^k εm/8)
    ≤ 32^d Σ_{k=0}^{log(1/ε)} 2^(2^k d) e^(−2^k εm/8)
    ≤ 32^d Σ_{k=0}^{∞} 2^(2^k d) e^(−2^k εm/8)
    ≤ 64^d e^(−εm/8) / (1 − 2^d e^(−εm/8)).

Since m = O((d + log(1/δ))/ε) is sufficient for 64^d e^(−εm/8) ≤ δ/2 and 2^d e^(−εm/8) ≤ 1/2, this completes the proof.

4 A bound for consistent hypothesis finders

In this section we analyze algorithms that work by finding hypotheses with zero training error. This is one way to achieve computational efficiency, as is the case when F consists of halfspaces. This analysis will use the notion of VC-dimension.
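For tiny finite classes, the VC-dimension can be computed by brute force directly from its definition; a hypothetical sketch (the threshold class is invented for illustration):

```python
from itertools import combinations

def vc_dimension(domain, classifiers):
    """Brute-force VC-dimension of a finite class over a finite domain: the
    largest d such that some d-point set realizes all 2^d labelings."""
    best = 0
    for d in range(1, len(domain) + 1):
        if any(
            len({tuple(f(x) for x in pts) for f in classifiers}) == 2**d
            for pts in combinations(domain, d)
        ):
            best = d
        else:
            break  # shattering is monotone: no larger set can be shattered
    return best

# Thresholds on {0,...,4} shatter one point but never two (the labeling
# "left point positive, right point negative" is unrealizable), so VC = 1.
thresholds = [lambda x, t=t: int(x >= t) for t in range(6)]
vc = vc_dimension(list(range(5)), thresholds)
```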
Definition 4 The VC-dimension of a set F of {0, 1}-valued functions with a common domain is the size of the largest set x₁, …, x_d of domain elements such that {(f(x₁), …, f(x_d)) : f ∈ F} = {0, 1}^d.

The following lemma generalizes the Chernoff bound to hold uniformly over a class of random variables; it concentrates on a simplified consequence of the Chernoff bound that is useful when bounding the probability that an empirical estimate is much larger than the true expectation.

Lemma 5 (see [12, 24]) Suppose F is a set of {0, 1}-valued functions with a common domain X. Let d be the VC-dimension of F. Let D be a probability distribution over X. Choose α > 0 and K ≥ 1. Then if

  m ≥ c (d log(1/α) + log(1/δ)) / (Kα log(1 + K)),

where c is an absolute constant, then

  Pr_{u∼D^m}(∃f, g ∈ F: Pr_D(f ≠ g) ≤ α but Pr_u(f ≠ g) > (1 + K)α) ≤ δ.

Now we are ready for the main analysis of this section.

Theorem 6 Suppose the doubling dimension of (F, ρ_D) and the VC-dimension of F are both at most d. Any consistent hypothesis finder for F PAC learns F from

  O( (1/ε) (d √(log(1/ε)) + log(1/δ)) )

examples.

Proof: Assume without loss of generality that ε ≤ 1/100. Let α = ε exp(−√(ln(1/ε))); since ε ≤ 1/100, we have α ≤ ε/8. Choose a target function f. For each h ∈ F, define ℓ_h : X → {0, 1} by ℓ_h(x) = 1 ⟺ h(x) ≠ f(x). Let ℓ_F = {ℓ_h : h ∈ F}. Since ℓ_g(x) ≠ ℓ_h(x) exactly when g(x) ≠ h(x), the doubling dimension of ℓ_F is the same as the doubling dimension of F; the VC-dimension of ℓ_F is also known to be the same as the VC-dimension of F (see [32]). Construct an α-packing G greedily, by repeatedly adding an element of ℓ_F to G for as long as this is possible. This packing is also an α-cover. For each g ∈ ℓ_F, let φ(g) be its nearest neighbor in G.
Since α ≤ ε/8, by the triangle inequality,

  E_D(g) > ε and E_u(g) = 0  ⟹  E_D(φ(g)) > 7ε/8 and E_u(g) = 0.   (3)

The triangle inequality also yields

  E_u(g) = 0  ⟹  (E_u(φ(g)) ≤ ε/4 or Pr_u(φ(g) ≠ g) > ε/4).

Combining this with (3), we have

  Pr(∃g ∈ ℓ_F: E_D(g) > ε but E_u(g) = 0)
    ≤ Pr(∃g ∈ ℓ_F: E_D(φ(g)) > 7ε/8 but E_u(φ(g)) ≤ ε/4)   (4)
      + Pr(∃g ∈ ℓ_F: Pr_u(φ(g) ≠ g) > ε/4).

We have

  Pr(∃g ∈ ℓ_F: E_D(φ(g)) > 7ε/8 but E_u(φ(g)) ≤ ε/4)
    ≤ Pr(∃g ∈ G: E_D(g) > 7ε/8 but E_u(g) ≤ ε/4)
    = Pr(∃g ∈ G: ρ_D(f, g) > 7ε/8 but Pr_u(f ≠ g) ≤ ε/4)
    ≤ Σ_{k=0}^{log(8/(7ε))} Pr(∃g ∈ G: 2^k (7ε/8) < ρ_D(g, f) ≤ 2^{k+1} (7ε/8) and Pr_u(f ≠ g) ≤ ε/4)
    ≤ Σ_{k=0}^{log(8/(7ε))} (8 · 2^{k+1} ε / α)^d e^(−c 2^k εm),

where c > 0 is an absolute constant, by Lemma 3 and the standard Chernoff bound. Computing a geometric sum exactly as in the proof of Theorem 2, we have that m = O(d/ε) suffices for

  Pr(∃g ∈ ℓ_F: E_D(φ(g)) > 7ε/8 but E_u(φ(g)) ≤ ε/4) ≤ (c₁ ε/α)^d e^(−c₂ εm),

for absolute constants c₁, c₂ > 0. By plugging in the value of α and solving, we can see that

  m = O( (1/ε) (d √(log(1/ε)) + log(1/δ)) )

suffices for

  Pr(∃g ∈ ℓ_F: E_D(φ(g)) > 7ε/8 but E_u(φ(g)) ≤ ε/4) ≤ δ/2.   (5)

Since Pr_D(φ(g) ≠ g) ≤ α ≤ ε/8 for all g ∈ ℓ_F, applying Lemma 5 with K = ε/(4α) − 1, we get that there is an absolute constant c > 0 such that

  m ≥ c (d log(1/α) + log(1/δ)) / ((ε/4 − α) log(ε/(4α)))   (6)

also suffices for

  Pr(∃g ∈ ℓ_F: Pr_u(φ(g) ≠ g) > ε/4) ≤ δ/2.

Substituting the value of α into (6), it is sufficient that

  m ≥ c′ (d (log(1/ε) + √(log(1/ε))) + log(1/δ)) / ((ε/8) (√(log(1/ε)) − log 4)).

Putting this together with (5) and (4) completes the proof.

5 Halfspaces and the uniform distribution

Proposition 7 If U_n is the uniform distribution over the unit ball in Rⁿ, and H_n is the set of halfspaces that go through the origin, then the doubling dimension of (H_n, ρ_{U_n}) is O(n).

Proof: Choose h ∈ H_n and α > 0. We will show that the α-ball centered at h can be covered by 2^{O(n)} balls of radius α/2.
Suppose U_{H_n} is the probability distribution over H_n obtained by choosing a normal vector w uniformly from the unit ball, and outputting {x : w · x ≥ 0}. The argument will be a "volume argument" using U_{H_n}. It is known (see Lemma 4 of [25]) that

  Pr_{g∼U_{H_n}}(ρ_{U_n}(g, h) ≤ α/4) ≥ (c₁ α)^{n−1},

where c₁ > 0 is an absolute constant independent of α and n. Furthermore,

  Pr_{g∼U_{H_n}}(ρ_{U_n}(g, h) ≤ 5α/4) ≤ (c₂ α)^{n−1},

where c₂ > 0 is another absolute constant. Suppose we arbitrarily choose g₁, g₂, … ∈ H_n that are at a distance at most α from h, but α/2 far from one another. By the triangle inequality, the α/4-balls centered at g₁, g₂, … are disjoint. Thus, the probability that a random element of H_n is in a ball of radius α/4 centered at one of g₁, …, g_N is at least N (c₁ α)^{n−1}. On the other hand, since each of g₁, …, g_N has distance at most α from h, any element of an α/4-ball centered at one of them is at most α + α/4 far from h. Thus, the union of the α/4-balls centered at g₁, …, g_N is contained in the 5α/4-ball centered at h. Thus N (c₁ α)^{n−1} ≤ (c₂ α)^{n−1}, which implies N ≤ (c₂/c₁)^{n−1} = 2^{O(n)}, completing the proof.

6 Separation

Theorem 8 For all ε ∈ [0, 1/2] and positive integers d there is a set F of classifiers and a probability distribution D over their common domain with the following properties:
• the VC-dimension of F is at most d,
• |F| ≥ ⌊(1/2)(1/(2eε))^{d/2}⌋, and
• for each f, g ∈ F, ε ≤ ρ_D(f, g) ≤ 2ε.

This proof uses the probabilistic method. We begin with the following lemma.

Lemma 9 Choose positive integers s and d. Suppose A is chosen uniformly at random from among the subsets of {1, …, s} of size d. Then, for any B > 1,

  Pr(|A ∩ {1, …, d}| ≥ (1 + B) E(|A ∩ {1, …, d}|)) ≤ (e/(1 + B))^((1+B) E(|A ∩ {1, …, d}|)).

Proof: in Appendix A.

Now we're ready for the proof of Theorem 8, which uses the deletion technique (see [2]).

Proof (of Theorem 8): Set the domain X to be {1, …, s}, where s = ⌈d/ε⌉. Let N = ⌊(s/(2ed))^{d/2}⌋.
Suppose f₁, …, f_N are chosen independently, uniformly at random from among the classifiers that evaluate to 1 on exactly d elements of X. For any distinct i, j, suppose f_i⁻¹(1) is fixed, and we think of the members of f_j⁻¹(1) as being chosen one at a time. The probability that any of the elements of f_j⁻¹(1) is also in f_i⁻¹(1) is d/s. Applying the linearity of expectation, and averaging over the different possibilities for f_i⁻¹(1), we get

  E(|f_i⁻¹(1) ∩ f_j⁻¹(1)|) = d²/s.

Applying Lemma 9,

  Pr(|f_i⁻¹(1) ∩ f_j⁻¹(1)| ≥ d/2) ≤ (2ed/s)^((s/(2d)) · (d²/s)) = (2ed/s)^{d/2}.

Thus, the expected number of pairs i, j such that |f_i⁻¹(1) ∩ f_j⁻¹(1)| ≥ d/2 is at most (N²/2)(2ed/s)^{d/2}. This implies that there exist f₁, …, f_N such that

  |{{i, j} : |f_i⁻¹(1) ∩ f_j⁻¹(1)| ≥ d/2}| ≤ (N²/2)(2ed/s)^{d/2}.

If we delete one element from each such pair, and form G from what remains, then each pair g, h of elements in G satisfies

  |g⁻¹(1) ∩ h⁻¹(1)| < d/2.   (7)

If D is the uniform distribution over {1, …, s}, then (7) implies ρ_D(g, h) > ε. The number of elements of G is at least N − (N²/2)(2ed/s)^{d/2} ≥ N/2. Since each g ∈ G has |g⁻¹(1)| = d, no function in G evaluates to 1 on each element of any set of d + 1 elements of X. Thus, the VC-dimension of G is at most d.

Theorem 8 implies that there is no bound on the doubling dimension of (G, ρ_D) in terms of the VC-dimension of G. For any constraint on the VC-dimension, a set G satisfying the constraint can have arbitrarily large doubling dimension by setting the value of ε in Theorem 8 arbitrarily small.

Acknowledgement

We thank Gábor Lugosi and Tong Zhang for their help.

References

[1] K. Alexander. Rates of growth for weighted empirical processes. In Proc. of Berkeley Conference in Honor of Jerzy Neyman and Jack Kiefer, volume 2, pages 475-493, 1985.
[2] N. Alon, J. H. Spencer, and P. Erdős. The Probabilistic Method. Wiley, 1992.
[3] M. Anthony and P. L. Bartlett.
Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[4] P. Assouad. Plongements lipschitziens dans Rⁿ. Bull. Soc. Math. France, 111(4):429-448, 1983.
[5] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, 33(4):1497-1537, 2005.
[6] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. ICML, 2006.
[7] H. T. H. Chan, A. Gupta, B. M. Maggs, and S. Zhou. On hierarchical routing in doubling metrics. SODA, 2005.
[8] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[9] D. Dubhashi and D. Ranjan. Balls and bins: A study in negative dependence. Random Structures & Algorithms, 13(2):99-124, Sept 1998.
[10] D. Dubhashi, V. Priebe, and D. Ranjan. Negative dependence through the FKG inequality. Technical Report RS-96-27, BRICS, 1996.
[11] R. M. Dudley. Central limit theorems for empirical measures. Annals of Probability, 6(6):899-929, 1978.
[12] R. M. Dudley. A course on empirical processes. Lecture Notes in Mathematics, 1097:2-142, 1984.
[13] A. Gupta, R. Krauthgamer, and J. R. Lee. Bounded geometries, fractals, and low-distortion embeddings. FOCS, 2003.
[14] S. Har-Peled and M. Mendel. Fast construction of nets in low dimensional metrics, and their applications. SICOMP, 35(5):1148-1184, 2006.
[15] D. Haussler. Sphere packing numbers for subsets of the Boolean n-cube with bounded Vapnik-Chervonenkis dimension. Journal of Combinatorial Theory, Series A, 69(2):217-232, 1995.
[16] K. Joag-Dev and F. Proschan. Negative association of random variables, with applications. The Annals of Statistics, 11(1):286-295, 1983.
[17] J. Kleinberg, A. Slivkins, and T. Wexler. Triangulation and embedding using small sets of beacons. FOCS, 2004.
[18] A. N. Kolmogorov and V. M. Tihomirov. ε-entropy and ε-capacity of sets in functional spaces. American Mathematical Society Translations (Ser. 2), 17:277-364, 1961.
[19] J. Komlós, J.
Pach, and G. Woeginger. Almost tight bounds on epsilon-nets. Discrete and Computational Geometry, 7:163-173, 1992.
[20] R. Krauthgamer and J. R. Lee. The black-box complexity of nearest neighbor search. ICALP, 2004.
[21] R. Krauthgamer and J. R. Lee. Navigating nets: simple algorithms for proximity search. SODA, 2004.
[22] F. Kuhn, T. Moscibroda, and R. Wattenhofer. On the locality of bounded growth. PODC, 2005.
[23] P. M. Long. On the sample complexity of PAC learning halfspaces against the uniform distribution. IEEE Transactions on Neural Networks, 6(6):1556-1559, 1995.
[24] P. M. Long. Using the pseudo-dimension to analyze approximation algorithms for integer programming. Proceedings of the Seventh International Workshop on Algorithms and Data Structures, 2001.
[25] P. M. Long. An upper bound on the sample complexity of PAC learning halfspaces with respect to the uniform distribution. Information Processing Letters, 87(5):229-234, 2003.
[26] S. Mendelson. Estimating the performance of kernel classes. Journal of Machine Learning Research, 4:759-771, 2003.
[27] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, 1995.
[28] A. Slivkins. Distance estimation and object location via rings of neighbors. PODC, 2005.
[29] K. Talwar. Bypassing the embedding: Approximation schemes and compact representations for low dimensional metrics. STOC, 2004.
[30] S. van de Geer. Empirical Processes in M-estimation. Cambridge Series in Statistical and Probabilistic Methods, 2000.
[31] A. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes With Applications to Statistics. Springer, 1996.
[32] V. N. Vapnik. Estimation of Dependencies Based on Empirical Data. Springer Verlag, 1982.
[33] V. N. Vapnik. Statistical Learning Theory. New York, 1998.
[34] T. Zhang. Information theoretical upper and lower bounds for statistical estimation. IEEE Transactions on Information Theory, 2006. To appear.
A Proof of Lemma 9

Definition 10 ([16]) A collection X₁, …, X_n of random variables is negatively associated if for every disjoint pair I, J ⊆ {1, …, n} of index sets, and for every pair f : R^|I| → R and g : R^|J| → R of non-decreasing functions, we have

  E(f(Xᵢ, i ∈ I) g(Xⱼ, j ∈ J)) ≤ E(f(Xᵢ, i ∈ I)) E(g(Xⱼ, j ∈ J)).

Lemma 11 ([10]) If A is chosen uniformly at random from among the subsets of {1, …, s} with exactly d elements, and Xᵢ = 1 if i ∈ A and 0 otherwise, then X₁, …, X_s are negatively associated.

Lemma 12 ([9]) Collections X₁, …, X_n of negatively associated random variables satisfy Chernoff bounds: for any λ > 0,

  E(exp(λ Σ_{i=1}^n Xᵢ)) ≤ Π_{i=1}^n E(exp(λ Xᵢ)).

Proof of Lemma 9: Let Xᵢ ∈ {0, 1} indicate whether i ∈ A. By Lemma 11, X₁, …, X_d are negatively associated. We have |A ∩ {1, …, d}| = Σ_{i=1}^d Xᵢ. Combining Lemma 12 with a standard Chernoff-Hoeffding bound (see Theorem 4.1 of [27]) completes the proof.
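The tail bound of Lemma 9 can be sanity-checked by simulation; a hypothetical sketch (parameter values chosen only for illustration, with a 0-indexed domain):

```python
import math
import random

def lemma9_check(s, d, B, trials=100_000, seed=1):
    """Monte Carlo check of Lemma 9: for A a uniform d-subset of an s-element
    set, Pr(|A ∩ {first d elements}| >= (1+B) d^2/s) <= (e/(1+B))^((1+B) d^2/s)."""
    rng = random.Random(seed)
    mu = d * d / s                       # E|A ∩ {first d elements}|
    thresh = (1 + B) * mu
    target = set(range(d))
    hits = sum(
        len(set(rng.sample(range(s), d)) & target) >= thresh
        for _ in range(trials)
    )
    return hits / trials, (math.e / (1 + B)) ** thresh

emp, bound = lemma9_check(s=40, d=4, B=9)
```

With s = 40, d = 4, B = 9 the threshold is 4, i.e. the event that A contains all four target points, and the empirical frequency comes out far below the analytic bound.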
Sample complexity of policy search with known dynamics

Peter L. Bartlett, Division of Computer Science and Department of Statistics, University of California, Berkeley, Berkeley, CA 94720-1776, bartlett@cs.berkeley.edu
Ambuj Tewari, Division of Computer Science, University of California, Berkeley, Berkeley, CA 94720-1776, ambuj@cs.berkeley.edu

Abstract

We consider methods that try to find a good policy for a Markov decision process by choosing one from a given class. The policy is chosen based on its empirical performance in simulations. We are interested in conditions on the complexity of the policy class that ensure the success of such simulation based policy search methods. We show that under bounds on the amount of computation involved in computing policies, transition dynamics and rewards, uniform convergence of empirical estimates to true value functions occurs. Previously, such results were derived by assuming boundedness of pseudodimension and Lipschitz continuity. These assumptions and ours are both stronger than the usual combinatorial complexity measures. We show, via minimax inequalities, that this is essential: boundedness of pseudodimension or fat-shattering dimension alone is not sufficient.

1 Introduction

A Markov Decision Process (MDP) models a situation in which an agent interacts (by performing actions and receiving rewards) with an environment whose dynamics is Markovian, i.e. the future is independent of the past given the current state of the environment. Except for toy problems with a few states, computing an optimal policy for an MDP is usually out of the question. Some relaxations need to be made if our aim is to develop tractable methods for achieving near optimal performance. One possibility is to avoid considering all possible policies by restricting oneself to a smaller class Π of policies. Given a simulator for the environment, we try to pick the best policy from Π.
The hope is that if the policy class is appropriately chosen, the best policy in Π would not be too much worse than the true optimal policy. Use of simulators introduces an additional issue: how is one to be sure that performance of policies in the class Π on a few simulations is indicative of their true performance? This is reminiscent of the situation in statistical learning. There the aim is to learn a concept and one restricts attention to a hypotheses class which may or may not contain the “true” concept. The sample complexity question then is: how many labeled examples are needed in order to be confident that error rates on the training set are close to the true error rates of the hypotheses in our class? The answer turns out to depend on “complexity” of the hypothesis class as measured by combinatorial quantities associated with the class such as the VC dimension, the pseudodimension and the fat-shattering dimension. Some progress [6,7] has already been made to obtain uniform bounds on the difference between value functions and their empirical estimates, where the value function of a policy is the expected long term reward starting from a certain state and following the policy thereafter. We continue this line of work by further investigating what properties of the policy class determine the rate of uniform convergence of value function estimates. The key difference between the usual statistical learning setting and ours is that we not only have to consider the complexity of the class Π but also of the classes derived from Π by composing the functions in Π with themselves and with the state evolution process implied by the simulator. Ng and Jordan [7] used a finite pseudodimension condition along with Lipschitz continuity to derive uniform bounds. The Lipschitz condition was used to control the covering numbers of the iterated function classes. 
We provide a uniform convergence result (Theorem 1) under the assumption that policies are parameterized by a finite number of parameters and that the computations involved in computing the policy, the single-step simulation function and the reward function all require a bounded number of arithmetic operations on real numbers. The number of samples required grows linearly with the dimension of the parameter space but is independent of the dimension of the state space. Ng and Jordan's and our assumptions are both stronger than just assuming finiteness of some combinatorial dimension. We show that this is unavoidable by constructing two examples where the fat-shattering dimension and the pseudodimension respectively are bounded, yet no simulation based method succeeds in estimating the true values of policies well. This happens because iteratively composing a function class with itself can quickly destroy finiteness of combinatorial dimensions. Additional assumptions are therefore needed to ensure that these iterates continue to have bounded combinatorial dimensions. Although we restrict ourselves to MDPs for ease of exposition, the analysis in this paper carries over easily to the case of partially observable MDPs (POMDPs), provided the simulator also simulates the conditional distribution of observations given state using a bounded amount of computation. The plan of the rest of the paper is as follows. We set up notation and terminology in Section 2. In the same section, we describe the model of computation over reals that we use. Section 3 proves Theorem 1, which gives a sample complexity bound for achieving a desired level of performance within the policy class. In Section 4, we give two examples of policy classes whose combinatorial dimensions are bounded. Nevertheless, we can prove strong minimax lower bounds implying that no method of choosing a policy based on empirical estimates can do well for these examples.
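Simulation-based policy search rests on estimating a policy's expected discounted reward from rollouts; a hypothetical Monte Carlo sketch (the toy single-state chain and all function names are invented for illustration):

```python
import random

def rollout_value(step, reward, policy, init_sampler, gamma, horizon, n, seed=0):
    """Monte Carlo estimate of the discounted value sum_t gamma^t r_t,
    truncated at `horizon` rollout steps, averaged over n rollouts."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = init_sampler(rng)
        for t in range(horizon + 1):
            a = policy(s, rng)
            total += gamma**t * reward(s, a, rng)
            s = step(s, a, rng)
    return total / n

# Toy chain: a single absorbing state with reward 1 regardless of the action,
# so the true value is 1 / (1 - gamma) = 10 for gamma = 0.9.
v = rollout_value(
    step=lambda s, a, rng: s,
    reward=lambda s, a, rng: 1.0,
    policy=lambda s, rng: 0,
    init_sampler=lambda rng: 0,
    gamma=0.9,
    horizon=200,
    n=10,
)
```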
2 Preliminaries

We define an MDP M as a tuple (S, D, A, P(·|s, a), r, γ) where S is the state space, D the initial state distribution, A the action space, P(s′|s, a) gives the probability of moving to state s′ upon taking action a in state s, r is a function mapping states to distributions over rewards (which are assumed to lie in a bounded interval [0, R]), and γ ∈ (0, 1) is a factor that discounts future rewards. In this paper, we assume that the state space S and the action space A are finite dimensional Euclidean spaces of dimensionality dS and dA respectively. A (randomized) policy π is a mapping from S to distributions over A. Each policy π induces a natural Markov chain on the state space of the MDP, namely the one obtained by starting in a start state s₀ sampled from D and s_{t+1} sampled according to P(·|s_t, a_t) with a_t drawn from π(s_t) for t ≥ 0. Let r_t(π) be the expected reward at time step t in this Markov chain, i.e. r_t(π) = E[ρ_t] where ρ_t is drawn from the distribution r(s_t). Note that the expectation is over the randomness in the choice of the initial state, the state transitions, and the randomized policy and reward outcomes. Define the value V_M(π) of the policy by

  V_M(π) = Σ_{t=0}^∞ γ^t r_t(π).

We omit the subscript M in the value function if the MDP in question is unambiguously identified. For a class Π of policies, define

  opt(M, Π) = sup_{π∈Π} V_M(π).

The regret of a policy π′ relative to an MDP M and a policy class Π is defined as

  Reg_{M,Π}(π′) = opt(M, Π) − V_M(π′).

We use a degree bounded version of the Blum-Shub-Smale [3] model of computation over reals. At each time step, we can perform one of the four arithmetic operations +, −, ×, / or can branch based on a comparison (say <). While Blum et al. allow an arbitrary fixed rational map to be computed in one time step, we further require that the degree of any of the polynomials appearing at computation nodes be at most 1.

Definition 1.
Let k, l, m, τ be positive integers, f a function from Rk to probability distributions over Rl and Ξ a probability distribution over Rm. The function f is (Ξ, τ)-computable if there exists a degree bounded finite dimensional machine M over R with input space Rk+m and output space Rl such that the following hold. 1. For every x ∈Rk and ξ ∈Rm, the machine halts with halting time TM(x, ξ) ≤τ. 2. For every x ∈Rk, if ξ ∈Rm is distributed according to Ξ the input-output map ΦM(x, ξ) is distributed as f(x). Informally, the definition states that given access to an oracle which generates samples from Ξ, we can generate samples from f(x) by doing a bounded amount of computation. For precise definitions of the input-output map and halting time, we refer the reader to [3, Chap. 2]. In Section 3, we assume that the policy class Π is parameterized by a finite dimensional parameter θ ∈Rd. In this setting π(s; θ), P(·|s, a) and r(s) are distributions over RdA, RdS and [0, R] respectively. The following assumption states that all these maps are computable within τ time steps in our model of computation. Assumption A. There exists a probability distribution Ξ over Rm and a positive integer τ such that π(s; θ), P(·|s, a) and r(s) are (Ξ, τ)-computable. Let Mπ, MP and Mr respectively be the machines that compute them. This assumption will be satisfied if we have three “programs” that make a call to a random number generator for distribution Ξ, do a fixed number of floating-point operations and simulate the policies in our class, the state-transition dynamics and the rewards respectively. The following two examples illustrate this for the state-transition dynamics. • Linear Dynamical System with Additive Noise 1 Suppose P and Q are dS × dS and dS × dA matrices and the system dynamics is given by st+1 = Pst + Qat + ξt , (1) where ξt are i.i.d. from some distribution Ξ. Since computing (1) takes 2(d2 S +dSdA +dS) operations, P(·|s, a) is (Ξ, τ)-computable for τ = O(dS(dS + dA)). 
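For concreteness, the one-step simulator of the linear dynamical system above can be sketched in a few lines of code. This is our own illustrative sketch, not part of the formal model: the dimensions and parameter values are hypothetical, and the noise sample ξ is passed in pre-drawn, mirroring the oracle access to Ξ in Definition 1.

```python
def lds_step(P, Q, s, a, xi):
    """One simulation step s' = P s + Q a + xi of the linear dynamical
    system; xi is a pre-drawn noise sample from Xi.  Uses roughly
    2(dS^2 + dS*dA + dS) arithmetic operations, as counted in the text."""
    def mv(M, v):  # matrix-vector product over plain lists
        return [sum(m * x for m, x in zip(row, v)) for row in M]
    return [ps + qa + n for ps, qa, n in zip(mv(P, s), mv(Q, a), xi)]

# Deterministic check with zero noise (dimensions are hypothetical).
P = [[0.5, 0.0], [0.0, 0.5]]
Q = [[1.0], [0.0]]
s_next = lds_step(P, Q, [1.0, 2.0], [3.0], [0.0, 0.0])
print(s_next)  # [3.5, 1.0]
```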
• Discrete States and Actions Suppose S = {1, 2, . . . , nS} and A = {1, 2, . . . , nA}. For some fixed s, a, the distribution P(·|s, a) is described by nS numbers ⃗ps,a = (p1, . . . , pnS) with Σi pi = 1. Let Pk = Σ_{i=1}^{k} pi. For ξ ∈ (0, 1], set f(ξ) = min{k : Pk ≥ ξ}. Thus, if ξ has uniform distribution on (0, 1], then f(ξ) = k with probability pk. Since the Pk's are non-decreasing, f(ξ) can be computed in log nS steps using binary search. But this was for a fixed s, a pair. Finding which ⃗ps,a to use takes a further log(nS nA) steps using binary search. So if Ξ denotes the uniform distribution on (0, 1], then P(·|s, a) is (Ξ, τ)-computable for τ = O(log nS + log nA). For a small ϵ, let H be the ϵ horizon time, i.e. ignoring rewards beyond time H does not affect the value of any policy by more than ϵ. To obtain sample rewards, given initial state s0 and policy πθ = π(·; θ), we first compute the trajectory s0, . . . , sH sampled from the Markov chain induced by πθ. This requires H “calls” each to Mπ and MP. A further H + 1 calls to Mr are then required to generate the rewards ρ0 through ρH. These calls require a total of 3H + 1 samples from Ξ. The empirical estimates are computed as follows. Suppose, for 1 ≤ i ≤ n, the pairs (s_0^{(i)}, ⃗ξi) are i.i.d. samples generated from the joint distribution D × Ξ^{3H+1}. Define the empirical estimate of the value of the policy πθ by V̂_M^H(πθ) = (1/n) Σ_{i=1}^{n} Σ_{t=0}^{H} γ^t ρt(s_0^{(i)}, θ, ⃗ξi). We omit the subscript M in V̂ when it is clear from the context. Define an ϵ-approximate maximizer of V̂ to be a policy π′ such that V̂_M^H(π′) ≥ sup_{π∈Π} V̂_M^H(π) − ϵ. ¹In this case, the realizable dynamics (the mapping from state to next state for a given policy class) is not uniformly Lipschitz if policies allow unbounded actions, so previously known bounds [7] are not applicable even in this simple setting. Finally, we mention the definitions of three standard combinatorial dimensions.
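The inverse-CDF sampler f(ξ) = min{k : Pk ≥ ξ} from the discrete example above can be sketched as follows. The function names are our own, and the sketch returns a 0-based state index; precomputing the cumulative sums once per (s, a) pair would be the natural optimization.

```python
from bisect import bisect_left
from itertools import accumulate

def sample_state(probs, xi):
    """Return f(xi) = min{k : P_k >= xi} by binary search on the
    non-decreasing cumulative sums P_1 <= ... <= P_nS.
    Takes O(log nS) comparisons; the state index is 0-based."""
    cum = list(accumulate(probs))  # P_1, ..., P_nS
    return bisect_left(cum, xi)

# With probs (0.2, 0.5, 0.3): cumulative sums are [0.2, 0.7, 1.0].
print(sample_state([0.2, 0.5, 0.3], 0.10))  # 0
print(sample_state([0.2, 0.5, 0.3], 0.65))  # 1
print(sample_state([0.2, 0.5, 0.3], 0.95))  # 2
```

Feeding a uniform ξ ∈ (0, 1] into `sample_state` reproduces the distribution ⃗ps,a, exactly as argued in the text.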
Let 𝒳 be some space and consider classes G and F of {−1, +1}-valued and real-valued functions on 𝒳, respectively. Fix a finite set X = {x1, . . . , xn} ⊆ 𝒳. We say that G shatters X if for all bit vectors ⃗b ∈ {0, 1}^n there exists g ∈ G such that for all i, bi = 0 ⇒ g(xi) = −1 and bi = 1 ⇒ g(xi) = +1. We say that F shatters X if there exists ⃗r ∈ R^n such that, for all bit vectors ⃗b ∈ {0, 1}^n, there exists f ∈ F such that for all i, bi = 0 ⇒ f(xi) < ri and bi = 1 ⇒ f(xi) ≥ ri. We say that F ϵ-shatters X if there exists ⃗r ∈ R^n such that, for all bit vectors ⃗b ∈ {0, 1}^n, there exists f ∈ F such that for all i, bi = 0 ⇒ f(xi) ≤ ri − ϵ and bi = 1 ⇒ f(xi) ≥ ri + ϵ. We then have the following definitions: VCdim(G) = max{|X| : G shatters X}, Pdim(F) = max{|X| : F shatters X}, and fatF(ϵ) = max{|X| : F ϵ-shatters X}. 3 Regret Bound for Parametric Policy Classes Computable in Bounded Time Theorem 1. Fix an MDP M, a policy class Π = {s ↦ π(s; θ) : θ ∈ R^d}, and an ϵ > 0. Suppose Assumption A holds. Then n > O( (R²Hdτ / ((1 − γ)²ϵ²)) log(R / (ϵ(1 − γ))) ) ensures that E[RegM,Π(πn)] ≤ 3ϵ + ϵ′, where πn is an ϵ′-approximate maximizer of V̂ and H = log_{1/γ}(2R/(ϵ(1 − γ))) is the ϵ/2 horizon time. Proof. The proof consists of three steps: (1) Assumption A is used to get bounds on the pseudodimension; (2) the pseudodimension bound is used to prove uniform convergence of empirical estimates to true value functions; (3) uniform convergence and the definition of ϵ′-approximate maximizer give the bound on expected regret. STEP 1. Given initial state s0, parameter θ and random numbers ξ1 through ξ3H+1, we first compute the trajectory as follows. Recall that ΦM refers to the input-output map of a machine M. st = ΦMP(st−1, ΦMπ(θ, st−1, ξ2t−1), ξ2t), 1 ≤ t ≤ H. (2) The rewards are then computed by ρt = ΦMr(st, ξ2H+t+1), 0 ≤ t ≤ H. (3) The H-step discounted reward sum is computed in Horner form as Σ_{t=0}^{H} γ^t ρt = ρ0 + γ(ρ1 + γ(ρ2 + . . . (ρH−1 + γρH) . . .)).
(4) Define the function class R = {(s0, ⃗ξ) ↦ Σ_{t=0}^{H} γ^t ρt(s0, θ, ⃗ξ) : θ ∈ R^d}, where we have explicitly shown the dependence of ρt on s0, θ and ⃗ξ. Let us count the number of arithmetic operations needed to compute a function in this class. Using Assumption A, we see that steps (2) and (3) require no more than 2τH and τ(H + 1) operations respectively. Step (4) requires H multiplications and H additions. This gives a total of 2τH + τ(H + 1) + 2H ≤ 6τH operations. Goldberg and Jerrum [4] showed that the VC dimension of a function class can be bounded in terms of an upper bound on the number of arithmetic operations it takes to compute the functions in the class. Since the pseudodimension of R can be written as Pdim(R) = VCdim{(s0, ⃗ξ, c) ↦ sign(f(s0, ⃗ξ) − c) : f ∈ R, c ∈ R}, we get the following bound by [2, Thm. 8.4]: Pdim(R) ≤ 4d(6τH + 3). (5) STEP 2. Let V^H(π) = Σ_{t=0}^{H} γ^t rt(π). For the choice of H stated in the theorem, we have for all π, |V^H(π) − V(π)| ≤ ϵ/2. Therefore, P^n(∃π ∈ Π : |V̂^H(π) − V(π)| > ϵ) ≤ P^n(∃π ∈ Π : |V̂^H(π) − V^H(π)| > ϵ/2). (6) Functions in R are positive and bounded above by R′ = R/(1 − γ). There are well-known bounds for deviations of empirical estimates from true expectations for bounded function classes in terms of the pseudodimension of the class (see, for example, Theorems 3 and 5 in [5]; also see Pollard's book [8]). Using a weak form of these results, we get P^n(∃π ∈ Π : |V̂^H(π) − V^H(π)| > ϵ) ≤ 8 (32eR′/ϵ)^{2 Pdim(R)} e^{−ϵ²n/64R′²}. In order to ensure that P^n(∃π ∈ Π : |V̂^H(π) − V^H(π)| > ϵ/2) < δ, we need 8 (64eR′/ϵ)^{2 Pdim(R)} e^{−ϵ²n/256R′²} < δ. Using the bound (5) on Pdim(R), we get that P^n( sup_{π∈Π} |V̂^H(π) − V(π)| > ϵ ) < δ, (7) provided n > (256R² / ((1 − γ)²ϵ²)) ( log(8/δ) + 8d(6τH + 3) log(64eR / ((1 − γ)ϵ)) ). STEP 3. We now show that (7) implies E[RegM,Π(πn)] ≤ Rδ/(1 − γ) + (2ϵ + ϵ′). The theorem then immediately follows by setting δ = (1 − γ)ϵ/R. Suppose that for all π ∈ Π, |V̂^H(π) − V(π)| ≤ ϵ. This implies that for all π ∈ Π, V(π) ≤ V̂^H(π) + ϵ.
Since πn is an ϵ′-approximate maximizer of V̂, we have for all π ∈ Π, V̂^H(π) ≤ V̂^H(πn) + ϵ′. Thus, for all π ∈ Π, V(π) ≤ V̂^H(πn) + ϵ + ϵ′. Taking the supremum over π ∈ Π and using the fact that V̂^H(πn) ≤ V(πn) + ϵ, we get sup_{π∈Π} V(π) ≤ V(πn) + 2ϵ + ϵ′, which is equivalent to RegM,Π(πn) ≤ 2ϵ + ϵ′. Thus, if (7) holds then we have P^n(RegM,Π(πn) > 2ϵ + ϵ′) < δ. Denoting the event {RegM,Π(πn) > 2ϵ + ϵ′} by E, we have E[RegM,Π(πn)] = E[RegM,Π(πn)1E] + E[RegM,Π(πn)1(¬E)] ≤ Rδ/(1 − γ) + (2ϵ + ϵ′), where we used the fact that regret is bounded above by R/(1 − γ). 4 Two Policy Classes Having Bounded Combinatorial Dimensions We will describe two policy classes for which we can prove that there are strong limitations on the performance of any method (of choosing a policy out of a policy class) that has access only to empirically observed rewards. Somewhat surprisingly, one can show this for policy classes which are “simple” in the sense that standard combinatorial dimensions of these classes are bounded. This shows that sufficient conditions for the success of simulation-based policy search (such as the assumptions in [7] and in our Theorem 1) necessarily have to be stronger than boundedness of standard combinatorial dimensions. The first example is a policy class F1 for which fatF1(ϵ) < ∞ for all ϵ > 0. The second example is a class F2 for which Pdim(F2) = 1. Since finiteness of pseudodimension is a stronger condition, the second example makes our point more forcefully than the first one. However, the first example is considerably less contrived than the second one. Example 1 Let MD = (S, D, A, P(·|s, a), r, γ) be an MDP where S = [−1, +1], D = some distribution on [−1, +1], A = [−2, +2], P(s′|s, a) = 1 if s′ = max(−1, min(s + a, 1)) and 0 otherwise, Figure 1: Plot of the function fT with T = {0.2, 0.3, 0.6, 0.8}. Note that, for x > 0, fT(x) is 0 iff x ∈ T.
Also, fT(x) satisfies the Lipschitz condition (with constant 1) everywhere except at 0. The remaining components of MD are r = the deterministic reward that maps s to s, and γ = some fixed discount factor in (0, 1). For a function f : [−1, +1] → [−1, +1], let πf denote the (deterministic) policy which takes action f(s) − s in state s. Given a class F of functions, we define an associated policy class ΠF = {πf : f ∈ F}. We now describe a specific function class F1. Fix ϵ1 > 0. Let T be an arbitrary finite subset of (0, 1). Let δ(x) = (1 − |x|)+ be the “triangular spike” function. Let fT(x) = −1 for −1 ≤ x < 0; fT(x) = 0 for x = 0; and fT(x) = max_{y∈T} ( (ϵ1/|T|) δ((x − y)/(ϵ1/|T|)) − ϵ1/|T| ) for 0 < x ≤ 1. There is a spike at each point in T and the tips of the spikes just touch the X-axis (see Figure 1). Since −1 and 0 are fixed points of fT(x), it is straightforward to verify that f²_T(x) = −1 for −1 ≤ x < 0; f²_T(x) = 0 for x = 0; and f²_T(x) = 1(x∈T) − 1 for 0 < x ≤ 1. (8) Also, f^n_T = f²_T for all n > 2. Define F1 = {fT : T ⊂ (ϵ1, 1), |T| < ∞}. By construction, functions in F1 have bounded total variation and so fatF1(ϵ) is O(1/ϵ) (see, for example, [2, Chap. 11]). Moreover, fT(x) satisfies the Lipschitz condition everywhere (with constant L = 1) except at 0. This is striking in the sense that the loss of the Lipschitz property at a single point allows us to prove the following lower bound. Theorem 2. Let gn range over functions from S^n to F1. Let D range over probability distributions on S. Then, inf_{gn} sup_D E_{(s1,...,sn)∼D^n}[ RegMD,ΠF1(πgn(s1,...,sn)) ] ≥ γ²/(1 − γ) − 2ϵ1. This says that for any method that maps random initial states s1, . . . , sn to a policy in ΠF1, there is an initial state distribution such that the expected regret of the selected policy is at least γ²/(1 − γ) − 2ϵ1. This is in sharp contrast to Theorem 1, where we could reduce the expected regret down to any positive number by using sufficiently many samples, given the ability to maximize the empirical estimates V̂.
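Under our reading of the construction, the spike function fT and the identity f²_T(x) = 1(x∈T) − 1 on (0, 1] can be checked numerically. The following is our own sketch with hypothetical parameter values, not part of the paper:

```python
def f_T(x, T, eps1):
    """Spike function: -1 on [-1, 0), 0 at 0, and on (0, 1] a train of
    triangular spikes of half-width eps1/|T| whose tips touch the
    x-axis exactly at the points of T."""
    if x < 0:
        return -1.0
    if x == 0:
        return 0.0
    w = eps1 / len(T)                         # spike half-width
    delta = lambda z: max(1.0 - abs(z), 0.0)  # triangular spike
    return max(w * delta((x - y) / w) - w for y in T)

# Hypothetical choice: T = {0.3, 0.6} subset of (eps1, 1), eps1 = 0.1.
T, eps1 = {0.3, 0.6}, 0.1
f2 = lambda x: f_T(f_T(x, T, eps1), T, eps1)
print(f2(0.3))   # 0.0   (0.3 in T, so indicator - 1 = 0)
print(f2(0.5))   # -1.0  (0.5 not in T)
print(f2(-0.4))  # -1.0
```

The composition behaves exactly as equation (8) predicts: points of T map to 0 via the spike tip and the fixed point at 0, while all other points of (0, 1] fall into the negative region and are sent to −1.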
Let us see how maximization of empirical estimates behaves in this case. Since fatF1(ϵ) < ∞ for all ϵ > 0, the law of large numbers holds uniformly [1, Thm. 2.5] over the class F1. The transitions, policies and rewards here are all deterministic. The reward function is just the identity. This means that the 1-step reward function family is just F1. So the estimates of 1-step rewards are still uniformly concentrated around their expected values. Since the contribution of rewards from time step 2 onwards can be no more than γ2+γ3+. . . = γ2/(1−γ), we can claim that the expected regret of the ˆV maximizer πn behaves like E h RegM,ΠF1(πn) i ≤ γ2 1 −γ + en where en →0. Thus the bound in Theorem 2 above is essentially tight. Before we prove Theorem 2, we need the following lemma whose proof is given in the appendix accompanying the paper. Lemma 1. Fix an interval (a, b) and let T be the set of all its finite subsets. Let gn range over functions from (a, b)n to T . Let D range over probability distributions on (a, b). Then, inf gn sup D  sup T ∈T EX∼D1(X∈T ) −E(X1,...,Xn)∼DnE(X∼D)1(X∈gn(X1,...,Xn))  ≥1 . Proof of Theorem 2. We will prove the inequality when D ranges over distributions on (0, 1) which, obviously, implies the theorem. Since, for all f ∈F1 and n > 2, f n = f 2, we have opt(MD, ΠF1) −E(s1,...,sn)∼DnVMD(πgn(s1,...,sn)) = sup f∈F1 Es∼D  s + γf(s) + γ2 1 −γ f 2(s)  −E(s1,...,sn)∼Dn  Es∼D[s + γgn(s1, . . . , sn)(s) + γ2 1 −γ gn(s1, . . . , sn)2(s)]  = sup f∈F1 Es∼D  γf(s) + γ2 1 −γ f 2(s)  −E(s1,...,sn)∼Dn  Es∼D[γgn(s1, . . . , sn)(s) + γ2 1 −γ gn(s1, . . . , sn)2(s)]  For all f1, f2, |Ef1 −Ef2| ≤E|f1 −f2| ≤ϵ1. Therefore, we can get rid of the first terms in both sub-expressions above without changing the value by more than 2γϵ1. ≥sup f∈F1 Es∼D  γ2 1 −γ f 2(s)  −E(s1,...,sn)∼Dn  Es∼D[ γ2 1 −γ gn(s1, . . . , sn)2(s)]  −2γϵ1 = γ2 1 −γ sup f∈F1 Es∼D  f 2(s) + 1  −E(s1,...,sn)∼DnEs∼D[gn(s1, . . . , sn)2(s) + 1] ! 
−2γϵ1. From (8), we know that f²_T(x) + 1 restricted to x ∈ (0, 1) is the same as 1(x∈T). Therefore, restricting D to probability measures on (0, 1) and applying Lemma 1, we get inf_{gn} sup_D ( opt(MD, ΠF1) − E_{(s1,...,sn)∼D^n} VMD(πgn(s1,...,sn)) ) ≥ γ²/(1 − γ) − 2γϵ1. To finish the proof, we note that γ < 1 and, by definition, RegMD,ΠF1(πgn(s1,...,sn)) = opt(MD, ΠF1) − VMD(πgn(s1,...,sn)). Example 2 We use the MDP of the previous section with a different policy class, which we now describe. For real numbers x, y ∈ (0, 1) with binary expansions (choose the terminating representation for rationals) 0.b1b2b3 . . . and 0.c1c2c3 . . ., define mix(x, y) = 0.b1c1b2c2 . . ., stretch(x) = 0.b10b20b3 . . ., even(x) = 0.b2b4b6 . . ., and odd(x) = 0.b1b3b5 . . .. Some obvious identities are mix(x, y) = stretch(x) + stretch(y)/2, odd(mix(x, y)) = x and even(mix(x, y)) = y. Now fix ϵ2 > 0. Since finite subsets of (0, 1) and irrationals in (0, ϵ2) have the same cardinality, there exists a bijection h which maps every finite subset T of (0, 1) to some irrational h(T) ∈ (0, ϵ2). For a finite subset T of (0, 1), define fT(x) = 0 for x = −1; fT(x) = 1(odd(−x) ∈ h⁻¹(even(−x))) for −1 < x < 0; fT(x) = 0 for x = 0; fT(x) = −mix(x, h(T)) for 0 < x < 1; and fT(x) = 1 for x = 1. It is easy to check that with this definition, f²_T(x) = 1(x∈T) for x ∈ (0, 1). Finally, let F2 = {fT : T ⊂ (0, 1), |T| < ∞}. To calculate the pseudodimension of this class, note that using the identity mix(x, y) = stretch(x) + stretch(y)/2, every function fT in the class can be written as fT = f0 + f̃T, where f0 is a fixed function (it does not depend on T) and f̃T is given by f̃T(x) = −stretch(h(T))/2 for 0 < x < 1 and f̃T(x) = 0 elsewhere on [−1, +1]. Let H = {f̃T : T ⊂ (0, 1), |T| < ∞}. Since Pdim(H + f0) = Pdim(H) for any class H and a fixed function f0, we have Pdim(F2) = Pdim(H). As each function f̃T(x) is constant on (0, 1) and zero elsewhere, we cannot shatter even two points using H. Thus, Pdim(H) = 1. Theorem 3. Let gn range over functions from S^n to F2.
Let D range over probability distributions on S. Then, inf_{gn} sup_D E_{(s1,...,sn)∼D^n}[ RegMD,ΠF2(πgn(s1,...,sn)) ] ≥ γ²/(1 − γ) − ϵ2. Sketch. Let us only check that the properties of F1 that allowed us to proceed with the proof of Theorem 2 are also satisfied by F2. First, for all f ∈ F2 and n > 2, f^n = f². Second, for all f1, f2 ∈ F2 and x ∈ [−1, +1], |f1(x) − f2(x)| ≤ ϵ2/2. This is because fT1 and fT2 can differ only for x ∈ (0, 1). For such an x, |fT1(x) − fT2(x)| = |mix(x, h(T1)) − mix(x, h(T2))| = |stretch(h(T1)) − stretch(h(T2))|/2 ≤ ϵ2/2. Third, the restriction of f²_T to (0, 1) is 1(x∈T). Acknowledgments We acknowledge the support of DARPA under grants HR0011-04-1-0014 and FA8750-05-2-0249. References [1] Alon, N., Ben-David, S., Cesa-Bianchi, N. & Haussler, D. (1997) Scale-sensitive dimensions, uniform convergence, and learnability. Journal of the ACM 44(4):615–631. [2] Anthony, M. & Bartlett, P.L. (1999) Neural Network Learning: Theoretical Foundations. Cambridge University Press. [3] Blum, L., Cucker, F., Shub, M. & Smale, S. (1998) Complexity and Real Computation. Springer-Verlag. [4] Goldberg, P.W. & Jerrum, M.R. (1995) Bounding the Vapnik-Chervonenkis dimension of concept classes parameterized by real numbers. Machine Learning 18(2-3):131–148. [5] Haussler, D. (1992) Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation 100:78–150. [6] Jain, R. & Varaiya, P. (2006) Simulation-based uniform value function estimates of discounted and average-reward MDPs. SIAM Journal on Control and Optimization, to appear. [7] Ng, A.Y. & Jordan, M.I. (2000) PEGASUS: A policy search method for MDPs and POMDPs. In Proceedings of the 16th Annual Conference on Uncertainty in Artificial Intelligence, pp. 405–415. Morgan Kaufmann Publishers. [8] Pollard, D. (1990) Empirical Processes: Theory and Applications. NSF-CBMS Regional Conference Series in Probability and Statistics, Volume 2.
2006
Large Scale Hidden Semi-Markov SVMs Gunnar Rätsch∗ Friedrich Miescher Laboratory, Max Planck Society Spemannstr. 39, 72070 Tübingen, Germany Gunnar.Raetsch@tuebingen.mpg.de Sören Sonnenburg Fraunhofer FIRST.IDA Kekuléstr. 7, 12489 Berlin, Germany sonne@first.fhg.de Abstract We describe Hidden Semi-Markov Support Vector Machines (HSM SVMs), an extension of HM SVMs to semi-Markov chains. This allows us to predict segmentations of sequences based on segment-based features measuring properties such as the length of the segment. We propose a novel technique to partition the problem into sub-problems. The independently obtained partial solutions can then be recombined in an efficient way, which allows us to solve label sequence learning problems with several thousands of labeled sequences. We have tested our algorithm for predicting gene structures, an important problem in computational biology. Results on a well-known model organism illustrate the great potential of HSM SVMs in computational biology. 1 Introduction Hidden Markov SVMs are a recently proposed method for predicting a label sequence given the input sequence [3, 17, 18, 1, 2]. They combine the power and flexibility of kernel methods with the idea of Hidden Markov Models (HMMs) [11] to predict label sequences. In this work we introduce a generalization of Hidden Markov SVMs, called Hidden Semi-Markov SVMs (HSM SVMs). In HM SVMs and HMMs there is a state transition for every input symbol. In semi-Markov processes, the system is allowed to persist in a state for a number of time steps before transitioning into a new state. During this segment of time the system's behavior is allowed to be non-Markovian. This adds flexibility, for instance to model segment lengths or to use non-linear content sensors that may depend on the start and end of the segment. One of the largest problems with HM SVMs and also HSM SVMs is their high computational complexity.
Solving the resulting optimization problems may become computationally infeasible already for a few hundred examples. In the second part of the paper we consider the case of using content sensors (for whole segments) and signal detectors (at segment boundaries) in HSM SVMs. We motivate a simple but very effective strategy of partitioning the problem into independent sub-problems and discuss how one can recombine the different parts. We propose to solve a relatively small optimization problem that can be handled rather efficiently. This strategy allows us to tackle significantly larger label sequence problems (with several thousands of sequences). To illustrate the strength of our approach we have applied our algorithm to an important problem in computational biology: the prediction of the segmentation of a pre-mRNA sequence into exons and introns. On problems derived from sequences of the model organism Caenorhabditis elegans we can show that the HSM SVM approach consistently outperforms HMM-based approaches by a large margin (see also [13]). The paper is organized as follows: In Section 2 we introduce the necessary notation, HM SVMs and the extension to semi-Markov models. In Section 3 we propose and discuss a technique that allows us to train HSM SVMs on significantly more training examples. Finally, in Section 4 we outline the gene structure prediction problem, discuss additional techniques for applying HSM SVMs to this problem and show surprisingly large improvements compared to state-of-the-art methods. ∗Corresponding author, http://www.fml.mpg.de/raetsch 2 Hidden Markov SVMs In label sequence learning one learns a function that assigns to a sequence of objects x = χ1χ2 . . . χl a sequence of labels y = υ1υ2 . . . υl (χi ∈ X, υi ∈ Υ, i = 1, . . . , l). While objects can be of rather arbitrary kind (e.g.
vectors, letters, etc.), the set of labels Υ has to be finite.¹ A common approach is to determine a discriminant function F : X × Y → R that assigns a score to every input x ∈ X := X∗ and every label sequence y ∈ Y := Υ∗, where X∗ denotes the Kleene closure of X. In order to obtain a prediction f(x) ∈ Y, the function is maximized with respect to the second argument: f(x) = argmax_{y∈Y} F(x, y). (1) 2.1 Representation & Optimization Problem In Hidden Markov SVMs (HM SVMs) [3], the function F(x, y) := ⟨w, Φ(x, y)⟩ is linearly parametrized by a weight vector w, where Φ(x, y) is some mapping into a feature space F. Given a set of training examples (xn, yn), n = 1, . . . , N, the parameters are tuned such that the true labeling yn scores higher than all other labelings y ∈ Yn := Y \ {yn} with a large margin, i.e. F(xn, yn) ≫ max_{y∈Yn} F(xn, y). This goal can be achieved by solving the following optimization problem (which appeared equivalently in [3]): min_{ξ∈R^N, w∈F} C Σ_{n=1}^{N} ξn + P(w) (2) s.t. ⟨w, Φ(xn, yn)⟩ − ⟨w, Φ(xn, y)⟩ ≥ 1 − ξn for all n = 1, . . . , N and y ∈ Yn, where P is a suitable regularizer (e.g. P(w) = ∥w∥²) and the ξ's are slack variables to implement a soft margin. Note that the linear constraints in (2) are equivalent to the following set of nonlinear constraints: F(xn, yn) − max_{y∈Yn} F(xn, y) ≥ 1 − ξn for n = 1, . . . , N [3]. If P(w) = ∥w∥², it can be shown that the solution w∗ of (2) can be written as w∗ = Σ_{n=1}^{N} Σ_{y∈Y} αn(y)Φ(xn, y), where αn(y) is the Lagrange multiplier of the constraint involving example n and labeling y (see [3] for details). Defining the kernel as k((x, y), (x′, y′)) := ⟨Φ(x, y), Φ(x′, y′)⟩, we can rewrite F as F(x′, y′) = Σ_{n=1}^{N} Σ_{y∈Y} αn(y) k((xn, y), (x′, y′)). 2.2 Outline of an Optimization Algorithm The number of constraints in (2) can be very large, which may constitute challenges for efficiently solving problem (2).
Fortunately, only a few of the constraints usually are active and working set methods can be applied in order to solve the problem for larger number of examples. The idea is to start with small sets of negative (i.e. false) labelings Yn for every example. One solves (2) for the smaller problem and then identifies labelings y ∈Yn that maximally violate constraints, i.e. y = argmax y∈Yn F(xn, y), (3) where w is the intermediate solution of the restricted problem. The new constraint generated by the negative labeling is then added to the optimization problem. The method described above is also known as column generation method or cutting-plane algorithm and can be shown to converge to the optimal solution w∗[18]. However, since the computation of F involves many kernel computations and also the number of non-zero α’s is often large, solving the problem with more than a few hundred labeled sequences often seems computationally too expensive. 2.3 Viterbi-like Decoding Determining the optimal labeling in (1) efficiently is crucial during optimization and prediction. If F(x, ·) satisfies certain conditions, one can use a Viterbi-like algorithm [20] for efficient decoding 1Note that the number of possible labelings grows exponentially in the length of the sequence. of the optimal labeling. This is particularly the case when Φ can be written as a sum over the length of the sequence and decomposed as Φ(x, y) =   l(x) X i=1 Φσ,τ(υi, υi+1, x, i)   σ,τ∈Υ where l(x) is the length of the sequence x.2 By (φγ)γ∈Γ we denote the concatenation of feature vectors, i.e. (φ⊤ γ1, φ⊤ γ2, . . .)⊤. It is essential that Φ is composed of mapping functions that depend only on labels at position i and i + 1, x as well as i. We can rewrite F using w = (wσ,τ)σ,τ∈Υ: F(x, y) = X σ,τ∈Υ * wσ,τ, l(x) X i=1 Φσ,τ(υi, υi+1, x, i) + = l(x) X i=1 X σ,τ∈Υ ⟨wσ,τ, Φσ,τ(υi, υi+1, x, i)⟩ | {z } =:g(υi,υi+1,x,i) . (4) Thus we have positionally decomposed the function F. 
The score at position i + 1 only depends on x, i and labels at positions i and i + 1 (Markov property). Using this decomposition we can define V (i, υ) := ( max υ′∈Υ(V (i −1, υ′) + g(υ′, υ, x, i −1)) i > 1 0 otherwise as the maximal score for all labelings with label υ at position i. Via dynamic programming one can compute maxυ∈Υ V (l(x), υ), which can be proven to solve (1) for the considered case. Moreover, using backtracking one can recover the optimal label sequence.3 The above decoding algorithm requires to evaluate g at most |Υ|2l(x) times. Since computing g involves computing potentially large sums of kernel functions, the decoding step can be computationally quite demanding–depending on the kernels and the number of examples. 2.4 Extension to Hidden Semi-Markov SVMs Semi-Markov models extend hidden Markov models by allowing each state to persist for a nonunit number δi of symbols. Only after that the system will transition to a new state, which only depends on x and the current state. During the interval (i, i + δi) the behavior of the system may be non-Markovian [14]. Semi-Markov models are fairly common in certain applications of statistics [6, 7] and are also used in reinforcement learning [16]. Moreover, [15, 9] previously proposed an extension of HMMs, called Generalized HMMs (GHMMs) that is very similar to the ideas above. Also, [14] proposed a semi-Markov extension to Conditional Random Fields. In this work we extend Hidden Markov-SVMs to Hidden Semi-Markov SVMs by considering sequences of segments instead of simple label sequences. We need to extend the definition of the labeling with s segments: y = (υ1, π1), (υ2, π2), . . . , (υs, πs), where πj is the start position of the segment and υj its label.4 We assume π1 = 1 and let πj = πj−1 + δj. To simplify the notation we define πs+1 := l(x) + 1, s := s(y) to be the number of segments in y and υs+1 := ∅. 
We can now generalize the mapping Φ to: Φ(x, y) =   s(y) X j=1 Φσ,τ(υj, υj+1, x, πj, πj+1)   σ,τ∈Υ . 2We define υl+1 := ∅to keep the notation simple. 3Note that one can extend the outlined decoding algorithm to produce not only the best path, but the K best paths. The 2nd best path may be required to compute the structure in (3). The idea is to duplicate tables K times as follows: V (i, υ, k) := ( max(k) υ′∈Υ,k′=1,...,K(V (i −1, υ′, k′) + g(υ′, υ, x, i −1)) i > 1 0 otherwise where max(k) is the function computing the kth largest number and is −∞if there are fewer numbers. V (i, υ, k) now is the k-best score of labelings with label υ at position i. 4For simplicity, we associate the label of a segment with the signal at the boundary to the next segment. A generalization is straightforward. With this definition we can extract features from segments: As πj and πj+1 are given one can for instance compute the length of the segment or other features that depend on the start and the end of the segment. Decomposing F results in: F(x, y) = s(y) X j=1 X σ,τ∈Υ ⟨wσ,τ, Φσ,τ(υj, υj+1, x, πj, πj+1)⟩ | {z } =:g(υj,υj+1,x,πj,πj+1) . (5) Analogously we can extend the formula for the Viterbi-like decoding algorithm [14]: V (i, υ) := ( max υ′∈Υ,d=1,...,min(i−1,S)(V (i −d, υ′) + g(υ′, υ, x, i −d, i)) i > 1 0 otherwise (6) where S is the maximal segment length and maxυ∈Υ V (l(x), υ) is the score of the best segment labeling. The function g needs to be evaluated at most |Υ|2l(x)S times. The optimal label sequence can be obtained as before by backtracking. Also the above method can be easily extended to produce the K best labelings (cf. Footnote 3). 3 An Algorithm for Large Scale Learning 3.1 Preliminaries In this section we consider a specific case that is relevant for the application that we have in mind. 
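Before specializing the feature map, it may help to see the semi-Markov decoding recursion (6) in code. The following is our own simplified sketch, not the authors' implementation: it initializes V(0, υ) = 0 for every label, takes a generic segment scorer g(υ′, υ, x, start, end) as a callback, and recovers the best segmentation by backtracking.

```python
def viterbi_semi_markov(x, labels, g, S):
    """Semi-Markov Viterbi: V[i][u] = max over previous label u' and
    segment length d <= S of V[i-d][u'] + g(u', u, x, i-d, i).
    Runs in O(|labels|^2 * len(x) * S) evaluations of g, as in the text.
    Returns the best score and the segments as (start, end, label)."""
    L, NEG = len(x), float("-inf")
    V = [{u: (0.0 if i == 0 else NEG) for u in labels} for i in range(L + 1)]
    back = [{u: None for u in labels} for _ in range(L + 1)]
    for i in range(1, L + 1):
        for u in labels:
            for d in range(1, min(i, S) + 1):
                for up in labels:
                    score = V[i - d][up] + g(up, u, x, i - d, i)
                    if score > V[i][u]:
                        V[i][u] = score
                        back[i][u] = (i - d, up)
    best_u = max(labels, key=lambda u: V[L][u])
    total, segs, i, u = V[L][best_u], [], L, best_u
    while i > 0:                      # follow backpointers
        j, up = back[i][u]
        segs.append((j, i, u))
        i, u = j, up
    return total, segs[::-1]

# Toy scorer (ours): label 'E' should cover 'e's, 'I' the 'i's; every
# segment pays a penalty of 1, so longer correct segments win.
def g(u_prev, u, x, a, b):
    match = sum(1 if (c == "e") == (u == "E") else -1 for c in x[a:b])
    return match - 1

best, segs = viterbi_semi_markov("eeii", ["E", "I"], g, S=4)
print(best, segs)  # 2 [(0, 2, 'E'), (2, 4, 'I')]
```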
The idea is that the feature map should contain information about segments, such as their length or content, as well as about segment boundaries, which may exhibit certain detectable signals. For simplicity we assume that it is sufficient to consider the substring χπj..πj+1 (the symbols from position πj up to πj+1 − 1) for extracting content information about segment j. Also, for considering signals we assume it to be sufficient to consider a window ±ω around the end of the segment, i.e. we only consider χπj+1±ω := χπj+1−ω . . . χπj+1+ω. To keep the notation simple we do not consider signals at the start of the segment. Moreover, we assume for simplicity that χπ±ω is appropriately defined for every π = 1, . . . , l(x). We may therefore define the following feature map: Φ(x, y) = ( [ Σ_{j=1}^{s(y)} [[υj = σ]][[υj+1 = τ]] Φc(χπj..πj+1) ]_{σ,τ∈Υ} (content part), [ Σ_{j=1}^{s(y)} [[υj+1 = τ]] Φs(χπj+1±ω) ]_{τ∈Υ} (signal part) ), where [[true]] = 1 and 0 otherwise. Then the kernel between two examples using this feature map can be written as k((x, y), (x′, y′)) = Σ_{σ,τ∈Υ} Σ_{j:(υj,υj+1)=(σ,τ)} Σ_{j′:(υ′j′,υ′j′+1)=(σ,τ)} kc(χπj..πj+1, χ′π′j′..π′j′+1) + Σ_{τ∈Υ} Σ_{j:υj+1=τ} Σ_{j′:υ′j′+1=τ} ks(χπj+1±ω, χ′π′j′+1±ω), where kc(·, ·) := ⟨Φc(·), Φc(·)⟩ and ks(·, ·) := ⟨Φs(·), Φs(·)⟩. The above formulation has the benefit of keeping the signal and content kernels separated for each label, which we can exploit for rewriting F(x, y): F(x, y) = Σ_{σ,τ∈Υ} Σ_{j:(υj,υj+1)=(σ,τ)} Fσ,τ(χπj..πj+1) + Σ_{τ∈Υ} Σ_{j:υj+1=τ} Fτ(χπj+1±ω), where Fσ,τ(χ) := Σ_{n=1}^{N} Σ_{y′∈Y} αn(y′) Σ_{j′:(υ′j′,υ′j′+1)=(σ,τ)} kc(χ, χ^n_{π′j′..π′j′+1}) and Fτ(χ) := Σ_{n=1}^{N} Σ_{y′∈Y} αn(y′) Σ_{j′:υ′j′+1=τ} ks(χ, χ^n_{π′j′+1±ω}). Hence, we have partitioned F(x, y) into |Υ|² + |Υ| functions characterizing the content and the signals.
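Once the |Υ|² + |Υ| functions are available, evaluating the decomposed discriminant for a given segmentation is mechanical. The sketch below is ours (callback names and the segment encoding are assumptions, not the paper's API): content scores are summed per segment and signal scores per right boundary window.

```python
def score_labeling(x, segments, F_content, F_signal, omega):
    """Evaluate the decomposed discriminant
        F(x, y) = sum_j F_{sigma,tau}(segment j's content)
                + sum_j F_tau(window of +/- omega around segment j's end),
    where each segment is (start, end, label, next_label) and the two
    scoring families are supplied as generic callbacks."""
    total = 0.0
    for (start, end, u, u_next) in segments:
        content = x[start:end]
        window = x[max(0, end - omega):end + omega]
        total += F_content(u, u_next, content)
        total += F_signal(u_next, window)
    return total

# Toy check with hypothetical scorers: content score = segment length,
# signal score = 1 per boundary.
demo = score_labeling("abcdef", [(0, 3, "A", "B"), (3, 6, "B", None)],
                      lambda u, un, s: float(len(s)),
                      lambda un, s: 1.0, omega=1)
print(demo)  # 8.0
```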
3.2 Two-Stage Learning By enumerating all non-zero α’s and valid settings of j′ in Fτ and Fσ,τ, we can define sets of sequences {χτ,σ m }m=1,...,Mσ,τ and {χτ m}m=1,...,Mτ where every element is of the form χn πj..πj+1 and χn πj+1±ω, respectively. Hence, Fτ and Fσ,τ can be rewritten as a (single-sum) linear combination of kernels: Fσ,τ(χ) := PMσ,τ m=1 ασ,τ m kc(χ, χτ,σ m ) and Fτ(χ) := PMτ m=1 ατ mks(χ, χτ m) for appropriately chosen α’s. For sequences χτ m that do not correspond to true segment boundaries, the coefficient ατ m is either negative or zero (since wrong segment boundaries can only appear in wrong labelings y ̸= yn and αn(y) ≤0). True segment boundaries in correct label sequences have non-negative ατ m’s. Analogously with segments χτ,σ m . Hence, we may interpret these functions as SVM classification functions recognizing segments and boundaries of all kinds. Hidden Semi-Markov SVMs simultaneously optimize all these functions and also determine the relative importance of the different signals and sensors. In this work we propose to separate the learning of the content sensors and signal detectors from learning how they have to act together in order to produce the correct labeling. The idea is to train SVM-based classifiers ¯Fσ,τ and ¯Fτ using the kernels kc and ks on examples with known labeling. For every segment type and segment boundary we generate a set of positive examples from observed segments and boundaries. As negative examples we use all boundaries and segments that were not observed in a true labeling. This leads to a set of sequences that may potentially also appear in the expansions of Fσ,τ and Fτ. However, the expansion coefficients ¯ασ,τ m and ¯ατ m are expected to be different as the functions are estimated independently. The advantage of this approach is that solving two-class problems–for which we can reuse existing large scale learning methods–is much easier than solving the full HSM SVM problem. 
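The first stage of the two-stage approach amounts to turning each labeled sequence into binary training sets: observed segments and boundaries become positives, unobserved ones negatives. A minimal sketch of this data extraction for the signal detectors, under our own assumptions about how decoy boundary positions are supplied:

```python
def stage_one_datasets(x, true_segments, decoy_boundaries, omega):
    """Collect binary training examples for a signal detector F_tau:
    windows of +/- omega symbols around true segment ends are positives,
    windows around decoy positions are negatives.  The decoy-generation
    policy is an assumption of ours, not specified here."""
    true_ends = {end for (_, end, _) in true_segments}
    def window(pos):
        return x[max(0, pos - omega):pos + omega]
    positives = [window(end) for (_, end, _) in true_segments]
    negatives = [window(p) for p in decoy_boundaries if p not in true_ends]
    return positives, negatives

# Hypothetical toy sequence with one true segment ending at position 4.
pos, neg = stage_one_datasets("AAGTAAAGTA", [(0, 4, "exon")], [4, 8], 2)
print(pos, neg)  # ['GTAA'] ['AGTA']
```

A standard binary SVM trained on such pairs then plays the role of F̄τ; the second stage only has to learn how to rescale and combine these independently trained detectors.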
However, while the functions $\bar F_{\sigma,\tau}$ and $\bar F_\tau$ might recognize the same contents and signals as $F_{\sigma,\tau}$ and $F_\tau$, they are obtained independently of each other and might not be scaled correctly to jointly produce the correct labeling. We therefore propose to learn transformations $t_{\sigma,\tau}$ and $t_\tau$ such that $F_{\sigma,\tau}(\chi) \approx t_{\sigma,\tau}(\bar F_{\sigma,\tau}(\chi))$ and $F_\tau(\chi) \approx t_\tau(\bar F_\tau(\chi))$. The transformation functions $t: \mathbb{R} \to \mathbb{R}$ are one-dimensional mappings, and it seems fully sufficient to use, for instance, piece-wise linear functions (PLiFs) $p_{\mu,\theta}(\lambda) := \langle \phi_\mu(\lambda), \theta \rangle$ with fixed abscissa boundaries $\mu$ and $\theta$-parametrized ordinate values ($\phi_\mu(\lambda)$ can be appropriately defined). We may define the mapping $\Phi(x, y)$ for our case as

$$\Phi(x, y) = \begin{pmatrix} \Big( \sum_{j=1}^{s(y)} [[\upsilon_j = \sigma]]\, [[\upsilon_{j+1} = \tau]]\; \phi_{\mu_{\sigma,\tau}}\big(\bar F_{\sigma,\tau}(\chi_{\pi_j..\pi_{j+1}})\big) \Big)_{\sigma,\tau \in \Upsilon} \\[6pt] \Big( \sum_{j=1}^{s(y)} [[\upsilon_{j+1} = \tau]]\; \phi_{\mu_\tau}\big(\bar F_\tau(\chi_{\pi_{j+1}\pm\omega})\big) \Big)_{\tau \in \Upsilon} \end{pmatrix}, \qquad (7)$$

where we have simply replaced the features with PLiF features based on the outcomes of precomputed predictions. Note that $\Phi(x, y)$ has only $(|\Upsilon|^2 + |\Upsilon|)P$ dimensions, where $P$ is the number of support points used in the PLiFs. If the alphabet $\Upsilon$ is reasonably small, then the dimensionality is low enough to solve the optimization problem (2) efficiently in the primal domain. In the next section we will illustrate how to successfully apply a version of the outlined algorithm to a problem with several thousands of relatively long labeled sequences.

4 Application to Gene Structure Prediction

The problem of gene structure prediction is to segment nucleotide sequences (so-called pre-mRNA sequences generated by transcription; cf. Figure 1) into exons and introns. In a complex biochemical process called splicing, the introns are removed from the pre-mRNA sequence to form the mature mRNA sequence that can be translated into protein. The exon-intron and intron-exon boundaries are defined by sequence motifs almost always containing the letters GT and AG (cf. Figure 1), respectively.
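A PLiF feature map $\phi_\mu(\lambda)$ can be realized as linear interpolation weights over the support points, so that $\langle \phi_\mu(\lambda), \theta \rangle$ is piece-wise linear in $\lambda$ and clamped outside the boundaries; a minimal sketch:

```python
import numpy as np

def plif_features(lam, mu):
    """Map a scalar lam to a P-dim weight vector so that <phi(lam), theta>
    is the piece-wise linear function through the points (mu[i], theta[i]),
    clamped to the end values outside [mu[0], mu[-1]] (a sketch)."""
    mu = np.asarray(mu, dtype=float)
    phi = np.zeros(len(mu))
    if lam <= mu[0]:
        phi[0] = 1.0
    elif lam >= mu[-1]:
        phi[-1] = 1.0
    else:
        i = np.searchsorted(mu, lam) - 1        # mu[i] <= lam < mu[i+1]
        t = (lam - mu[i]) / (mu[i + 1] - mu[i]) # interpolation fraction
        phi[i], phi[i + 1] = 1.0 - t, t
    return phi
```

For example, with support points (0, 1, 2) and ordinates (0, 2, 4) the map reproduces linear interpolation, and inputs outside the support are clamped to the end ordinates.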
However, these dimers appear very frequently, and one needs sophisticated methods to recognize true splice sites [21, 12, 13]. So far mostly HMM-based methods such as Genscan [5], Snap [8] or ExonHunter [4] have been applied to this problem, and also to the more difficult problem of gene finding. In this work we show that our newly developed method is applicable to this task and achieves very competitive results. We call it mSplicer. Figure 2 illustrates the "grammar" that we use for gene structure prediction. We only require four different states (start, exon-end, exon-start and end) and two different segment labels (exon & intron). Biologically it makes sense to distinguish between first, internal, last and single exons, as their typical lengths are quite different. Each of these exon types corresponds to one transition in the model. States two and three recognize the two types of splice sites, and the transition between these states defines an intron. For our specific problem we only need signal detectors for segments ending in states two and three. In the next subsection we outline how we obtain $\bar F_2$ and $\bar F_3$. Additionally we need content sensors for every possible transition. While the "content" of the different exon segments is essentially the same, their lengths can vary quite drastically. We therefore decided to use one content sensor $\bar F_I$ for the intron transition $2 \to 3$ and the same content sensor $\bar F_E$ for all four exon transitions $1 \to 2$, $1 \to 4$, $3 \to 2$ and $3 \to 4$. However, in order to capture the different length characteristics, we include

$$\Big( \sum_{j=1}^{s(y)} [[\upsilon_j = \sigma]]\, [[\upsilon_{j+1} = \tau]]\; \phi_{\gamma_{\sigma,\tau}}(\pi_{j+1} - \pi_j) \Big)_{\sigma,\tau \in \Upsilon} \qquad (8)$$

in the feature map (7), which amounts to using PLiFs for the lengths of all transitions. Also, note that we can drop those features in (7) and (8) that correspond to transitions that are not allowed (e.g. $4 \to 1$; cf.
Figure 2).⁵ We have obtained data for training, validation and testing from public sequence databases (see [13] for details). For the considered genome of C. elegans we have split the data into four different sets: Set 1 is used for training the splice site signal detectors and the two content sensors; Set 2 is used for model selection of these signal detectors and content sensors and for training the HSM SVM; Set 3 is used for model selection of the HSM SVM; and Set 4 is used for the final evaluation. These are large-scale datasets with which current Hidden Markov SVMs are unable to deal: the C. elegans training set used for label-sequence learning contains 1,536 sequences with an average length of ≈2,300 base pairs and about 9 segments per sequence, and the splice site signal detectors were trained on more than a million examples. In principle it is possible to join Sets 1 & 2; however, the predictions of $\bar F_{\sigma,\tau}$ and $\bar F_\tau$ on the sequences used for the HSM SVM would then be skewed in the margin area (since the examples are pushed away from the decision boundary on the training set). We therefore keep the two sets separated.

4.1 Learning the Splice Site Signal Detectors

From the training sequences (Set 1) we extracted sequences of confirmed splice sites (intron start and end). For intron start sites we used a window of [−80, +60] around the site; for intron end sites we used [−60, +80]. From the training sequences we also extracted non-splice sites, which lie within an exon or intron of the sequence and have the AG or GT consensus. We train an SVM [19] with soft margin using the WD kernel [12]:

$$k(x, x') = \sum_{j=1}^{d} \beta_j \sum_{i=1}^{l-j} \big[\big[\, x_{[i,i+j]} = x'_{[i,i+j]} \,\big]\big],$$

where $l = 140$ is the length of the sequence, $x_{[a,b]}$ denotes the sub-string of $x$ from position $a$ to (excluding) $b$, and $\beta_j := d - j + 1$. We used a normalized version of the kernel, $\tilde k(x, x') = k(x, x') / \sqrt{k(x, x)\, k(x', x')}$. This leads to the two discriminative functions $\bar F_2$ and $\bar F_3$.
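A direct (unoptimized) sketch of the WD kernel and its normalization follows; for simplicity the inner sum here runs over every length-j substring position, and no real toolbox API is implied.

```python
import numpy as np

def wd_kernel(x, y, d=3):
    """Weighted degree kernel (a sketch): count matching substrings of
    length 1..d at corresponding positions, weighted by beta_j = d - j + 1."""
    assert len(x) == len(y)
    l = len(x)
    k = 0.0
    for j in range(1, d + 1):
        beta = d - j + 1
        # compare the length-j substrings at every valid position
        k += beta * sum(x[i:i + j] == y[i:i + j] for i in range(l - j + 1))
    return k

def wd_kernel_normalized(x, y, d=3):
    """Cosine-style normalization as in the text."""
    return wd_kernel(x, y, d) / np.sqrt(wd_kernel(x, x, d) * wd_kernel(y, y, d))
```

The normalized kernel is 1 for identical sequences and decreases as positional substring matches are lost.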
All model parameters (including the window size) have been tuned on the validation set (Set 2). SVM training for C. elegans resulted in 79,000 and 61,233 support vectors for detecting intron start and end sites, respectively.

⁵We also excluded these transitions during the Viterbi-like algorithm.

Figure 1: The major steps in protein synthesis [10]. A transcript of a gene starts with an exon and may then be interrupted by an intron, followed by another exon, intron and so on, until it ends in an exon. In this work we learn the unknown formal mapping from the pre-mRNA to the mRNA.

Figure 2: An elementary state model for unspliced mRNA: the start is either directly followed by the end or by an arbitrary number of donor-acceptor splice site pairs.

4.2 Learning the Exon and Intron Content Sensors

To obtain the exon content sensor we derived a set of exons from the training set. As negative examples we used sub-sequences of intronic sequences, sampled such that both sets of strings have roughly the same length distribution. We trained SVMs using a variant of the Spectrum kernel [21] of degree d = 6, in which we count the 6-mers appearing at least once in both sequences. We applied the same normalization as in Sec. 4.1 and proceeded analogously for the intron content sensor. The model parameters have been obtained by tuning them on the validation set. Note that the resulting content sensors $\bar F_I$ and $\bar F_E$ need to be evaluated several times during the Viterbi-like algorithm (cf. (6)): one needs to extend segments ending at the same position i to several different starting points. By re-using the shorter segments' outputs this computation can be made drastically faster.

4.3 Combination

For datasets 2-4 we can precompute all candidate splice sites using the classifiers $\bar F_2$ and $\bar F_3$. We decided to use PLiFs with P = 30 support points and chose the boundaries for $\bar F_2$, $\bar F_3$, $\bar F_E$, and $\bar F_I$ uniformly between −5 and 5 (the typical range of outputs of our SVMs).
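The content-sensor kernel of Section 4.2, which counts the d-mers appearing at least once in both sequences, can be sketched as a simple set intersection:

```python
def spectrum_kernel(x, y, d=6):
    """Spectrum-kernel variant (a sketch): count the distinct d-mers that
    occur at least once in both sequences."""
    kx = {x[i:i + d] for i in range(len(x) - d + 1)}
    ky = {y[i:i + d] for i in range(len(y) - d + 1)}
    return float(len(kx & ky))
```

As with the WD kernel, this value would be normalized by the self-kernels before use.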
For the PLiFs concerned with segment lengths we chose appropriate boundaries in the range 30 to 1000. With all these definitions the feature map as in (7) and (8) is fully defined. The model has nine PLiFs as parameters, with a total of 270 parameters. Finally, we have modified the regularizer for our particular case to favor smooth PLiFs:

$$P(w) := \sum_{\sigma,\tau \in \Upsilon} \big|w^{\sigma,\tau}_P - w^{\sigma,\tau}_1\big| + \sum_{\tau \in \Upsilon} \big|w^\tau_P - w^\tau_1\big| + \sum_{\tau \in \Upsilon} \sum_{i=1}^{P-1} \big|w^{\tau,l}_i - w^{\tau,l}_{i+1}\big|,$$

where $w = \big( (w^{\sigma,\tau})_{\sigma,\tau \in \Upsilon};\; (w^\tau)_{\tau \in \Upsilon};\; (w^{\tau,l})_{\tau \in \Upsilon} \big)$, and we constrain the PLiFs for the signal and content sensors to be monotonically increasing.⁶ Having defined the feature map and the regularizer, we can now apply the HSM SVM algorithm outlined in Sections 2.4 and 3. Since the feature space is rather low-dimensional (270 dimensions), we can solve the optimization problem in the primal domain even with several thousands of examples, employing a standard optimizer (we used ILOG CPLEX and column generation) within a reasonable time.⁷

4.4 Results

To estimate the out-of-sample accuracy, we apply our method to the independent test dataset 4. For C. elegans we can compare it to ExonHunter⁸ on 1177 test sequences. We greatly outperform the ExonHunter method: our method obtains almost 1/3 of the test error of ExonHunter (cf. Table 1). Simplifying the problem by only considering sequences between the start and stop codons allows us to also include SNAP in the comparison on dataset 4', a slightly modified version of dataset 4 with 1138 sequences.⁹ The results are shown in Table 1. On dataset 4' the best competing method achieves an error rate of 9.8%, which is more than twice the error rate of our method.

5 Conclusion

We have extended the framework of Hidden Markov SVMs to Hidden Semi-Markov SVMs and suggested a very efficient two-stage learning algorithm to train an approximation to Hidden Semi-Markov SVMs.
Moreover, we have successfully applied our method to a large-scale gene structure prediction problem from computational biology, where it obtains less than half of the error rate of the best competing HMM-based method. Our predictions are available at Wormbase: http://www.wormbase.org. Additional data and results are available at the project's website http://www.fml.mpg.de/raetsch/projects/msplicer.

⁶This implements our intuition that larger SVM scores should lead to larger scores for a labeling.
⁷It takes less than one hour to solve the HSM SVM problem with about 1,500 sequences on a single CPU. Training the content and signal detectors on several hundred thousand examples takes around 5 hours in total.
⁸The method was trained by its authors on the same training data.
⁹In this setup additional biological information about the so-called "open reading frame" is used: as there was only a version of SNAP available that uses this information, we incorporated this extra knowledge also in our model (marked *) and also used another version of ExonHunter that exploits that information, in order to allow a fair comparison.

Table 1: Rates of predicting a wrong gene structure, sensitivity (Sn) and specificity (Sp) on the exon and nucleotide (nt) levels (see e.g. [8]) for our method, ExonHunter and SNAP. The methods exploiting additional biological knowledge have an advantage and are marked with *.

C. elegans Dataset 4
Method        error rate   exon Sn   exon Sp   nt Sn   nt Sp
Our Method       13.1%      96.7%     96.8%    98.9%   97.2%
ExonHunter       36.8%      89.1%     88.4%    98.2%   97.4%

C. elegans Dataset 4'
Our Method*       4.8%      98.9%     99.2%    99.2%   99.9%
ExonHunter*       9.8%      97.9%     96.6%    99.4%   98.1%
SNAP*            17.4%      95.0%     93.3%    99.0%   98.9%

Acknowledgments

We thank K.-R. Müller, B. Schölkopf, E. Georgii, A. Zien, G. Schweikert and G. Zeller for inspiring discussions; the latter three we also thank for proofreading the manuscript. Moreover, we thank D.
Surendran for naming the piece-wise linear functions PLiF and for optimizing the Viterbi implementation.

References

[1] Y. Altun, T. Hofmann, and A. Smola. Gaussian process classification for segmenting and annotating sequences. In Proc. ICML 2004, 2004.
[2] Y. Altun, D. McAllester, and M. Belkin. Maximum margin semi-supervised learning for structured variables. In Proc. NIPS 2005, 2006.
[3] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In T. Fawcett, editor, Proc. 20th Int. Conf. Mach. Learn., pages 3-10, 2003.
[4] B. Brejova, D.G. Brown, M. Li, and T. Vinar. ExonHunter: a comprehensive approach to gene finding. Bioinformatics, 21(Suppl 1):i57-i65, 2005.
[5] C. Burge and S. Karlin. Prediction of complete gene structures in human genomic DNA. Journal of Molecular Biology, 268:78-94, 1997.
[6] X. Ge. Segmental Semi-Markov Models and Applications to Sequence Analysis. PhD thesis, University of California, Irvine, 2002.
[7] J. Janssen and N. Limnios. Semi-Markov Models and Applications. Kluwer Academic, 1999.
[8] I. Korf. Gene finding in novel genomes. BMC Bioinformatics, 5(59), 2004.
[9] D. Kulp, D. Haussler, M.G. Reese, and F.H. Eeckman. A generalized hidden Markov model for the recognition of human genes in DNA. In Proc. ISMB 1996, pages 134-141, 1996.
[10] B. Lewin. Genes VII. Oxford University Press, New York, 2000.
[11] L.R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-285, February 1989.
[12] G. Rätsch and S. Sonnenburg. Accurate splice site prediction for Caenorhabditis elegans. In B. Schölkopf, K. Tsuda, and J.-P. Vert, editors, Kernel Methods in Computational Biology. MIT Press, 2004.
[13] G. Rätsch, S. Sonnenburg, J. Srinivasan, H. Witte, K.-R. Müller, R. Sommer, and B. Schölkopf. Improving the C. elegans genome annotation using machine learning. PLoS Computational Biology, 2007. In press.
[14] S. Sarawagi and W.W. Cohen. Semi-Markov conditional random fields for information extraction. In Proc. NIPS 2004, 2005.
[15] G.D. Stormo and D. Haussler. Optimally parsing a sequence into different classes based on multiple types of information. In Proc. ISMB 1994, pages 369-375, Menlo Park, CA, 1994. AAAI/MIT Press.
[16] R. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181-211, 1999.
[17] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In Proc. NIPS 2003, 16, 2004.
[18] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Large margin methods for structured output spaces. Journal of Machine Learning Research, 6, September 2005.
[19] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.
[20] A.J. Viterbi. Error bounds for convolutional codes and an asymptotically optimal decoding algorithm. IEEE Trans. Informat. Theory, IT-13:260-269, Apr 1967.
[21] X.H. Zhang, K.A. Heller, I. Hefter, C.S. Leslie, and L.A. Chasin. Sequence information for the splicing of human pre-mRNA identified by SVM classification. Genome Res, 13(12):2637-50, 2003.
Learning to Traverse Image Manifolds

Piotr Dollár, Vincent Rabaud and Serge Belongie
University of California, San Diego
{pdollar,vrabaud,sjb}@cs.ucsd.edu

Abstract

We present a new algorithm, Locally Smooth Manifold Learning (LSML), that learns a warping function from a point on a manifold to its neighbors. Important characteristics of LSML include the ability to recover the structure of the manifold in sparsely populated regions and beyond the support of the provided data. Applications of our proposed technique include embedding with a natural out-of-sample extension and tasks such as tangent distance estimation, frame rate up-conversion, video compression and motion transfer.

1 Introduction

A number of techniques have been developed for dealing with high-dimensional data sets that fall on or near a smooth low-dimensional nonlinear manifold. Such data sets arise whenever the number of modes of variability of the data is much smaller than the dimension of the input space, as is the case for image sequences. Unsupervised manifold learning refers to the problem of recovering the structure of a manifold from a set of unordered sample points. Manifold learning is often equated with dimensionality reduction, where the goal is to find an embedding or 'unrolling' of the manifold into a lower-dimensional space such that certain relationships between points are preserved. Such embeddings are typically used for visualization, with the projected dimension being 2 or 3. Image manifolds have also been studied in the context of measuring distances between images undergoing known transformations. For example, the tangent distance [20, 21] between two images is computed by generating local approximations of a manifold from known transformations and then computing the distance between these approximated manifolds. In this work, we seek to frame the problem of recovering the structure of a manifold as that of directly learning the transformations a point on a manifold may undergo.
Our approach, Locally Smooth Manifold Learning (LSML), attempts to learn a warping function W with d degrees of freedom that can take any point on the manifold and generate its neighbors. LSML recovers a first-order approximation of W, and by making smoothness assumptions on W it can generalize to unseen points. We show that LSML can recover the structure of the manifold where data is given, and also in regions where it is not, including regions beyond the support of the original data. We propose a number of uses for the recovered warping function W, including embedding with a natural out-of-sample extension; in the image domain, we discuss how it can be used for tasks such as computation of tangent distance, image sequence interpolation, compression, and motion transfer. We also show examples where LSML is used to simultaneously learn the structure of multiple "parallel" manifolds, and even to generalize to data on new manifolds. Finally, we show that by exploiting manifold smoothness, LSML is robust under conditions where many embedding methods have difficulty. Related work is presented in Section 2 and the algorithm in Section 3. Experiments on point sets and results on images are shown in Sections 4 and 5, respectively. We conclude in Section 6.

2 Related Work

Related work can be divided into two categories. The first is the literature on manifold learning, which serves as the foundation for this work. The second is work in computer vision and computer graphics addressing image warping and generative models for image formation. A number of classic methods exist for recovering the structure of a manifold. Principal component analysis (PCA) tries to find a linear subspace that best captures the variance of the original data. Traditional methods for nonlinear manifolds include self-organizing maps, principal curves, and variants of multi-dimensional scaling (MDS), among others; see [11] for a brief introduction to these techniques.
Recently the field has seen a number of interesting developments in nonlinear manifold learning. [19] introduced a kernelized version of PCA. A number of related embedding methods have also been introduced; representatives include LLE [17], ISOMAP [22], and more recently SDE [24]. Broadly, such methods can be classified as spectral embedding techniques [24]; the embeddings they compute are based on an eigenvector decomposition of an n × n matrix that represents geometrical relationships of some form between the original n points. Out-of-sample extensions have been proposed [3]. The goal of embedding methods (to find structure-preserving embeddings) differs from the goal of LSML (to learn to traverse the manifold). Four methods with which we share inspiration are [6, 13, 2, 16]. [6] employs a novel charting-based technique to achieve increased robustness to noise and decreased probability of pathological behavior vs. LLE and ISOMAP; we exploit similar ideas in the construction of LSML but differ in motivation and potential applicability. [2] proposed a method to learn the tangent space of a manifold and demonstrated a preliminary illustration of rotating a small bitmap image by about 1°. Work by [13] is based on the notion of learning a model for class-specific variation; the method reduces to computing a linear tangent subspace that models the variability of each class. [16] shares one of our goals, as it addresses the problem of learning Lie groups, the infinitesimal generators of certain geometric transformations. In image analysis, the number of dimensions is usually reduced via approaches like PCA [15], epitomic representation [12], or generative models as in the realMOVES system developed by Di Bernardo et al. [1]. Sometimes a precise model of the data, as for faces [4] or eyes [14], is even used to reduce the complexity of the data.
Another common approach is simply to have instances of an object in different conditions: [5] start by estimating feature correspondences between a novel input with unknown pose and lighting and a stored labeled example in order to apply an arbitrary warp between pictures. The applications range from video texture synthesis [18] and facial expression extrapolation [8, 23] to face recognition [10] and video rewrite [7].

3 Algorithm

Let D be the dimension of the input space, and assume the data lies on a smooth d-dimensional manifold ($d \ll D$). For simplicity assume that the manifold is diffeomorphic to a subset of $\mathbb{R}^d$, meaning that it can be endowed with a global coordinate system (this requirement can easily be relaxed) and that there exists a continuous bijective mapping $M$ that converts coordinates $y \in \mathbb{R}^d$ to points $x \in \mathbb{R}^D$ on the manifold. The goal of most dimensionality reduction techniques, given a set of data points $x^i$, is to find an embedding $y^i = M^{-1}(x^i)$ that preserves certain properties of the original data, such as the distances between all points (classical MDS) or the distances or angles between nearby points (e.g. spectral embedding methods). Instead, we seek to learn a warping function W that can take a point on the manifold and return any neighboring point on the manifold, capturing all the modes of variation of the data. Let us use $W(x, \epsilon)$ to denote the warping of $x$, with $\epsilon \in \mathbb{R}^d$ acting on the degrees of freedom of the warp according to the mapping $M$: $W(x, \epsilon) = M(y + \epsilon)$, where $y = M^{-1}(x)$. Taking the first-order approximation of the above gives $W(x, \epsilon) \approx x + H(x)\epsilon$, where each column $H_{\cdot k}(x)$ of the matrix $H(x)$ is the partial derivative of $M$ with respect to $y_k$: $H_{\cdot k}(x) = \partial M(y) / \partial y_k$. This approximation is valid for $\epsilon$ small enough, hence we speak of W being an infinitesimal warping function. We can restate our goal of learning to warp in terms of learning a function $H_\theta: \mathbb{R}^D \to \mathbb{R}^{D \times d}$ parameterized by a variable $\theta$. Only data points $x^i$, sampled from one or several manifolds, are given.
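The first-order warp $W(x, \epsilon) \approx x + H(x)\epsilon$ is a one-liner once a tangent basis $H(x)$ is available; a minimal sketch:

```python
import numpy as np

def warp_first_order(x, H_at_x, eps):
    """First-order warp (a sketch): W(x, eps) ~= x + H(x) eps, where the
    columns of H(x) (a D x d matrix) approximate the partial derivatives
    of the mapping M with respect to the manifold coordinates y."""
    return np.asarray(x, float) + np.asarray(H_at_x, float) @ np.asarray(eps, float)
```

For example, with an identity tangent basis the warp reduces to a plain translation of the point by eps.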
For each $x^i$, the set $N^i$ of neighbors is then computed (e.g. using variants of nearest neighbor search such as kNN or ϵNN), with the constraint that two points can be neighbors only if they come from the same manifold. To proceed, we assume that if $x^j$ is a neighbor of $x^i$, there exists an unknown $\epsilon^{ij}$ such that $W(x^i, \epsilon^{ij}) = x^j$ to within a good approximation. Equivalently: $H_\theta(x^i)\epsilon^{ij} \approx x^j - x^i$. We wish to find the best $\theta$ in the squared-error sense (the $\epsilon^{ij}$ being additional free parameters that must be optimized over).

Figure 1: Overview. (a)-(f): Twenty points (n = 20) that lie on a 1D curve (d = 1) in a 2D space (D = 2) are shown in (a). Black lines denote neighbors; in this case the neighborhood graph is not connected. We apply LSML to train H (with f = 4 RBFs). H maps points in $\mathbb{R}^2$ to tangent vectors; in (b) tangent vectors computed over a regularly spaced grid are displayed, with the original points (blue) and curve (gray) overlaid. Tangent vectors near the original points align with the curve, but note the seam through the middle. Regularization fixes this problem (c); the resulting tangents roughly align to the curve along its entirety. We can traverse the manifold by taking small steps in the direction of the tangent; (d) shows two such paths, generated starting at the red plus and traversing outward in large steps (outer curve) and finer steps (inner curve). This generates a coordinate system for the curve, resulting in the 1D embedding shown in (e). In (f) two parallel curves are shown, with n = 8 samples each. Training a common H results in a vector field that more accurately fits each curve than training a separate H for each (if the structure of the two manifolds were very different this need not be the case).
The expression of the error we need to minimize is therefore:

$$\mathrm{error}_1(\theta) = \min_{\{\epsilon^{ij}\}} \sum_{i=1}^{n} \sum_{j \in N^i} \big\| H_\theta(x^i)\,\epsilon^{ij} - (x^j - x^i) \big\|_2^2 \qquad (1)$$

Minimizing the above error function can be interpreted as trying to find a warping function that can transform a point into its neighbors. Note, however, that the warping function has only d degrees of freedom, while a point may have many more neighbors. This intuition allows us to rewrite the error in an alternate form. Let $\Delta^i$ be the matrix whose columns are the differences $(x^j - x^i)$ for each neighbor of $x^i$, and let $\Delta^i = U^i \Sigma^i V^{i\top}$ be the thin singular value decomposition of $\Delta^i$. Then one can show [9] that $\mathrm{error}_1$ is equivalent to the following:

$$\mathrm{error}_2(\theta) = \min_{\{E^i\}} \sum_{i=1}^{n} \big\| H_\theta(x^i)\, E^i - U^i \Sigma^i \big\|_F^2 \qquad (2)$$

Here, the matrices $E^i$ are the additional free parameters. Minimizing the above can be interpreted as searching for a warping function that directly explains the modes of variation at each point. This form is convenient since we no longer have to keep track of neighbors. Furthermore, if there is no noise and the linearity assumption holds, there are at most d non-zero singular values. In practice we use the truncated SVD, keeping at most 2d singular values, allowing for significant computational savings. We now give the remaining details of LSML for the general case [9]. For the case of images, we present an efficient version in Section 5 which uses some basic domain knowledge to avoid solving a large regression. Although potentially any regression technique is applicable, a linear model is particularly easy to work with. Let $f^i$ be $f$ features computed over $x^i$. We can then define $H_\theta(x^i) = [\Theta^1 f^i \cdots \Theta^D f^i]^\top$, where each $\Theta^k$ is a $d \times f$ matrix.
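The per-point term of error_2, with the truncated SVD and the free coefficients $E^i$ solved by least squares, can be sketched as follows (names are illustrative):

```python
import numpy as np

def lsml_error2(H_at_x, Delta, d=2):
    """error_2 contribution of one point (a sketch): given the tangent basis
    H(x_i) (D x d) and the matrix Delta of neighbor differences (x_j - x_i),
    compute min_E ||H E - U Sigma||_F^2, using the truncated SVD of Delta
    (at most 2d singular values, as in the text) and a least-squares solve
    for the free coefficient matrix E."""
    U, s, Vt = np.linalg.svd(Delta, full_matrices=False)
    U, s = U[:, :2 * d], s[:2 * d]      # truncated SVD
    target = U * s                      # the matrix U Sigma
    E, *_ = np.linalg.lstsq(H_at_x, target, rcond=None)
    return float(np.linalg.norm(H_at_x @ E - target) ** 2)
```

If the columns of H span the column space of Delta the error is zero; if H is orthogonal to it, the error equals the (truncated) squared Frobenius norm of Delta.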
Re-arranging $\mathrm{error}_2$ gives:

$$\mathrm{error}_{\mathrm{lin}}(\theta) = \min_{\{E^i\}} \sum_{i=1}^{n} \sum_{k=1}^{D} \big\| f^{i\top} \Theta^{k\top} E^i - U^i_{k\cdot} \Sigma^i \big\|_2^2 \qquad (3)$$

Solving simultaneously for E and Θ is complex, but if either E or Θ is fixed, solving for the remaining variable becomes a least-squares problem (an equation of the form $AXB = C$ can be rewritten as $(B^\top \otimes A)\,\mathrm{vec}(X) = \mathrm{vec}(C)$, where $\otimes$ denotes the Kronecker product and vec the matrix vectorization function). To solve for θ, we use an alternating minimization procedure. In all experiments in this paper we perform 30 iterations of this procedure, and while local minima do not seem to be too prevalent, we randomly restart the procedure 5 times. Finally, nowhere in the construction have we enforced that the learned tangent vectors be orthogonal (such a constraint would only be appropriate if the manifold were isometric to a plane).

Figure 2: Robustness. (a)-(d): LSML used to recover the embedding of the S-curve under a number of sampling conditions. In each plot we show the original points along with the computed embedding (rotated to align vertically); correspondence is indicated by coloring/shading (color was determined by the y-coordinate of the embedding). In each case LSML was run with f = 8, d = 2, and neighbors computed by ϵNN with ϵ = 1 (the height of the curve is 4). The embeddings shown were recovered from data that was: (a) densely sampled (n = 500), (b) sparsely sampled (n = 100), (c) highly structured (n = 190), and (d) noisy (n = 500, random Gaussian noise with σ = .1). In each case LSML recovered the correct embedding. For comparison, LLE recovered good embeddings for (a) and (c), and ISOMAP for (a), (b), and (c). The experiments were repeated a number of times, yielding similar results. For a discussion see the text.
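The vec trick mentioned above turns each half of the alternating minimization into an ordinary least-squares solve; a sketch under the column-stacking convention for vec:

```python
import numpy as np

def solve_axb(A, B, C):
    """Solve A X B = C for X in the least-squares sense via the vec trick
    from the text: (B^T kron A) vec(X) = vec(C), with column-stacking vec
    (Fortran order in NumPy)."""
    K = np.kron(B.T, A)
    x, *_ = np.linalg.lstsq(K, C.flatten(order='F'), rcond=None)
    # X has shape (cols of A, rows of B) so that A X B is well defined
    return x.reshape((A.shape[1], B.shape[0]), order='F')
```

Forming the Kronecker product explicitly is only viable for small problems; it serves here to illustrate the reduction, not as an efficient solver.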
To avoid numerically unstable solutions we regularize the error:

$$\mathrm{error}'_{\mathrm{lin}}(\theta) = \mathrm{error}_{\mathrm{lin}}(\theta) + \lambda_E \sum_{i=1}^{n} \big\| E^i \big\|_F^2 + \lambda_\theta \sum_{k=1}^{D} \big\| \Theta^k \big\|_F^2 \qquad (4)$$

For the features we use radial basis functions (RBFs) [11], the number of basis functions, f, being an additional parameter. Each basis function is of the form $f^j(x) = \exp(-\|x - \mu^j\|_2^2 / 2\sigma^2)$, where the centers $\mu^j$ are obtained using K-means clustering on the original data with f clusters, and the width parameter σ is set to twice the average of the minimum distance between each cluster center and its nearest neighboring center. The feature vectors are then simply defined as $f^i = [f^1(x^i) \cdots f^f(x^i)]^\top$. The parameter f controls the smoothness of the final mapping $H_\theta$; larger values result in mappings that better fit local variations of the data, but whose generalization ability to other points on the manifold may be weaker. This is exactly analogous to the standard supervised setting, and techniques like cross-validation could be used to optimize over f.

4 Experiments on Point Sets

We begin with a discussion of the intuition behind various aspects of LSML. We then show experiments demonstrating the robustness of the method, followed by a number of applications. In the figures that follow we make use of color/shading to indicate point correspondences, for example when we show the original point set and its embedding. LSML learns a function H from points in $\mathbb{R}^D$ to tangent directions that agree, up to a linear combination, with the estimated tangent directions at the original training points of the manifold. By constraining H to be smooth (through use of a limited number of RBFs), we can compute tangents at points not seen during training, including points that may not lie on the underlying manifold. This generalization ability of H will be central to the types of applications considered.
Finally, given multiple non-overlapping manifolds with similar structure, we can train a single H to correctly predict the tangents of each, allowing information to be shared. Fig. 1 gives a visual tutorial of these different concepts.

Figure 3: Reconstruction. (a)-(d): Reconstruction examples are used to demonstrate the quality and generalization of H. (a) Points sampled from the Swiss-roll manifold (middle), some recovered tangent vectors in a zoomed-in region (left) and the embedding found by LSML (right). Here n = 500, f = 20, d = 2, and neighbors were computed by ϵNN with ϵ = 4 (the height of the roll is 20). (b) Reconstruction of the Swiss-roll, created by a backprojection from regularly spaced grid points in the embedding (traversal was done from a single original point located at the base of the roll; see text for details). (c) Another reconstruction, this time using all points and extending the grid well beyond the support of the original data. The Swiss-roll is extended in a reasonable manner both inward (occluded) and outward. (d) Reconstruction of the unit hemisphere (LSML trained with n = 100, f = 6, d = 2, ϵNN with ϵ = .3) by traversing outward from the topmost point; note the reconstruction in regions with no points.

LSML appears quite robust. Fig. 2 shows LSML successfully applied to recovering the embedding of the "S-curve" under a number of sampling conditions (similar results were obtained on the "Swiss-roll"). After H is learned, the embedding is computed by choosing a random point on the manifold and establishing a coordinate system by traversing outward (the same procedure can be used to embed novel points, providing a natural out-of-sample extension). Here we compare only to LLE and ISOMAP, using published code. The densely sampled case, Fig. 2(a), is comparatively easy, and a number of methods have been shown to successfully recover an embedding. On sparsely sampled data, Fig.
2(b), the problem is more challenging; LLE had problems for n < 250 (lowering LLE's regularization parameter helped somewhat). Real data need not be uniformly sampled; see Fig. 2(c). In the presence of noise, Fig. 2(d), ISOMAP and LLE performed poorly. A single outlier can distort the shortest paths computed by ISOMAP, and LLE does not directly use the global information necessary to disambiguate noise. Other methods are known to be robust [6], and in [25] the authors propose a method to "smooth" a manifold as a preprocessing step for manifold learning algorithms; however, a full comparison is outside the scope of this work. Having learned H and computed an embedding, we can also backproject from a point $y \in \mathbb{R}^d$ to a point $x$ on the manifold by first finding the coordinate of the closest point $y^i$ in the original data, then traversing from $x^i$ by $\epsilon_j = y_j - y^i_j$ along each tangent direction $j$ (see Fig. 1(d)). Fig. 3(a) shows tangents and an embedding recovered by LSML on the Swiss-roll. In Fig. 3(b) we backproject from a grid of points in $\mathbb{R}^2$; by linking adjacent sets of points to form quadrilaterals we can display the resulting backprojected points as a surface. In Fig. 3(c) we likewise do a backprojection (this time keeping all the original points); however, we backproject grid points well below and above the support of the original data. Although there is no ground truth here, the resulting extension of the surface seems "natural". Fig. 3(d) shows the reconstruction of a unit hemisphere by traversing outward from the topmost point. There is no isometric mapping (preserving distances) between a hemisphere and a plane, and for a sphere there is not even a conformal mapping (preserving angles). In the latter case an embedding is not possible; however, we can still easily recover H for both (only hemisphere results are shown).
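The traversal used both for building the embedding coordinate system and for backprojection can be sketched as small Euler steps along the learned tangent field (here H is any callable returning a D × d tangent basis; the step count trades accuracy for speed):

```python
import numpy as np

def traverse(x0, H, eps, n_steps=1000):
    """Traverse the manifold (a sketch): split the coordinate displacement
    eps (a d-vector) into n_steps small steps and repeatedly apply the
    first-order warp x <- x + H(x) (eps / n_steps)."""
    x = np.array(x0, dtype=float)
    step = np.asarray(eps, dtype=float) / n_steps
    for _ in range(n_steps):
        x = x + H(x) @ step
    return x
```

As a sanity check, with H chosen as the exact tangent field of the unit circle, starting at (1, 0) and traversing by ϵ = π/2 lands close to (0, 1); the small residual drift shrinks as the step size decreases.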
5 Results on Images

Before continuing, we consider potential applications of H in the image domain, including tangent distance estimation, nonlinear interpolation, extrapolation, compression, and motion transfer. We refer to results on point-sets to aid visualization. Tangent distance estimation: H computes the tangent and can be used directly in invariant recognition schemes such as [21]. Compression: Fig. 3(b,d) suggests how, given a reference point and H, nearby points can be reconstructed using d numbers (with distortion increasing with distance). Nonlinear interpolation and extrapolation: points can be generated within and beyond the support of the given data (cf. Fig. 3); of potential use in tasks such as frame-rate up-conversion, reconstructing dropped frames, and view synthesis. Motion transfer: for certain classes of manifolds with "parallel" structure (cf. Fig. 1(f)), a recovered warp may be used on an entirely novel image. These applications will depend not only on the accuracy of the learned H but also on how close a set of images is to a smooth manifold.

Figure 4: The translation manifold. Here F^i = X^i; s = 17, d = 2, and 9 sets of 6 translated images each were used (not including the cameraman). (a) Zero-padded, smoothed test image x. (b) Visualization of the learned Θ, see text for details. (c) Hθ(x) computed via convolution. (d) Several transformations obtained after multiple steps along the manifold for different linear combinations of Hθ(x). Some artifacts due to error propagation start to appear in the top figures.

The key insight to working with images is that although images can live in very high dimensional spaces (with D ≈ 10^6 quite common), we do not have to learn a transformation with that many parameters. Let x be an image and H·k(x), k ∈ [1, d], be the d tangent images. Here we assume that each pixel in H·k(x) can be computed based only on the information in an s × s patch centered on the corresponding pixel in x.
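Under the per-patch assumption just stated, each pixel of a tangent image is a single dot product between a learned filter and the patch around that pixel. A minimal sketch (shapes and the naive double loop are illustrative; the paper implements this as f convolutions):

```python
import numpy as np

def tangent_images(x, Theta, s):
    """Per-patch tangent computation, as in the translation example.

    x     : (H, W) image
    Theta : (d, s*s) learned per-patch linear map (here F^i = X^i, no RBFs)
    Returns d tangent images of shape (H - s + 1, W - s + 1) (valid region).
    """
    d = Theta.shape[0]
    H, W = x.shape
    out = np.zeros((d, H - s + 1, W - s + 1))
    for i in range(H - s + 1):
        for j in range(W - s + 1):
            patch = x[i:i + s, j:j + s].ravel()   # s*s pixels around (i, j)
            out[:, i, j] = Theta @ patch          # one dot product per tangent
    return out
```

With the rows of Θ set to discrete derivative filters, the tangent images approximate image gradients, which matches the observation below that the learned filters resemble derivatives of Gaussians.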
Thus, instead of learning a function R^D → R^{D×d}, we learn a function R^{s²} → R^d, and to compute H we apply the per-patch function at each of the D locations in the image. The resulting technique scales independently of D; in fact, different sized images can be used. The per-patch assumption is not always suitable, most notably for transformations that are based only on image coordinates and are independent of appearance. The approach of Section 3 needs to be slightly modified to accommodate patches. We rewrite each image x^i ∈ R^D as an s² × D matrix X^i, where each row contains the pixels from one patch in x^i (in training we sub-sample patches). Patches from all the images are clustered to obtain the f RBFs; each X^i is then transformed to an f × D matrix F^i that contains the features computed for each patch. The per-patch linear model can now be written as Hθ(x^i) = (ΘF^i)^⊤, where Θ is a d × f matrix (compare with the D different Θs needed without the patch assumption). The error function, which is minimized in a similar way [9], becomes:

error_img(Θ) = min_{E^i} Σ_{i=1}^n ‖ F^{i⊤} Θ^⊤ E^i − U^i Σ^i ‖²_F    (5)

We begin with the illustrative example of translation (Fig. 4). Here, RBFs were not used; instead F^i = X^i. The learned Θ is a 2 × s² matrix, which can be visualized as two s × s images as in Fig. 4(b). These resemble derivative-of-Gaussian filters, which are in fact the infinitesimal generators for translation [16]. Computing the dot product of each column of Θ with each patch can be done using a convolution. Fig. 4 shows applications of the learned transformations, which resemble translations with some artifacts. Fig. 5 shows the application of LSML to learning out-of-plane rotation of a teapot. On a problem of this size, training LSML (in MATLAB) takes a few minutes; convergence occurs within about 10 iterations of the minimization procedure. Hθ(x) for a novel x can be computed with f convolutions (to compute cross-correlation) and is also fast. The outer frames in Fig.
5 highlight a limitation of the approach: with every successive step, error is introduced; eventually significant error can accumulate. Here, we used a step size which gives roughly 10 interpolated frames between each pair of original frames. With out-of-plane rotation, information must be created and the problem becomes ambiguous (multiple manifolds can intersect at a single point); hence generalization across images is not expected to be good. In Fig. 6, results are shown on an eye manifold with 2 degrees of freedom. LSML was trained on sparse data from video of a single eye; Hθ was used to synthesize views within and also well outside the support of the original data (cf. Fig. 6(c)). In Fig. 6(d), we applied the transformation learned from one person's eye to a single image of another person's eye (taken under the same imaging conditions). LSML was able to start from the novel test image and generate a convincing series of

Figure 5: Manifold generated by out-of-plane rotation of a teapot (data from [24], sub-sampled and smoothed). Here, d = 1, f = 400, and roughly 3000 patches of width s = 13 were sampled from 30 frames. The bottom row shows the ground truth images; the dashed box contains 3 of the 30 training images, representing ∼8° of physical rotation. The top row shows the learned transformation applied to the central image. By observing the tip, the handle, and the two white blobs on the teapot, and comparing to ground truth data, we can observe the quality of the learned transformation on seen data (b) and unseen data (d), both starting from a single frame (c). The outermost figures (a)(e) show failure for large rotations.

Figure 6: Traversing the eye manifold. LSML was trained on one eye moving along five different lines (3 vertical and 2 horizontal). Here d = 2, f = 600, s = 19, and around 5000 patches were sampled; 2 frames were considered neighbors if they were adjacent in time. Figure (a) shows images generated from the central image. The inner 8 frames lie just outside the support of the training data (not shown); the outer 8 are extrapolated beyond its support. Figure (b) details Hθ(x) for two images in a warping sequence: a linear combination can lead the iris/eyelid to move in different directions (e.g. the sum would make the iris go up). Figure (c) shows extrapolation far beyond the training data, i.e.
an eye wide open and fully closed. Finally, Figure (d) shows how the eye manifold learned on one eye can be applied to a novel eye not seen during training.

transformations. Thus, motion transfer was possible: Hθ trained on one series of images generalized to a different set of images.

6 Conclusion

In this work we presented an algorithm, Locally Smooth Manifold Learning, for learning the structure of a manifold. Rather than posing manifold learning as the problem of recovering an embedding, we posed the problem in terms of learning a warping function W for traversing the manifold. Smoothness assumptions on W allowed us to generalize to unseen data. Proposed uses of LSML include tangent distance estimation, frame-rate up-conversion, video compression, and motion transfer. We are currently engaged in scaling the implementation to handle large datasets; the goal is to integrate LSML into recognition systems to provide increased invariance to transformations.

Acknowledgements

This work was funded by the following grants and organizations: NSF Career Grant #0448615, the Alfred P. Sloan Research Fellowship, NSF IGERT Grant DGE-0333451, and UCSD Division of Calit2. We would like to thank Sameer Agarwal, Kristin Branson, Matt Tong, and Neel Joshi for valuable input and Anna Shemorry for helping us make it through the deadline.

References

[1] E. Di Bernardo, L. Goncalves, and P. Perona. US Patent 6,552,729: Automatic generation of animation of synthetic characters, 2003.
[2] Y. Bengio and M. Monperrus. Non-local manifold tangent learning. In NIPS, 2005.
[3] Y. Bengio, J.-F. Paiement, P. Vincent, O. Delalleau, N. Le Roux, and M. Ouimet. Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering. In NIPS, 2004.
[4] D. Beymer and T. Poggio. Face recognition from one example view. In ICCV, page 500, Washington, DC, USA, 1995. IEEE Computer Society.
[5] Volker Blanz and Thomas Vetter. Face recognition based on fitting a 3D morphable model.
PAMI, 25(9):1063–1074, 2003.
[6] M. Brand. Charting a manifold. In NIPS, 2003.
[7] Christoph Bregler, Michele Covell, and Malcolm Slaney. Video Rewrite: driving visual speech with audio. In SIGGRAPH, pages 353–360, 1997.
[8] E. Chuang, H. Deshpande, and C. Bregler. Facial expression space learning. In Pacific Graphics, 2002.
[9] P. Dollár, V. Rabaud, and S. Belongie. Learning to traverse image manifolds. Technical Report CS20070876, UCSD CSE, Jan. 2007.
[10] G. J. Edwards, T. F. Cootes, and C. J. Taylor. Face recognition using active appearance models. In ECCV, 1998.
[11] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
[12] N. Jojic, B. Frey, and A. Kannan. Epitomic analysis of appearance and shape. In ICCV, 2003.
[13] D. Keysers, W. Macherey, J. Dahmen, and H. Ney. Learning of variability for invariant statistical pattern recognition. In ECML, 2001.
[14] T. Moriyama, T. Kanade, J. Xiao, and J. F. Cohn. Meticulously detailed eye region model. PAMI, 2006.
[15] H. Murase and S. K. Nayar. Visual learning and recognition of 3D objects from appearance. IJCV, 1995.
[16] R. Rao and D. Ruderman. Learning Lie groups for invariant visual perception. In NIPS, 1999.
[17] L. K. Saul and S. T. Roweis. Think globally, fit locally: unsupervised learning of low dimensional manifolds. JMLR, 2003.
[18] A. Schödl, R. Szeliski, D. H. Salesin, and I. Essa. Video textures. In SIGGRAPH, 2000.
[19] B. Schölkopf, A. Smola, and K. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 1998.
[20] P. Simard, Y. LeCun, and J. S. Denker. Efficient pattern recognition using a new transformation distance. In NIPS, 1993.
[21] P. Simard, Y. LeCun, J. S. Denker, and B. Victorri. Transformation invariance in pattern recognition: tangent distance and tangent propagation. In Neural Networks: Tricks of the Trade, 1998.
[22] J. B. Tenenbaum, V. de Silva, and J. C. Langford.
A global geometric framework for nonlinear dimensionality reduction. Science, 290, 2000.
[23] Joshua B. Tenenbaum and William T. Freeman. Separating style and content with bilinear models. Neural Computation, 12(6):1247–1283, 2000.
[24] K. Q. Weinberger and L. K. Saul. Unsupervised learning of image manifolds by semidefinite programming. In CVPR, 2004.
[25] Z. Zhang and H. Zha. Local linear smoothing for nonlinear manifold learning. Technical report, 2003.
Online Classification for Complex Problems Using Simultaneous Projections

Yonatan Amit(1), Shai Shalev-Shwartz(1), Yoram Singer(1,2)
(1) School of Computer Sci. & Eng., The Hebrew University, Jerusalem 91904, Israel
(2) Google Inc., 1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA
{mitmit,shais,singer}@cs.huji.ac.il

Abstract

We describe and analyze an algorithmic framework for online classification in which each online trial consists of multiple prediction tasks that are tied together. We tackle the problem of updating the online hypothesis by defining a projection problem in which each prediction task corresponds to a single linear constraint. These constraints are tied together through a single slack parameter. We then introduce a general method for approximately solving the problem by projecting simultaneously and independently on each constraint corresponding to a prediction sub-problem, and then averaging the individual solutions. We show that this approach constitutes a feasible, albeit not necessarily optimal, solution for the original projection problem. We derive concrete simultaneous projection schemes and analyze them in the mistake bound model. We demonstrate the power of the proposed algorithm in experiments with online multiclass text categorization. Our experiments indicate that a combination of class-dependent features with the simultaneous projection method outperforms previously studied algorithms.

1 Introduction

In this paper we discuss and analyze a framework for devising efficient online learning algorithms for complex prediction problems such as multiclass categorization. In the settings we cover, a complex prediction problem is cast as the task of simultaneously coping with multiple simplified sub-problems which are nonetheless tied together. For example, in multiclass categorization, the task is to predict a single label out of k possible outcomes.
Our simultaneous projection approach is based on the fact that we can retrospectively (after making a prediction) cast the problem as the task of making k − 1 binary decisions, each of which involves the correct label and one of the competing labels. The performance of the k − 1 predictions is measured through a single loss. Our approach stands in contrast to previously studied methods, which can roughly be partitioned into three paradigms. The first, and probably the simplest, previously studied approach is to break the problem into multiple decoupled problems that are solved independently. Such an approach was used, for instance, by Weston and Watkins [1] for batch learning of multiclass support vector machines. The simplicity of this approach also underscores its deficiency, as it is detached from the original loss of the complex decision problem. The second approach maintains the original structure of the problem but focuses on a single, worst-performing, derived sub-problem (see for instance [2]). While this approach adheres to the original structure of the problem, the resulting update mechanism is by construction sub-optimal, as it disregards almost all of the constraints imposed by the complex prediction problem. (See also [6] for analysis and explanation of the sub-optimality of this approach.) The third approach for dealing with complex problems is to tailor a specific, efficient solution for the problem at hand. While this approach has yielded efficient learning algorithms for multiclass categorization problems [2] and aesthetic solutions for structured output problems [3, 4], devising these algorithms required dedicated efforts. Moreover, tailored solutions typically impose rather restrictive assumptions on the representation of the data in order to yield efficient algorithmic solutions. In contrast to previously studied approaches, we propose a simple, general, and efficient framework for online learning of a wide variety of complex problems.
We do so by casting the online update task as an optimization problem in which the newly devised hypothesis is required to be similar to the current hypothesis while attaining a small loss on multiple binary prediction problems. Casting the online learning task as a sequence of instantaneous optimization problems was first suggested and analyzed by Kivinen and Warmuth [12] for binary classification and regression problems. In our optimization-based approach, the complex decision problem is cast as an optimization problem that consists of multiple linear constraints, each of which represents a simplified sub-problem. These constraints are tied through a single slack variable whose role is to assess the overall prediction quality for the complex problem. We describe and analyze a family of two-phase algorithms. In the first phase, the algorithms simultaneously solve multiple sub-problems. Each sub-problem distills to an optimization problem with a single linear constraint from the original multiple-constraints problem. The simple structure of each single-constraint problem results in an analytical solution which is efficiently computable. In the second phase, the algorithms take a convex combination of the independent solutions to obtain a solution for the multiple-constraints problem. The end result is an approach whose time complexity and mistake bounds are equivalent to those of approaches which deal solely with the worst-violating constraint [9]. In practice, though, the performance of the simultaneous projection framework is much better than that of single-constraint update schemes.

2 Problem Setting

In this section we introduce the notation used throughout the paper and formally describe our problem setting. We denote vectors by lower case bold face letters (e.g. x and ω), where the j'th element of x is denoted by x_j. We denote matrices by upper case bold face letters (e.g. X), where the j'th row of X is denoted by x_j. The set of integers {1, . . . , k} is denoted by [k].
Finally, we use the hinge function [a]_+ = max{0, a}. Online learning is performed in a sequence of trials. On trial t the algorithm receives a matrix X^t of size k_t × n, where each row of X^t is an instance, and is required to make a prediction on the label associated with each instance. We denote the vector of predicted labels by ŷ^t. We allow ŷ^t_j to take any value in R, where the actual label being predicted is sign(ŷ^t_j) and |ŷ^t_j| is the confidence in the prediction. After making a prediction ŷ^t, the algorithm receives the correct labels y^t, where y^t_j ∈ {−1, 1} for all j ∈ [k_t]. In this paper we assume that the predictions on each trial are formed by computing the inner product between a weight vector ω^t ∈ R^n and each instance in X^t; thus ŷ^t = X^t ω^t. Our goal is to perfectly predict the entire vector y^t. We thus say that the vector ŷ^t was imperfectly predicted if there exists an outcome j such that y^t_j ≠ sign(ŷ^t_j). That is, we suffer a unit loss on trial t if there exists j such that sign(ŷ^t_j) ≠ y^t_j. Directly minimizing this combinatorial error is a computationally difficult task. Therefore, we use an adaptation of the hinge-loss, defined as

ℓ(ŷ^t, y^t) = max_{j∈[k_t]} [1 − y^t_j ŷ^t_j]_+ ,

as a proxy for the combinatorial error. The quantity y^t_j ŷ^t_j is often referred to as the (signed) margin of the prediction and ties together the correctness and the confidence of the prediction. We use ℓ(ω; (X^t, y^t)) to denote ℓ(ŷ^t, y^t) where ŷ^t = X^t ω. We also denote the set of instances whose labels were predicted incorrectly by M^t = {j | sign(ŷ^t_j) ≠ y^t_j}, and similarly the set of instances whose hinge-losses are greater than zero by Γ^t = {j | [1 − y^t_j ŷ^t_j]_+ > 0}.

3 Derived Problems

In this section we further explore the motivation for our problem setting by describing two different complex decision tasks and showing how they can be cast as special cases of our setting. We also note that our approach can be employed in other prediction problems (see Sec.
7).

Multilabel Categorization. In the multilabel categorization task, each instance is associated with a set of relevant labels from the set [k]. The multilabel categorization task can be cast as a special case of a ranking task in which the goal is to rank the relevant labels above the irrelevant ones. Many learning algorithms for this task employ class-dependent features (for example, see [7]). For simplicity, assume that each class is associated with n features and denote by φ(x, r) the feature vector for class r. We note that features obtained for different classes typically relay different information and are often substantially different.

Figure 1: Illustration of the simultaneous projections algorithm: each instance casts a constraint y^t_j (ω · x^t_j) ≥ 1 on ω, and each such constraint defines a halfspace of feasible solutions. We project on each halfspace in parallel, and the new vector ω^{t+1} is a weighted average of these projections.

A categorizer, or label ranker, is based on a weight vector ω. A vector ω induces a score ω · φ(x, r) for each class which, in turn, defines an ordering of the classes. A learner is required to build a vector ω that successfully ranks the labels according to their relevance, namely, for each pair of classes (r, s) such that r is relevant while s is not, the class r should be ranked higher than the class s. Thus we require that ω · φ(x, r) > ω · φ(x, s) for every such pair (r, s). We say that a label ranking is imperfect if there exists any pair (r, s) which violates this requirement. The loss associated with each such violation is [1 − (ω · φ(x, r) − ω · φ(x, s))]_+, and the loss of the categorizer is defined as the maximum over the losses induced by the violated pairs. In order to map the problem to our setting, we define a virtual instance for every pair (r, s) such that r is relevant and s is not. The new instance is the n-dimensional vector defined by φ(x, r) − φ(x, s).
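The virtual-instance construction for multilabel categorization can be sketched directly. The helper names below are hypothetical; `phi(r)` stands in for the class-dependent feature map φ(x, r), and every virtual instance receives label +1, since each pair constraint asks for a positive margin.

```python
import numpy as np

def multilabel_to_binary(phi, relevant, k):
    """Reduce a multilabel example to the paper's binary setting.

    phi(r) -> (n,) class-dependent feature vector for class r.
    `relevant` is the set of relevant labels out of [k].  One virtual
    instance phi(x, r) - phi(x, s) is created for every pair with r
    relevant and s irrelevant; all virtual labels are +1.
    """
    rows, labels = [], []
    for r in relevant:
        for s in range(k):
            if s not in relevant:
                rows.append(phi(r) - phi(s))  # virtual instance for pair (r, s)
                labels.append(1)              # correct ranking = positive margin
    return np.array(rows), np.array(labels)
```

A misranked pair then shows up as a misclassified virtual instance, so the multilabel loss and the binary loss of the derived problem coincide, as the text notes.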
The label associated with each of these instances is set to 1. It is clear that an imperfect categorizer makes a prediction mistake on at least one of the instances, and that the losses defined by the two problems are the same.

Ordinal Regression. In the problem of ordinal regression, an instance x is a vector of n features that is associated with a target rank y ∈ [k]. A learning algorithm is required to find a vector ω and k thresholds b_1 ≤ · · · ≤ b_{k−1} ≤ b_k = ∞. The value ω · x provides a score from which the prediction is defined as the smallest index i for which ω · x < b_i, that is, ŷ = min {i | ω · x < b_i}. In order to obtain a correct prediction, an ordinal regressor is required to ensure that ω · x ≥ b_i for all i < y and that ω · x < b_i for all i ≥ y. It is considered a prediction mistake if any of these constraints is violated. In order to map the ordinal regression task to our setting, we introduce k − 1 instances. Each instance is a vector in R^{n+k−1}. The first n entries of the vector are set to the elements of x; the remaining k − 1 entries are set to −δ_{i,j}, that is, the i'th of these entries in the j'th vector is set to −1 if i = j and to 0 otherwise. The labels of the first y − 1 instances are 1, while the remaining k − y instances are labeled −1. Once we have learned an expanded vector in R^{n+k−1}, the regressor ω is obtained by taking the first n components of the expanded vector, and the thresholds b_1, . . . , b_{k−1} are set to the last k − 1 elements. A prediction mistake on any of the instances corresponds to an incorrect rank in the original problem.

4 Simultaneous Projection Algorithms

Recall that on trial t the algorithm receives a matrix X^t of k_t instances and predicts ŷ^t = X^t ω^t. After making its prediction, the algorithm receives the corresponding labels y^t. Each instance-label pair casts a constraint on ω^t, namely y^t_j (ω^t · x^t_j) ≥ 1. If all the constraints are satisfied by ω^t, then ω^{t+1} is set to ω^t and the algorithm proceeds to the next trial.
Otherwise, we would like to set ω^{t+1} as close as possible to ω^t while satisfying all the constraints. Such an aggressive approach may be sensitive to outliers and over-fitting. Thus, we allow some of the constraints to remain violated by introducing a tradeoff between the change to ω^t and the loss attained on (X^t, y^t). Formally, we would like to set ω^{t+1} to be the solution of the following optimization problem,

min_{ω∈R^n} (1/2) ‖ω − ω^t‖² + C ℓ(ω; (X^t, y^t)) ,

where C is a tradeoff parameter. As we discuss below, this formalism effectively translates into a cap on the maximal change to ω^t. We rewrite the above optimization problem by introducing a single slack variable as follows:

min_{ω∈R^n, ξ≥0} (1/2) ‖ω − ω^t‖² + C ξ   s.t.  ∀j ∈ [k_t]: y^t_j (ω · x^t_j) ≥ 1 − ξ ,  ξ ≥ 0 .   (1)

We denote the objective function of Eq. (1) by P_t and refer to it as the instantaneous primal problem to be solved on trial t. The dual optimization problem of P_t is the maximization problem

max_{α^t_1,…,α^t_{k_t}}  Σ_{j=1}^{k_t} α^t_j − (1/2) ‖ω^t + Σ_{j=1}^{k_t} α^t_j y^t_j x^t_j‖²   s.t.  Σ_{j=1}^{k_t} α^t_j ≤ C ,  ∀j: α^t_j ≥ 0 .   (2)

Each dual variable corresponds to a single constraint of the primal problem. The minimizer of the primal problem is computed from the optimal dual solution as ω^{t+1} = ω^t + Σ_{j=1}^{k_t} α^t_j y^t_j x^t_j. Unfortunately, in the common case, where each x^t_j is in an arbitrary orientation, there does not exist an analytic solution for the dual problem (Eq. (2)). We tackle the problem by breaking it down into k_t reduced problems, each of which focuses on a single dual variable. Formally, for the j'th variable, the j'th reduced problem solves Eq. (2) while fixing α^t_{j'} = 0 for all j' ≠ j. Each reduced optimization problem amounts to the following problem,

max_{α^t_j}  α^t_j − (1/2) ‖ω^t + α^t_j y^t_j x^t_j‖²   s.t.  α^t_j ∈ [0, C] .   (3)

We next obtain an exact or approximate solution for each reduced problem as if it were independent of the rest.
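The reduced problem of Eq. (3) is a one-dimensional concave quadratic, so its maximizer is the unconstrained stationary point, (1 − y^t_j ω^t · x^t_j)/‖x^t_j‖², clipped to [0, C]; when the margin exceeds 1 this is simply 0, giving the closed form quoted later in the text, α^t_j = min{C, ℓ/‖x^t_j‖²}. A small sketch that also makes the objective checkable numerically:

```python
import numpy as np

def reduced_objective(alpha, w, x, y):
    """Objective of the single-constraint reduced problem (Eq. (3))."""
    v = w + alpha * y * x
    return alpha - 0.5 * np.dot(v, v)

def analytic_alpha(w, x, y, C):
    """Closed-form maximizer of Eq. (3).

    The unconstrained stationary point is the hinge loss divided by
    ||x||^2 (using y^2 = 1); clipping at 0 is absorbed by the hinge and
    clipping at C by the min.
    """
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    return min(C, loss / np.dot(x, x))
```

A quick sanity check is to compare `analytic_alpha` against a dense grid search over [0, C]; the analytic value should never be beaten.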
We then choose a distribution μ^t ∈ ∆_{k_t}, where ∆_{k_t} = {μ ∈ R^{k_t} : Σ_j μ_j = 1, μ_j ≥ 0} is the probability simplex, and multiply each α^t_j by the corresponding μ^t_j. Since μ^t ∈ ∆_{k_t}, this yields a feasible solution to the dual problem defined in Eq. (2), for the following reason: each μ^t_j α^t_j ≥ 0, and the fact that α^t_j ≤ C implies that Σ_{j=1}^{k_t} μ^t_j α^t_j ≤ C. Finally, the algorithm uses the combined solution and sets ω^{t+1} = ω^t + Σ_{j=1}^{k_t} μ^t_j α^t_j y^t_j x^t_j.

Figure 2: Simultaneous projections algorithm.
  Input: Aggressiveness parameter C > 0
  Initialize: ω^1 = (0, . . . , 0)
  For t = 1, 2, . . . , T:
    Receive instance matrix X^t ∈ R^{k_t×n}
    Predict ŷ^t = X^t ω^t
    Receive correct labels y^t
    Suffer loss ℓ(ω^t; (X^t, y^t))
    If ℓ > 0:
      Choose importance weights μ^t ∈ ∆_{k_t}
      Choose individual dual solutions α^t_j
      Update ω^{t+1} = ω^t + Σ_{j=1}^{k_t} μ^t_j α^t_j y^t_j x^t_j

We next present three schemes to obtain a solution for the reduced problem (Eq. (3)) and then combine the solutions into a single update.

Simultaneous Perceptron: The simplest of the update forms generalizes the famous Perceptron algorithm [8] by setting α^t_j to C if the j'th instance is incorrectly labeled, and to 0 otherwise. We similarly set the weight μ^t_j to 1/|M^t| for j ∈ M^t and to 0 otherwise. We abbreviate this scheme as the SimPerc algorithm.

Soft Simultaneous Projections: The soft simultaneous projections scheme uses the fact that each reduced problem has an analytic solution, yielding α^t_j = min{ C, ℓ(ω^t; (x^t_j, y^t_j)) / ‖x^t_j‖² }. We independently assign each α^t_j this optimal value. We then set μ^t_j to 1/|Γ^t| for j ∈ Γ^t and to 0 otherwise. We note that this solution may update α^t_j even for instances which were correctly classified, as long as the margin they attain is not sufficiently large. We abbreviate this scheme as the SimProj algorithm.

Conservative Simultaneous Projections: Combining ideas from both methods, the conservative simultaneous projections scheme sets α^t_j optimally according to the analytic solution.
The difference from the SimProj algorithm lies in the selection of μ^t. In the conservative scheme, only the instances which were incorrectly predicted (j ∈ M^t) are assigned a positive weight. Put differently, μ^t_j is set to 1/|M^t| for j ∈ M^t and to 0 otherwise. We abbreviate this scheme as the ConProj algorithm. To recap, on each trial t we obtain a feasible solution for the instantaneous dual given in Eq. (2). This solution combines the independently calculated α^t_j according to a weight vector μ^t ∈ ∆_{k_t}. While this solution may not be optimal, it does constitute an infrastructure for obtaining a mistake bound and, as we demonstrate in Sec. 6, performs well in practice.

5 Analysis

The algorithms described in the previous section perform updates in order to increase the instantaneous dual problem defined in Eq. (2). We now use the mistake bound model to derive an upper bound on the number of trials on which the predictions of the SimPerc and ConProj algorithms are imperfect. Following [6], the first step in the analysis is to tie the instantaneous dual problems to a global loss function. To do so, we introduce a primal optimization problem defined over the entire sequence of examples as follows,

min_{ω∈R^n} (1/2) ‖ω‖² + C Σ_{t=1}^T ℓ(ω; (X^t, y^t)) .

We rewrite this optimization problem as the following equivalent constrained optimization problem,

min_{ω∈R^n, ξ∈R^T} (1/2) ‖ω‖² + C Σ_{t=1}^T ξ_t   s.t.  ∀t ∈ [T], ∀j ∈ [k_t]: y^t_j (ω · x^t_j) ≥ 1 − ξ_t ,  ∀t: ξ_t ≥ 0 .   (4)

We denote the value of the objective function at (ω, ξ) for this optimization problem by P(ω, ξ). A competitor who may see the entire sequence of examples in advance may in particular set (ω, ξ) to be the minimizer of the problem, which we denote by (ω⋆, ξ⋆). Standard usage of Lagrange multipliers yields that the dual of Eq. (4) is

max_λ  Σ_{t=1}^T Σ_{j=1}^{k_t} λ_{t,j} − (1/2) ‖Σ_{t=1}^T Σ_{j=1}^{k_t} λ_{t,j} y^t_j x^t_j‖²   s.t.  ∀t: Σ_{j=1}^{k_t} λ_{t,j} ≤ C ,  ∀t, j: λ_{t,j} ≥ 0 .   (5)

We denote the value of the objective function of Eq.
(5) by D(λ_1, . . . , λ_T), where each λ_t is a vector in R^{k_t}. Throughout our derivation we use the fact that any set of dual variables λ_1, . . . , λ_T defines a feasible primal solution ω = Σ_{t=1}^T Σ_{j=1}^{k_t} λ_{t,j} y^t_j x^t_j with a corresponding assignment of the slack variables. Clearly, the optimization problem given by Eq. (5) depends on all the examples from the first trial through time step T and thus can only be solved in hindsight. We note, however, that if we ensure that λ_{s,j} = 0 for all s > t, then the dual function no longer depends on instances occurring after round t. As we show next, we use this primal-dual view to derive the skeleton algorithm of Fig. 2 by finding a new feasible solution for the dual problem on every trial. Formally, the instantaneous dual problem, given by Eq. (2), is equivalent (after omitting an additive constant) to the following constrained optimization problem,

max_λ  D(λ_1, . . . , λ_{t−1}, λ, 0, . . . , 0)   s.t.  λ ≥ 0 ,  Σ_{j=1}^{k_t} λ_j ≤ C .   (6)

That is, the instantaneous dual problem is obtained from D(λ_1, . . . , λ_T) by fixing λ_1, . . . , λ_{t−1} to the values set in previous rounds, forcing λ_{t+1} through λ_T to be the zero vectors, and choosing a feasible vector for λ_t. Given the set of dual variables λ_1, . . . , λ_{t−1}, it is straightforward to show that the prediction vector used on trial t is ω^t = Σ_{s=1}^{t−1} Σ_j λ_{s,j} y^s_j x^s_j. Equipped with these relations, and omitting constants which do not depend on λ_t, Eq. (6) can be rewritten as

max_{λ_1,…,λ_{k_t}}  Σ_{j=1}^{k_t} λ_j − (1/2) ‖ω^t + Σ_{j=1}^{k_t} λ_j y^t_j x^t_j‖²   s.t.  ∀j: λ_j ≥ 0 ,  Σ_{j=1}^{k_t} λ_j ≤ C .   (7)

The problems defined by Eq. (7) and Eq. (2) are equivalent. Thus, weighing the variables α^t_1, . . . , α^t_{k_t} by μ^t_1, . . . , μ^t_{k_t} also yields a feasible solution for the problem defined in Eq. (6), namely λ_{t,j} = μ^t_j α^t_j. We now tie all of these observations together by using the weak-duality theorem. Our first bound is given for the SimPerc algorithm.

Theorem 1. Let (X^1, y^1), . . .
$(X^T, y^T)$ be a sequence of examples where $X^t$ is a matrix of $k_t$ examples and $y^t$ are the associated labels. Assume that for all $t$ and $j$ the norm of an instance $x^t_j$ is at most $R$. Then, for any $\omega^\star \in \mathbb{R}^n$, the number of trials on which the prediction of SimPerc is imperfect is at most,

$\dfrac{\tfrac{1}{2}\|\omega^\star\|^2 + C \sum_{t=1}^{T} \ell(\omega^\star; (X^t, y^t))}{C - \tfrac{1}{2} C^2 R^2}.$

Proof. To prove the theorem we make use of the weak-duality theorem. Recall that any dual feasible solution induces a value for the dual's objective function which is upper bounded by the optimum value of the primal problem, $P(\omega^\star, \xi^\star)$. In particular, the solution obtained at the end of trial $T$ is dual feasible, and thus $D(\lambda_1, \dots, \lambda_T) \le P(\omega^\star, \xi^\star)$. We now rewrite the left hand-side of the above equation as the following sum,

$D(0, \dots, 0) + \sum_{t=1}^{T} \big( D(\lambda_1, \dots, \lambda_t, 0, \dots, 0) - D(\lambda_1, \dots, \lambda_{t-1}, 0, \dots, 0) \big). \qquad (8)$

Note that $D(0, \dots, 0)$ equals $0$. Therefore, denoting by $\Delta_t$ the difference of two consecutive dual objective values, $D(\lambda_1, \dots, \lambda_t, 0, \dots, 0) - D(\lambda_1, \dots, \lambda_{t-1}, 0, \dots, 0)$, we get that $\sum_{t=1}^{T} \Delta_t \le P(\omega^\star, \xi^\star)$. We now turn to bounding $\Delta_t$ from below. First, note that if the prediction on trial $t$ is perfect ($M^t = \emptyset$), then SimPerc sets $\lambda_t$ to the zero vector and thus $\Delta_t = 0$. We can thus focus on trials for which the algorithm's prediction is imperfect. We remind the reader that by unraveling the update of $\omega^t$ we get that $\omega^t = \sum_{s<t} \sum_{j=1}^{k_s} \lambda_{s,j}\, y^s_j\, x^s_j$. We now rewrite $\Delta_t$ as follows,

$\Delta_t = \sum_{j=1}^{k_t} \lambda_{t,j} - \frac{1}{2} \Big\| \omega^t + \sum_{j=1}^{k_t} \lambda_{t,j}\, y^t_j\, x^t_j \Big\|^2 + \frac{1}{2} \|\omega^t\|^2. \qquad (9)$

By construction, $\lambda_{t,j} = \mu^t_j \alpha^t_j$ and $\sum_{j=1}^{k_t} \mu^t_j = 1$, which lets us further expand Eq. (9) and write,

$\Delta_t = \sum_{j=1}^{k_t} \mu^t_j \alpha^t_j - \frac{1}{2} \Big\| \omega^t + \sum_{j=1}^{k_t} \mu^t_j \alpha^t_j\, y^t_j\, x^t_j \Big\|^2 + \frac{1}{2} \sum_{j=1}^{k_t} \mu^t_j \|\omega^t\|^2.$

The squared norm $\|\cdot\|^2$ is a convex function of its vector argument, which yields the following lower bound on $\Delta_t$,

$\Delta_t \ge \sum_{j=1}^{k_t} \mu^t_j \Big( \alpha^t_j - \frac{1}{2} \big\| \omega^t + \alpha^t_j\, y^t_j\, x^t_j \big\|^2 + \frac{1}{2} \|\omega^t\|^2 \Big).$
(10)

The SimPerc algorithm sets $\mu^t_j$ to be $1/|M^t|$ for all $j \in M^t$ and to be $0$ otherwise. Furthermore, for all $j \in M^t$, $\alpha^t_j$ is set to $C$. Thus, the right hand-side of Eq. (10) can be further simplified and written as,

$\Delta_t \ge \sum_{j \in M^t} \mu^t_j \Big( C - \frac{1}{2} \big\| \omega^t + C y^t_j x^t_j \big\|^2 + \frac{1}{2} \|\omega^t\|^2 \Big).$

We expand the norm in the above equation and obtain that,

$\Delta_t \ge \sum_{j \in M^t} \mu^t_j \Big( C - \frac{1}{2} \|\omega^t\|^2 - C y^t_j\, \omega^t \cdot x^t_j - \frac{1}{2} C^2 \big\| y^t_j x^t_j \big\|^2 + \frac{1}{2} \|\omega^t\|^2 \Big). \qquad (11)$

The set $M^t$ consists of indices of instances which were incorrectly classified. Thus, $y^t_j (\omega^t \cdot x^t_j) \le 0$ for every $j \in M^t$. Therefore, $\Delta_t$ can further be bounded from below as follows,

$\Delta_t \ge \sum_{j \in M^t} \mu^t_j \Big( C - \frac{1}{2} C^2 \big\| y^t_j x^t_j \big\|^2 \Big) \ge \sum_{j \in M^t} \mu^t_j \Big( C - \frac{1}{2} C^2 R^2 \Big) = C - \frac{1}{2} C^2 R^2, \qquad (12)$

where for the second inequality we used the fact that the norm of all the instances is bounded by $R$. To recap, we have shown that on trials for which the prediction is imperfect $\Delta_t \ge C - \frac{1}{2} C^2 R^2$, while on perfect trials where no mistake is made $\Delta_t = 0$. Putting all the inequalities together we obtain the following bound,

$\Big( C - \frac{1}{2} C^2 R^2 \Big)\, \epsilon \;\le\; \sum_{t=1}^{T} \Delta_t \;=\; D(\lambda_1, \dots, \lambda_T) \;\le\; P(\omega^\star, \xi^\star), \qquad (13)$

where $\epsilon$ is the number of imperfect trials. Finally, rewriting $P(\omega^\star, \xi^\star)$ as $\frac{1}{2}\|\omega^\star\|^2 + C \sum_{t=1}^{T} \ell(\omega^\star; (X^t, y^t))$ yields the bound stated in the theorem.

The ConProj algorithm updates the same set of dual variables as the SimPerc algorithm, but selects $\alpha^t_j$ to be the optimal solution of Eq. (3). Thus, the value of $\Delta_t$ attained by the ConProj algorithm is never lower than the value attained by the SimPerc algorithm. The following corollary is a direct consequence of this observation.

Corollary 1. Under the same conditions of Thm. 1 and for any $\omega^\star \in \mathbb{R}^n$, the number of trials on which the prediction of ConProj is imperfect is at most,

$\dfrac{\tfrac{1}{2}\|\omega^\star\|^2 + C \sum_{t=1}^{T} \ell(\omega^\star; (X^t, y^t))}{C - \tfrac{1}{2} C^2 R^2}.$
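The SimPerc update analyzed above is simple enough to sketch in a few lines. The following is a minimal illustration with our own variable names, not the authors' code; it applies the setting $\alpha^t_j = C$, $\mu^t_j = 1/|M^t|$ for the mistaken instances described in the text:

```python
import numpy as np

def simperc_update(w, X, y, C):
    """One SimPerc trial (a sketch): each incorrectly predicted instance j
    gets alpha_j = C and weight mu_j = 1/|M_t|; correctly predicted
    instances get mu_j = 0, so a perfect trial leaves w unchanged."""
    margins = y * (X @ w)              # y_j (w . x_j) for each row j of X
    mistakes = np.where(margins <= 0)[0]
    if len(mistakes) == 0:             # perfect trial: no update
        return w
    mu = 1.0 / len(mistakes)
    for j in mistakes:                 # lambda_{t,j} = mu_j * alpha_j = C/|M_t|
        w = w + mu * C * y[j] * X[j]
    return w

# toy trial: 3 instances in R^2, all labels +1, starting from w = 0
w = simperc_update(np.zeros(2),
                   np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]),
                   np.array([1.0, 1.0, 1.0]), C=1.0)
```

With $w = 0$ every margin is zero, so all three instances count as mistakes and the update averages the three perceptron steps.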
username    | k   | m    | SimProj | ConProj | SimPerc | Max-SP | Max-MP | Mira
beck-s      | 101 | 1973 | 50.0    | 55.2    | 55.9    | 56.6   | 63.8   | 63.7
farmer-d    | 25  | 3674 | 27.4    | 30.3    | 30.7    | 30.0   | 28.6   | 31.8
kaminski-v  | 41  | 4479 | 43.1    | 47.8    | 47.0    | 49.5   | 49.6   | 47.3
kitchen-l   | 47  | 4017 | 42.9    | 47.0    | 49.0    | 48.0   | 54.9   | 52.6
lokay-m     | 11  | 2491 | 18.8    | 25.3    | 25.3    | 23.0   | 25.4   | 25.3
sanders-r   | 30  | 1190 | 20.7    | 25.6    | 23.2    | 23.8   | 36.3   | 34.1
williams-w3 | 18  | 2771 | 4.2     | 5.0     | 5.4     | 4.2    | 5.8    | 5.9

Table 1: The percentage of online mistakes of the three variants compared to Max-Update (single prototype (SP) and multi prototype (MP)) and the Mira algorithm. Experiments were performed on seven users of the Enron data set.

Note that the predictions of the SimPerc algorithm do not depend on the specific value of $C$. Thus, for $R = 1$ and an optimal choice of $C$, the bound attained in Thm. 1 becomes

$\ell(\omega^\star; (X^t, y^t)) + \tfrac{1}{2}\|\omega^\star\|^2 + \tfrac{1}{2}\sqrt{\|\omega^\star\|^4 + \|\omega^\star\|^2\, \ell(\omega^\star; (X^t, y^t))}.$

We omit the proof for lack of space; see [6] for a closely related analysis. We conclude this section with a few closing words about the SimProj variant. The SimPerc and ConProj algorithms ensure a minimal increase in the dual by focusing solely on classification errors and ignoring margin errors. While this approach ensures a sufficient increase of the dual, in practice it appears to be a double-edged sword, as the SimProj algorithm performs better empirically. This superior empirical performance can be motivated by a refined derivation of the optimal choice for $\mu$. This derivation will be provided in a long version of this manuscript.

6 Experiments

In this section we describe experimental results that demonstrate some of the merits of our algorithms. We tested the performance of the three variants described in Sec. 4 on a multiclass categorization task and compared them to previously studied algorithms for multiclass categorization: the single-prototype and multi-prototype Max-Update algorithms from [9] and the Mira algorithm [2].
The experiments were performed on the task of email classification using the Enron email dataset (available at http://www.cs.cmu.edu/~enron/enron_mail_030204.tar.gz). The learning goal was to correctly classify email messages into user-defined folders. Thus, the instances in this dataset are email messages, while the set of classes are the user-defined folders, denoted $\{1, \dots, k\}$. We ran the experiments on the sequences of email messages from 7 different users. Since each user employs different criteria for email classification, we treated each person as a separate online learning problem. We represented each email message as a vector with a component for every word in the corpus. On each trial, and for each class $r$, we constructed class-dependent vectors as follows. We set $\phi_j(x^t, r)$ to twice the number of times the $j$'th word appeared in the message if it had also appeared in a fifth of the messages previously assigned to folder $r$. Similarly, we set $\phi_j(x^t, r)$ to minus the number of appearances of the word if it had appeared in less than 2 percent of the previous messages. In all other cases, we set $\phi_j(x^t, r)$ to 0. This class-dependent construction is closely related to the construction given in [10]. Next, we employed the mapping described in Sec. 3 and defined a set of $k - 1$ instances for each message as follows. Denote the relevant class by $r$; then for every irrelevant class $s \ne r$, we define an instance $x^t_s = \phi(x^t, r) - \phi(x^t, s)$ and set its label to 1. All these instances were combined into a single matrix $X^t$ and were provided to the algorithm on trial $t$. The results of the experiments are summarized in Table 1. It is apparent that the SimProj algorithm outperforms all the other algorithms. The performances of SimPerc and ConProj are comparable, with no obvious winner. It is worth noting that the Mira algorithm finds the optimum of a projection problem on each trial, while our algorithms only find an approximate solution.
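The class-dependent feature construction can be transcribed directly. The sketch below is our reading of the thresholds in the text (whether "a fifth" is inclusive, and that the 2-percent cutoff is taken over the same folder's previous messages, are our assumptions), not the authors' code:

```python
def phi(word_counts, r, doc_freq, n_prev):
    """Class-dependent feature map for one message (illustrative sketch).

    word_counts: dict word -> count in the current message.
    doc_freq[r]: dict word -> number of previous messages in folder r
                 containing the word; n_prev[r] is that folder's size.
    """
    feats = {}
    for word, count in word_counts.items():
        seen = doc_freq.get(r, {}).get(word, 0)
        total = n_prev.get(r, 0)
        if total > 0 and seen >= total / 5.0:
            feats[word] = 2.0 * count     # appeared in a fifth of folder r's messages
        elif total > 0 and seen < 0.02 * total:
            feats[word] = -float(count)   # appeared in < 2% of previous messages
        else:
            feats[word] = 0.0
    return feats

# tiny illustration: folder "r" has 10 previous messages
feats = phi({"meeting": 3, "rare": 2}, "r",
            {"r": {"meeting": 5, "rare": 0}}, {"r": 10})
```

Instances $x^t_s = \phi(x^t, r) - \phi(x^t, s)$ are then formed by differencing these per-class vectors.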
However, Mira employs a different approach in which there is a single input instance (instead of the set $X^t$) and multiple predictors are constructed (instead of a single vector $\omega$). Thus, Mira employs a larger hypothesis space, which is more difficult to learn in online settings. In addition, by employing a single-vector representation of the email message, Mira cannot benefit from feature selection that yields class-dependent features. It is also apparent that the simultaneous projection variants, while remaining simple to implement, consistently outperform the Max-Update technique commonly used in online multiclass classification.

Figure 3: The cumulative number of mistakes as a function of the number of trials (panels: farmer-d, lokay-m, sanders-r; curves: SimProj, ConProj, SimPerc, Mira).

In Fig. 3 we plot the cumulative number of mistakes as a function of the trial number for 3 of the 7 users. The graphs clearly indicate the high correlation between the SimPerc and ConProj variants, as well as the superiority of the SimProj variant.

7 Extensions and discussion

We presented a new approach for online categorization with complex output structure. Our algorithms decouple the complex optimization task into multiple sub-tasks, each of which is simple enough to be solved analytically. While the dual representation of the online problem imposes a global constraint on all the dual variables, namely $\sum_j \alpha^t_j \le C$, our framework of simultaneous projections followed by averaging of the solutions automatically adheres to this constraint and hence constitutes a feasible solution. It is worth noting that our approach can also cope with multiple constraints of the more general form $\sum_j \nu_j \alpha_j \le C$, where $\nu_j \ge 0$ for all $j$.
The box constraint implied for each individual projection problem distills to $0 \le \alpha_j \le C/\nu_j$, and thus the simultaneous projection algorithm can be used verbatim. We are currently exploring the usage of this extension in complex decision problems with multiple structural constraints. Another possible extension is to replace the squared-norm regularization with other twice-differentiable penalty functions. Algorithms of this more general framework still attain similar mistake bounds and are easy to implement so long as the induced individual problems are efficiently solvable. A particularly interesting case is obtained when setting the penalty to the relative entropy. In this case we obtain a generalization of the Winnow and EG algorithms [11, 12] for complex classification problems. Another interesting direction is the usage of simultaneous projections for problems with more constrained structured output, such as max-margin networks [3].

References

[1] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. In Proc. of the Seventh European Symposium on Artificial Neural Networks, April 1999.

[2] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. J. of Machine Learning Res., 3:951-991, 2003.

[3] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In Advances in Neural Information Processing Systems 17, 2003.

[4] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In Proc. of the 21st Intl. Conference on Machine Learning, 2004.

[5] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, August 1997.

[6] S. Shalev-Shwartz and Y. Singer. Online learning meets optimization in the dual. In Proc. of the Nineteenth Annual Conference on Computational Learning Theory, 2006.

[7] R.E.
Schapire and Y. Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 32(2/3), 2000. [8] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–407, 1958. (Reprinted in Neurocomputing (MIT Press, 1988).). [9] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive aggressive algorithms. Journal of Machine Learning Research, 7, Mar 2006. [10] M. Fink, S. Shalev-Shwartz, Y. Singer, and S. Ullman. Online multiclass learning by interclass hypothesis sharing. In Proc. of the 23rd International Conference on Machine Learning, 2006. [11] N. Littlestone. Learning when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285–318, 1988. [12] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–64, January 1997.
MLLE: Modified Locally Linear Embedding Using Multiple Weights

Zhenyue Zhang, Department of Mathematics, Zhejiang University, Yuquan Campus, Hangzhou, 310027, P. R. China. zyzhang@zju.edu.cn

Jing Wang, College of Information Science and Engineering, Huaqiao University, Quanzhou, 362021, P. R. China; Dep. of Mathematics, Zhejiang University. wroaring@yahoo.com.cn

Abstract

The locally linear embedding (LLE) is improved by introducing multiple linearly independent local weight vectors for each neighborhood. We characterize the reconstruction weights and show the existence of linearly independent weight vectors at each neighborhood. The modified locally linear embedding (MLLE) proposed in this paper is much more stable. It can retrieve the ideal embedding if MLLE is applied to data points sampled from an isometric manifold. MLLE is also compared with the local tangent space alignment (LTSA). Numerical examples are given that show the improvement and efficiency of MLLE.

1 Introduction

The problem of nonlinear dimensionality reduction is to find the meaningful low-dimensional structure hidden in high-dimensional data. Recently, there have been advances in developing effective and efficient algorithms to perform nonlinear dimension reduction, including isometric mapping (Isomap) [7], locally linear embedding (LLE) [5] and its variations, manifold charting [2], Hessian LLE [1], and local tangent space alignment (LTSA) [9]. All these algorithms share two common steps: learn the local geometry around each data point, and nonlinearly map the high-dimensional data points into a lower-dimensional space using the learned local information [3]. The performances of these algorithms, however, differ both in learning local information and in constructing the global embedding, though each of them eventually solves an eigenvalue problem. The effectiveness of the local geometry retrieved determines the efficiency of the methods.
This paper focuses on the reconstruction weights that characterize intrinsic geometric properties of each neighborhood in LLE [5]. LLE has many applications, such as image classification, image recognition, spectra reconstruction, and data visualization, because of its simple geometric intuitions, straightforward implementation, and global optimization [6, 11]. It is, however, also reported that LLE may not be stable and may produce a distorted embedding if the manifold dimension is larger than one. One of the causes that make LLE fail is that the local geometry exploited by the reconstruction weights is not well determined, since the constrained least squares (LS) problem involved in determining the local weights may be ill-conditioned. A Tikhonov regularization is generally used for the ill-conditioned LS problem. However, a regularized solution may not be a good approximation to the exact solution if the regularization parameter is not suitably selected. The purpose of this paper is to improve LLE by making use of multiple local weight vectors. We show the existence of linearly independent weight vectors that are approximately optimal. The local geometric structure determined by multiple weight vectors is much more stable and hence can be used to improve the standard LLE. The modified LLE, named MLLE, uses multiple weight vectors for each point in the reconstruction of the lower-dimensional embedding. It can stably retrieve the ideal isometric embedding approximately for an isometric manifold. MLLE has properties similar to LTSA, both in measuring the linear dependence of a neighborhood and in constructing the (sparse) matrix whose smallest eigenvectors form the wanted lower-dimensional embedding.

Figure 1: Examples of $\|w(\gamma) - w^*\|$ (solid line) and $\|w(\gamma) - u\|$ (dotted line) as functions of $\gamma$ for swiss-roll data; the three panels correspond to $\|y_0\| = 2.6706\mathrm{e}{-5}$, $8.5272\mathrm{e}{-4}$, and $1.6107$.
It exploits the tight relations between LLE/MLLE and LTSA. Numerical examples given in this paper show the improvement and efficiency of MLLE.

2 The Local Combination Weights

Let $\{x_1, \dots, x_N\}$ be a given data set of $N$ points in $\mathbb{R}^m$. LLE constructs locally linear structures at each point $x_i$ by representing $x_i$ using its selected neighbor set $N_i = \{x_j,\, j \in J_i\}$. The optimal combination weights are determined by solving the constrained least squares problem

$\min \big\| x_i - \sum_{j \in J_i} w_{ji} x_j \big\|, \quad \text{s.t.} \; \sum_{j \in J_i} w_{ji} = 1. \qquad (2.1)$

Once all the reconstruction weights $\{w_{ji},\, j \in J_i\}$, $i = 1, \dots, N$, are computed, LLE maps the set $\{x_1, \dots, x_N\}$ to $\{t_1, \dots, t_N\}$ in a lower-dimensional space $\mathbb{R}^d$ ($d < m$) that preserves the local combination properties,

$\min_{T = [t_1, \dots, t_N]} \; \sum_i \big\| t_i - \sum_{j \in J_i} w_{ji} t_j \big\|^2, \quad \text{s.t.} \; T T^T = I.$

The low-dimensional embedding $T$ constructed by LLE tightly depends on the local weights. To formulate the weight vector $w_i$ consisting of the local weights $w_{ji}$, $j \in J_i$, let us denote the matrix $G_i = [\dots, x_j - x_i, \dots]_{j \in J_i}$. Using the constraint $\sum_{j \in J_i} w_{ji} = 1$, we can write the combination error as $\| x_i - \sum_{j \in J_i} w_{ji} x_j \| = \| G_i w_i \|$, and hence (2.1) reads

$\min \| G_i w \|, \quad \text{s.t.} \; w^T 1_{k_i} = 1,$

where $1_{k_i}$ denotes the $k_i$-dimensional vector of all ones. Theoretically, a null vector of $G_i$ that is not orthogonal to $1_{k_i}$ can be normalized to be a weight vector as required. Otherwise, a weight vector is given by $w_i = y_i / (1_{k_i}^T y_i)$ with $y_i$ a solution to the linear system $G_i^T G_i y = 1_{k_i}$ [6]. Indeed, one can formulate the solution using the singular value decomposition (SVD) of $G_i$.

Theorem 2.1 Let $G$ be a given matrix of $k$ column vectors. Denote by $y_0$ the orthogonal projection of $1_k$ onto the null space of $G$ and $y_1 = (G^T G)^{+} 1_k$.¹ Then the vector

$w^* = \dfrac{y^*}{1_k^T y^*}, \qquad y^* = \begin{cases} y_0, & y_0 \ne 0 \\ y_1, & y_0 = 0 \end{cases} \qquad (2.2)$

is an optimal solution to $\min_{1_k^T w = 1} \| G w \|$. The problem of solving $\min_{1^T w = 1} \| G w \|$ is not stable if $G^T G$ is singular (has zero eigenvalues) or nearly singular (has relatively small eigenvalues).
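Theorem 2.1 translates directly into a small numerical routine. The following is a sketch (the numerical-rank tolerance `tol` and the fallback logic are our choices, not part of the theorem):

```python
import numpy as np

def optimal_weight(G, tol=1e-10):
    """Compute w* of Theorem 2.1: project 1_k onto the null space of G;
    if that projection y0 vanishes, fall back to y1 = (G^T G)^+ 1_k."""
    k = G.shape[1]
    ones = np.ones(k)
    _, S, Vt = np.linalg.svd(G)
    rank = int(np.sum(S > tol * S[0]))      # numerical rank of G
    Vnull = Vt[rank:].T                     # basis of the null space of G
    y0 = Vnull @ (Vnull.T @ ones) if Vnull.size else np.zeros(k)
    y = y0 if np.linalg.norm(y0) > tol else np.linalg.pinv(G.T @ G) @ ones
    return y / (ones @ y)

# three neighbor differences that sum to zero: the null space of G
# contains the all-ones direction, so w* gives an exact reconstruction
G = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0]])
w = optimal_weight(G)
```

Here the null space of $G$ is spanned by $(1,1,1)^T$, so $w^* = (1/3, 1/3, 1/3)^T$ and $\|Gw^*\| = 0$.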
To regularize the problem, it is suggested in [5] to solve the regularized linear system

$(G^T G + \gamma \|G\|_F^2\, I)\, y = 1_k, \qquad w = y / (1_k^T y) \qquad (2.3)$

with a small positive $\gamma$.

¹$(\cdot)^{+}$ denotes the Moore-Penrose generalized inverse of a matrix.

Let $y(\gamma)$ be the unique solution to the regularized linear system. One can prove that $w(\gamma) = y(\gamma) / (1_k^T y(\gamma))$ converges to $w^*$ as $\gamma \to 0$. However, the convergence behavior of $w(\gamma)$ is quite uncertain for small $\gamma > 0$. In fact, if $y_0 \ne 0$ is small, then $w(\gamma)$ tends to $u = \frac{y_1}{1^T y_1}$ at first and then turns to the limit value $w^* = \frac{y_0}{1^T y_0}$ eventually. Note that $u$ and $w^*$ are orthogonal to each other. In Figure 1, we plot three examples of the error curves $\|w(\gamma) - w^*\|$ (solid line) and $\|w(\gamma) - u\|$ (dotted line) with different values of $\|y_0\|$ for the swiss-roll data. The left two panels clearly show the metaphase phenomenon, where $\|y_0\| \approx 0$. Therefore, $w^*$ cannot be well approximated by $w(\gamma)$ if $\gamma$ is not small enough. This partially explains the instability of LLE.

Another factor that results in the instability of LLE is that the local linear structure learned using a single weight vector at each point is brittle. LLE may give a wrong embedding even if all the weight vectors are approximated to high accuracy. This is conceivable if $G_i$ is rank-deficient, since multiple optimal weight vectors exist in that case. Figure 2 shows a small example of $N = 20$ two-dimensional points for which LLE fails even if exact optimal weight vectors are used.

Figure 2: A 2D data set ($\circ$-points) and computed coordinates (dot points) by LLE using different sets of optimal weight vectors (left two panels, with $\|X - Y^{(1)}\| = 1.277$ and $\|X - Y^{(2)}\| = 0.24936$) or regularization weight vectors (right panel, $\|X - Y^{(3)}\| = 0.39941$).
We plot three sets of computed 2D embeddings $T^{(j)}$ (within an optimal affine transformation to the ideal $X$) by LLE with $k = 4$, using two sets of exact optimal weight vectors and one set of weight vectors that solve the regularized equations, respectively. The errors $\|X - Y^{(j)}\| = \min_{c, L} \|X - (c 1^T + L T^{(j)})\|$ between the ideal set $X$ and the computed sets, within an optimal affine transformation, are large in this example.

The uncertainty of $w(\gamma)$ with small $\gamma$ occurs because of the existence of small singular values of $G$. Fortunately, this also implies the existence of multiple almost optimal weight vectors simultaneously. Indeed, if $G$ has $s \le k$ small singular values, then there are $s$ approximately optimal weight vectors that are linearly independent of each other. The following theorem characterizes the construction of the approximately optimal weight vectors $w^{(\ell)}$ using the matrix $V$ of right singular vectors corresponding to the $s$ smallest singular values, and bounds the combination errors $\|G w^{(\ell)}\|$ in terms of the minimum of $\|G w\|$ and the largest of the $s$ smallest singular values.

Theorem 2.2 Let $G \in \mathbb{R}^{m \times k}$ and let $\sigma_1(G) \ge \dots \ge \sigma_k(G)$ be the singular values of $G$. Denote

$w^{(\ell)} = (1 - \alpha)\, w^* + V H(:, \ell), \quad \ell = 1, \dots, s,$

where $V$ is the matrix of right singular vectors of $G$ corresponding to the $s$ smallest singular values, $\alpha = \frac{1}{\sqrt{s}} \|V^T 1_k\|$, and $H$ is a Householder matrix that satisfies $H V^T 1_k = \alpha 1_s$. Then

$\|G w^{(\ell)}\| \le \|G w^*\| + \sigma_{k-s+1}(G). \qquad (2.4)$

The Householder matrix is symmetric and orthogonal. It is given by $H = I - 2 h h^T$ with the vector $h \in \mathbb{R}^s$ defined as follows. Let $h_0 = \alpha 1_s - V^T 1_k$. If $h_0 = 0$, then $h = 0$. Otherwise, $h = \frac{h_0}{\|h_0\|}$.

Note that $\|w^*\|$ can be very large when $G$ is approximately singular. In that case, $(1 - \alpha) w^*$ dominates $w^{(\ell)}$, and hence $w^{(1)}, \dots, w^{(s)}$ are almost the same and numerically linearly dependent on each other. Equivalently, $W = [w^{(1)}, \dots, w^{(s)}]$ has a large condition number $\mathrm{cond}(W) = \frac{\sigma_{\max}(W)}{\sigma_{\min}(W)}$. For numerical stability, we replace $w^*$ by a regularized weight vector $w(\gamma)$ as in LLE.
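The construction of Theorem 2.2, with $w^*$ replaced by the regularized $w(\gamma)$ of Eq. (2.3), can be sketched as follows (a minimal illustration with our own function and variable names, not reference code):

```python
import numpy as np

def multi_weights(G, s, gamma=1e-3):
    """Build s approximately optimal weight vectors per Thm. 2.2,
    W = (1 - alpha) * w(gamma) * 1_s^T + V H, whose columns all sum to 1."""
    k = G.shape[1]
    ones = np.ones(k)
    # regularized weight vector of Eq. (2.3)
    A = G.T @ G + gamma * np.linalg.norm(G, 'fro')**2 * np.eye(k)
    y = np.linalg.solve(A, ones)
    w_reg = y / (ones @ y)
    # V: right singular vectors for the s smallest singular values
    _, _, Vt = np.linalg.svd(G)
    V = Vt[-s:].T
    v = V.T @ ones
    alpha = np.linalg.norm(v) / np.sqrt(s)
    h0 = alpha * np.ones(s) - v                   # Householder direction
    if np.linalg.norm(h0) < 1e-12:
        H = np.eye(s)                             # h0 = 0 case of the theorem
    else:
        h = h0 / np.linalg.norm(h0)
        H = np.eye(s) - 2.0 * np.outer(h, h)      # reflector mapping v to alpha*1_s
    return (1.0 - alpha) * np.outer(w_reg, np.ones(s)) + V @ H

W = multi_weights(np.array([[1.0, 0.0, -1.0, 0.5],
                            [0.0, 1.0, -1.0, 0.5]]), s=2)
```

Since $H V^T 1_k = \alpha 1_s$ and $1_k^T w(\gamma) = 1$, each column satisfies the sum-to-one constraint, which the test below checks.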
This modification is quite practical in application and, more importantly, it can reinforce the numerical linear independence of $\{w^{(\ell)}\}$. In our experiments, the construction of the $\{w^{(\ell)}\}$ is stable with respect to the choice of $\gamma$. We give an estimate of the condition number $\mathrm{cond}(W)$ for the modified $W$ below.

Theorem 2.3 Let $W = (1 - \alpha)\, w(\gamma)\, 1_s^T + V H$. Then $\mathrm{cond}(W) \le \big(1 + \sqrt{k}\,(1 - \alpha)\,\|w(\gamma)\|\big)^2$.

3 MLLE: Modified locally linear embedding

It is justifiable to learn the local structure by multiple optimal weight vectors at each point, rather than a single one. Though the exact optimal weight vector may be unique, multiple approximately optimal weight vectors exist by Theorem 2.2. We use these weight vectors to determine an improved and more stable embedding. Below we show the details of the modified locally linear embedding using multiple local weight vectors.

Consider the neighbor set of $x_i$ with $k_i$ neighbors. Assume that the first $r_i$ singular values of $G_i$ are large compared with the remaining $s_i = k_i - r_i$ singular values. (We discuss how to choose $s_i$ later.) Let $w_i^{(1)}, \dots, w_i^{(s_i)}$ be $s_i \le k$ linearly independent weight vectors,

$w_i^{(\ell)} = (1 - \alpha_i)\, w_i(\gamma) + V_i H_i(:, \ell), \quad \ell = 1, \dots, s_i.$

Here $w_i(\gamma)$ is the regularized solution defined in (2.3) with $G = G_i$, $V_i$ is the matrix of right singular vectors of $G_i$ corresponding to the $s_i$ smallest singular values, $\alpha_i = \frac{1}{\sqrt{s_i}} \|v_i\|$ with $v_i = V_i^T 1_{k_i}$, and $H_i$ is a Householder matrix that satisfies $H_i V_i^T 1_{k_i} = \alpha_i 1_{s_i}$. We look for a $d$-dimensional embedding $\{t_1, \dots, t_N\}$ that minimizes the embedding cost function

$E(T) = \sum_{i=1}^{N} \sum_{\ell=1}^{s_i} \big\| t_i - \sum_{j \in J_i} w^{(\ell)}_{ji}\, t_j \big\|^2 \qquad (3.5)$

with the constraint $T T^T = I$. Denote by $W_i = (1 - \alpha_i)\, w_i(\gamma)\, 1_{s_i}^T + V_i H_i$ the local weight matrix, and let $\hat{W}_i \in \mathbb{R}^{N \times s_i}$ be the embedding of $W_i$ into the $N$-dimensional space such that $\hat{W}_i(J_i, :) = W_i$, $\hat{W}_i(i, :) = -1_{s_i}^T$, and $\hat{W}_i(j, :) = 0$ for $j \notin I_i = J_i \cup \{i\}$. The cost function (3.5) can be rewritten as

$E(T) = \sum_i \|T \hat{W}_i\|_F^2 = \mathrm{Tr}\Big(T \sum_i \hat{W}_i \hat{W}_i^T\, T^T\Big) = \mathrm{Tr}(T \Phi T^T), \qquad (3.6)$

where $\Phi = \sum_i \hat{W}_i \hat{W}_i^T$.
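The assembly of $\Phi$ from the embedded weight matrices $\hat{W}_i$ and the extraction of the embedding can be sketched in a few lines. This is our own minimal encoding (the `(i, J_i, W_i)` triples and function name are assumptions, not the authors' implementation):

```python
import numpy as np

def mlle_embedding(N, d, local):
    """Assemble Phi = sum_i What_i What_i^T as in Eq. (3.6) and return
    the d eigenvectors for the 2nd through (d+1)st smallest eigenvalues.

    `local` is a list of (i, J_i, W_i): point index, neighbor indices,
    and a k_i-by-s_i local weight matrix with columns summing to 1.
    """
    Phi = np.zeros((N, N))
    for i, J, W in local:
        s = W.shape[1]
        What = np.zeros((N, s))
        What[J, :] = W                 # rows for the neighbors j in J_i
        What[i, :] = -np.ones(s)       # row for x_i itself
        Phi += What @ What.T
    _, vecs = np.linalg.eigh(Phi)      # eigenvalues in ascending order
    return vecs[:, 1:d + 1].T          # skip the constant eigenvector

# chain of 5 points, each interior point averaging its two neighbors
local = [(i, np.array([i - 1, i + 1]), np.full((2, 1), 0.5))
         for i in range(1, 4)]
T = mlle_embedding(5, 1, local)
```

Because each weight column sums to one, the all-ones vector lies in the null space of $\Phi$, which is why the smallest eigenvector is discarded.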
The minimizer of $E(T)$ is given by the matrix $T = [u_2, \dots, u_{d+1}]^T$ of the $d$ eigenvectors of $\Phi$ corresponding to the 2nd through $(d+1)$st smallest eigenvalues.

3.1 Determination of the number $s_i$ of approximately optimal weight vectors

Obviously, $s_i$ should be selected such that $\sigma_{k_i - s_i + 1}(G_i)$ is relatively small. In general, if the data points are sampled from a $d$-dimensional manifold and the neighbor set is well selected, then $\sigma_d(G_i) \gg \sigma_{d+1}(G_i)$. So $s_i$ can be any integer satisfying $s_i \le k_i - d$, and $s_i = k_i - d$ is the best choice. However, because of noise, and because the neighborhood may not be well selected, $\sigma_{d+1}(G_i)$ may not be relatively small. It makes sense to choose $s_i$ as large as possible while the ratio

$\dfrac{\lambda^{(i)}_{k_i - s_i + 1} + \dots + \lambda^{(i)}_{k_i}}{\lambda^{(i)}_1 + \dots + \lambda^{(i)}_{k_i - s_i}}$

is small, where $\lambda^{(i)}_j = \sigma_j^2(G_i)$ are the eigenvalues of $G_i^T G_i$. There is a trade-off between the number of weight vectors and the approximation to $\|G_i w^*_i\|$. We suggest

$s_i = \max_{\ell} \Big\{ \ell \le k_i - d,\; \dfrac{\sum_{j = k_i - \ell + 1}^{k_i} \lambda^{(i)}_j}{\sum_{j=1}^{k_i - \ell} \lambda^{(i)}_j} < \eta \Big\}, \qquad (3.7)$

for a given threshold $\eta < 1$. Here $d$ can be overestimated as $d' > d$. Obviously, $s_i$ depends on the parameter $\eta$ monotonically. The smaller $\eta$ is, the smaller $s_i$ is, and of course, the smaller the combination errors of the weight vectors used. We use an adaptive strategy to set $\eta$ as follows. Let

$\rho_i = \sum_{j = d+1}^{k_i} \lambda^{(i)}_j \Big/ \sum_{j=1}^{d} \lambda^{(i)}_j, \quad i = 1, \dots, N,$

and reorder $\{\rho_i\}$ as $\rho_{\pi_1} \le \dots \le \rho_{\pi_N}$. Then we set $\eta$ to be the middle term of the sorted $\{\rho_i\}$, i.e., $\eta = \rho_{\pi_{\lceil N/2 \rceil}}$, where $\lceil N/2 \rceil$ is $N/2$ rounded up to the nearest integer. In general, if the manifold near $x_i$ is flat or has small curvatures and the neighbors are well selected, $\rho_i$ is smaller than $\eta$ and $s_i = k_i - d$. For those neighbor sets with large local curvatures, $\rho_i > \eta$ and $s_i < k_i - d$, so fewer weight vectors are used in constructing the local linear structures and the combination errors decrease. We summarize the Modified Locally Linear Embedding (MLLE) algorithm as follows.

Algorithm MLLE (Modified Locally Linear Embedding).

1.
For each $i = 1, \dots, N$:

1.1 Determine a neighbor set $N_i = \{x_j,\, j \in J_i\}$ of $x_i$, with $i \notin J_i$.

1.2 Compute the regularized solution $w_i(\gamma)$ by (2.3) with a small $\gamma > 0$.

1.3 Compute the eigenvalues $\lambda^{(i)}_1, \dots, \lambda^{(i)}_{k_i}$ and eigenvectors $v^{(i)}_1, \dots, v^{(i)}_{k_i}$ of $G_i^T G_i$. Set $\rho_i = \sum_{j=d+1}^{k_i} \lambda^{(i)}_j \big/ \sum_{j=1}^{d} \lambda^{(i)}_j$.

2. Sort $\{\rho_i\}$ into $\{\rho_{\pi_i}\}$ in increasing order and set $\eta = \rho_{\pi_{\lceil N/2 \rceil}}$.

3. For each $i = 1, \dots, N$:

3.1 Set $s_i$ by (3.7) and set $V_i = [v^{(i)}_{k_i - s_i + 1}, \dots, v^{(i)}_{k_i}]$, $\alpha_i = \|1_{k_i}^T V_i\|$.

3.2 Construct $\Phi$ using $W_i = w_i(\gamma)\, 1_{s_i}^T + V_i$.

4. Compute the $d + 1$ smallest eigenvectors of $\Phi$, pick the eigenvector matrix corresponding to the 2nd through $(d+1)$st smallest eigenvalues, and set $T = [u_2, \dots, u_{d+1}]^T$.

The computational cost of MLLE is almost the same as that of LLE. The additional flops of MLLE for computing the eigendecomposition of $G_i^T G_i$ amount to $O(k_i^3)$ per point and $O(k^3 N)$ in total, with $k = \max_i k_i$. Note that the most computationally expensive steps in both LLE and MLLE are the neighborhood selection and the computation of the $d + 1$ eigenvectors of the alignment matrix $\Phi$ corresponding to the small eigenvalues. They cost $O(m N^2)$ and $O(d N^2)$, respectively. Because $k \ll N$, the additional cost of MLLE is negligible.

4 An analysis of MLLE for isometric manifolds

Consider the application of MLLE to an isometric manifold $M = f(\Omega)$ with open set $\Omega \subset \mathbb{R}^d$ and smooth function $f$. Assume that $\{x_i\}$ are sampled from $M$, $x_i = f(\tau_i)$, $i = 1, \dots, N$. We have

$\big\| x_i - \sum_{j \in J_i} w_{ji} x_j \big\| = \big\| \tau_i - \sum_{j \in J_i} w_{ji} \tau_j \big\| + O(\varepsilon_i^2), \qquad (4.8)$

due to the isometry of $f$. If $k_i > d$, then the optimal reconstruction error of $\tau_i$ should be zero. So we have that $\| x_i - \sum_{j \in J_i} w^*_{ji} x_j \| = O(\varepsilon_i^2)$. For the approximately optimal weight vectors $w^{(\ell)}_i$, we have $\| x_i - \sum_{j \in J_i} w^{(\ell)}_{ji} x_j \| \approx \sigma_{k_i - s_i + 1}(G_i) + O(\varepsilon_i^2)$. Conversely, it follows from (4.8) that $\| \tau_i - \sum_{j \in J_i} w^{(\ell)}_{ji} \tau_j \| \approx \sigma_{k_i - s_i + 1}(G_i) + O(\varepsilon_i^2)$. Therefore, denoting $T^* = [\tau_1, \dots, \tau_N]$, we have

$E(T^*) = \sum_{i=1}^{N} \sum_{\ell=1}^{s_i} \big\| \sum_{j \in J_i} w^{(\ell)}_{ji} \tau_j - \tau_i \big\|^2 \le \sum_{i=1}^{N} s_i\, \sigma^2_{k_i - s_i + 1}(G_i) + O\big(\max_i \varepsilon_i^2\big).$
For the orthogonalized $U$ of $T^*$, i.e., $T^* = L U$ with $U U^T = I$, since $L = T^* U^T \in \mathbb{R}^{d \times d}$, we have that $\sigma_d(L) = \sigma_d(T^*)$ and $E(U) \le E(T^*) / \sigma_d^2(T^*)$. Note that $\sigma^2_{k_i - s_i + 1}(G_i)$ is generally very small. So $E(U)$ is always small and approximately achieves the minimum. Roughly speaking, MLLE can retrieve the isometric embedding.

5 Comparison to LTSA

MLLE has properties similar to those of LTSA. In this section, we compare MLLE and LTSA with respect to the linear dependence of neighbors and the alignment matrices. For simplicity, we assume that $r_i = d$, i.e., $k_i - d$ weight vectors are used in MLLE for each neighbor set.

5.1 Linear dependence of neighbors. The total combination error

$\epsilon_{\mathrm{MLLE}}(N_i) = \sum_{\ell=1}^{k_i - d} \big\| \sum_{j \in J_i} w^{(\ell)}_{ji} x_j - x_i \big\|^2 = \|G_i W_i\|_F^2$

of $x_i$ can be a measure of the linear dependence of the neighborhood $N_i$. To compare it with the measure of linear dependence defined by LTSA, we denote by $\bar{x}_i = \frac{1}{|I_i|} \sum_{j \in I_i} x_j$ the mean of the members of the whole neighborhood of $x_i$, including $x_i$ itself, and $\bar{X}_i = [\dots, x_j - \bar{x}_i, \dots]_{j \in I_i}$. It can be verified that $G_i W_i = \bar{X}_i \tilde{W}_i$ with $\tilde{W}_i = \hat{W}_i(I_i, :)$. So $\epsilon_{\mathrm{MLLE}}(N_i) = \|\bar{X}_i \tilde{W}_i\|_F^2$. In LTSA, the linear dependence of $N_i$ is measured by the total error

$\epsilon_{\mathrm{LTSA}}(N_i) = \sum_{j \in I_i} \| x_j - \bar{x}_i - Q_i \theta^{(i)}_j \|^2 = \|\bar{X}_i - Q_i \Theta_i\|_F^2 = \|\bar{X}_i \tilde{V}_i\|_F^2,$

where $\tilde{V}_i$ is the matrix consisting of the right singular vectors of $\bar{X}_i$ corresponding to the $k_i - d$ smallest singular values. The MLLE measure $\epsilon_{\mathrm{MLLE}}$ and the LTSA measure $\epsilon_{\mathrm{LTSA}}$ of neighborhood linear dependence are similar:

$\epsilon_{\mathrm{MLLE}}(N_i) = \|\bar{X}_i \tilde{W}_i\|_F^2, \quad \|\bar{X}_i \tilde{w}^{(\ell)}_i\| \approx \min, \; \ell \le k_i - d; \qquad \epsilon_{\mathrm{LTSA}}(N_i) = \|\bar{X}_i \tilde{V}_i\|_F^2 = \min_{Z^T Z = I} \|\bar{X}_i Z\|_F^2.$

5.2 Alignment matrices. Both MLLE and LTSA minimize a trace function of an alignment matrix $\Phi$ to obtain an embedding, $\min_{T T^T = I} \mathrm{trace}(T \Phi T^T)$. The alignment matrix can be written in the same form

$\Phi = \sum_{i=1}^{N} S_i \Phi_i S_i^T,$

where $S_i$ is a selection matrix consisting of the columns $j \in I_i$ of the identity matrix of order $N$. In LTSA, the local matrix $\Phi_i$ is given by the orthogonal projection, i.e.,
$\Phi^{\mathrm{LTSA}}_i = \tilde{V}_i \tilde{V}_i^T$; see [10]. For MLLE, $\Phi^{\mathrm{MLLE}}_i = \tilde{W}_i \tilde{W}_i^T$. It is interesting that the range space $\mathrm{span}(\tilde{W}_i)$ of $\tilde{W}_i$ and the range space $\mathrm{span}(\tilde{V}_i)$ of $\tilde{V}_i$ are tightly close to each other if the reconstruction error of $x_i$ is small. The following theorem gives an upper bound on this closeness using the distance $\mathrm{dist}(\tilde{W}_i, \tilde{V}_i)$ between $\mathrm{span}(\tilde{W}_i)$ and $\mathrm{span}(\tilde{V}_i)$, which denotes the largest angle between the two subspaces. (See [4] for a discussion of distances between subspaces.)

Theorem 5.1 Let $G_i = [\dots, x_j - x_i, \dots]_{j \in J_i}$. Then

$\mathrm{dist}(\tilde{W}_i, \tilde{V}_i) \le \dfrac{\|G_i W_i\|}{\sigma_d(\tilde{W}_i)\, \sigma_d(\bar{X}_i)}.$

6 Experimental Results

In this section, we present several numerical examples to illustrate the performance of the MLLE algorithm. The test data sets include simulated data sets and real-world examples. First, we compare Isomap, LLE, LTSA, and MLLE on the Swiss roll with a hole. The data points are generated from a rectangle with a rectangular strip punched out of the center, so the resulting Swiss roll is not convex. We run the four algorithms with $k = 10$. In the top middle of Figure 3, we plot the coordinates computed by Isomap; there is a dilation of the missing region and a warp in the rest of the embedding. As seen in the top right of Figure 3, there is a strong distortion in the coordinates computed by LLE. As shown in the bottom of Figure 3, LTSA and MLLE perform well.

We now compare MLLE and LTSA on a 2D manifold with 3 peaks embedded in 3D space. We generate $N = 1225$ 3D points $x_i = [t_i, s_i, h(t_i, s_i)]^T$, where $t_i$ and $s_i$ are uniformly distributed in the interval $[-1.5, 1.5]$ and $h(t, s)$ is defined by

$h(t, s) = e^{-10\,[(t-0.5)^2 + (s-0.5)^2]} - e^{-10\,[t^2 + (s+1)^2]} - e^{-10\,[(1+t)^2 + s^2]}.$
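The 3-peak test surface can be reproduced in a few lines. In the sketch below, the grouping of the exponents as squared distances to the three centers is our reading of the (damaged) formula above, and the random seed is ours:

```python
import numpy as np

def three_peak_data(n_side=35, seed=0):
    """Sample the 3-peak surface: N = n_side**2 points [t, s, h(t, s)]
    with t, s uniform on [-1.5, 1.5] (an illustrative reconstruction)."""
    rng = np.random.default_rng(seed)
    n = n_side * n_side
    t = rng.uniform(-1.5, 1.5, n)
    s = rng.uniform(-1.5, 1.5, n)
    # assumed grouping: Gaussian bumps centered at (0.5, 0.5), (0, -1), (-1, 0)
    h = (np.exp(-10 * ((t - 0.5) ** 2 + (s - 0.5) ** 2))
         - np.exp(-10 * (t ** 2 + (s + 1) ** 2))
         - np.exp(-10 * ((1 + t) ** 2 + s ** 2)))
    return np.column_stack([t, s, h])

X = three_peak_data()   # 1225 points in R^3, as in the experiment above
```

The surface is nearly flat away from the three bumps, which is why its parameterization $f(t, s) = [t, s, h(t, s)]^T$ is approximately isometric there.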
Figure 3: Left column: Swiss-roll data and generating coordinates with a missing rectangle. Middle column: computed results of Isomap and LTSA. Right column: results of LLE and MLLE.

Figure 4: Left column: plots of the 3-peak data and the generating coordinates. Right column: results of LTSA and MLLE.

See the left of Figure 4 for the data points and the generating parameters. It is easy to show that the manifold parameterized by $f(t, s) = [t, s, h(t, s)]^T$ is approximately isometric, since the Jacobian $J_f(t, s)$ is approximately orthonormal. In the right of Figure 4, we plot the coordinates computed by LTSA and MLLE with $k = 12$. The deformations of the coordinates computed by LTSA near the peaks are prominent because the curvature of the 3-peak manifold varies considerably. This bias can be reduced by the modified curvature model of LTSA proposed in [8]. MLLE can recover the generating parameters perfectly up to an affine transformation.

Next, we consider a data set containing $N = 4400$ handwritten digits ('2'-'5') with 1100 examples of each class. The gray-scale images of handwritten numerals are at 16×16 resolution and converted to $m = 256$ dimensional vectors². The data points are mapped into a 2-dimensional space using LLE and MLLE, respectively. These experiments are shown in Figure 5. It is clear that MLLE performs much better than LLE. Most of the digit classes (digits '2'-'5' are marked by '◦', '⋄', '▷', and '△', respectively) are well clustered in the resulting embedding of MLLE.
Finally, we consider the application of MLLE and LLE to a real data set of 698 face images with variations in two pose parameters (left–right and up–down) and one lighting parameter. The image size is 64×64 pixels, and each image is converted to an m = 4096-dimensional vector. We apply MLLE with k = 14 and d = 3 to this data set. The first two coordinates of MLLE are plotted in the middle of Figure 6. We also extract four paths along the boundaries of the set of the first two coordinates and display the corresponding images along each path. These components appear to capture the pose and lighting variations well, in a continuous way.

²The data set can be downloaded at http://www.cs.toronto.edu/~roweis/data.html.

Figure 5: Embedding results of N = 4400 handwritten digits by LLE (left) and MLLE (right).

Figure 6: Images of faces mapped into the embedding described by the first two coordinates of MLLE, using the parameters k = 14 and d = 3.

References

[1] D. Donoho and C. Grimes. Hessian eigenmaps: new tools for nonlinear dimensionality reduction. Proceedings of the National Academy of Sciences, 5591–5596, 2003.

[2] M. Brand. Charting a manifold. Advances in Neural Information Processing Systems 15, MIT Press, 2003.

[3] J. Ham, D. D. Lee, S. Mika, and B. Schölkopf. A kernel view of the dimensionality reduction of manifolds. International Conference on Machine Learning 21, 2004.

[4] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, Maryland, 3rd edition, 1996.

[5] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.

[6] L. Saul and S. Roweis. Think globally, fit locally: unsupervised learning of nonlinear manifolds. Journal of Machine Learning Research, 4:119–155, 2003.

[7] J. Tenenbaum, V. de Silva, and J. Langford.
A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.

[8] J. Wang, Z. Zhang, and H. Zha. Adaptive manifold learning. Advances in Neural Information Processing Systems 17, edited by Lawrence K. Saul, Yair Weiss, and Léon Bottou, MIT Press, Cambridge, MA, pp. 1473–1480, 2005.

[9] Z. Zhang and H. Zha. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM Journal on Scientific Computing, 26(1):313–338, 2004.

[10] H. Zha and Z. Zhang. Spectral analysis of alignment in manifold learning. Submitted, 2006.

[11] M. Vlachos, C. Domeniconi, D. Gunopulos, G. Kollios, and N. Koudas. Non-linear dimensionality reduction techniques for classification and visualization. Proc. Eighth ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining, July 2002.
Hyperparameter Learning for Graph Based Semi-supervised Learning Algorithms Xinhua Zhang∗ Statistical Machine Learning Program National ICT Australia, Canberra, Australia and CSL, RSISE, ANU, Canberra, Australia xinhua.zhang@nicta.com.au Wee Sun Lee Department of Computer Science National University of Singapore 3 Science Drive 2, Singapore 117543 leews@comp.nus.edu.sg

Abstract

Semi-supervised learning algorithms have been applied successfully in many applications with scarce labeled data by utilizing the unlabeled data. One important category is graph based semi-supervised learning algorithms, whose performance depends considerably on the quality of the graph, i.e., on its hyperparameters. In this paper, we deal with the less explored problem of learning the graph itself. We propose a graph learning method for the harmonic energy minimization method; this is done by minimizing the leave-one-out prediction error on the labeled data points. We use a gradient-based method and design an efficient algorithm that significantly accelerates the calculation of the gradient by applying the matrix inversion lemma and careful pre-computation. Experimental results show that the graph learning method is effective in improving the performance of the classification algorithm.

1 Introduction

Recently, graph based semi-supervised learning algorithms have been used successfully in various machine learning problems including classification, regression, ranking, and dimensionality reduction. These methods create graphs whose vertices correspond to the labeled and unlabeled data, while the edge weights encode the similarity between each pair of data points. Classification is performed using these graphs by labeling the unlabeled data in such a way that instances connected by large weights are given similar labels. Example graph based semi-supervised algorithms include min-cut [3], harmonic energy minimization [11], and the spectral graph transducer [8].
The performance of the classifier depends considerably on the similarity measure of the graph, which is normally defined in two steps. First, the weights are defined locally in a pair-wise parametric form using functions that are essentially based on a distance metric, such as radial basis functions (RBF). It is argued in [7] that modeling error can degrade the performance of semi-supervised learning; as the distance metric is an important part of graph based semi-supervised learning, it is crucial to use a good one. In the second step, smoothing is applied globally, typically based on a spectral transformation of the graph Laplacian [6, 10].

There have been only a few existing approaches that address the problem of graph learning. [13] learns a nonparametric spectral transformation of the graph Laplacian, assuming that the weights and distance metric are given. [9] learns the spectral parameters by performing evidence maximization using approximate inference and gradient descent. [12] uses evidence maximization and the Laplace approximation to learn simple parameters of the similarity function. Instead of learning one single good graph, [4] proposed building robust graphs by applying random perturbation and edge removal from an ensemble of minimum spanning trees. [1] combined graph Laplacians to learn a graph. Closest to our work is [11], which learns different bandwidths for different dimensions by minimizing the entropy on the unlabeled data; like the maximum margin motivation in transductive SVMs, the aim there is for the algorithm to obtain a confident labeling of the data. In this paper, we propose a new algorithm to learn the hyperparameters of the distance metric, or more specifically, the bandwidths of the different dimensions in the RBF form.

∗This work was done when the author was at the National University of Singapore.
In essence, these bandwidths are just model parameters, and standard model selection methods such as k-fold cross validation or, in the extreme case, leave-one-out (LOO) cross validation can be used to select them. Motivated by the same spirit, we base our learning algorithm on the aim of achieving low LOO prediction loss on the labeled data, i.e., each labeled point should be correctly classified by the other labeled data in a semi-supervised fashion with as high probability as possible. This idea is similar to [5], which learns multiple parameters for SVMs. Since most LOO-style algorithms are plagued by prohibitive computational cost, we design an efficient algorithm. With a simple regularizer, the experimental results show that learning the hyperparameters by minimizing the LOO loss is effective.

2 Graph Based Semi-supervised Learning

Suppose we have a set of labeled data points {(x_i, y_i)} for i ∈ L ≜ {1, ..., l}. In this paper we consider only binary classification, i.e., y_i ∈ {1 (positive), 0 (negative)}. In addition, we have a set of unlabeled data points {x_i} for i ∈ U ≜ {l+1, ..., l+u}. Denote n ≜ l + u, and suppose the dimensionality of the input feature vectors is m.

2.1 Graph Based Classification Algorithms

One of the earliest graph based semi-supervised learning algorithms is min-cut [3], which minimizes

E(f) ≜ Σ_{i,j} w_ij (f_i − f_j)^2    (1)

where the nonnegative w_ij encodes the similarity between instances i and j. The label f_i is fixed to y_i ∈ {1, 0} if i ∈ L, and the optimization variables f_i (i ∈ U) are constrained to {1, 0}. This combinatorial optimization problem can be solved efficiently by the max-flow algorithm. [11] relaxed the constraint f_i ∈ {1, 0} (i ∈ U) to real numbers. The optimal soft labels of the unlabeled data can then be written neatly as

f_U = (D_U − W_UU)^{-1} W_UL f_L = (I − P_UU)^{-1} P_UL f_L    (2)

where f_L is the vector of soft labels (fixed to y_i) for L, D ≜ diag(d_i) with d_i ≜ Σ_j w_ij, D_U is the submatrix of D associated with the unlabeled data, and P ≜ D^{-1}W.
W_UU, W_UL, P_UU, and P_UL are the blocks of

W = [W_LL  W_LU; W_UL  W_UU],   P = [P_LL  P_LU; P_UL  P_UU].

The solution (2) has a number of interesting properties pointed out by [11]. All f_i (i ∈ U) are automatically bounded by [0, 1], so the method is also known as square interpolation. The solution can be interpreted via a Markov random walk on the graph: imagine a graph with n nodes corresponding to the n data points, and define the probability of transferring from x_i to x_j as p_ij, which is simply a row-wise normalization of w_ij. The random walk starts from any unlabeled point and stops once it hits a labeled point (absorbing boundary). Then f_i is the probability of hitting a positively labeled point. In this sense, the labeling of each unlabeled point is largely based on its neighboring labeled points, which helps alleviate the problem of noisy data. (1) can also be interpreted as a quadratic energy function, and its minimizer is known to be harmonic: f_i (i ∈ U) equals the average of the f_j (j ≠ i) weighted by p_ij. We therefore call this algorithm Harmonic Energy Minimization (HEM). By (1), f_U is independent of w_ii (i = 1, ..., n), so henceforth we fix w_ii = p_ii = 0.

Finally, to translate the soft labels f_i into hard labels (pos/neg), the simplest way is thresholding at 0.5, which works well when the two classes are well separated. [11] proposed another approach, called Class Mass Normalization (CMN), to make use of prior information such as the class ratio in the unlabeled data, estimated from that in the labeled data. Specifically, they normalize the soft labels to f_i^+ ≜ f_i / Σ_{j=1}^n f_j as the probabilistic score of being positive, and to f_i^- ≜ (1 − f_i) / Σ_{j=1}^n (1 − f_j) as the score of being negative. Suppose there are r^+ positive points and r^- negative points in the labeled data; then we classify x_i as positive iff f_i^+ r^+ > f_i^- r^-.
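The harmonic solution of eq. (2) and the CMN decision rule can be sketched in a few lines of NumPy. This is an illustrative toy example, not the authors' code: the graph weights are hard-coded rather than learned, and the labeled points are assumed to come first.

```python
import numpy as np

def harmonic_labels(W, f_L, l):
    """Harmonic energy minimization, eq. (2): soft labels f_U for the
    unlabeled points, assuming the l labeled points come first in W."""
    d = W.sum(axis=1)
    P = W / d[:, None]                          # P = D^{-1} W, row-stochastic
    P_UU, P_UL = P[l:, l:], P[l:, :l]
    return np.linalg.solve(np.eye(P_UU.shape[0]) - P_UU, P_UL @ f_L)

def cmn(f_U, r_pos, r_neg):
    """Class Mass Normalization: positive iff f_i^+ r^+ > f_i^- r^-."""
    f_pos = f_U / f_U.sum()
    f_neg = (1 - f_U) / (1 - f_U).sum()
    return (f_pos * r_pos > f_neg * r_neg).astype(int)

# Toy chain: node 0 labeled 1, node 1 labeled 0, nodes 2 and 3 unlabeled,
# with edges 0-2, 2-3, 3-1.
W = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 0., 0., 1.],
              [0., 1., 1., 0.]])
f_U = harmonic_labels(W, f_L=np.array([1., 0.]), l=2)
print(np.round(f_U, 4))             # [0.6667 0.3333]
print(cmn(f_U, r_pos=1, r_neg=1))   # [1 0]
```

On this chain the soft labels interpolate between the two labeled endpoints, as the random-walk interpretation predicts: the unlabeled node adjacent to the positive point is the more likely to absorb there.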
2.2 Basic Hyperparameter Learning Algorithms

One of the simplest parametric forms of w_ij is the RBF:

w_ij = exp(−Σ_d (x_{i,d} − x_{j,d})^2 / σ_d^2)    (3)

where x_{i,d} is the dth component of x_i (and likewise for f_{U,i} in (4)). The bandwidth σ_d has considerable influence on the classification accuracy. HEM uses one common bandwidth for all dimensions, which can easily be selected by cross validation. However, it is desirable to learn a different σ_d for each dimension; this allows a form of feature selection. [11] proposed learning the hyperparameters σ_d by minimizing the entropy on the unlabeled data points (we call this MinEnt):

H(f_U) = −Σ_{i=1}^u (f_{U,i} log f_{U,i} + (1 − f_{U,i}) log(1 − f_{U,i}))    (4)

The optimization is conducted by gradient descent. To prevent numerical problems, they replace P with P̃ = ϵU + (1 − ϵ)P, where ϵ ∈ [0, 1) and U is the uniform matrix with U_ij = n^{-1}.

3 Leave-one-out Hyperparameter Learning

In this section we present the formulation and efficient calculation of our graph learning algorithm.

3.1 Formulation and Efficient Calculation

We propose a graph learning algorithm that is similar to minimizing the leave-one-out cross validation error. Suppose we hold out a labeled example x_t and predict its label using the rest of the labeled and unlabeled examples. Making use of the result in (2), the soft label for x_t is s^T f_U^t (the first component of f_U^t), where s ≜ (1, 0, ..., 0)^T ∈ R^{u+1} and f_U^t ≜ (f_0^t, f_{l+1}^t, ..., f_n^t)^T. The value of f_U^t is determined by f_U^t = (I − P̃_UU^t)^{-1} P̃_UL^t f_L^t, where f_L^t ≜ (f_1, ..., f_{t-1}, f_{t+1}, ..., f_l)^T, p̃_ij ≜ (1 − ε)p_ij + ε/n,

P_UU^t ≜ [p_tt  p_tU; p_Ut  P_UU],   p_Ut ≜ (p_{l+1,t}, ..., p_{n,t})^T,   p_tU ≜ (p_{t,l+1}, ..., p_{t,n}),

and P_UL^t is the (u+1) × (l−1) matrix whose first row is (p_{t,1}, ..., p_{t,t-1}, p_{t,t+1}, ..., p_{t,l}) and whose ith row (i ≥ 2) is (p_{i+l-1,1}, ..., p_{i+l-1,t-1}, p_{i+l-1,t+1}, ..., p_{i+l-1,l}). If x_t is positive, then we hope that f_{U,1}^t is as close to 1 as possible.
Otherwise, if x_t is negative, we hope that f_{U,1}^t is as close to 0 as possible. The cost function to be minimized can thus be written as

Q = Σ_{t=1}^l h_t(f_{U,1}^t) = Σ_{t=1}^l h_t(s^T (I − P̃_UU^t)^{-1} P̃_UL^t f_L^t)    (5)

where h_t(x) is the cost function for instance t. We write h_t(x) = h^+(x) for y_t = 1 and h_t(x) = h^-(x) for y_t = 0. Possible choices of h^+(x) include 1 − x, (1 − x)^a, a^{-x}, and −log x, with a > 1; possible choices of h^-(x) include x, x^a, a^{x−1}, and −log(1 − x). Let Loo_loss(x_t, y_t) ≜ h_t(f_{U,1}^t). To minimize Q, we use gradient-based optimization methods. Using the matrix identity dX^{-1} = −X^{-1}(dX)X^{-1}, the gradient is

∂Q/∂σ_d = Σ_{t=1}^l h_t'(f_{U,1}^t) s^T (I − P̃_UU^t)^{-1} (∂P̃_UU^t/∂σ_d · f_U^t + ∂P̃_UL^t/∂σ_d · f_L^t).

Denoting (β^t)^T ≜ h_t'(f_{U,1}^t) s^T (I − P̃_UU^t)^{-1} and noting that P̃ = εU + (1 − ε)P, we have

∂Q/∂σ_d = (1 − ε) Σ_{t=1}^l (β^t)^T (∂P_UU^t/∂σ_d · f_U^t + ∂P_UL^t/∂σ_d · f_L^t).    (6)

Since in both P_UU^t and P_UL^t the first row corresponds to x_t and the ith row (i ≥ 2) corresponds to x_{i+l-1}, it makes sense to define P_UN^t ≜ (P_UL^t  P_UU^t), as each row of P_UN^t then corresponds to a single well-defined data point. Let all notation for P carry over to the corresponding W. We use sw_i^t ≜ Σ_{k=1}^n w_UN^t(i, k) and Σ_{k=1}^n ∂w_UN^t(i, k)/∂σ_d (i = 1, ..., u+1) to denote sums over these corresponding rows. Now (6) can be rewritten in ground terms by the following two equations:

∂P_U•^t(i, j)/∂σ_d = (sw_i^t)^{-1} (∂w_U•^t(i, j)/∂σ_d − p_U•^t(i, j) Σ_{k=1}^n ∂w_UN^t(i, k)/∂σ_d),

where • can be U or L, and, by (3),

∂w_ij/∂σ_d = 2 w_ij (x_{i,d} − x_{j,d})^2 / σ_d^3.

The naïve way to calculate the function value Q and its gradient is presented in Algorithm 1. We call this approach leave-one-out hyperparameter learning (LOOHL).
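Before trusting a hand-derived gradient such as (6) inside an optimizer, it is prudent to compare it against central finite differences. The following generic checker is an illustrative sketch, not part of the paper, demonstrated on a toy objective that merely mimics a function of inverse bandwidths:

```python
import numpy as np

def max_grad_error(f, grad, x, eps=1e-6):
    """Compare an analytic gradient against central differences,
    returning the largest absolute discrepancy across coordinates."""
    g_num = np.empty_like(x)
    for d in range(x.size):
        e = np.zeros_like(x)
        e[d] = eps
        g_num[d] = (f(x + e) - f(x - e)) / (2 * eps)
    return np.max(np.abs(g_num - grad(x)))

# Toy objective: Q(sigma) = sum_d 1/sigma_d^2, dQ/dsigma_d = -2/sigma_d^3.
sigma = np.array([0.5, 1.0, 2.0])
err = max_grad_error(lambda s: np.sum(1.0 / s ** 2),
                     lambda s: -2.0 / s ** 3,
                     sigma)
print(err < 1e-6)   # True
```

The same check applied to the full Q of eq. (5) would catch indexing mistakes in the block-matrix bookkeeping at the cost of one extra function evaluation per dimension.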
Algorithm 1 Naïve form of LOOHL
1: function value Q ← 0, gradient g ← (0, ..., 0)^T ∈ R^m
2: for each t = 1, ..., l (leave-one-out loop over labeled points) do
3:   f_L^t ← (f_1, ..., f_{t-1}, f_{t+1}, ..., f_l)^T,  f_U^t ← (I − P̃_UU^t)^{-1} P̃_UL^t f_L^t,  Q ← Q + h_t(f_{U,1}^t),  (β^t)^T ← h_t'(f_{U,1}^t) s^T (I − P̃_UU^t)^{-1}
4:   for each d = 1, ..., m (for all feature dimensions) do
5:     ∂P_UU^t(i,j)/∂σ_d ← (sw_i^t)^{-1} (∂w_UU^t(i,j)/∂σ_d − p_UU^t(i,j) Σ_{k=1}^n ∂w_UN^t(i,k)/∂σ_d), where sw_i^t = Σ_{k=1}^n w_UN^t(i,k), i, j = 1, ..., u+1
6:     ∂P_UL^t(i,j)/∂σ_d ← (sw_i^t)^{-1} (∂w_UL^t(i,j)/∂σ_d − p_UL^t(i,j) Σ_{k=1}^n ∂w_UN^t(i,k)/∂σ_d), i = 1, ..., u+1, j = 1, ..., l−1
7:     g_d ← g_d + (1 − ϵ)(β^t)^T (∂P_UU^t/∂σ_d · f_U^t + ∂P_UL^t/∂σ_d · f_L^t)
8:   end for
9: end for

The computational complexity of the naïve algorithm is expensive: O(lu(mn + u^2)) just to calculate the gradient once (assuming the cost of inverting a u × u matrix is O(u^3)). We reduce both terms of this cost by means of the matrix inversion lemma and careful pre-computation.

One part of the cost, O(lu^3), stems from inverting I − P̃_UU^t, a (u+1) × (u+1) matrix, l times in (5). We note that for different t, I − P̃_UU^t differs only in its first row and first column. So there exist vectors α, β ∈ R^{u+1} such that I − P̃_UU^{t_1} = (I − P̃_UU^{t_2}) + eα^T + βe^T, where e = (1, 0, ..., 0)^T ∈ R^{u+1}. With I − P̃_UU^t expressed in this form, we can apply the matrix inversion lemma:

(A + αβ^T)^{-1} = A^{-1} − (A^{-1}α)(β^T A^{-1}) / (1 + β^T A^{-1} α).    (7)

We only need to invert I − P̃_UU^t from scratch for t = 1, and can then apply (7) twice for each t ≥ 2. The total complexity related to matrix inversion becomes O(u^3 + lu^2).

The other part of the cost, O(lumn), can be reduced by careful pre-computation. Written out in detail, we have

∂Q/∂σ_d = Σ_{t=1}^l Σ_{i=1}^{u+1} (β_i^t / sw_i^t) [ Σ_{j=1}^{u+1} ∂w_UU^t(i,j)/∂σ_d · f_{U,j}^t + Σ_{j=1}^{l-1} ∂w_UL^t(i,j)/∂σ_d · f_{L,j}^t − Σ_{k=1}^n ∂w_UN^t(i,k)/∂σ_d ( Σ_{j=1}^{u+1} p_UU^t(i,j) f_{U,j}^t + Σ_{j=1}^{l-1} p_UL^t(i,j) f_{L,j}^t ) ]
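The rank-one update of eq. (7) (the Sherman–Morrison form of the matrix inversion lemma) is easy to verify numerically. The sketch below uses an arbitrary diagonal matrix and hand-picked vectors, chosen only so that the denominator is safely non-zero:

```python
import numpy as np

A = np.diag([2.0, 3.0, 4.0, 5.0])
alpha = np.array([1.0, 1.0, 0.0, 0.0])
beta = np.array([1.0, 0.0, 1.0, 0.0])

Ainv = np.linalg.inv(A)
# Eq. (7):
# (A + alpha beta^T)^{-1}
#   = A^{-1} - (A^{-1} alpha)(beta^T A^{-1}) / (1 + beta^T A^{-1} alpha)
denom = 1.0 + beta @ Ainv @ alpha
updated = Ainv - np.outer(Ainv @ alpha, beta @ Ainv) / denom

direct = np.linalg.inv(A + np.outer(alpha, beta))
print(np.allclose(updated, direct))   # True
```

Applying the update twice, once for eα^T and once for βe^T, is exactly how the text reuses the single inverse of I − P̃_UU^1 for every subsequent held-out point.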
≜ Σ_{i=1}^n Σ_{j=1}^n α_ij ∂w_ij/∂σ_d.

The crucial observation is the existence of the coefficients α_ij, which are independent of the dimension index d. Therefore, they can be pre-computed efficiently. Algorithm 2 below presents the efficient approach to gradient calculation.

Algorithm 2 Efficient gradient calculation
1: for i, j = 1, ..., n do
2:   for all feature dimensions d on which either x_i or x_j is nonzero do
3:     g_d ← g_d + α_ij · ∂w_ij/∂σ_d
4:   end for
5: end for

Figure 1: Examples of degenerate graphs learned by pure LOOHL.

Letting sw_i ≜ Σ_{k=1}^n w_ik and δ(·) be the Kronecker delta, we derive the form of α_ij as:

α_ij = sw_i^{-1} Σ_{t=1}^l β_{i−l+1}^t ( f_{U,j−l+1}^t − Σ_{k=l+1}^n p_ik f_{U,k−l+1}^t − p_it f_{U,1}^t − Σ_{k=1, k≠t}^l p_ik f_k )  for i > l and j > l;

α_ij = sw_i^{-1} Σ_{t=1}^l β_{i−l+1}^t ( f_{U,1}^t δ(t = j) + f_j δ(t ≠ j) − p_it f_{U,1}^t − Σ_{k=l+1}^n p_ik f_{U,k−l+1}^t − Σ_{k=1, k≠t}^l p_ik f_k )  for i > l and j ≤ l;

α_ij = sw_i^{-1} β_1^i ( f_{U,j−l+1}^i − Σ_{k=l+1}^n p_ik f_{U,k−l+1}^i − Σ_{k=1}^l p_ik f_k )  for i ≤ l and j > l;

α_ij = sw_i^{-1} β_1^i ( f_j − Σ_{k=l+1}^n p_ik f_{U,k−l+1}^i − Σ_{k=1}^l p_ik f_k )  for i ≤ l and j ≤ l;

and α_ii is fixed to 0 for all i, since we fix w_ii = p_ii = 0. All α_ij can be computed in O(u^2 l) time, and Algorithm 2 can be completed in O(n^2 m̃) time, where

m̃ ≜ 2 n^{-1}(n − 1)^{-1} Σ_{1≤i<j≤n} |{d ∈ 1...m : x_i or x_j is nonzero on feature d}|.

In many applications such as text classification and image pattern recognition the data is very sparse, and m̃ ≪ m. In sum, the computational cost has been reduced from O(lu(mn + u^2)) to O(lnu + n^2 m̃ + u^3). The space cost is mild at O(n^2 + n m̃).

4 Regularizing the Graph Learning

Similar to the MinEnt method, purely applying LOOHL can lead to degenerate graphs. In this section we show two such examples and then propose a simple approach that regularizes the graph learning process. The two degenerate graphs are shown in Figure 1. In example (a), points with the same x_v coordinate are from the same class.
For each labeled point, there is another labeled point from the opposite class with the same x_h coordinate. So leave-one-out hyperparameter learning will push 1/σ_h to zero and 1/σ_v to infinity, i.e., points can transfer only horizontally. The graph therefore effectively splits into six disconnected sub-graphs, each sharing the same x_v coordinate, as shown in (a). The desired gradual change of label from positive to negative along dimension x_v thus cannot appear; as a result, the point at the question mark cannot hit any labeled point and cannot be classified. One way to prevent such degenerate graphs is to keep 1/σ_v from growing too large, e.g., with a regularizer such as Σ_d (1/σ_d)^2.

In example (b), although the negative points encourage both horizontal and vertical walks, a horizontal walk would make the leave-one-out error large on the positive points. So the learned 1/σ_v will be far smaller than 1/σ_h; i.e., the result strongly encourages walking in the vertical direction and ignores the information from the horizontal direction. As a result, the point at the question mark will be labeled as positive, although by the nearest-neighbor intuition it should be labeled as negative. Notice that the four negative points are partitioned into two groups, as shown in the figure. In such a case the regularizer Σ_d (1/σ_d)^2 will not help utilize the informative dimensions; a different regularizer that encourages the use of more dimensions may be better. One simple regularizer with this property minimizes the variance of the inverse bandwidths, Σ_d (1/σ_d − µ)^2, where µ = m^{-1} Σ_d 1/σ_d, assuming that the mean is non-zero.

Table 1: Dataset properties. Sparsity is the average frequency with which features are zero in the whole dataset. The rightmost column gives the size of the whole dataset from which the labeled data in the experiments are sampled. Some data in the text dataset have unknown labels and are thus always used as unlabeled.

It is a priori unclear which regularizer will be better empirically, but for the datasets in our experiments, the minimum variance regularizer is overwhelmingly better, even when useless features are intentionally added to the datasets. Since gradient-based optimization can get stuck in local minima, it is advantageous to test several different parameter initializations. With this in mind, we implement a simple approximation to the minimum variance regularizer that also tests different parameter initializations. We discretize µ and minimize the leave-one-out loss plus Σ_d (1/σ_d − 1/σ̃)^2, where σ̃ is fixed a priori to one of several possible values. We run with each σ̃, setting all initial σ_d to σ̃, and then choose the function produced by the value of σ̃ that yields the smallest regularized cost. This process is similar to restarting from various values to avoid local minima, except that we simultaneously try different means of the estimated optimal bandwidth. A similar way to regularize is to use a Gaussian prior with mean µ^{-1} and minimize Q + C Σ_d (1/σ_d − 1/µ)^2 with respect to σ_d and µ simultaneously.

5 Experimental Results

Using HEM as the base classifier, we compare the test accuracy of three model selection methods: LOOHL, 5-CV (tying all bandwidths and choosing by 5-fold cross validation), and MinEnt, each with both thresholding and CMN. Since the topic of this paper is how to learn the hyperparameters of a graph, we pay more attention to how the performance of a given classifier can be improved by learning the graph than to comparisons between different classifiers, i.e., comparisons with other semi-supervised or supervised learning algorithms. Ionosphere is from the UCI repository. The other four datasets used in the experiments are from the NIPS 2003 workshop on feature selection challenge.
Each of them has two versions: an original version and a probe version, which adds useless probe features in order to investigate the algorithm's performance in the presence of useless features (though at this stage we do not use the algorithm as a feature selector). Since the workshop did not provide the original datasets, we downloaded them from other sites. Our original intention was to use the original versions that we downloaded and to reproduce the probe versions ourselves using the pre-processing described in the NIPS 2003 workshop, so that we could check the performance of the algorithms on datasets with and without redundant features. Unfortunately, we found that with our own attempt at pre-processing, the datasets with probes yield far different accuracies compared with the probe datasets downloaded from the workshop web site. Thus we use the original version and the probe version downloaded from different sources, and comparisons between them should be made with care, though the demonstration of LOOHL's efficacy is not affected. The properties of the five datasets are summarized in Table 1.

We randomly pick the labeled subset L from all available labeled data under the constraint that both classes must be present in L. The remaining labeled and unlabeled data are used as unlabeled data. For example, by |L| = 20 for the text dataset we mean randomly picking 20 points from the 600 labeled data points as labeled, and labeling the other 1980 points using our algorithm. Finally, we calculate the prediction accuracy on the 580 (originally) labeled points. For the other datasets, e.g. cancer, testing is on 180 points, since we know the labels of all points. For each fixed |L|, this random test is conducted 10 times and the average accuracy is reported. Then |L| is varied. We normalized all input feature vectors to have length 1.
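The sampling protocol just described can be sketched as follows. This is an illustrative reconstruction rather than the authors' code; the rejection loop enforces the both-classes constraint and the final step applies the unit-length normalization:

```python
import numpy as np

def sample_labeled(y, size, rng):
    """Pick a labeled subset L such that both classes appear in L."""
    while True:
        L = rng.choice(len(y), size=size, replace=False)
        if 0 < y[L].sum() < size:          # at least one of each class
            return L

def unit_normalize(X):
    """Normalize every input feature vector to length 1."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

rng = np.random.default_rng(0)
y = np.array([0, 0, 0, 1, 1, 1, 1, 0])      # hypothetical labels
L = sample_labeled(y, size=4, rng=rng)
X = unit_normalize(np.array([[3.0, 4.0], [1.0, 0.0], [2.0, 2.0]]))
print(np.linalg.norm(X, axis=1))   # [1. 1. 1.]
```

The rejection loop terminates quickly in practice whenever both classes have a non-trivial share of the labeled pool.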
Figure 2: Accuracy (in percent) vs. number of labeled data points: (a) 4 vs 9 (original), (b) cancer (original), (c) text (original), (d) thrombin (original), (e) 4 vs 9 (probe), (f) cancer (probe), (g) text (probe), (h) thrombin (probe).

The initial common bandwidth and the smoothing factor ϵ in MinEnt are selected by five-fold cross validation. For LOOHL, we fix h^+(x) = (1 − x)^2 and h^-(x) = x^2. The final objective function is

C_1 × Loo_loss_Normal + C_2 × Σ_d (1/σ_d − 1/σ̃)^2,  where  Loo_loss_Normal ≜ (2r^+)^{-1} Σ_{y_i=1} Loo_loss(x_i, y_i) + (2r^-)^{-1} Σ_{y_i=0} Loo_loss(x_i, y_i),    (8)

and there are r^+ positive and r^- negative labeled examples. For each C_1:C_2 ratio, we run with σ̃ = 0.05, 0.1, 0.15, 0.2, 0.25, 0.3 for all datasets and select the function corresponding to the smallest objective value for use in cross-validation testing. The final C_1:C_2 value is picked by five-fold cross validation over the discrete levels 10^{-i}, i = 1, 2, 3, 4, 5, since a strong regularizer is needed given the large number of features (variables) and the much smaller number of labeled points. The optimization solver we use is the Toolkit for Advanced Optimization [2]. From the results in Figure 2 and Figure 3, we can make the following observations and conclusions:

1. LOOHL generally outperforms 5-CV and MinEnt. Both LOOHL+Thrd and LOOHL+CMN outperform 5-CV and MinEnt (regardless of Thrd or CMN) on all datasets except thrombin and ionosphere, where either LOOHL+CMN or LOOHL+Thrd still eventually performs best.

2. For 5-CV, CMN is almost always better than thresholding, except on the original forms of the cancer and thrombin datasets, where CMN hurts 5-CV. In [11], it is claimed that although the theory of HEM is sound, CMN is still necessary to achieve reasonable performance, because the underlying graph is often poorly estimated and may not reflect the classification goal, i.e., one should not rely exclusively on the graph.
Now that our LOOHL aims at learning a good graph, the ideal case is that the learned graph is so well suited to the classification task that the improvement from CMN is small. In other words, the difference between LOOHL+CMN and LOOHL+Thrd, compared with the difference between 5-CV+CMN and 5-CV+Thrd, can be viewed as an approximate indicator of how well the graph is learned by LOOHL. The efficacy of LOOHL can be clearly observed on the datasets 4vs9, cancer, text, ionosphere, and the original version of thrombin. In these cases, LOOHL+Thrd already achieves high accuracy, and LOOHL+CMN offers little further improvement or even hurts performance due to inaccurate class ratio estimation. In fact, LOOHL+Thrd performs reliably well on all datasets. It is thus desirable to learn the bandwidth for each dimension of the feature vector, after which there is no longer any need to post-process using class ratio information.

3. The performance of MinEnt is generally inferior to 5-CV and LOOHL. MinEnt+Thrd has an equal chance of outperforming or losing to 5-CV+Thrd, while 5-CV+CMN is almost always better than MinEnt+CMN. Most of the time, MinEnt+CMN performs significantly better than MinEnt+Thrd, so we conclude that MinEnt fails to learn a good graph. This may be due to convergence to a poor local minimum, or because the idea of minimizing the entropy on the unlabeled data is by itself insufficient.

Figure 3: Accuracy of Ionosphere (in percent) vs. number of labeled data points.

Figure 4: Accuracy comparison (in percent) between the prior minimizing the sum of squared inverse bandwidths Σ_d σ_d^{-2} and the prior minimizing the variance of the inverse bandwidths.

4. For these datasets, assuming low variance of the inverse bandwidths (with discretization) as the regularizer is more reasonable than assuming that many features are irrelevant to the classification. This holds even for the probe versions of the datasets. Figure 4 shows the comparison.
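The class-balanced objective of eq. (8), with the quadratic losses h+(x) = (1 − x)² and h−(x) = x² used in the experiments, can be sketched as follows. The held-out soft labels here are made-up numbers purely for illustration:

```python
import numpy as np

def loo_loss_normal(f_held_out, y):
    """Eq. (8): (2 r^+)^{-1} sum of LOO losses over positives plus
    (2 r^-)^{-1} sum over negatives, with h+(x)=(1-x)^2 and h-(x)=x^2."""
    losses = np.where(y == 1, (1 - f_held_out) ** 2, f_held_out ** 2)
    pos, neg = (y == 1), (y == 0)
    return (losses[pos].sum() / (2 * pos.sum())
            + losses[neg].sum() / (2 * neg.sum()))

f = np.array([0.9, 0.6, 0.2, 0.4])   # hypothetical held-out soft labels
y = np.array([1, 1, 0, 0])
print(round(loo_loss_normal(f, y), 4))   # 0.0925
```

Dividing each class's losses by its own count 2r^± keeps a skewed class ratio in the labeled set from dominating the objective, which matters when |L| is as small as 20.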
6 Conclusions

In this paper we proposed learning the graph for graph based semi-supervised learning by minimizing the leave-one-out prediction error with a simple regularizer. Efficient gradient calculation algorithms are designed, and the empirical results are encouraging.

Acknowledgements

This work is partially funded by the Singapore-MIT Alliance. National ICT Australia is funded through the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council.

References

[1] Andreas Argyriou, Mark Herbster, and Massimiliano Pontil. Combining Graph Laplacians for Semi-Supervised Learning. In NIPS 2005, Vancouver, Canada, 2005.

[2] Steven Benson, Lois McInnes, Jorge Moré, and Jason Sarich. TAO User Manual. ANL/MCS-TM-242, http://www.mcs.anl.gov/tao, 2005.

[3] Avrim Blum and Shuchi Chawla. Learning from Labeled and Unlabeled Data Using Graph Mincuts. In ICML 2001.

[4] Miguel Á. Carreira-Perpiñán and Richard S. Zemel. Proximity Graphs for Clustering and Manifold Learning. In NIPS 2004.

[5] Olivier Chapelle, Vladimir Vapnik, Olivier Bousquet, and Sayan Mukherjee. Choosing Multiple Parameters for Support Vector Machines. Machine Learning, 46:131–159, 2002.

[6] Olivier Chapelle, Jason Weston, and Bernhard Schölkopf. Cluster Kernels for Semi-Supervised Learning. In NIPS 2002.

[7] Fabio G. Cozman, Ira Cohen, and Marcelo C. Cirelo. Semi-Supervised Learning of Mixture Models and Bayesian Networks. In ICML 2003.

[8] Thorsten Joachims. Transductive Learning via Spectral Graph Partitioning. In ICML 2003.

[9] Ashish Kapoor, Yuan Qi, Hyungil Ahn, and Rosalind Picard. Hyperparameter and Kernel Learning for Graph Based Semi-Supervised Classification. In NIPS 2005.

[10] Alexander Smola and Risi Kondor. Kernels and Regularization on Graphs. In COLT 2003.

[11] Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. In ICML 2003.
[12] Xiaojin Zhu, John Lafferty, and Zoubin Ghahramani. Semi-Supervised Learning: From Gaussian Fields to Gaussian Processes. CMU Technical Report CMU-CS-03-175. [13] Xiaojin Zhu, Jaz Kandola, Zoubin Ghahramani, and John Lafferty. Non-parametric Transforms of Graph Kernels for Semi-Supervised Learning. In NIPS 2004.
Boosting Structured Prediction for Imitation Learning Nathan Ratliff, David Bradley, J. Andrew Bagnell, Joel Chestnutt Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 {ndr, dbradley, dbagnell, joel.chestnutt}@ri.cmu.edu

Abstract

The Maximum Margin Planning (MMP) (Ratliff et al., 2006) algorithm solves imitation learning problems by learning linear mappings from features to cost functions in a planning domain. The learned policy is the result of minimum-cost planning using these cost functions. The mappings are chosen so that example policies (or trajectories) given by a teacher appear to be lower cost (with a loss-scaled margin) than any other policy for a given planning domain. We provide a novel approach, MMPBOOST, based on the functional gradient descent view of boosting (Mason et al., 1999; Friedman, 1999a), that extends MMP by "boosting" in new features. This approach uses simple binary classification or regression to improve the performance of MMP imitation learning, and naturally extends to the class of structured maximum margin prediction problems (Taskar et al., 2005). Our technique is applied to navigation and planning problems for outdoor mobile robots and robotic legged locomotion.

1 Introduction

"Imitation learning" of control or navigational behaviors is important in many application areas. Recently, (Ratliff et al., 2006) demonstrated that imitation learning of long-horizon and goal-directed behavior can be naturally formulated as a structured prediction problem over a space of policies or system trajectories. In that work, the authors demonstrate that efficient planning algorithms (e.g. for deterministic systems or general Markov Decision Problems) can be taught to generalize a set of examples provided by a supervisor. In essence, the algorithm attempts to combine features linearly into costs so that the resulting cost functions make the demonstrated example policies appear optimal by a margin over all other policies.
The technique exploits the observation that while a desired behavior or control strategy is often quite clear to a human expert, hand-designing cost functions that induce this behavior may be difficult. Unfortunately, this Maximum Margin Planning (MMP) approach, as well as the related techniques for maximum margin structured learning developed in (Taskar et al., 2005) and (Taskar et al., 2003), depends on linearly combining a prespecified set of features.¹ Adopting a new variant of the general ANYBOOST algorithm described in (Mason et al., 1999), or similarly (Friedman, 1999a), we propose an alternate extension to Maximum Margin Planning specifically, and maximum margin structured learning generally, in which we perform subgradient descent in the space of cost functions rather than within any fixed parameterization. In this way, we show that we can "boost in" new features using simple classification, and that these new features help solve a more difficult structured prediction problem. The application of boosting to structured learning techniques was first explored in (Dietterich et al., 2004), within the context of boosting Conditional Random Fields. This paper extends that result to maximum-margin techniques and provides a more general functional gradient derivation.

We then demonstrate three applications of our technique. First, using only smoothed versions of an overhead image as input, we show that MMPBOOST is able to match human navigation performance well on the task of navigating outdoor terrain. Using the same input, linear MMP, by contrast, performs almost no better than straight-line paths. Next we demonstrate that we can develop a local obstacle detection/avoidance control system for an autonomous outdoor robot by observing an expert teleoperator drive.

¹Alternatively, all of these methods admit straightforward kernelization, allowing implicit learning within a Reproducing Kernel Hilbert space, but these kernel versions can be extremely memory- and computation-intensive.
Finally, we demonstrate on legged locomotion problems the use of a slow but highly accurate planner to train a fast, approximate planner using MMPBOOST. 2 Preliminaries We model, as in (Ratliff et al., 2006), planning problems as discrete Markov Decision Processes. Let s and a index the state and action spaces S and A, respectively, and let $p_i(s'|s,a)$ denote the transition probabilities for example i. A discount factor on rewards (if any) is absorbed into the transition probabilities. Our cost (negative reward) functions are learned from supervised trajectories to produce policies that mimic the demonstrated behavior. Policies are described by $\mu \in \mathcal{G}$, where $\mathcal{G}$ is the space of all state-action frequency counts. In the case of deterministic planning, µ is simply an indicator variable denoting whether the state-action transition (s, a) is encountered in the optimal policy. In the following, we use M both to denote a particular MDP, and to refer to the set of all state-action pairs in that MDP. We hypothesize the existence of a base feature space X from which all other features are derived. A cost function over an MDP M is defined through this space as $c(f_M)$, where $f_M : M \to X$ denotes a mapping from state-action pairs to points in base feature space, and c is a cost function over X. Intuitively, each state-action pair in the MDP has an associated feature vector, and the cost of that state-action pair is a function of that vector. The input to the linear MMP algorithm is a set of training instances $D = \{(M_i, p_i, F_i, \mu_i, l_i)\}_{i=1}^{n}$. Each training instance consists of an MDP with transition probabilities $p_i$ and state-action pairs $(s_i, a_i) \in M_i$ over which d-dimensional vectors of features mapped from the base feature space X are placed in the form of a $d \times |M_i|$ feature matrix $F_i$. In linear MMP, $F_i$ is related to c above by $[w^T F_i]_{s,a} = c(f_{M_i}(s,a))$. $\mu_i$ denotes the desired trajectory (or full policy) that exemplifies the behavior we hope to match.
The loss vector $l_i$ is a vector on the state-action pairs that indicates the loss for failing to match the demonstrated trajectory $\mu_i$. Typically, in this work we use a simple loss function that is 0 on all states occupied in the example trajectory and 1 elsewhere. We use subscripts to denote indexing by training instance, and reserve superscripts for indexing into vectors. (E.g. $\mu_i^{s,a}$ is the expected state-action frequency for state s and action a of example i.) It is useful for some problems, such as robot path planning, to imagine representing the features as a set of maps and example paths through those maps. For instance, one feature map might indicate the elevation at each state, another the slope, and a third the presence of vegetation. 3 Theory We discuss briefly the linear MMP regularized risk function as derived in (Ratliff et al., 2006) and provide the subgradient formula. We then present an intuitive and algorithmic exposition of the boosted version of this algorithm we use to learn a nonlinear cost function. The precise derivation of this algorithm is available as an appendix to the extended version of the paper, which can be found on the authors’ website. 3.1 The Maximum Margin Planning risk function Crucial to the Maximum Margin Planning (MMP) approach is the development of a convex, but non-differentiable, regularized risk function for the general margin- or slack-scaled (Tsochantaridis et al., 2005) maximum margin structured prediction problem. In (Ratliff et al., 2006), the authors show that a subgradient descent procedure on this objective function can utilize efficient inference techniques, resulting in an algorithm that is tractable in both computation and memory for large problems.
The risk function under this framework is
$$R(w) = \frac{1}{n}\sum_{i=1}^{n} \beta_i \left( w^T F_i \mu_i - \min_{\mu \in \mathcal{G}_i} (w^T F_i - l_i^T)\mu \right) + \frac{\lambda}{2}\|w\|^2,$$
which gives the following subgradient with respect to w:
$$g_w = \frac{1}{n}\sum_{i=1}^{n} F_i \Delta_w\mu_i + \lambda w.$$
Here $F_i$ is the current set of learned features over example i, $\mu^* = \arg\min_{\mu \in \mathcal{G}_i} (w^T F_i - l_i^T)\mu$, and $\Delta_w\mu_i = \mu_i - \mu^*$. This latter expression points out that, intuitively, the subgradient compares the state-action visitation frequency counts between the example policy and the optimal policy with respect to the current loss-augmented cost function. The algorithm in its most basic form is given by the update rule $w_{t+1} \leftarrow w_t - \gamma_t g_t$, where $\{\gamma_t\}_{t=1}^{\infty}$ is a prespecified stepsize sequence and $g_t$ is a subgradient at the current timestep t. Note that computing the subgradient requires solving the problem $\mu^* = \arg\min_{\mu \in \mathcal{G}_i} (w^T F_i - l_i^T)\mu$ for each MDP. This is precisely the problem of solving the particular MDP with the cost function $w^T F_i - l_i^T$, and can be implemented efficiently via a myriad of specialized algorithms such as A* in the context of planning. 3.2 Structured boosting of MMP Maximum margin planning in its original formulation assumed the cost map is a linear function of a set of prespecified features. This is arguably the most restrictive assumption made in this framework. Similar to many machine learning algorithms, we find in practice that substantial effort is put into choosing these features well. In this section, we describe at an intuitive and algorithmic level a boosting procedure for learning a nonlinear function of our base features. For clarity of exposition, a full derivation in terms of the functional gradient descent view of boosting (Mason et al., 1999) is postponed to the appendix of the extended version of this paper (available from the authors’ website). We encourage the reader to review this derivation as it differs in flavor from those previously seen in the literature in ways important to its application to general structured prediction problems.
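The subgradient update rule of Section 3.1 can be sketched on a toy deterministic problem where the inner minimization is solved by enumerating a handful of candidate paths (the graph, edge features, and step sizes below are all invented for illustration; a real implementation would call a planner such as A* in place of the enumeration):

```python
import numpy as np

# Toy deterministic "MDP": three alternative paths from start to goal,
# each described as an indicator vector mu over 4 edges.
# Edge features (2 features x 4 edges) play the role of F_i.
F = np.array([[1.0, 0.2, 0.9, 0.1],
              [0.1, 1.0, 0.2, 0.8]])
paths = [np.array([1., 0., 0., 0.]),   # uses edge 0
         np.array([0., 1., 0., 0.]),   # uses edge 1 (the teacher's path)
         np.array([0., 0., 1., 1.])]   # uses edges 2 and 3
mu_e = paths[1]                         # example (demonstrated) policy
l = 1.0 - mu_e                          # loss: 0 on demonstrated edges, 1 elsewhere

w = np.zeros(2)
lam = 0.01
for t in range(1, 200):
    costs = [(w @ F - l) @ mu for mu in paths]   # loss-augmented path costs
    mu_star = paths[int(np.argmin(costs))]       # inner minimization (the "planner")
    g = F @ (mu_e - mu_star) + lam * w           # subgradient from the text
    w = w - (1.0 / t) * g                        # w_{t+1} <- w_t - gamma_t * g_t

# After training, the demonstrated path should be min-cost even without
# the loss augmentation.
plain = [(w @ F) @ mu for mu in paths]
print(int(np.argmin(plain)))  # expected: 1 (the teacher's path)
```

The update lowers the cost of the example path and raises the cost of whichever competing path the loss-augmented planner currently prefers, until the demonstrated path is optimal by a margin.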
This gradient boosting framework serves as a reduction (Beygelzimer et al., 2005) from the problem of finding good features for structured prediction to a problem of simple classification. At a high level, this algorithm learns a new feature by learning a classifier that is best correlated with the changes we would like to have made to locally decrease the loss had we an infinite number of parameters at our disposal. In the case of MMPBOOST, this forms the following algorithm, which is iterated:

• Fit the current model (using the current features) and compute the resulting loss-augmented cost map.
• Run the planner over this loss-augmented cost map to get the best loss-augmented path. Presumably, when the current feature set is not yet expressive enough, this path will differ significantly from the example path.
• Form positive examples by gathering feature vectors encountered along this loss-augmented path $\{(x^{(i)}_{\mathrm{planned}}, 1)\}$ and form negative examples by gathering feature vectors encountered along the example path $\{(x^{(j)}_{\mathrm{example}}, -1)\}$.
• Learn a classifier using this data set to generalize these suggestions to other points on the map.
• Apply this classifier to every cell of all example maps and add the result as a new feature to the feature matrix.

This simple procedure forms the MMPBOOST algorithm.

Figure 1: The four subimages to the left show (clockwise from upper left) a grayscale image used as base features for a holdout region, the first boosted feature learned by boosted MMP for this region, the results of boosted MMP on an example over this region (example red, learned path green), and the best linear fit of this limited feature set. The plot on the right compares the boosting objective function value (red) and the loss on a holdout set (blue) per boosting iteration between linear MMP (dashed) and boosted MMP (solid).
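A minimal sketch of one such boosting iteration, with a hand-rolled decision stump standing in for the regression trees the paper uses (all feature values and map sizes below are invented):

```python
import numpy as np

def fit_stump(X, y):
    """Tiny decision stump: threshold one feature to best separate y in {-1, +1}."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1.0, -1.0):
                pred = np.where(X[:, j] > thr, sign, -sign)
                acc = np.mean(pred == y)
                if best is None or acc > best[0]:
                    best = (acc, j, thr, sign)
    return best[1:]  # (feature index, threshold, sign)

def stump_predict(params, X):
    j, thr, sign = params
    return np.where(X[:, j] > thr, sign, -sign)

# Feature vectors gathered along the loss-augmented path (label +1)
# and along the example path (label -1), as in the bullet list above.
x_planned = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]])
x_example = np.array([[0.1, 0.9], [0.2, 0.8], [0.3, 0.6]])
X = np.vstack([x_planned, x_example])
y = np.array([1., 1., 1., -1., -1., -1.])

stump = fit_stump(X, y)

# Apply the learned classifier to every cell of a (toy) feature map and
# append its output as a new feature channel.
feature_map = np.random.default_rng(1).random((5, 5, 2))   # H x W x d base features
new_channel = stump_predict(stump, feature_map.reshape(-1, 2)).reshape(5, 5)
augmented = np.concatenate([feature_map, new_channel[..., None]], axis=2)
print(augmented.shape)  # (5, 5, 3)
```

The new channel assigns high values (and hence, with a positive weight, high cost) to cells that look like the erroneous planned path, which is exactly the incremental correction the boosting step is after.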
If no linear function of the original features can represent the cost variation necessary to explain the decisions made by the trainer, this algorithm tries to find a new feature, itself a nonlinear function of the original base features, that can best simultaneously raise the cost of the current erroneous path and lower the cost of the example path. Importantly, this function takes the form of a classifier that can generalize this information to each cell of every map. Adding this feature to the current feature set provides an incremental step toward explaining the decisions made in the example paths. 4 Applications In this section we demonstrate on three diverse problems how MMPBOOST improves performance in navigation and planning tasks. 4.1 Imitation Learning for Path Planning We first consider a problem of learning to imitate example paths drawn by humans on publicly available overhead imagery. In this experiment, a teacher demonstrates optimal paths between a set of start and goal points on the image, and we compare the performance of MMPBOOST to that of the linear MMP algorithm in learning to imitate the behavior. The base features for this experiment consisted of the raw grayscale image, 5 Gaussian convolutions of it with standard deviations 1, 3, 5, 7, and 9, and a constant feature. Cost maps were created as a linear combination of these features in the case of MMP, and as a nonlinear function of these features in the case of MMPBOOST. The planner being trained was an 8-connected implementation of A*. The results of these experiments are shown in Figure 1. The upper left panel on the left side of that figure shows the grayscale overhead image of the holdout region used for testing. The training region was similar in nature, but taken over a different location.
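A rough sketch of how such a base feature stack might be assembled, using a hand-rolled separable Gaussian blur on a random stand-in image (the image and its size are invented; the standard deviations follow the text):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian convolution of a 2-D array, reflect-padded."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode='reflect')
    # blur rows, then columns
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 0, tmp)

rng = np.random.default_rng(0)
image = rng.random((32, 32))            # stand-in for the overhead grayscale image
stack = [image] + [gaussian_blur(image, s) for s in (1, 3, 5, 7, 9)]
stack.append(np.ones_like(image))       # constant feature
F = np.stack([f.ravel() for f in stack])  # 7 x |M| feature matrix, one column per cell
print(F.shape)  # (7, 1024)
```

A linear cost map is then just `w @ F` reshaped back onto the grid, which makes concrete why this feature set is so impoverished for linear MMP: every candidate cost map is a weighted sum of blurred copies of the same image.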
The features are particularly difficult for MMP since the space of cost maps it considers for this problem consists only of linear combinations of the same image at different resolutions: imagine taking various blurred versions of an image and trying to combine them to make any reasonable cost map. The lower left panel on the left side of Figure 1 shows that the best cost map MMP was able to find within this space was largely just a map with uniformly high cost everywhere. The learned cost map was largely uninformative, causing the planner to choose the straight-line path between endpoints. The lower right panel on the left side of Figure 1 shows the result of MMPBOOST on this problem on a holdout image of an area similar to that on which we trained. In this instance, we used regression trees with 10 terminal nodes as our dictionary H, and trained them on the base features to match the functional gradient as described in Section 3.2. Since MMPBOOST searches through a space of nonlinear cost functions, it is able to perform significantly better than the linear MMP.

Figure 2: Left: An example learned cost map for a narrow path through trees, showing the engineered system (blue line) wanting to take a shortcut to the goal by veering off into dense woods instead of staying on the path as the human demonstrated (green line). In the learned cost map several boosted features combine to make the lowest-cost path (red line) match the human’s preference of staying on the path. The robot is currently located in the center of the cost map. Right: A graph of the average A* path loss over the examples in each round of boosting. In just a few rounds the learned system exceeds the performance of the carefully engineered system.
Interestingly, the first feature it learned to explain the supervised behavior was to a large extent a road detection classifier. The right panel of Figure 1 compares plots of the objective value (red) and the loss on the holdout set (blue) per iteration between the linear MMP (dashed) and MMPBOOST (solid). That first feature shown in Figure 1 largely represents the output of a path detector: the boosting algorithm chooses positive examples along the loss-augmented path and negative examples along the example path, and the two are largely disjoint. Surprisingly, MMPBOOST also outperformed linear MMP applied to additional features that were hand-engineered for this imagery. In principle, given example plans, MMPBOOST can act as a sophisticated image processing technique to transform any overhead (e.g. satellite) image directly into a cost map with no human intervention or feature engineering. 4.2 Learning from Human Driving Demonstration We next consider the problem of learning to mimic human driving of an autonomous robot in complex outdoor and off-road terrain. We assume that a coarse global planning problem has been solved using overhead imagery and the MMPBOOST application presented above. Here, we use MMPBOOST to learn a local obstacle detection/avoidance system. We consider the local region around the vehicle’s position at time t separately from the larger global environment. Our goal is to use the vehicle’s onboard sensors to detect obstacles which were not visible in the overhead imagery or were not present when the imagery was collected. The onboard sensor suite consists of ladar scanners to provide structural information about the environment, and color and NIR cameras to provide appearance information. From these sensors we compute a set of base features for each cell in a discretized 2-D map of the local area.
These base features include quantities such as the estimated elevation and slope of the ground plane in the cell, and the average color and density of ladar points in the cell for various height ranges above the estimated ground plane. As training data we use logged sensor data from several kilometers of teleoperation of a large mobile robot through a challenging outdoor environment by an experienced operator. In the previous example, the algorithm had access to both the complete path demonstrated by the teacher and the same input data (the overhead image) the teacher used while generating the path. In this example, however, not only is the input data different (since the teacher generally controls the robot from behind and to the side, using their own prior knowledge of the environment and a highly capable vision system), but we face the additional challenge of estimating the path planned by the teacher at a particular time step from the vehicle motion we observe in future time steps, when the teacher is using additional data. For this experiment we assume that the next 10 m of the path driven by the vehicle after time t matches the operator’s intended path at time t, and only compute loss over that section of the path. In practice this means that we create a set of local examples from each teleoperated path by sampling the internal state of the robot at discrete points in time. At each time t we record the feature map generated by the robot’s onboard sensors of the local 10 m radius area surrounding it, as well as the path the robot followed to the boundary of that area. Additionally, we model the operator’s prior knowledge of the environment and their sensing of obstacles beyond the 10 m range by using our global planning solution to generate the minimum path costs from a set of points on the boundary of each local map to the global goal.
The operator also attempted to match the range at which he reacted to obstacles not visible in the overhead data (such as vehicles that were placed in the robot’s path) with the 10 m radius of the local map. An 8-connected variant of A* then chooses a path to one of the points on the boundary of the local map that minimizes the sum of the costs accumulated along the path to the boundary point and the cost-to-goal from the boundary point to the goal. Using 8-terminal-node classification trees as our dictionary H, we then apply the MMPBOOST algorithm to determine transformations from base features to local costs so that the local trajectories executed by the human are chosen by the planner with large margin over all the other possible local trajectories. The results of running MMPBOOST on the 301 examples in our data set are compared in Figure 2 to the results given by the current human-engineered cost production system used on the robot. The engineered system is the result of many man-hours of parameter tuning over weeks of field testing. The learned system started with the engineered feature maps, and then boosted in additional features as necessary. After just a few iterations of boosting, the learned system displays significantly lower average loss than the engineered system, and corrects important navigational errors such as the one shown. 4.3 Learning a Fast Planner from a Slower One Legged robots have unique capabilities not found in many mobile robots. In particular, they can step over or onto obstacles in their environment, allowing them to traverse complicated terrain. Algorithms have been developed which plan for foot placement in these environments, and have been used successfully on several biped robots (Chestnutt et al., 2005). In these cases, the planner evaluates various steps the robot can execute, to find a sequence of steps that is safe and is within the robot’s capabilities.
Another approach to legged robot navigation uses local techniques to reactively adjust foot placement while following a predefined path (Yagi & Lumelsky, 1999). This approach can fall into local minima or become stuck if the predefined path does not have valid footholds along its entire length. Footstep planners have been shown to produce very good footstep sequences allowing legged robots to efficiently traverse a wide variety of terrain. This approach uses much of the robot’s unique abilities, but is more computationally expensive than traditional mobile robot planners. Footstep planning occurs in a high-dimensional state space and is therefore often too computationally burdensome to be used for real-time replanning, limiting its scope of application to largely static environments. For most applications, the footstep planner implicitly solves a low-dimensional navigational problem simultaneously with the footstep placement problem. Using MMPBOOST, we use body trajectories produced by the footstep planner to learn the nuances of this navigational problem in the form of a 2.5-dimensional navigational planner that can reproduce these trajectories; in effect, we train a simple navigational planner to reproduce the body trajectories that typically result from a sophisticated footstep planner. We could use the resulting navigation planner in combination with a reactive solution (as in (Yagi & Lumelsky, 1999)). Instead, we pursue a hybrid approach of using the resulting simple planner as a heuristic to guide the footstep planner. Using a 2-dimensional robot planner as a heuristic has been shown previously (Chestnutt et al., 2005) to dramatically improve planning performance, but the planner must be manually tuned to provide costs that serve as reasonable approximations of the true cost. To combat these computational problems we focus on the heuristic, which largely defines the behavior of the A* planner.
Poorly informed admissible heuristics can cause the planner to erroneously attempt numerous dead ends before happening upon the optimal solution. On the other hand, well-informed inadmissible heuristics can pull the planner quickly toward a solution whose cost, though suboptimal, is very close to the minimum. This lower-dimensional planner is then used in the heuristic to efficiently and intelligently guide the footstep planner toward the goal, effectively displacing a large portion of the computational burden. We demonstrate our results in both simulations and real-world experiments.

Figure 3: Left is an image of the robot used for the quadruped experiments. The center pair of images shows a typical height map (top), and the corresponding learned cost map (bottom) from a holdout set of the biped planning experiments. Notice how platform-like regions are given low costs toward the center but higher costs toward the edges, and the learned features interact to form lower-cost chutes that direct the planner through complicated regions. Right are two histograms showing the ratio distribution of the speed of both the admissible Euclidean (top) and the engineered heuristic (bottom) over an uninflated MMPBOOST heuristic on a holdout set of 90 examples from the biped experiment. In both cases, the MMPBOOST heuristic was uniformly better in terms of speed.

                          biped admissible                  biped inflated
                          cost diff         speedup         cost diff      speedup
MMPBOOST vs Euclidean     0.91 (10.08)      123.39 (270.97) 9.82 (11.78)   10.55 (17.51)
MMPBOOST vs Engineered    -0.69 (6.7)       20.31 (33.11)   2.55 (6.82)    11.26 (32.07)

                          biped best-first                  quadruped inflated
                          cost diff         speedup         cost diff      speedup
MMPBOOST vs Euclidean     -609.66 (5315.03) 272.99 (1601.62) 3.69 (7.39)   2.19 (2.24)
MMPBOOST vs Engineered    3.42 (37.97)      6.4 (17.85)     -4.34 (8.93)   3.51 (4.11)

Figure 4: Statistics comparing the MMPBOOST heuristic to both a Euclidean and a discrete navigational heuristic; each entry gives the mean with the standard deviation in parentheses. See the text for descriptions of the values.
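The effect described here can be reproduced on a toy grid: below, an exact cost-to-go table (computed offline by BFS from the goal) stands in for a well-informed learned heuristic and is compared against the weak but admissible Manhattan-distance heuristic; the maze itself is invented:

```python
import heapq
from collections import deque

GRID = ["..........",
        ".########.",
        ".#......#.",
        ".#.####.#.",
        "...#..#.#.",
        "####..#.#.",
        "......#...",
        ".#####.##.",
        ".......#..",
        ".........."]
H, W = len(GRID), len(GRID[0])
START, GOAL = (0, 0), (9, 9)

def neighbors(p):
    r, c = p
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < H and 0 <= nc < W and GRID[nr][nc] == '.':
            yield (nr, nc)

def astar(h):
    """A* returning (path cost, number of node expansions) for heuristic h."""
    open_, g = [(h(START), START)], {START: 0}
    closed, expanded = set(), 0
    while open_:
        _, u = heapq.heappop(open_)
        if u in closed:
            continue
        closed.add(u)
        expanded += 1
        if u == GOAL:
            return g[u], expanded
        for v in neighbors(u):
            if g[u] + 1 < g.get(v, float('inf')):
                g[v] = g[u] + 1
                heapq.heappush(open_, (g[v] + h(v), v))
    return None, expanded

# Weak but admissible heuristic: Manhattan distance to the goal.
manhattan = lambda p: abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1])

# Stand-in for a well-informed learned heuristic: the true cost-to-go,
# precomputed by BFS from the goal (a luxury a real planner lacks).
dist, q = {GOAL: 0}, deque([GOAL])
while q:
    u = q.popleft()
    for v in neighbors(u):
        if v not in dist:
            dist[v] = dist[u] + 1
            q.append(v)
true_h = lambda p: dist.get(p, float('inf'))

cost_e, exp_e = astar(manhattan)
cost_t, exp_t = astar(true_h)
# Both heuristics find an optimal path; the informed one expands no more nodes.
print(cost_e, cost_t, exp_e, exp_t)
```

Both heuristics recover the optimal cost, but the informed one wastes fewer expansions on dead ends, which is the behavior the learned MMPBOOST heuristic exploits at much larger scale.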
Our procedure is to run a footstep planner over a series of randomly drawn two-dimensional terrain height maps that describe the world the robot is to traverse. The footstep planner produces trajectories of the robot from start to goal over the terrain map. We then apply MMPBOOST, again using regression trees with 10 terminal nodes as the base classifier, to learn cost features and weights that turn height maps into cost functions so that a 2-dimensional planner over the cost map mimics the body trajectory. We apply the planner to two robots: first the HRP-2 biped robot and second the LittleDog2 quadruped robot. The quadruped tests were demonstrated on the robot.3 Figure 4 shows the resulting computational speedups (and the performance gains) of planning with the learned MMPBOOST heuristic over two previously implemented heuristics: a simple Euclidean heuristic that estimates the cost-to-go as the straight-line distance from the current state to the goal; and an alternative 2-dimensional navigational planner whose cost map was hand engineered. We tested three different versions of the planning configuration: (1) no inflation, in which the heuristic is expected to give its best approximation of the exact cost so that the heuristics are close to admissible (Euclidean is the only one that is truly admissible); (2) inflated, in which the heuristics are inflated by approximately 2.5 (this is the setting commonly used in practice for these planners); and (3) best-first search, in which search nodes are expanded solely based on their heuristic value. The cost diff column relates on average the extent to which the cost of planning under the MMPBOOST heuristic is above or below that of the opposing heuristic. Loosely speaking this indicates how many more footsteps are taken under the MMPBOOST heuristic, i.e. negative values support MMPBOOST. The speedup column relates the average ratio of total nodes searched between the heuristics.
In this case, large values are better, indicating the factor by which MMPBOOST outperforms its competition. The most direct measure of heuristic performance arguably comes from the best-first search results. In this case, both the biped and quadruped planner using the learned heuristic significantly outperform their counterparts under a Euclidean heuristic.4 While Euclidean often gets stuck for long periods of time in local minima, both the learned heuristic and, to a lesser extent, the engineered heuristic are able to navigate efficiently around these pitfalls. We note that A* biped performance gains were considerably higher: we believe this is because orientation plays a large role in planning for the quadruped. 5 Conclusions and Future Work MMPBOOST combines the powerful ideas of structured prediction and functional gradient descent, enabling learning by demonstration for a wide variety of applications. Future work will include extending the learning of mobile robot path planning to more complex configuration spaces that allow for modeling of vehicle dynamics. Further, we will pursue applications of the gradient boosting approach to other problems of structured prediction. Acknowledgments The authors gratefully acknowledge the partial support of this research by the DARPA Learning for Locomotion and UPI contracts, and thank John Langford for enlightening conversations on reductions of structured learning problems.

2 Boston Dynamics designed the robot and provided the motion capture system used in the tests.
3 A video demonstrating the robot walking across a terrain board is provided with this paper.

References Beygelzimer, A., Dani, V., Hayes, T., Langford, J., & Zadrozny, B. (2005). Error limiting reductions between classification tasks. ICML ’05. New York, NY. Chestnutt, J., Lau, M., Cheng, G., Kuffner, J., Hodgins, J., & Kanade, T. (2005). Footstep planning for the Honda ASIMO humanoid. Proceedings of the IEEE International Conference on Robotics and Automation.
Dietterich, T. G., Ashenfelter, A., & Bulatov, Y. (2004). Training conditional random fields via gradient tree boosting. ICML ’04. Friedman, J. H. (1999a). Greedy function approximation: A gradient boosting machine. Annals of Statistics. Hassani, S. (1998). Mathematical physics. Springer. Mason, L., Baxter, J., Bartlett, P., & Frean, M. (1999). Functional gradient techniques for combining hypotheses. Advances in Large Margin Classifiers. MIT Press. Ratliff, N., Bagnell, J. A., & Zinkevich, M. (2006). Maximum margin planning. Twenty-Second International Conference on Machine Learning (ICML06). Taskar, B., Chatalbashev, V., Guestrin, C., & Koller, D. (2005). Learning structured prediction models: A large margin approach. Twenty-Second International Conference on Machine Learning (ICML05). Taskar, B., Guestrin, C., & Koller, D. (2003). Max-margin Markov networks. Advances in Neural Information Processing Systems (NIPS-14). Tsochantaridis, I., Joachims, T., Hofmann, T., & Altun, Y. (2005). Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6, 1453–1484. Yagi, M., & Lumelsky, V. (1999). Biped robot locomotion in scenes with unknown obstacles. Proceedings of the IEEE International Conference on Robotics and Automation (pp. 375–380). Detroit, MI.

4 The best-first quadruped planner under the MMPBOOST heuristic is on average approximately 1100 times faster than under the Euclidean heuristic in terms of the number of nodes searched.
Denoising and Dimension Reduction in Feature Space Mikio L. Braun Fraunhofer Institute FIRST.IDA1 Kekuléstr. 7, 12489 Berlin mikio@first.fhg.de Joachim Buhmann Inst. of Computational Science ETH Zurich CH-8092 Zürich jbuhmann@inf.ethz.ch Klaus-Robert Müller2,1 Technical University of Berlin2 Computer Science Franklinstr. 28/29, 10587 Berlin krm@cs.tu-berlin.de Abstract We show that the relevant information about a classification problem in feature space is contained up to negligible error in a finite number of leading kernel PCA components if the kernel matches the underlying learning problem. Thus, kernels not only transform data sets such that good generalization can be achieved even by linear discriminant functions, but this transformation is also performed in a manner which makes economic use of feature space dimensions. In the best case, kernels provide efficient implicit representations of the data to perform classification. Practically, we propose an algorithm which enables us to recover the subspace and dimensionality relevant for good classification. Our algorithm can therefore be applied (1) to analyze the interplay of data set and kernel in a geometric fashion, (2) to help in model selection, and (3) to de-noise in feature space in order to yield better classification results. 1 Introduction Kernel machines use a kernel function as a non-linear mapping of the original data into a high-dimensional feature space; this mapping is often referred to as the empirical kernel map [6, 11, 8, 9]. By virtue of the empirical kernel map, the data is ideally transformed such that a linear discriminative function can separate the classes with low generalization error, say via a canonical hyperplane with large margin. The latter is used to provide an appropriate mechanism of capacity control and thus to “protect” against the high dimensionality of the feature space. The idea of this paper is to add another aspect not covered by this picture.
We will show theoretically that if the learning problem matches the kernel well, the relevant information of a supervised learning data set is always contained in a finite number of leading kernel PCA components (that is, the label information projected to the kernel PCA directions), up to negligible error. This result is based on recent approximation bounds dealing with the eigenvectors of the kernel matrix which show that if a function can be reconstructed using only a few kernel PCA components asymptotically, then the same already holds in a finite sample setting, even for small sample sizes. Consequently, the use of a kernel function not only greatly increases the expressive power of linear methods by non-linearly transforming the data, but it does so ensuring that the high dimensionality of the feature space will not become overwhelming: the relevant information for classification will stay confined within a comparably low dimensional subspace. This finding underlines the efficient use of data that is made by kernel machines using a kernel suited to the problem. While the number of data points stays constant for a given problem, a smart choice of kernel permits to make better use of the available data at a favorable “data point per effective dimension” ratio, even for infinite-dimensional feature spaces. Furthermore we can use de-noising techniques in feature space, much in the spirit of Mika et al. [8, 5], and thus regularize the learning problem in an elegant manner. Let us consider an example. Figure 1 shows the first six kernel PCA components for an example data set. Above each plot, the variance of the data along this direction and the contribution of this component to the class labels are plotted (normalized such that the maximal possible contribution is one1). Of these six components, only the fourth contributes significantly to the class memberships. As we will see below, the contributions in the other directions are mostly noise. This is true especially for components with small variance. Therefore, after removing this noise, a finite number of components suffice to represent the optimal decision boundary.

Figure 1: Although the data set is embedded into a high-dimensional manifold, not all directions contain interesting information. Above, the first six kernel PCA components are plotted. Of these, only the fourth is highly relevant for the learning problem. Note, however, that this example is atypical in having a single relevant component. In general, several components will have to be combined to construct the decision boundary.

The dimensionality of the data set in feature space is characteristic for the relation between a data set and a kernel. Roughly speaking, the relevant dimensionality of the data set corresponds to the complexity of the learning problem when viewed through the lens of the kernel function. This notion of complexity relates the number of data points required by the learning problem and the noise, as a small relevant dimensionality enables the de-noising of the data set to obtain an estimate of the true class labels, making the learning process much more stable. This combination of dimension and noise estimate allows us to distinguish among data sets showing weak performance, which might either be complex or noisy. To summarize the main contributions of this paper: (1) We provide theoretical bounds showing that the relevant information (defined in Section 2) is actually contained in the leading projected kernel principal components under appropriate conditions. (2) We propose an algorithm which estimates the relevant dimensionality of the data set and permits to analyze the appropriateness of a kernel for the data set, and thus to perform model selection among different kernels. (3) We show how the dimension estimate can be used in conjunction with kernel PCA to perform effective de-noising.
We analyze some well-known benchmark data sets and evaluate the performance as a de-noising tool in Section 5. Note that we do not claim to obtain better performance within our framework when compared to, for example, cross-validation techniques; rather, we are on par. Our contribution is to foster an understanding of a data set and to gain better insight into whether a mediocre classification result is due to intrinsic high dimensionality of the data or to an overwhelming noise level.

2 The Relevant Information and Kernel PCA Components

In this section, we define the notion of the relevant information contained in the class labels, and show that the location of this vector with respect to the kernel PCA components is linked to the scalar products with the eigenvectors of the kernel matrix.

^1 Note, however, that these numbers do not simply add up; instead, the combined contribution of $a$ and $b$ is $\sqrt{a^2 + b^2}$.

Let us start by formalizing the ideas introduced so far. As usual, we consider a data set $(X_1, Y_1), \ldots, (X_n, Y_n)$, where the $X_i$ lie in some space $\mathcal{X}$ and the $Y_i$ are in $\mathcal{Y} = \{\pm 1\}$. We assume that the $(X_i, Y_i)$ are drawn i.i.d. from $P_{X \times Y}$. In kernel methods, the data is non-linearly mapped into some feature space $\mathcal{F}$ via the feature map $\Phi$. Scalar products in $\mathcal{F}$ can be computed in closed form by the kernel $k$: $\langle \Phi(x), \Phi(x') \rangle = k(x, x')$. Collecting all the pairwise scalar products results in the (normalized) kernel matrix $K$ with entries $k(X_i, X_j)/n$. We wish to summarize the information contained in the class label vector $Y = (Y_1, \ldots, Y_n)$ about the optimal decision boundary. We define the relevant information vector as the vector $G = (E(Y_1|X_1), \ldots, E(Y_n|X_n))$ containing the expected class labels for the objects in the training set. The idea is that since $E(Y|X) = P(Y = 1|X) - P(Y = -1|X)$, the sign of $G$ contains the relevant information on the true class membership by telling us which class is more probable.
The observed class label vector can be written as $Y = G + N$, with $N = Y - G$ denoting the noise in the class labels. We want to study the relation of $G$ to the kernel PCA components. The following lemma relates projections of $G$ to the eigenvectors of the kernel matrix $K$:

Lemma 1. The $k$th kernel PCA component $f_k$ evaluated on the $X_i$ is equal to the $k$th eigenvector^2 of the kernel matrix $K$: $(f_k(X_1), \ldots, f_k(X_n)) = u_k$. Consequently, the projection of a vector $Y \in \mathbb{R}^n$ onto the leading $d$ kernel PCA components is given by $\pi_d(Y) = \sum_{k=1}^{d} u_k u_k^\top Y$.

Proof. The kernel PCA directions are given (see [10]) as $v_k = \sum_{i=1}^{n} \alpha_i \Phi(X_i)$, where $\alpha_i = [u_k]_i / l_k$, with $[u_k]_i$ denoting the $i$th component of $u_k$, and $l_k$, $u_k$ being the eigenvalues and eigenvectors of the kernel matrix $K$. Thus, the $k$th PCA component for a point $X_j$ in the training set is
$$f_k(X_j) = \langle \Phi(X_j), v_k \rangle = \frac{1}{l_k} \sum_{i=1}^{n} \langle \Phi(X_j), \Phi(X_i) \rangle [u_k]_i = \frac{1}{l_k} \sum_{i=1}^{n} k(X_j, X_i)\, [u_k]_i.$$
The sum computes the $j$th component of $K u_k = l_k u_k$, because $u_k$ is an eigenvector of $K$. Therefore $f_k(X_j) = \frac{1}{l_k} [l_k u_k]_j = [u_k]_j$. Since the $u_k$ are orthogonal ($K$ is a symmetric matrix), the projection of $Y$ onto the space spanned by the first $d$ kernel PCA components is given by $\sum_{i=1}^{d} u_i u_i^\top Y$. ■

3 A Bound on the Contribution of Single Kernel PCA Components

As we have just shown, the location of $G$ is characterized by its scalar products with the eigenvectors of the kernel matrix. In this section, we apply results from [1, 2], which deal with the asymptotic convergence of spectral properties of the kernel matrix, to show that the decay rate of the scalar products is linked to the decay rate of the kernel PCA principal values. Clearly, we cannot expect $G$ to be located favorably with respect to the kernel PCA components in general, but only when there is some kind of match between $G$ and the chosen kernel. This link will be established by asymptotic considerations.
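Lemma 1 can be checked numerically. The sketch below (the toy data and RBF kernel width are our illustrative assumptions) evaluates the $k$th kernel PCA component on the training points via the expansion from the proof and confirms it coincides with the eigenvector $u_k$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample (illustrative): two overlapping Gaussian classes in 2-D.
n = 60
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, 2)),
               rng.normal(+1.0, 1.0, (n // 2, 2))])
Y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

# Normalized kernel matrix [K]_ij = k(X_i, X_j) / n for an RBF kernel.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq) / n

# Eigenvectors u_k in descending order of eigenvalue l_k.
l, U = np.linalg.eigh(K)
l, U = l[::-1], U[:, ::-1]

# The k-th component evaluated on the training points via the expansion
# in the proof, f_k(X_j) = (1/l_k) sum_i [K]_ji [u_k]_i, equals u_k.
k = 3
f_k = (K @ U[:, k]) / l[k]
assert np.allclose(f_k, U[:, k])

# Hence projecting Y onto the leading d components is sum_k u_k u_k^T Y.
d = 5
pi_d_Y = U[:, :d] @ (U[:, :d].T @ Y)
```

Using all $n$ components reproduces $Y$ exactly, which is the orthogonality statement at the end of the proof.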
Kernel PCA is closely linked to the spectral properties of the kernel matrix, and it is known [3, 4] that the eigenvalues and the projections onto eigenspaces converge. Their asymptotic limits are given by the eigenvalues $\lambda_i$ and eigenfunctions $\psi_i$ of the integral operator $T_k f = \int k(\cdot, x) f(x)\, P_X(dx)$ defined on $L_2(P_X)$, where $P_X$ is the marginal measure of $P_{X \times Y}$ which generates our samples. The eigenvalues and eigenfunctions also occur in the well-known Mercer's formula: by Mercer's theorem, $k(x, x') = \sum_{i=1}^{\infty} \lambda_i \psi_i(x) \psi_i(x')$. The asymptotic counterpart of $G$ is the function $g(x) = E(Y | X = x)$. We encode the fit between $k$ and $g$ by requiring that $g$ lie in the image of $T_k$. This is equivalent to saying that there exists a sequence $(\alpha_i) \in \ell_2$ such that $g = \sum_{i=1}^{\infty} \lambda_i \alpha_i \psi_i$.^3 Under this condition, the scalar products decay as quickly as the eigenvalues, because $\langle g, \psi_i \rangle = \lambda_i \alpha_i = O(\lambda_i)$. Because of the known convergence of spectral projections, we can expect the same behavior asymptotically in the finite sample case.

^2 As usual, the eigenvectors are arranged in descending order by corresponding eigenvalue.
^3 A different condition is that $g$ lies in the RKHS generated by $k$, which amounts to saying that $g$ lies in the image of $T_k^{1/2}$. The condition used here is therefore slightly more restrictive.

However, the convergence speed is the crucial question. This question is not trivial, because eigenvector stability is known to be linked to the gap between the corresponding eigenvalues, which is fairly small for small eigenvalues. In fact, the results from [14], for example, do not scale properly with the corresponding eigenvalue, so that the bounds are too loose. A number of recent results on the spectral properties of the kernel matrix [1, 2] specifically deal with error bounds for small eigenvalues and their associated spectral projections.
Using these results, we obtain the following bound on $u_i^\top G$.^4

Theorem 1. Let $g = \sum_{i=1}^{\infty} \alpha_i \lambda_i \psi_i$ as explained above, and let $G = (g(X_1), \ldots, g(X_n))$. Then, with high probability,
$$\frac{1}{\sqrt{n}}\,|u_i^\top G| \;<\; 2 l_i a_r c_i \bigl(1 + O(r n^{-1/4})\bigr) + r a_r \Lambda_r\, O(1) + T_r + \sqrt{A T_r}\, O(n^{-1/4}) + r a_r \sqrt{\Lambda_r}\, O(n^{-1/2}),$$
where $r$ balances the different terms ($1 \le r \le n$), $c_i$ measures the size of the eigenvalue cluster around $l_i$, $a_r = \sum_{i=1}^{r} |\alpha_i|$ is a measure of the size of the first $r$ components, $\Lambda_r$ is the sum of all eigenvalues smaller than $\lambda_r$, $A$ is the supremum norm of $g$, and $T_r$ is the error of projecting $g$ onto the space spanned by the first $r$ eigenfunctions.

The bound consists of a part which scales with $l_i$ (the first term) and a part which does not (the remaining terms). Typically, the bound initially scales with $l_i$, until the non-scaling part dominates for larger $i$. These two parts are balanced by $r$. Note, however, that all terms which do not scale with $l_i$ will typically be small: for smooth kernels, the eigenvalues quickly decay to zero as $r \to \infty$, and the related quantities $\Lambda_r$ and $T_r$ also decay to zero, at slightly slower rates. Therefore, by adjusting $r$ (as $n \to \infty$), the non-scaling part can be made arbitrarily small, leading to a small bound on $|u_i^\top G|$ for larger $i$. Put differently, the bound shows that the relevant information vector $G$ (as introduced in Section 2) is contained in a number of leading kernel PCA components, up to a negligible error. The number of dimensions depends on the asymptotic coefficients $\alpha_i$ and on the decay rate of the asymptotic eigenvalues of $k$. Since this rate is related to the smoothness of the kernel function, the dimension will be small for smooth kernels whose leading eigenfunctions $\psi_i$ permit a good approximation of $g$.
4 The Relevant Dimension Estimation Algorithm

In this section, we propose the relevant dimension estimation (RDE) algorithm, which estimates the dimensionality of the relevant information from a finite sample, allowing us to analyze the fit between a kernel function and a data set in a practical way.

Dimension Estimation. We propose an approach motivated by the geometric findings explained above. Since $G$ is not known, we can only observe the contributions of the kernel PCA components to $Y$, which can be written as $Y = G + N$ (see Section 2). The contributions $u_i^\top Y$ are thus formed as the superposition $u_i^\top Y = u_i^\top G + u_i^\top N$. Now, by Theorem 1, we know that the coefficients $u_i^\top G$ will be very close to zero for larger $i$, while on the other hand the noise $N$ will be distributed equally over all coefficients. Therefore, the kernel PCA coefficients $s_i = u_i^\top Y$ will have the shape of an evenly distributed noise floor $u_i^\top N$ from which the coefficients $u_i^\top G$ of the relevant information protrude (see Figure 2(a) for an example). We thus propose the following algorithm: given a fixed kernel $k$, we estimate the true dimension by fitting a two-component model to the coordinates of the label vector. Let $s = (u_1^\top Y, \ldots, u_n^\top Y)$, and assume that
$$s_i \sim \begin{cases} N(0, \sigma_1^2) & 1 \le i \le d \\ N(0, \sigma_2^2) & d < i \le n. \end{cases}$$

^4 We have tried to reduce the bound to its most prominent features. For a more detailed explanation of the quantities and the proof, see the appendix. Also, the confidence $\delta$ of the "with high probability" part is hidden in the $O(\cdot)$ notation, which we have used rather deliberately to exhibit the dominant constants.

Figure 2: Further plots on the toy example from the introduction. (a) The kernel PCA component contributions (dots), and the training and test error obtained by projecting the data set onto the given number of leading kernel PCA components.
(b) The negative log-likelihood of the two-component model used to estimate the dimensionality of the data. (c) The resulting fit when using only the first four components.

We select the $d$ minimizing the negative log-likelihood, which is proportional to
$$\ell(d) = \frac{d}{n} \log \sigma_1^2 + \frac{n - d}{n} \log \sigma_2^2, \quad \text{with} \quad \sigma_1^2 = \frac{1}{d} \sum_{i=1}^{d} s_i^2, \quad \sigma_2^2 = \frac{1}{n - d} \sum_{i=d+1}^{n} s_i^2. \qquad (1)$$

Model Selection for Kernel Choice. For different kernels, we again use the likelihood and select the kernel which leads to the best fit in terms of the likelihood. If the kernel width does not match the scale of the structure in the data set, the fit of the two-component model will be inferior: for very small or very large kernel widths, the kernel PCA coefficients of $Y$ have no clear structure, and the likelihood will be small. For Gaussian kernels, for example, at very small kernel widths noise is interpreted as relevant information, so that there appears to be no noise, only very high-dimensional data. At very large kernel widths, on the other hand, any structure becomes indistinguishable from noise, so that the problem appears to be very noisy with almost no structure. In both cases, fitting the two-component model will not work well, leading to large values of $\ell$.

Experimental Error Estimation. The estimated dimension can be used to estimate the noise level present in the data set. The idea is to compare the observed labels with the projected label vector $\hat{G} = \pi_d(Y)$, which approximates the true label information $G$. The resulting number $\widehat{\mathrm{err}} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{\operatorname{sign}([\hat{G}]_i) \ne Y_i\}$ is an estimate of the fraction of misclassified examples in the training set, and therefore an estimate of the noise level in the class labels.

A Note on Consistency. Both the estimate $\hat{G}$ and the noise level estimate are consistent if the estimated dimension $d$ scales sub-linearly with $n$.
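The two-component fit of Eq. (1) translates directly into code. In the sketch below, the synthetic coefficient vector standing in for $(u_1^\top Y, \ldots, u_n^\top Y)$ is an assumption for illustration:

```python
import numpy as np

def rde_dimension(s):
    """Estimate the relevant dimension d by minimizing the negative
    log-likelihood l(d) of the two-component model in Eq. (1)."""
    s2 = np.asarray(s, dtype=float) ** 2
    n = len(s2)
    best_d, best_nll = 1, np.inf
    for d in range(1, n):              # keep at least one noise coefficient
        var1 = s2[:d].mean()           # sigma_1^2, signal variance
        var2 = s2[d:].mean()           # sigma_2^2, noise-floor variance
        nll = (d / n) * np.log(var1) + ((n - d) / n) * np.log(var2)
        if nll < best_nll:
            best_d, best_nll = d, nll
    return best_d

# Synthetic stand-in for (u_1^T Y, ..., u_n^T Y): four large "signal"
# coefficients followed by a flat noise floor of magnitude 0.2.
s = np.concatenate([[5.0, -4.0, 3.5, -3.0],
                    0.2 * (-1.0) ** np.arange(96)])
print(rde_dimension(s))  # → 4
```

The same log-likelihood value can be compared across kernels, which is exactly the model-selection criterion described above.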
The argument can be sketched as follows: since the kernel PCA components do not depend on $Y$, the noise $N$ contained in $Y$ is projected onto a random subspace of dimension $d$. Therefore,
$$\frac{1}{n}\|\pi_d(N)\|^2 \approx \frac{d}{n}\left(\frac{1}{n}\|N\|^2\right) \to 0 \quad \text{as } n \to \infty,$$
since $d/n \to 0$ and $\frac{1}{n}\|N\|^2 \to E(N^2)$. Empirically, $d$ was found to be rather stable, but in principle the condition on $d$ could even be enforced by adding a small sub-linear term (for example, $\sqrt{n}$ or $\log n$) to the estimated dimension $d$.

5 Experiments

Toy Data Set. Returning to the toy example from the introduction, let us now take a closer look at this data set. In Figure 2(a), the spectrum for the toy data set is plotted. We can see that every kernel PCA component contributes to the observed class label vector. However, most of these contributions are noise, since the classes overlap. The RDE method estimates that only the first four components are relevant. This behavior of the algorithm can also be seen from the training error and the independent test error, measured on a second data set of size 1000, which are also shown in this plot. In Figure 2(b), the log-likelihoods from (1) are shown, and one observes a well-pronounced minimum. Finally, Figure 2(c) shows the resulting fit.

Benchmark Data Sets. We performed experiments on the classification learning sets from [7]. For each of the data sets, we de-noise the data set using a family of rbf kernels by projecting the class labels onto the estimated number of leading kernel PCA components. The kernel width is also selected automatically using the achieved log-likelihood as described above; it is chosen from 20 logarithmically spaced points between $10^{-2}$ and $10^{4}$ for each data set. For the dimension estimation task, we compare our RDE method to a dimensionality estimate based on cross-validation. More concretely, the matrix $S = \sum_{i=1}^{d} u_i u_i^\top$ computes the projection onto the leading $d$ kernel PCA components.
Interpreting the matrix $S$ as a linear fit matrix, the leave-one-out cross-validation error can be computed in closed form (see [12])^5, since $S$ is diagonal with respect to the eigenvector basis $u_i$. Evaluating the cross-validation error for all dimensions and for a number of kernel parameters, one can select the best dimension and kernel parameter. Since the cross-validation error can be computed efficiently, the computational demands of both methods are equal. Figure 3 shows the resulting dimension estimates. We see that both methods perform on par, which shows that the strong structural prior assumption underlying RDE is justified. For the de-noising task, we have compared an (unregularized) least-squares fit in the reduced feature space (kPCR) against kernel ridge regression (KRR) and support vector machines (SVM) on the same data sets. The resulting test errors are also shown in Figure 3. We see that a relatively simple method on the reduced features leads to classification on par with the state-of-the-art competitors. Also note that the estimated error rates match the actually observed error rates quite well, although there is a tendency to under-estimate the true error. Finally, inspecting the estimated dimensions and noise levels reveals that the data sets breast-cancer, diabetis, flare-solar, german, and titanic all have only moderately large dimensionalities. This suggests that these data sets are inherently noisy and better results cannot be expected, at least within the family of rbf kernels. On the other hand, the data set image seems to be particularly noise free, given that one can achieve a small error in spite of the large dimensionality. Finally, the splice data set seems to be a good candidate to benefit from more data.
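The closed-form leave-one-out computation for the cross-validation baseline can be sketched as follows (the function names and toy orthonormal basis are ours; the identity $e_i = (Y_i - [SY]_i)/(1 - S_{ii})$ is the standard one for linear smoothers referenced via [12]):

```python
import numpy as np

def loo_mse(U, Y, d):
    """Leave-one-out mean squared error of the linear smoother
    S = sum_{k<=d} u_k u_k^T, using the closed-form identity for
    linear fits: e_i = (Y_i - [S Y]_i) / (1 - S_ii)."""
    Ud = U[:, :d]
    fit = Ud @ (Ud.T @ Y)
    s_diag = (Ud ** 2).sum(axis=1)     # diagonal of S
    e = (Y - fit) / (1.0 - s_diag)
    return float((e ** 2).mean())

# Toy check with a stand-in orthonormal basis: the signal lives in the
# first component, so small d should give the lowest LOO error.
rng = np.random.default_rng(0)
n = 80
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
Y = 3.0 * U[:, 0] + 0.1 * rng.normal(size=n)

scores = [loo_mse(U, Y, d) for d in range(1, n)]
best_d = 1 + int(np.argmin(scores))
```

Because every $d$ reuses the same eigenbasis, sweeping all dimensions costs little more than a single fit, which is why the cross-validation baseline matches RDE in computational demands.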
6 Conclusion

Both in theory and on practical data sets, we have shown that the relevant information in a supervised learning scenario is contained in the leading projected kernel PCA components if the kernel matches the learning problem. The theory provides consistent estimates of the expected class labels and of the noise level. This complements the common statistical learning theoretic view of kernel-based learning with insight into the interaction of data and kernel: a well chosen kernel (a) makes the model estimation efficient and generalization good, since only a comparatively low-dimensional representation needs to be learned for a fixed given data size, and (b) permits a de-noising step that discards some void projected kernel PCA directions and thus provides a regularized model. Practically, our RDE algorithm automatically selects the appropriate kernel model for the data and extracts, as additional side information, an estimate of the effective dimension and of the expected error for the learning problem. Compared to common cross-validation techniques, one could argue that all we have achieved is to find a similar model at a comparable computing time. However, we would like to emphasize that the side information extracted by our procedure contributes to a better understanding of the learning problem at hand: is the classification result limited by intrinsic high-dimensional structure, or are we facing noise and nuisance dimensions?

^5 This applies only to the 2-norm. However, as the performance of 2-norm based methods such as kernel ridge regression on classification problems shows, the 2-norm is also informative about classification performance.

data set       dim  dim (cv)  est. error rate  kPCR        KRR         SVM
banana          24    26       8.8 ± 1.5       11.3 ± 0.7  10.6 ± 0.5  11.5 ± 0.7
breast-cancer    2     2      25.6 ± 2.1       27.0 ± 4.6  26.5 ± 4.7  26.0 ± 4.7
diabetis         9     9      21.5 ± 1.3       23.6 ± 1.8  23.2 ± 1.7  23.5 ± 1.7
flare-solar     10    10      32.9 ± 1.2       33.3 ± 1.8  34.1 ± 1.8  32.4 ± 1.8
german          12    12      22.9 ± 1.1       24.1 ± 2.1  23.5 ± 2.2  23.6 ± 2.1
heart            4     5      15.8 ± 2.5       16.7 ± 3.8  16.6 ± 3.5  16.0 ± 3.3
image          272   368       1.7 ± 1.0        4.2 ± 0.9   2.8 ± 0.5   3.0 ± 0.6
ringnorm        36    37       1.9 ± 0.7        4.4 ± 1.2   4.7 ± 0.8   1.7 ± 0.1
splice          92    89       9.2 ± 1.3       13.8 ± 0.9  11.0 ± 0.6  10.9 ± 0.6
thyroid         17    18       2.0 ± 1.0        5.1 ± 2.1   4.3 ± 2.3   4.8 ± 2.2
titanic          4     6      20.8 ± 3.8       22.9 ± 1.6  22.5 ± 1.0  22.4 ± 1.0
twonorm          2     2       2.3 ± 0.7        2.4 ± 0.1   2.8 ± 0.2   3.0 ± 0.2
waveform        14    23       8.4 ± 1.5       10.8 ± 0.9   9.7 ± 0.4   9.9 ± 0.4

Figure 3: Estimated dimensions and error rates for the benchmark data sets from [7]. "dim" shows the medians of the estimated dimensionalities over the resamples. "dim (cv)" shows the same quantity, but with the dimensions estimated by leave-one-out cross-validation. "est. error rate" is the error rate estimated on the training set by comparing the de-noised class labels to the true class labels. The last three columns show the test error rates of three algorithms: "kPCR" predicts using a simple least-squares hyperplane on the estimated subspace in feature space, "KRR" is kernel ridge regression with parameters estimated by leave-one-out cross-validation, and "SVM" gives the original error rates from [7].
Simulations show the usefulness of our RDE algorithm. An interesting future direction lies in combining these results with generalization bounds which are also based on the notion of an effective dimension, this time, however, with respect to some regularized hypothesis class (see, for example, [13]). Linking the effective dimension of the data set with the dimension of a learning algorithm, one could obtain data-dependent bounds in a natural way, with the potential to be tighter than bounds based on the abstract capacity of a hypothesis class.

References

[1] ML Braun. Spectral Properties of the Kernel Matrix and Their Application to Kernel Methods in Machine Learning. PhD thesis, University of Bonn, 2005. Available electronically at http://hss.ulb.uni-bonn.de/diss_online/math_nat_fak/2005/braun_mikio.
[2] ML Braun. Accurate error bounds for the eigenvalues of the kernel matrix. Journal of Machine Learning Research, 2006. To appear.
[3] V Koltchinskii and E Giné. Random matrix approximation of spectra of integral operators. Bernoulli, 6(1):113–167, 2000.
[4] VI Koltchinskii. Asymptotics of spectral projections of some random matrices approximating integral operators. Progress in Probability, 43:191–227, 1998.
[5] S Mika, B Schölkopf, A Smola, K-R Müller, M Scholz, and G Rätsch. Kernel PCA and de-noising in feature space. In Advances in Neural Information Processing Systems 11. MIT Press, 1999.
[6] K-R Müller, S Mika, G Rätsch, K Tsuda, and B Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, May 2001.
[7] G Rätsch, T Onoda, and K-R Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287–320, March 2001.
[8] B Schölkopf, S Mika, CJC Burges, P Knirsch, K-R Müller, G Rätsch, and AJ Smola. Input space vs. feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5):1000–1017, September 1999.
[9] B Schölkopf and AJ Smola. Learning with Kernels.
MIT Press, 2001.
[10] B Schölkopf, AJ Smola, and K-R Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[11] V Vapnik. Statistical Learning Theory. Wiley, 1998.
[12] G Wahba. Spline Models for Observational Data. Society for Industrial and Applied Mathematics, 1990.
[13] T Zhang. Learning bounds for kernel regression using effective data dimensionality. Neural Computation, 17:2077–2098, 2005.
[14] L Zwald and G Blanchard. On the convergence of eigenspaces in kernel principal components analysis. In Advances in Neural Information Processing Systems (NIPS 2005), volume 18, 2006.

A Proof of Theorem 1

First, let us collect the definitions concerning kernel functions. Let $k$ be a Mercer kernel with $k(x, x') = \sum_{i=1}^{\infty} \lambda_i \psi_i(x) \psi_i(x')$ and $k(x, x) \le K < \infty$. The kernel matrix of $k$ for an $n$-sample $X_1, \ldots, X_n$ is $[K]_{ij} = k(X_i, X_j)/n$; its eigenvalues are $l_i$ and its eigenvectors are $u_i$. The kernel $k$ is approximated by the truncated kernel $k^r(x, x') = \sum_{i=1}^{r} \lambda_i \psi_i(x) \psi_i(x')$, whose kernel matrix is denoted by $K^r$, with eigenvalues $m_i$. The approximation error is measured by $E^r = K^r - K$. We measure the amount of clustering $c_i$ of the eigenvalues by the number of eigenvalues of $K^r$ lying between $l_i/2$ and $2 l_i$. The matrix containing the sample vectors of the first $r$ eigenfunctions $\psi_i$ of $k$ is given by $[\Psi^r]_{i\ell} = \psi_\ell(X_i)/\sqrt{n}$, $1 \le i \le n$, $1 \le \ell \le r$. Since the eigenfunctions are asymptotically orthonormal, we can expect the scalar products of these sample vectors to converge to either 0 or 1; the error is measured by the matrix $C^r = \Psi^{r\top} \Psi^r - I$. Finally, let $\Lambda_r = \sum_{i=r+1}^{\infty} \lambda_i$.

Next, we collect definitions concerning some function $f$. Let $f = \sum_{i=1}^{\infty} \lambda_i \alpha_i \psi_i$ with $(\alpha_i) \in \ell_2$ and $|f| \le A < \infty$. The size of the contribution of the first $r$ terms is measured by $a_r = \sum_{i=1}^{r} |\alpha_i|$. Define the error of truncating $f$ to the first $r$ elements of its series expansion by $T_r = \bigl( \sum_{i=r+1}^{\infty} \lambda_i^2 \alpha_i^2 \bigr)^{1/2}$.
The proof of Theorem 1 is based on rough estimates of the bound from Theorem 4.92 in [1]. The bound is
$$\frac{1}{\sqrt{n}}\,|u_i^\top f(X)| < \min_{1 \le r \le n} \bigl( l_i D(r, n) + E(r, n) + T(r, n) \bigr),$$
where the three terms are given by
$$D(r, n) = 2 a_r \|\Psi^{r+}\| c_i, \qquad E(r, n) = 2 r a_r \|\Psi^{r+}\| \|E^r\|, \qquad T(r, n) = T_r + \sqrt{F T_r}\,\sqrt[4]{\tfrac{1}{n\delta}}.$$
It holds that $\|\Psi^{r+}\| \le (1 - \|\Psi^{r\top} \Psi^r - I\|)^{-1/2} = (1 - \|C^r\|)^{-1/2}$ ([1], Lemma 4.44). Furthermore, $\|C^r\| \to 0$ as $n \to \infty$ for fixed $r$. For kernels with bounded diagonal, it holds with probability larger than $1 - \delta$ ([1], Lemma 3.135) that
$$\|C^r\| \le r \sqrt{\frac{r(r+1) K}{\lambda_r n \delta}} = r^2 O(n^{-1/2}),$$
with a rather large constant, especially if $\lambda_r$ is small. Consequently, $\|\Psi^{r+}\| \le (1 - \|C^r\|)^{-1/2} = 1 + O(r n^{-1/4})$. Now, Lemma 3.135 in [1] bounds $\|E^r\|$, from which we can derive the asymptotic estimate
$$\|E^r\| \le \lambda_r + \Lambda_r + \sqrt{\frac{2 K \Lambda_r}{n \delta}} = \Lambda_r + \sqrt{\Lambda_r}\, O(n^{-1/2}),$$
assuming that $K$ is reasonably small (for example, $K = 1$ for rbf kernels). Combining this with our rate for $\|\Psi^{r+}\|$, we obtain
$$E(r, n) = 2 r a_r \bigl( \Lambda_r + \sqrt{\Lambda_r}\, O(n^{-1/2}) \bigr) \bigl( 1 + O(r n^{-1/4}) \bigr) = 2 r a_r \Lambda_r \bigl( 1 + O(r n^{-1/4}) \bigr) + r a_r \sqrt{\Lambda_r}\, O(n^{-1/2}).$$
Finally, we obtain
$$\frac{1}{\sqrt{n}}\,|u_i^\top f(X)| < 2 l_i a_r c_i \bigl( 1 + O(r n^{-1/4}) \bigr) + 2 r a_r \Lambda_r \bigl( 1 + O(r n^{-1/4}) \bigr) + r a_r \sqrt{\Lambda_r}\, O(n^{-1/2}) + T_r + \sqrt{A T_r}\, O(n^{-1/2}).$$
If we assume that $\Lambda_r$ is rather small, we may replace $1 + O(r n^{-1/4})$ by $O(1)$ in the second term and obtain the claimed rate. ■
2006
Relational Learning with Gaussian Processes

Wei Chu, CCLS, Columbia Univ., New York, NY 10115
Vikas Sindhwani, Dept. of Comp. Sci., Univ. of Chicago, Chicago, IL 60637
Zoubin Ghahramani, Dept. of Engineering, Univ. of Cambridge, Cambridge, UK
S. Sathiya Keerthi, Yahoo! Research, Media Studios North, Burbank, CA 91504

Abstract

Correlation between instances is often modelled via a kernel function using input attributes of the instances. Relational knowledge can further reveal additional pairwise correlations between variables of interest. In this paper, we develop a class of models which incorporates both reciprocal relational information and input attributes using Gaussian process techniques. This approach provides a novel non-parametric Bayesian framework with a data-dependent covariance function for supervised learning tasks. We also apply this framework to semi-supervised learning. Experimental results on several real-world data sets verify the usefulness of this algorithm.

1 Introduction

Several recent developments, such as the growth of the world wide web and the maturation of genomic technologies, have brought new domains of application to machine learning research. Many such domains involve relational data, in which instances have "links" or inter-relationships between them that are highly informative for learning tasks, e.g. (Taskar et al., 2002). For example, hyper-linked web documents are often about similar topics, even if their textual contents are disparate when viewed as bags of words. In document categorization, citations are important as well, since two documents referring to the same reference are likely to have similar content. In computational biology, knowledge about physical interactions between proteins can supplement genomic data for developing good similarity measures for protein network inference.
In such cases, a learning algorithm can greatly benefit by taking into account the global network organization of such inter-relationships rather than relying on input attributes alone. One simple but general type of relational information can be effectively represented in the form of a graph G = (V, E). The vertex set V represents a collection of input instances (which may contain the labelled inputs as a subset, but is typically a much larger set of instances). The edge set E ⊂V × V represents the pairwise relations over these input instances. In this paper, we restrict our attention to undirected edges, i.e., reciprocal relations, though directionality may be an important aspect of some relational datasets. These undirected edges provide useful structural knowledge about correlation between the vertex instances. In particular, we allow edges to be of two types – “positive” or “negative” depending on whether the associated adjacent vertices are positively or negatively correlated, respectively. On many problems, only positive edges may be available. This setting is also applicable to semi-supervised tasks even on traditional “flat” datasets where the linkage structure may be derived from data input attributes. In graph-based semi-supervised methods, G is typically an adjacency graph constructed by linking each instance (including labelled and unlabelled) to its neighbors according to some distance metric in the input space. The graph G then serves as an estimate of the global geometric structure of the data. Many algorithmic frameworks for semi-supervised (Sindhwani et al., 2005) and transductive learning, see e.g. (Zhou et al., 2004; Zhu et al., 2003), have been derived under the assumption that data points nearby on this graph are positively correlated. Several methods have been proposed recently to incorporate relational information within learning algorithms, e.g. 
for clustering (Basu et al., 2004; Wagstaff et al., 2001), metric learning (Bar-Hillel et al., 2003), and graphical modeling (Getoor et al., 2002). The reciprocal relations over input instances essentially reflect the network structure or the distribution underlying the data, which enrich our prior belief of how instances in the entire input space are correlated. In this paper, we integrate relational information with input attributes in a non-parametric Bayesian framework based on Gaussian processes (GP) (Rasmussen & Williams, 2006), which leads to a data-dependent covariance/kernel function. We highlight the following aspects of our approach: 1) We propose a novel likelihood function for undirected linkages and carry out approximate inference using efficient Expectation Propagation techniques under a Gaussian process prior. The covariance function of the approximate posterior distribution defines a relational Gaussian process, hereafter abbreviated as RGP. RGP provides a novel Bayesian framework with a data-dependent covariance function for supervised learning tasks. We also derive explicit formulae for linkage prediction over pairs of test points. 2) When applied to semi-supervised learning tasks involving labelled and unlabelled data, RGP is closely related to the warped reproducing kernel Hilbert Space approach of (Sindhwani et al., 2005) using a novel graph regularizer. Unlike many recently proposed graph-based Bayesian approaches, e.g. (Zhu et al., 2003; Krishnapuram et al., 2004; Kapoor et al., 2005), which are mainly transductive by design, RGP delineates a decision boundary in the input space and provides probabilistic induction over unseen test points. Furthermore, by maximizing the joint evidence of known labels and linkages, we explicitly involve unlabelled data in the model selection procedure. Such a semi-supervised hyper-parameter tuning method can be very useful when there are very few, possibly noisy labels. 
3) On a variety of classification tasks, RGP requires very few labels to provide high-quality generalization on unseen test examples, as compared to standard GP classification that ignores relational information. We also report experimental results on semi-supervised learning tasks, comparing with competitive deterministic methods.

The paper is organized as follows. In Section 2 we develop relational Gaussian processes. Semi-supervised learning under this framework is discussed in Section 3. Experimental results are presented in Section 4. We conclude the paper in Section 5.

2 Relational Gaussian Processes

In the standard setting of learning from data, instances are usually described by a collection of input attributes, denoted as a column vector $x \in \mathcal{X} \subset \mathbb{R}^d$. The key idea in Gaussian process models is to introduce a random variable $f_x$ for each point in the input space $\mathcal{X}$. The values of these random variables $\{f_x\}_{x \in \mathcal{X}}$ are treated as outputs of a zero-mean Gaussian process. The covariance between $f_x$ and $f_z$ is fully determined by the coordinates of the data pair $x$ and $z$, and is defined by any Mercer kernel function $K(x, z)$. Thus, the prior distribution over $f = [f_{x_1}, \ldots, f_{x_n}]^\top$ associated with any collection of $n$ points $x_1, \ldots, x_n$ is a multivariate Gaussian,
$$P(f) = \frac{1}{(2\pi)^{n/2} \det(\Sigma)^{1/2}} \exp\left( -\frac{1}{2} f^\top \Sigma^{-1} f \right) \qquad (1)$$
where $\Sigma$ is the $n \times n$ covariance matrix whose $ij$-th element is $K(x_i, x_j)$. In the following, we consider the scenario with undirected linkages over a set of instances.

2.1 Undirected Linkages

Let the vertex set $V$ in the relational graph be associated with $n$ input instances $x_1, \ldots, x_n$, and consider a set of observed pairwise undirected linkages on these instances, denoted as $E = \{E_{ij}\}$. Each linkage is treated as a Bernoulli random variable $E_{ij} \in \{+1, -1\}$: $E_{ij} = +1$ indicates that the instances $x_i$ and $x_j$ are "positively tied", and $E_{ij} = -1$ indicates that they are "negatively tied".
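To illustrate the prior of Eq. (1), one can draw sample functions from the zero-mean Gaussian; the RBF covariance and the jitter term below are illustrative choices on our part, not prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Evaluation points and an RBF covariance K(x, z) = exp(-(x - z)^2 / 2);
# the small jitter keeps the matrix numerically positive definite.
x = np.linspace(-3.0, 3.0, 100)
Sigma = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
Sigma += 1e-8 * np.eye(len(x))

# Three draws f ~ N(0, Sigma) from the multivariate Gaussian prior (1).
f = rng.multivariate_normal(np.zeros(len(x)), Sigma, size=3)
```

Each row of `f` is one latent function sampled at the evaluation points; relational observations will reshape the covariance of such draws.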
We propose a new likelihood function to capture these undirected linkages, defined as follows:
$$P_{\text{ideal}}\left(E_{ij} \mid f_{x_i}, f_{x_j}\right) = \begin{cases} 1 & \text{if } f_{x_i} f_{x_j} E_{ij} > 0 \\ 0 & \text{otherwise.} \end{cases} \qquad (2)$$
This formulation is for the ideal, noise-free case; it enforces that the variable values corresponding to positive and negative edges have the same and opposite signs, respectively. In the presence of uncertainty in observing $E_{ij}$, we assume the variable values $f_{x_i}$ and $f_{x_j}$ are contaminated with Gaussian noise, which allows some tolerance for noisy observations. The Gaussian noise has zero mean and unknown variance $\sigma^2$.^1 Let $N(\delta; \mu, \sigma^2)$ denote a Gaussian random variable $\delta$ with mean $\mu$ and variance $\sigma^2$. The likelihood function (2) then becomes
$$P\left(E_{ij} = +1 \mid f_{x_i}, f_{x_j}\right) = \iint P_{\text{ideal}}\left(E_{ij} = +1 \mid f_{x_i} + \delta_i, f_{x_j} + \delta_j\right) N(\delta_i; 0, \sigma^2)\, N(\delta_j; 0, \sigma^2)\, d\delta_i\, d\delta_j = \Phi\!\left(\frac{f_{x_i}}{\sigma}\right)\Phi\!\left(\frac{f_{x_j}}{\sigma}\right) + \left(1 - \Phi\!\left(\frac{f_{x_i}}{\sigma}\right)\right)\left(1 - \Phi\!\left(\frac{f_{x_j}}{\sigma}\right)\right) \qquad (3)$$
where $\Phi(z) = \int_{-\infty}^{z} N(\gamma; 0, 1)\, d\gamma$. The integral in (3) evaluates the probability mass of a joint Gaussian in the first and third quadrants, where $f_{x_i}$ and $f_{x_j}$ have the same sign. Note that $P(E_{ij} = -1 \mid f_{x_i}, f_{x_j}) = 1 - P(E_{ij} = +1 \mid f_{x_i}, f_{x_j})$ and $P(E_{ij} = +1 \mid f_{x_i}, f_{x_j}) = P(E_{ij} = +1 \mid -f_{x_i}, -f_{x_j})$.

Remarks: One may consider other ways to define a likelihood function for the observed edges. For example, we could define $P_l(E_{ij} = +1 \mid f_{x_i}, f_{x_j}) = \frac{1}{1 + \exp(-\nu f_{x_i} f_{x_j})}$ with $\nu > 0$; however, the computation of the predictive probability (9) and its derivatives becomes complicated with this form. Instead of treating edges as Bernoulli variables, we could consider the graph itself as a random variable, in which case the probability of observing the graph $G$ can be simply evaluated as $P(G|f) = \frac{1}{Z} \exp\left( -\frac{1}{2} f^\top \Psi f \right)$, where $\Psi$ is a graph-regularization matrix (e.g. the graph Laplacian) and $Z$ is a normalization factor that depends on the variable values $f$. Given the enormous number of possible graph structures over the instances, the normalization factor $Z$ is intractable in general. In the rest of this paper, we use the likelihood function developed in (3).
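Equation (3) is straightforward to evaluate directly; a minimal sketch (function names are ours):

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def link_likelihood(f_i, f_j, sigma=1.0, E=+1):
    """P(E_ij | f_i, f_j) from Eq. (3): the probability mass of the
    noisy pair falling in the quadrants where both values share a sign."""
    a, b = Phi(f_i / sigma), Phi(f_j / sigma)
    p_pos = a * b + (1.0 - a) * (1.0 - b)
    return p_pos if E == +1 else 1.0 - p_pos

# Symmetries noted in the text: flipping both signs leaves the likelihood
# unchanged, and the two link states are complementary.
assert abs(link_likelihood(1.2, 0.7) - link_likelihood(-1.2, -0.7)) < 1e-12
assert abs(link_likelihood(1.2, 0.7, E=-1) - (1 - link_likelihood(1.2, 0.7))) < 1e-12
```

As $\sigma \to 0$ the likelihood approaches the ideal 0/1 form of Eq. (2), while larger $\sigma$ tolerates noisier edge observations.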
2.2 Approximate Inference

Combining the Gaussian process prior (1) with the likelihood function (3), we obtain the posterior distribution

P(f|E) = \frac{1}{P(E)} P(f) \prod_{ij} P(E_{ij} | f_{x_i}, f_{x_j})    (4)

where f = [f_{x_1}, ..., f_{x_n}]^T and the product runs over the set of observed undirected linkages. The normalization factor P(E) = \int P(E|f) P(f) df is known as the evidence of the model parameters and serves as a yardstick for model selection. The posterior distribution is non-Gaussian and multi-modal, with a saddle point at the origin; clearly the posterior mean is at the origin as well. It is important to note that pairwise relations update the correlations between examples but never shift the individual means. To preserve computational tractability and the true posterior mean, we approximate the posterior distribution by a joint Gaussian centered at the true mean rather than resorting to sampling methods. A family of inference techniques can be applied to obtain the Gaussian approximation; popular choices include the Laplace approximation, mean-field methods, variational methods and expectation propagation. The Laplace approximation is inappropriate here since the posterior distribution is not unimodal and the true posterior mean is a saddle point. Standard mean-field methods are also hard to use because the observations involve pairwise relations. Both variational methods and the expectation propagation (EP) algorithm (Minka, 2001) are applicable. In this paper, we employ the EP algorithm to approximate the posterior distribution as a zero-mean Gaussian. Importantly, this still captures the posterior covariance structure, which is what allows prediction of link presence. The key idea of our EP algorithm is to approximate P(f) \prod_{ij} P(E_{ij} | f_{x_i}, f_{x_j}) by a parametric product distribution² of the form

Q(f) = P(f) \prod_{ij} \tilde{t}(f_{ij}) = P(f) \prod_{ij} s_{ij} \exp\left( -\frac{1}{2} f_{ij}^T \Pi_{ij} f_{ij} \right)

where ij runs over the edge set, f_{ij} = [f_{x_i}, f_{x_j}]^T, and \Pi_{ij} is a symmetric 2 × 2 matrix.
The parameters {s_{ij}, \Pi_{ij}} in {\tilde{t}(f_{ij})} are successively optimized by locally minimizing the Kullback-Leibler divergence,

\tilde{t}(f_{ij})^{new} = \arg\min_{\tilde{t}(f_{ij})} KL\left( \frac{Q(f)}{\tilde{t}(f_{ij})^{old}} P(E_{ij}|f_{ij}) \,\Big\|\, \frac{Q(f)}{\tilde{t}(f_{ij})^{old}} \tilde{t}(f_{ij}) \right).    (5)

Since Q(f) is in the exponential family, this minimization can be solved simply by moment matching up to the second order. At equilibrium the EP algorithm returns a Gaussian approximation to the posterior distribution

P(f|E) ≈ N(0, A)    (6)

where A = (\Sigma^{-1} + \Pi)^{-1}, \Pi = \sum_{ij} \check{\Pi}_{ij}, and \check{\Pi}_{ij} is an n × n matrix with four non-zero entries augmented from \Pi_{ij}. Note that the matrix \Pi can be very sparse. The normalization factor of this Gaussian approximation serves as approximate model evidence and can be written explicitly as

P(E) ≈ \frac{|A|^{1/2}}{|\Sigma|^{1/2}} \prod_{ij} s_{ij}    (7)

The detailed update formulas are omitted here to save space. The approximate evidence (7) provides an upper bound on the true value of P(E) (Wainwright et al., 2005). Its partial derivatives with respect to the model parameters can be derived analytically (Seeger, 2003), so a gradient-based procedure can be employed for hyperparameter tuning. Although the EP algorithm is known to work quite well in practice, there is no general guarantee of convergence to the equilibrium. Opper and Winther (2005) proposed expectation consistent (EC) inference, a new framework for approximations that requires two tractable distributions to match on a set of moments. We plan to investigate the EC algorithm as future work.

[Footnote 1: We could specify different noise levels for weighted edges. In this paper, we focus on unweighted edges only.]
[Footnote 2: The likelihood function we defined could also be approximated by a Gaussian mixture of two symmetric components, but the number of components grows exponentially under multiplication.]
2.3 Data-dependent Covariance Function

After the approximate inference outlined above, the posterior process conditioned on E is explicitly given by a modified covariance function defined in the following proposition.

Proposition: Given (6), for any finite collection of data points X, the latent random variables {f_x}_{x∈X} conditioned on E have a multivariate normal distribution N(0, \tilde{\Sigma}), where \tilde{\Sigma} is the covariance matrix whose elements are given by evaluating the kernel function \tilde{K}(x, z) : X × X → R for x, z ∈ X, with

\tilde{K}(x, z) = K(x, z) − k_x^T (I + \Pi \Sigma)^{-1} \Pi k_z    (8)

where I is the n × n identity matrix, k_x is the column vector [K(x_1, x), ..., K(x_n, x)]^T, \Sigma is the n × n covariance matrix of the vertex set V obtained by evaluating the base kernel K, and \Pi is defined as in (6).

A proof of this proposition involves some simple matrix algebra and is omitted for brevity. RGP is obtained by a Bayesian update of a standard GP using relational knowledge, which is closely related to the warped reproducing kernel Hilbert space approach (Sindhwani et al., 2005), with the novel graph regularizer \Pi in place of the standard graph Laplacian. Alternatively, we could simply employ the standard graph Laplacian as an approximation of the matrix \Pi; this efficient approach has been studied by Sindhwani et al. (2007) for semi-supervised classification problems.

2.4 Linkage Prediction

Given a RGP, the joint distribution of the random variables f_{rs} = [f_{x_r}, f_{x_s}]^T associated with a test pair x_r and x_s is a Gaussian as well. The linkage predictive distribution P(f_{rs}|E) can be written explicitly as a zero-mean bivariate Gaussian N(f_{rs}; 0, \tilde{\Sigma}_{rs}) with covariance matrix

\tilde{\Sigma}_{rs} = \begin{bmatrix} \tilde{K}(x_r, x_r) & \tilde{K}(x_r, x_s) \\ \tilde{K}(x_s, x_r) & \tilde{K}(x_s, x_s) \end{bmatrix}

where \tilde{K} is defined as in (8). The predictive probability of having a positive edge can be evaluated as P(E_{rs}|E) = \int P_{ideal}(E_{rs}|f_{rs}) N(f_{rs}; 0, \tilde{\Sigma}_{rs}) df_{x_r} df_{x_s}, which simplifies to

P(E_{rs}|E) = \frac{1}{2} + \frac{\arcsin(\rho E_{rs})}{\pi}    (9)

where \rho = \tilde{K}(x_r, x_s) / \sqrt{\tilde{K}(x_r, x_r) \tilde{K}(x_s, x_s)}.
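A sketch of Eqs. (8)-(9) restricted to the training inputs, where the vectors k_x stack into Σ itself; the matrix Π is taken as given, standing in for the EP output of section 2.2:

```python
import numpy as np

# Sketch of the data-dependent covariance (8) and the linkage predictive
# probability (9), evaluated on the training inputs only.

def posterior_kernel(Sigma, Pi):
    """K_tilde = Sigma - Sigma (I + Pi Sigma)^{-1} Pi Sigma, Eq. (8) on the vertex set."""
    n = Sigma.shape[0]
    M = np.linalg.solve(np.eye(n) + Pi @ Sigma, Pi @ Sigma)
    return Sigma - Sigma @ M

def positive_link_probability(K_tilde, r, s):
    """P(E_rs = +1 | E) = 1/2 + arcsin(rho)/pi, rho the posterior correlation."""
    rho = K_tilde[r, s] / np.sqrt(K_tilde[r, r] * K_tilde[s, s])
    return 0.5 + np.arcsin(np.clip(rho, -1.0, 1.0)) / np.pi
```

On the training set, Eq. (8) coincides with A = (Σ⁻¹ + Π)⁻¹ from Eq. (6) by the Woodbury identity, which gives a convenient correctness check for any implementation.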
Eq. (9) essentially evaluates the updated correlation between f_{x_r} and f_{x_s} after we learn from the observed linkages.

3 Semi-supervised Learning

We now apply the RGP framework to semi-supervised learning, where a large collection of unlabelled examples is available and labelled data is scarce. Unlabelled examples often identify data clusters or low-dimensional data manifolds, and it is commonly assumed that the labels of points within a cluster, or nearby on a manifold, are highly correlated (Chapelle et al., 2003; Zhu et al., 2003). To apply RGP, we construct positive pairwise relations between examples that fall within each other's K-nearest neighborhood. K can be heuristically set to the smallest integer that yields a connected graph over the labelled and unlabelled examples, i.e. a graph with a path between every pair of nodes. Learning on these constructed relational data results in a RGP as described in the previous section (see section 4.1 for an illustration).

With the RGP as our new prior, supervised learning can be carried out in a straightforward way. In the following we focus on binary classification, but the procedure is also applicable to regression, multi-class classification and ranking. Given a set of labelled pairs {z_ℓ, y_ℓ}_{ℓ=1}^m where y_ℓ ∈ {+1, −1}, the Gaussian process classifier (Rasmussen & Williams, 2006) relates the variable f_{z_ℓ} at z_ℓ to the label y_ℓ through a probit noise model, i.e. P(y_ℓ | f_{z_ℓ}) = Φ(y_ℓ f_{z_ℓ} / σ_n), where Φ is the cumulative normal and σ_n² specifies the label noise level. Combining the probit likelihood with the RGP prior defined by the covariance function (8), we have the posterior distribution

P(f_ℓ | Y, E) = \frac{1}{P(Y|E)} P(f_ℓ | E) \prod_ℓ P(y_ℓ | f_{z_ℓ})

where f_ℓ = [f_{z_1}, ..., f_{z_m}]^T, P(f_ℓ | E) is a zero-mean Gaussian with an m × m covariance matrix \tilde{\Sigma}_ℓ whose entries are defined by (8), and P(Y|E) is the normalization factor.
The posterior distribution can be approximated as a Gaussian as well, denoted N(µ, C), and the quantity P(Y|E) can be evaluated accordingly (Seeger, 2003). The predictive distribution of the variable f_{z_t} at a test case z_t then becomes a Gaussian, i.e. P(f_{z_t} | Y, E) ≈ N(µ_t, σ_t²), where

µ_t = k_t^T \tilde{\Sigma}_ℓ^{-1} µ   and   σ_t² = \tilde{K}(z_t, z_t) − k_t^T (\tilde{\Sigma}_ℓ^{-1} − \tilde{\Sigma}_ℓ^{-1} C \tilde{\Sigma}_ℓ^{-1}) k_t

with k_t = [\tilde{K}(z_1, z_t), ..., \tilde{K}(z_m, z_t)]^T. One can compute the Bernoulli distribution over the test label y_t by

P(y_t | Y, E) = Φ\left( \frac{µ_t}{\sqrt{σ_n² + σ_t²}} \right).    (10)

To summarize, we first incorporate linkage information into a standard GP, which leads to a RGP, and then perform standard inference with the RGP as the prior in supervised learning. Although we describe RGP in two separate steps, these procedures can be seamlessly merged within the Bayesian framework. For model selection, it is advantageous to directly use the joint evidence

P(Y, E) = P(Y|E) P(E)    (11)

to determine the model parameters (such as the kernel parameter, the edge noise level and the label noise level). Note that P(Y, E) explicitly involves unlabelled data in model selection. This can be particularly useful when labelled data is very scarce and possibly noisy.

4 Numerical Experiments

We start with a synthetic case to illustrate the proposed algorithm (RGP), and then verify the usefulness of this approach on three real-world data sets. Throughout the experiments, we consistently compare with the standard Gaussian process classifier (GPC); RGP and GPC differ in the prior only. We employ the linear kernel K(x, z) = x · z or the Gaussian kernel K(x, z) = \exp(−\frac{κ}{2} ‖x − z‖₂²), and shift the origin of the kernel space to the empirical mean, i.e.

K(x, z) − \frac{1}{n} \sum_i K(x, x_i) − \frac{1}{n} \sum_i K(z, x_i) + \frac{1}{n^2} \sum_i \sum_j K(x_i, x_j)

where n is the number of available labelled and unlabelled data points. This centralized kernel is then used as the base kernel in our experiments. The label noise level σ_n² in the GPC and RGP models is fixed at 10⁻⁴.
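The empirical-mean centering of the kernel described above is equivalent to the double-centering K̃ = H K H with H = I − (1/n)11ᵀ; a minimal sketch:

```python
import numpy as np

# Sketch of the kernel centering used in the experiments: shift the origin of
# the feature space to the empirical mean of all labelled and unlabelled points.

def center_kernel(K):
    """K_ij - mean_k K_ik - mean_k K_jk + mean_kl K_kl, i.e. H K H with H = I - 11^T/n."""
    row = K.mean(axis=1, keepdims=True)
    return K - row - row.T + K.mean()
```

Centered feature vectors sum to zero, so every row of the centered kernel sums to zero, and centering is idempotent.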
The edge noise level σ² of the RGP models is usually varied from 5 to 0.05. The optimal settings of σ² and of the κ in the Gaussian kernel are determined by the joint evidence (11) in each trial. When constructing undirected K-nearest-neighbor graphs, K is fixed at the minimal integer required to obtain a connected graph.

[Figure 1: Results on the synthetic dataset. The 30 samples drawn from the Gaussian mixture are presented as dots in (a), and the two labelled samples are indicated by a diamond and a circle respectively. The curves in (a) present the semi-supervised predictive distributions P(+1|x) and P(−1|x). Panel (b) plots log P(E) and log P(E, Y) against κ; the best κ value is marked by the cross. The prior covariance matrix of RGP learnt from the data is presented in (c).]

Table 1: The four universities are Cornell University, the University of Texas at Austin, the University of Washington and the University of Wisconsin. The numbers of categorized Web pages and undirected linkages in the four-university dataset are listed in the first columns. The averaged AUC scores of label prediction on unlabelled cases are recorded along with standard deviations over 100 trials.

Univ.   Stud  Other  All   Link   | Student or Not: GPC / LapSVM / RGP              | Other or Not: GPC / LapSVM / RGP
Corn.   128   617    865   13177  | 0.825±0.016 / 0.987±0.008 / 0.989±0.009         | 0.708±0.021 / 0.865±0.038 / 0.884±0.025
Texa.   148   571    827   16090  | 0.899±0.016 / 0.994±0.007 / 0.999±0.001         | 0.799±0.021 / 0.932±0.026 / 0.906±0.026
Wash.   126   939    1205  15388  | 0.839±0.018 / 0.957±0.014 / 0.961±0.009         | 0.782±0.023 / 0.828±0.025 / 0.877±0.024
Wisc.   156   942    1263  21594  | 0.883±0.013 / 0.976±0.029 / 0.992±0.008         | 0.839±0.014 / 0.812±0.030 / 0.899±0.015

4.1 Demonstration

Suppose samples are distributed as a Gaussian mixture with two components in one-dimensional space, e.g.
0.4 · N(−2.5, 1) + 0.6 · N(2.0, 1). We randomly collected 30 samples from this distribution, shown as dots on the x axis of Figure 1(a). With K = 3, there are 56 "positive" edges over these 30 samples. We fixed σ² = 1 for all the edges, and varied the parameter κ from 0.01 to 10. At each setting, we carried out the Gaussian approximation by EP as described in section 2.2. Based on the approximate model evidence P(E) (7), presented in Figure 1(b), we located the best value κ = 0.4. Figure 1(c) presents the posterior covariance function K̃ (8) at this optimal setting. Compared to the data-independent prior covariance function defined by the Gaussian kernel, the posterior covariance function captures the density information of the unlabelled samples: pairs within the same cluster become positively correlated, whereas pairs straddling the two clusters turn out to be negatively correlated. This is learnt without any explicit assumption on the density distribution. Given two labelled samples, one per class, indicated by the diamond and the circle in Figure 1(a), we carried out supervised learning on the basis of the new prior K̃, as described in section 3. The joint model evidence P(Y|E)P(E) is also plotted in Figure 1(b). The corresponding predictive distribution (10) with the optimal κ = 0.4 is presented in Figure 1(a). Note that the decision boundary of the standard GPC would lie around x = 1; our decision boundary shifts significantly toward the low-density region, respecting the geometry of the data.

4.2 The Four-University Dataset

We considered a subset of the WebKB dataset for categorization tasks.³ The subset, collected from the Web sites of the computer science departments of four universities, contains 4160 pages and 9998 hyperlinks interconnecting them. These pages have been manually classified into seven categories: student, course, faculty, staff, department, project and other.
The text content of each Web page was preprocessed into a bag-of-words vector of "term frequency" components scaled by "inverse document frequency", which was used as the input attributes. Each document vector was normalized to unit length. The hyperlinks were translated into 66249 undirected "positive" linkages over the pages, under the assumption that two pages are likely to be positively correlated if they are hyper-linked by the same hub page. Note that there are no "negative" linkages in this case. We considered two classification tasks, student vs. non-student and other vs. non-other, for each of the four universities. The numbers of samples and linkages for the four universities are listed in Table 1. We randomly selected 10% of the samples as labelled data and used the remaining samples as unlabelled data; the selection was repeated 100 times. The linear kernel was used as the base kernel in these experiments.

[Footnote 3: The dataset comes from the Web→KB project, see http://www-2.cs.cmu.edu/∼webkb/.]

[Figure 2: Test AUC results of the two semi-supervised learning tasks, PCMAC in (a) and USPS 3vs5 in (b). The grouped boxes from left to right represent the results of GPC, LapSVM, and RGP respectively at different percentages of labelled samples (0.1% to 10%) over 100 trials. The notched boxes have lines at the lower quartile, median, and upper quartile values. The whiskers extend from each end of the box to the most extreme data value within 1.5 times the interquartile range; outliers beyond the whiskers are displayed as dots.]
We conducted this experiment in a transductive setting where the entire linkage data was used to learn the RGP model, and comparisons were made with GPC for predicting the labels of unlabelled samples. We also compare with a discriminative kernel approach to semi-supervised learning, the Laplacian SVM (Sindhwani et al., 2005), using the linear kernel and a graph-Laplacian-based regularizer. We recorded the average AUC for predicting labels of unlabelled cases in Table 1.⁴ Our RGP models significantly outperform the GPC models by incorporating the linkage information in modelling. RGP is very competitive with LapSVM on "Student or Not" while yielding better results on 3 out of 4 tasks of "Other or Not". As future work, it would be interesting to utilize weighted linkages and to compare with other graph kernels.

4.3 Semi-supervised Learning

We chose a binary classification problem from the 20-newsgroup dataset: 985 PC documents vs. 961 MAC documents. The documents were preprocessed into vectors with 7510 elements, in the same way as in the previous section. We randomly selected 1460 documents as training data and tested on the remaining 486 documents. We varied the percentage of labelled data gradually from 0.1% to 10%, and at each percentage repeated the random selection of labelled data 100 times. We used the linear kernel in the RGP and GPC models. With K = 4, we got 4685 edges over the 1460 training samples. The test results on the 486 documents are presented in Figure 2(a) as a boxplot. Model parameters for LapSVM were tuned using cross-validation with 50 labelled samples, since it is difficult for discriminative kernel approaches to carry out cross-validation when labelled samples are scarce. Our algorithm yields much better results than GPC and LapSVM, especially when the fraction of labelled data is less than 5%.
When labelled samples are few (a typical case in semi-supervised learning), cross-validation becomes hard to use, whereas our approach provides Bayesian model selection via the model evidence. The U.S. Postal Service dataset (USPS) of handwritten digits consists of 16 × 16 gray-scale images. We focused on constructing a classifier to distinguish digit 3 from digit 5. For comparison purposes, we used the training/test split generated and used by Lawrence and Jordan (2005). This partition contains 1214 training samples (556 samples of digit 3 and 658 samples of digit 5) and 326 test samples. With K = 3, we obtained 2769 edges over the 1214 training samples. We randomly picked a subset of the training samples as labelled data and treated the remaining samples as unlabelled. We varied the percentage of labelled data gradually from 0.1% to 10%, and at each percentage repeated the selection of labelled data 100 times. In this experiment, we employed the Gaussian kernel, varied the edge noise level σ² from 5 to 0.5, and tried the following values for κ: [0.001, 0.0025, 0.005, 0.0075, 0.01, 0.025, 0.05, 0.075, 0.1]. The optimal values of κ and σ² were decided by the joint evidence P(Y, E) (11). We report the test AUC on the 326 test samples in Figure 2(b) as a boxplot, along with the test results of GPC and LapSVM. When the percentage of labelled data is less than 5%, our algorithm achieved substantially better performance than GPC, and very competitive results compared with LapSVM (tuned with 50 labelled samples), even though RGP used fewer labelled samples in model selection. Comparing with the performance of the transductive SVM (TSVM) and the null category noise model for binary classification (NCNM) reported by Lawrence and Jordan (2005), we are encouraged to see that our approach outperforms TSVM and NCNM in this experiment.

[Footnote 4: AUC stands for the area under the Receiver Operating Characteristic (ROC) curve.]
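The graph construction used throughout sections 3-4 (connect each sample to its K nearest neighbours, with K the smallest value making the graph connected) can be sketched as follows; this is a plain re-implementation of the stated heuristic, not the authors' code:

```python
import numpy as np

def knn_edges(X, K):
    """Symmetric K-nearest-neighbour edge set over the rows of X."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    edges = set()
    for i in range(len(X)):
        for j in np.argsort(D[i])[1:K + 1]:  # skip index 0, the point itself
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges

def is_connected(n, edges):
    """Depth-first search from node 0 over the undirected edge set."""
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return len(seen) == n

def minimal_connecting_K(X):
    """Smallest K whose K-NN graph is connected (terminates: K = n-1 is complete)."""
    K = 1
    while not is_connected(len(X), knn_edges(X, K)):
        K += 1
    return K
```

On a sample from a mixture like the one in section 4.1, this returns the smallest K bridging the two clusters.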
5 Conclusion

We developed a Bayesian framework to learn from relational data based on Gaussian processes. The resulting relational Gaussian processes provide a unified data-dependent covariance function for many learning tasks. We applied this framework to semi-supervised learning and validated the approach on several real-world datasets. While this paper has focused on modelling symmetric (undirected) relations, the relational Gaussian process framework can be generalized to asymmetric (directed) relations as well as to multiple classes of relations. Recently, Yu et al. (2006) have represented each relational pair by a tensor product of the attributes of the associated nodes, and have further proposed efficient algorithms; this is a promising direction.

Acknowledgements

W. Chu is partly supported by a research contract from Consolidated Edison. We thank Dengyong Zhou for sharing the preprocessed Web-KB data.

References

Bar-Hillel, A., Hertz, T., Shental, N., & Weinshall, D. (2003). Learning distance functions using equivalence relations. Proceedings of the International Conference on Machine Learning (pp. 11–18).

Basu, S., Bilenko, M., & Mooney, R. J. (2004). A probabilistic framework for semi-supervised clustering. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 59–68).

Chapelle, O., Weston, J., & Schölkopf, B. (2003). Cluster kernels for semi-supervised learning. Neural Information Processing Systems 15 (pp. 585–592).

Getoor, L., Friedman, N., Koller, D., & Taskar, B. (2002). Learning probabilistic models of link structure. Journal of Machine Learning Research, 3, 679–707.

Kapoor, A., Qi, Y., Ahn, H., & Picard, R. (2005). Hyperparameter and kernel learning for graph-based semi-supervised classification. Neural Information Processing Systems 18.

Krishnapuram, B., Williams, D., Xue, Y., Carin, L., Hartemink, A., & Figueiredo, M. (2004). On semi-supervised classification. Neural Information Processing Systems (NIPS).
Lawrence, N. D., & Jordan, M. I. (2005). Semi-supervised learning via Gaussian processes. Advances in Neural Information Processing Systems 17 (pp. 753–760).

Minka, T. P. (2001). A family of algorithms for approximate Bayesian inference. Ph.D. thesis, Massachusetts Institute of Technology.

Opper, M., & Winther, O. (2005). Expectation consistent approximate inference. Journal of Machine Learning Research, 6, 2177–2204.

Rasmussen, C. E., & Williams, C. K. I. (2006). Gaussian processes for machine learning. The MIT Press.

Seeger, M. (2003). Bayesian Gaussian process models: PAC-Bayesian generalisation error bounds and sparse approximations. Doctoral dissertation, University of Edinburgh.

Sindhwani, V., Chu, W., & Keerthi, S. S. (2007). Semi-supervised Gaussian process classification. The Twentieth International Joint Conference on Artificial Intelligence. To appear.

Sindhwani, V., Niyogi, P., & Belkin, M. (2005). Beyond the point cloud: from transductive to semi-supervised learning. Proceedings of the 22nd International Conference on Machine Learning (pp. 825–832).

Taskar, B., Abbeel, P., & Koller, D. (2002). Discriminative probabilistic models for relational data. Proceedings of the Conference on Uncertainty in Artificial Intelligence.

Wagstaff, K., Cardie, C., Rogers, S., & Schroedl, S. (2001). Constrained k-means clustering with background knowledge. Proceedings of the International Conference on Machine Learning (pp. 577–584).

Wainwright, M. J., Jaakkola, T., & Willsky, A. S. (2005). A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51, 2313–2335.

Yu, K., Chu, W., Yu, S., Tresp, V., & Xu, Z. (2006). Stochastic relational models for discriminative link prediction. Advances in Neural Information Processing Systems. To appear.

Zhou, D., Bousquet, O., Lal, T., Weston, J., & Schölkopf, B. (2004). Learning with local and global consistency. Advances in Neural Information Processing Systems 16 (pp. 321–328).
Zhu, X., Ghahramani, Z., & Lafferty, J. (2003). Semi-supervised learning using Gaussian fields and harmonic functions. Proceedings of the 20th International Conference on Machine Learning.
Convex Repeated Games and Fenchel Duality

Shai Shalev-Shwartz¹ and Yoram Singer¹,²
¹ School of Computer Sci. & Eng., The Hebrew University, Jerusalem 91904, Israel
² Google Inc., 1600 Amphitheater Parkway, Mountain View, CA 94043, USA

Abstract

We describe an algorithmic framework for an abstract game which we term a convex repeated game. We show that various online learning and boosting algorithms can all be derived as special cases of our algorithmic framework. This unified view explains the properties of existing algorithms and also enables us to derive several new interesting algorithms. Our algorithmic framework stems from a connection that we build between the notions of regret in game theory and weak duality in convex optimization.

1 Introduction and Problem Setting

Several problems arising in machine learning can be modeled as a convex repeated game. Convex repeated games are closely related to online convex programming (see [19, 9] and the discussion in the last section). A convex repeated game is a two-player game that is performed in a sequence of consecutive rounds. On round t of the repeated game, the first player chooses a vector w_t from a convex set S. Next, the second player responds with a convex function g_t : S → R. Finally, the first player suffers an instantaneous loss g_t(w_t). We study the game from the viewpoint of the first player, whose goal is to minimize its cumulative loss, \sum_t g_t(w_t). To motivate this rather abstract setting, let us first cast the more familiar setting of online learning as a convex repeated game. Online learning is performed in a sequence of consecutive rounds. On round t, the learner first receives a question, cast as a vector x_t, and is required to provide an answer to this question. For example, x_t can be an encoding of an email message and the question is whether the email is spam or not.
The prediction of the learner is performed based on a hypothesis, h_t : X → Y, where X is the set of questions and Y is the set of possible answers. In the aforementioned example, Y would be {+1, −1}, where +1 stands for a spam email and −1 stands for a benign one. After predicting an answer, the learner receives the correct answer to the question, denoted y_t, and suffers a loss according to a loss function ℓ(h_t, (x_t, y_t)). In most cases, the hypotheses used for prediction come from a parameterized set of hypotheses, H = {h_w : w ∈ S}. For example, the set of linear classifiers, which is used for answering yes/no questions, is defined as H = {h_w(x) = sign(⟨w, x⟩) : w ∈ R^n}. Thus, rather than saying that on round t the learner chooses a hypothesis, we can say that the learner chooses a vector w_t and its hypothesis is h_{w_t}. Next, we note that once the environment chooses a question-answer pair (x_t, y_t), the loss function becomes a function over the hypothesis space, or equivalently over the set of parameter vectors S. We can therefore redefine the online learning process as follows. On round t, the learner chooses a vector w_t ∈ S, which defines a hypothesis h_{w_t} to be used for prediction. Then, the environment chooses a question-answer pair (x_t, y_t), which induces the following loss function over the set of parameter vectors: g_t(w) = ℓ(h_w, (x_t, y_t)). Finally, the learner suffers the loss g_t(w_t) = ℓ(h_{w_t}, (x_t, y_t)). We have therefore described the process of online learning as a convex repeated game.

In this paper we assess the performance of the first player using the notion of regret. Given a number of rounds T and a fixed vector u ∈ S, we define the regret of the first player as the excess loss for not consistently playing the vector u,

\frac{1}{T} \sum_{t=1}^{T} g_t(w_t) - \frac{1}{T} \sum_{t=1}^{T} g_t(u) .

Our main result is an algorithmic framework for the first player which guarantees low regret with respect to any vector u ∈ S.
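To make the regret just defined concrete, here is a small self-contained instance of a convex repeated game in which the first player runs online gradient descent against squared-distance losses. This is only an illustration of the regret definition, not the framework derived below; since these losses happen to be 2-strongly convex, the step size η_t = 1/(2t) makes w_t exactly the running mean of the past z's.

```python
import numpy as np

def average_regret(T, seed=0):
    """Play g_t(w) = ||w - z_t||^2 on S = R^2 with gradient steps eta_t = 1/(2t)."""
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=(T, 2))
    w = np.zeros(2)
    online_loss = 0.0
    for t, z in enumerate(Z, start=1):
        online_loss += ((w - z) ** 2).sum()   # suffer g_t(w_t)
        w -= (1.0 / (2 * t)) * 2.0 * (w - z)  # w becomes the mean of z_1..z_t
    u = Z.mean(axis=0)                        # best fixed vector in hindsight
    hindsight_loss = ((Z - u) ** 2).sum()
    return (online_loss - hindsight_loss) / T
```

The average regret shrinks roughly like log T / T here because strong convexity helps; for merely convex losses, the generic rate established below is the (f(u) + L)/√T of Eq. (1).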
Specifically, we derive regret bounds that take the following form:

∀u ∈ S,  \frac{1}{T} \sum_{t=1}^{T} g_t(w_t) - \frac{1}{T} \sum_{t=1}^{T} g_t(u) ≤ \frac{f(u) + L}{\sqrt{T}} ,    (1)

where f : S → R and L ∈ R₊. Informally, the function f measures the "complexity" of vectors in S and the scalar L is related to some generalized Lipschitz property of the functions g_1, ..., g_T. We defer the exact requirements we impose on f and L to later sections. Our algorithmic framework emerges from a representation of the regret bound given in Eq. (1) as an optimization problem. Specifically, we rewrite Eq. (1) as follows:

\frac{1}{T} \sum_{t=1}^{T} g_t(w_t) ≤ \inf_{u ∈ S} \frac{1}{T} \sum_{t=1}^{T} g_t(u) + \frac{f(u) + L}{\sqrt{T}} .    (2)

That is, the average loss of the first player should be bounded above by the minimum value of an optimization problem in which we jointly minimize the average loss of u and the "complexity" of u as measured by the function f. Note that the optimization problem on the right-hand side of Eq. (2) can only be solved in hindsight, after observing the entire sequence of loss functions. Nevertheless, writing the regret bound as in Eq. (2) implies that the average loss of the first player forms a lower bound for a minimization problem. The notion of duality, commonly used in convex optimization theory, plays an important role in obtaining lower bounds for the minimal value of a minimization problem (see for example [14]). By generalizing the notion of Fenchel duality, we are able to derive a dual optimization problem which can be optimized incrementally as the game progresses. To derive explicit quantitative regret bounds, we make immediate use of the fact that the dual objective lower bounds the primal objective. We therefore reduce the process of playing convex repeated games to the task of incrementally increasing the dual objective function; the amount by which the dual increases serves as a new and natural notion of progress.
By doing so we are able to tie together the primal objective value, the average loss of the first player, and the increase in the dual.

The rest of this paper is organized as follows. In Sec. 2 we establish our notation and point to a few mathematical tools that we use throughout the paper. Our main tool for deriving algorithms for playing convex repeated games is a generalization of Fenchel duality, described in Sec. 3. Our algorithmic framework is given in Sec. 4 and analyzed in Sec. 5. The generality of our framework allows us to utilize it in different problems arising in machine learning. Specifically, in Sec. 6 we underscore the applicability of our framework to online learning, and in Sec. 7 we outline and analyze boosting algorithms based on our framework. We conclude with a discussion and pointers to related work in Sec. 8. Due to the lack of space, some of the details are omitted from the paper and can be found in [16].

2 Mathematical Background

We denote scalars with lower case letters (e.g. x and w), and vectors with bold face letters (e.g. x and w). The inner product between vectors x and w is denoted by ⟨x, w⟩. Sets are designated by upper case letters (e.g. S). The set of non-negative real numbers is denoted by R₊. For any k ≥ 1, the set of integers {1, ..., k} is denoted by [k]. A norm of a vector x is denoted by ‖x‖. The dual norm is defined as ‖λ‖⋆ = sup{⟨x, λ⟩ : ‖x‖ ≤ 1}. For example, the Euclidean norm, ‖x‖₂ = ⟨x, x⟩^{1/2}, is dual to itself, and the ℓ₁ norm, ‖x‖₁ = \sum_i |x_i|, is dual to the ℓ∞ norm, ‖x‖∞ = max_i |x_i|. We next recall a few definitions from convex analysis. The reader familiar with convex analysis may proceed to Lemma 1; for a more thorough introduction see for example [1]. A set S is convex if for any two vectors w₁, w₂ in S, the entire line segment between w₁ and w₂ is also within S. That is, for any α ∈ [0, 1] we have that α w₁ + (1 − α) w₂ ∈ S. A set S is open if every point in S has a neighborhood lying in S.
A set S is closed if its complement is an open set. A function f : S → R is closed and convex if for any scalar α ∈ R, the level set {w : f(w) ≤ α} is closed and convex. The Fenchel conjugate of a function f : S → R is defined as

f⋆(θ) = \sup_{w ∈ S} ⟨w, θ⟩ − f(w) .

If f is closed and convex then the Fenchel conjugate of f⋆ is f itself. The Fenchel-Young inequality states that for any w and θ we have f(w) + f⋆(θ) ≥ ⟨w, θ⟩. A vector λ is a sub-gradient of a function f at w if for all w′ ∈ S we have f(w′) − f(w) ≥ ⟨w′ − w, λ⟩. The differential set of f at w, denoted ∂f(w), is the set of all sub-gradients of f at w. If f is differentiable at w then ∂f(w) consists of a single vector, which amounts to the gradient of f at w and is denoted by ∇f(w). Sub-gradients play an important role in the definition of the Fenchel conjugate. In particular, the following lemma states that if λ ∈ ∂f(w) then the Fenchel-Young inequality holds with equality.

Lemma 1: Let f be a closed and convex function and let ∂f(w′) be its differential set at w′. Then, for all λ′ ∈ ∂f(w′) we have f(w′) + f⋆(λ′) = ⟨λ′, w′⟩.

A continuous function f is σ-strongly convex over a convex set S with respect to a norm ‖·‖ if S is contained in the domain of f and for all v, u ∈ S and α ∈ [0, 1] we have

f(α v + (1 − α) u) ≤ α f(v) + (1 − α) f(u) − \frac{σ}{2} α (1 − α) ‖v − u‖² .    (3)

Strongly convex functions play an important role in our analysis, primarily due to the following lemma.

Lemma 2: Let ‖·‖ be a norm over R^n and let ‖·‖⋆ be its dual norm. Let f be a σ-strongly convex function on S and let f⋆ be its Fenchel conjugate. Then f⋆ is differentiable with ∇f⋆(θ) = \arg\max_{x ∈ S} ⟨θ, x⟩ − f(x). Furthermore, for any θ, λ ∈ R^n we have

f⋆(θ + λ) − f⋆(θ) ≤ ⟨∇f⋆(θ), λ⟩ + \frac{1}{2σ} ‖λ‖⋆² .

Two notable examples of strongly convex functions which we use are as follows.

Example 1: The function f(w) = \frac{1}{2}‖w‖₂² is 1-strongly convex over S = R^n with respect to the ℓ₂ norm. Its conjugate function is f⋆(θ) = \frac{1}{2}‖θ‖₂².
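A quick numerical illustration of the conjugate pair in Example 1 and of the Fenchel-Young inequality (a grid-based sketch, not a proof):

```python
import numpy as np

# For f(w) = 1/2 ||w||^2 the sup in the conjugate, sup_w <w, theta> - f(w),
# is attained at w = theta, giving f*(theta) = 1/2 ||theta||^2.

def f(w):
    return 0.5 * float(np.dot(w, w))

def f_star_on_grid(theta, grid):
    """Approximate f*(theta) = sup_w <w, theta> - f(w) over a finite set of w's."""
    return max(float(np.dot(w, theta)) - f(w) for w in grid)
```

If theta itself lies on the grid, the supremum is attained exactly, and Fenchel-Young f(w) + f⋆(θ) ≥ ⟨w, θ⟩ reduces to ½‖w − θ‖² ≥ 0, with equality at θ = ∇f(w) = w as Lemma 1 promises.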
Example 2 The function f(w) = Σ_{i=1}^n w_i log(w_i/(1/n)) is 1-strongly convex over the probability simplex, S = {w ∈ R^n_+ : ∥w∥_1 = 1}, with respect to the ℓ1 norm. Its conjugate function is f⋆(θ) = log((1/n) Σ_{i=1}^n exp(θ_i)).

3 Generalized Fenchel Duality

In this section we derive our main analysis tool. We start by considering the following optimization problem,

inf_{w∈S} ( c f(w) + Σ_{t=1}^T g_t(w) ) ,

where c is a non-negative scalar. An equivalent problem is

inf_{w_0, w_1, ..., w_T} ( c f(w_0) + Σ_{t=1}^T g_t(w_t) )  s.t.  w_0 ∈ S and ∀t ∈ [T], w_t = w_0 .

Introducing T vectors λ_1, . . . , λ_T, where each λ_t ∈ R^n is a vector of Lagrange multipliers for the equality constraint w_t = w_0, we obtain the following Lagrangian

L(w_0, w_1, . . . , w_T, λ_1, . . . , λ_T) = c f(w_0) + Σ_{t=1}^T g_t(w_t) + Σ_{t=1}^T ⟨λ_t, w_0 − w_t⟩ .

The dual problem is the task of maximizing the following dual objective value,

D(λ_1, . . . , λ_T) = inf_{w_0∈S, w_1, ..., w_T} L(w_0, w_1, . . . , w_T, λ_1, . . . , λ_T)
= −c sup_{w_0∈S} ( ⟨w_0, −(1/c) Σ_{t=1}^T λ_t⟩ − f(w_0) ) − Σ_{t=1}^T sup_{w_t} ( ⟨w_t, λ_t⟩ − g_t(w_t) )
= −c f⋆( −(1/c) Σ_{t=1}^T λ_t ) − Σ_{t=1}^T g_t⋆(λ_t) ,

where, following the exposition of Sec. 2, f⋆, g_1⋆, . . . , g_T⋆ are the Fenchel conjugate functions of f, g_1, . . . , g_T. Therefore, the generalized Fenchel dual problem is

sup_{λ_1, ..., λ_T}  −c f⋆( −(1/c) Σ_{t=1}^T λ_t ) − Σ_{t=1}^T g_t⋆(λ_t) .   (4)

Note that when T = 1 and c = 1, the above duality reduces to the usual Fenchel duality.

4 A Template Learning Algorithm for Convex Repeated Games

In this section we describe a template learning algorithm for playing convex repeated games. As mentioned before, we study convex repeated games from the viewpoint of the first player, which we denote for short as P1. Recall that we would like our learning algorithm to achieve a regret bound of the form given in Eq. (2). We start by rewriting Eq. (2) as follows

Σ_{t=1}^T g_t(w_t) − c L ≤ inf_{u∈S} ( c f(u) + Σ_{t=1}^T g_t(u) ) ,   (5)

where c = √T.
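The conjugate stated in Example 2 above can also be verified numerically: for n = 2 we approximate sup_w ⟨w, θ⟩ − f(w) over the simplex by a fine grid and compare it with the closed form log((1/n) Σ_i exp(θ_i)). This is our own sanity check with an illustrative θ, not code from the paper.

```python
import math

def f(w, n=2):
    # relative entropy to the uniform distribution (Example 2)
    return sum(wi * math.log(wi * n) for wi in w if wi > 0)

def f_star_closed(theta, n=2):
    # claimed conjugate: log((1/n) * sum_i exp(theta_i))
    return math.log(sum(math.exp(t) for t in theta) / n)

theta = (0.7, -0.3)
grid = [i / 10000 for i in range(1, 10000)]
f_star_grid = max(p * theta[0] + (1 - p) * theta[1] - f((p, 1 - p))
                  for p in grid)
assert abs(f_star_grid - f_star_closed(theta)) < 1e-4
```

The supremum is attained at the softmax of θ, which for this θ lies well inside the simplex, so the grid approximation is accurate.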
Thus, up to the sublinear term c L, the cumulative loss of P1 lower bounds the optimum of the minimization problem on the right-hand side of Eq. (5). In the previous section we derived the generalized Fenchel dual of the right-hand side of Eq. (5). Our construction is based on the weak duality theorem, which states that any value of the dual problem is smaller than the optimal value of the primal problem. The algorithmic framework we propose is therefore derived by incrementally ascending the dual objective function. Intuitively, by ascending the dual objective we move closer to the optimal primal value, and therefore our performance becomes similar to the performance of the best fixed weight vector which minimizes the right-hand side of Eq. (5). Initially, we use the elementary dual solution λ_t^1 = 0 for all t. We assume that inf_w f(w) = 0 and, for all t, inf_w g_t(w) = 0, which imply that D(λ_1^1, . . . , λ_T^1) = 0. We assume in addition that f is σ-strongly convex. Therefore, based on Lemma 2, the function f⋆ is differentiable. At trial t, P1 uses for prediction the vector

w_t = ∇f⋆( −(1/c) Σ_{i=1}^T λ_i^t ) .   (6)

After predicting w_t, P1 receives the function g_t and suffers the loss g_t(w_t). Then, P1 updates the dual variables as follows. Denote by ∂_t the differential set of g_t at w_t, that is,

∂_t = {λ : ∀w ∈ S, g_t(w) − g_t(w_t) ≥ ⟨λ, w − w_t⟩} .   (7)

The new dual variables (λ_1^{t+1}, . . . , λ_T^{t+1}) are set to be any set of vectors which satisfy the following two conditions:

(i) ∃λ′ ∈ ∂_t s.t. D(λ_1^{t+1}, . . . , λ_T^{t+1}) ≥ D(λ_1^t, . . . , λ_{t−1}^t, λ′, λ_{t+1}^t, . . . , λ_T^t)
(ii) ∀i > t, λ_i^{t+1} = 0 .   (8)

In the next section we show that condition (i) ensures that the increase of the dual at trial t is proportional to the loss g_t(w_t). The second condition ensures that we can actually calculate the dual at trial t without any knowledge of the yet-to-be-seen loss functions g_{t+1}, . . . , g_T. We conclude this section with two update rules that trivially satisfy the above two conditions.
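Before turning to the concrete update rules, here is a minimal instantiation of the template (a sketch of ours, not code from the paper): with f(w) = (1/2)∥w∥² (Example 1, σ = 1) we have ∇f⋆(θ) = θ, so Eq. (6) makes w_t the scaled negative sum of the past subgradients. We run it on one-dimensional absolute-value losses and check the regret bound proved in Sec. 5 for a fixed comparator.

```python
import math

T = 400
c = math.sqrt(T)
z = [t % 2 for t in range(T)]               # targets alternate between 0 and 1
g = lambda w, zt: abs(w - zt)               # g_t(w) = |w - z_t|, inf_w g_t = 0

s = 0.0                                     # running sum of chosen subgradients
loss_alg = 0.0
for t in range(T):
    w = -s / c                              # Eq. (6): w_t = grad f*(-(1/c) * sum)
    loss_alg += g(w, z[t])
    # a subgradient of |. - z_t| at w_t (0 is a valid choice at the kink)
    s += 0.0 if w == z[t] else math.copysign(1.0, w - z[t])

u = 0.5                                     # fixed comparator
loss_u = sum(g(u, zt) for zt in z)
# Regret bound of Sec. 5 with sigma = 1 and all subgradients bounded by 1:
# sum_t g_t(w_t) - sum_t g_t(u) <= (f(u) + 1/2) * sqrt(T)
assert loss_alg - loss_u <= (0.5 * u * u + 0.5) * math.sqrt(T) + 1e-9
```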
The first update scheme simply finds λ′ ∈ ∂_t and sets

λ_i^{t+1} = { λ′ if i = t ;  λ_i^t if i ≠ t } .   (9)

The second update defines

(λ_1^{t+1}, . . . , λ_T^{t+1}) = argmax_{λ_1, ..., λ_T} D(λ_1, . . . , λ_T)  s.t.  ∀i ≠ t, λ_i = λ_i^t .   (10)

5 Analysis

In this section we analyze the performance of the template algorithm given in the previous section. Our proof technique is based on monitoring the value of the dual objective function. The main result is the following lemma, which gives upper and lower bounds for the final value of the dual objective function.

Lemma 3 Let f be a σ-strongly convex function with respect to a norm ∥·∥ over a set S and assume that min_{w∈S} f(w) = 0. Let g_1, . . . , g_T be a sequence of convex and closed functions such that inf_w g_t(w) = 0 for all t ∈ [T]. Suppose that a dual-incrementing algorithm which satisfies the conditions of Eq. (8) is run with f as a complexity function on the sequence g_1, . . . , g_T. Let w_1, . . . , w_T be the sequence of primal vectors that the algorithm generates and λ_1^{T+1}, . . . , λ_T^{T+1} be its final sequence of dual variables. Then, there exists a sequence of sub-gradients λ′_1, . . . , λ′_T, where λ′_t ∈ ∂_t for all t, such that

Σ_{t=1}^T g_t(w_t) − (1/(2σc)) Σ_{t=1}^T ∥λ′_t∥_⋆^2 ≤ D(λ_1^{T+1}, . . . , λ_T^{T+1}) ≤ inf_{w∈S} ( c f(w) + Σ_{t=1}^T g_t(w) ) .

Proof The second inequality follows directly from the weak duality theorem. Turning to the leftmost inequality, denote ∆_t = D(λ_1^{t+1}, . . . , λ_T^{t+1}) − D(λ_1^t, . . . , λ_T^t) and note that D(λ_1^{T+1}, . . . , λ_T^{T+1}) can be rewritten as

D(λ_1^{T+1}, . . . , λ_T^{T+1}) = Σ_{t=1}^T ∆_t + D(λ_1^1, . . . , λ_T^1) = Σ_{t=1}^T ∆_t ,   (11)

where the last equality follows from the fact that f⋆(0) = g_1⋆(0) = . . . = g_T⋆(0) = 0. The definition of the update implies that ∆_t ≥ D(λ_1^t, . . . , λ_{t−1}^t, λ′_t, 0, . . . , 0) − D(λ_1^t, . . . , λ_{t−1}^t, 0, 0, . . . , 0) for some subgradient λ′_t ∈ ∂_t. Denoting θ_t = −(1/c) Σ_{j=1}^{t−1} λ′_j, we now rewrite the lower bound on ∆_t as

∆_t ≥ −c ( f⋆(θ_t − λ′_t/c) − f⋆(θ_t) ) − g_t⋆(λ′_t) .
Using Lemma 2 and the definition of w_t we get that

∆_t ≥ ⟨w_t, λ′_t⟩ − g_t⋆(λ′_t) − (1/(2σc)) ∥λ′_t∥_⋆^2 .   (12)

Since λ′_t ∈ ∂_t and since we assume that g_t is closed and convex, we can apply Lemma 1 to get that ⟨w_t, λ′_t⟩ − g_t⋆(λ′_t) = g_t(w_t). Plugging this equality into Eq. (12) and summing over t we obtain that Σ_{t=1}^T ∆_t ≥ Σ_{t=1}^T g_t(w_t) − (1/(2σc)) Σ_{t=1}^T ∥λ′_t∥_⋆^2. Combining the above inequality with Eq. (11) concludes our proof.

The following regret bound follows as a direct corollary of Lemma 3.

Theorem 1 Under the same conditions as in Lemma 3, denote L = (1/T) Σ_{t=1}^T ∥λ′_t∥_⋆^2. Then, for all w ∈ S we have,

(1/T) Σ_{t=1}^T g_t(w_t) − (1/T) Σ_{t=1}^T g_t(w) ≤ c f(w)/T + L/(2σc) .

In particular, if c = √T, we obtain the bound

(1/T) Σ_{t=1}^T g_t(w_t) − (1/T) Σ_{t=1}^T g_t(w) ≤ ( f(w) + L/(2σ) ) / √T .

6 Application to Online Learning

In Sec. 1 we cast the task of online learning as a convex repeated game. We now demonstrate the applicability of our algorithmic framework to the problem of instance ranking. We analyze this setting since several prediction problems, including binary classification, multiclass prediction, multilabel prediction, and label ranking, can be cast as special cases of the instance ranking problem. Recall that on each online round, the learner receives a question-answer pair. In instance ranking, the question is encoded by a matrix X_t of dimension k_t × n and the answer is a vector y_t ∈ R^{k_t}. The semantics of y_t is as follows. For any pair (i, j), if y_{t,i} > y_{t,j} then we say that y_t ranks the i'th row of X_t ahead of the j'th row of X_t. We also interpret y_{t,i} − y_{t,j} as the confidence with which the i'th row should be ranked ahead of the j'th row. For example, each row of X_t may encode a representation of a movie while y_{t,i} is the movie's rating, expressed as the number of stars this movie has received from a movie reviewer. The predictions of the learner are determined based on a weight vector w_t ∈ R^n and are defined to be ŷ_t = X_t w_t.
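To make the ranking setup concrete, the following sketch computes the pair-based hinge losses ℓ_{i,j} and the average and max aggregations defined next (the data and names are our illustration, not from the paper):

```python
def pair_losses(w, X, y):
    # One hinge term [(y_i - y_j) - <w, x_i - x_j>]_+ per pair with y_i > y_j.
    E = [(i, j) for i in range(len(y)) for j in range(len(y)) if y[i] > y[j]]
    out = []
    for i, j in E:
        margin = sum(wk * (a - b) for wk, a, b in zip(w, X[i], X[j]))
        out.append(max((y[i] - y[j]) - margin, 0.0))
    return out

X = [(1.0, 0.0), (0.0, 1.0)]   # rows of X_t: two instances
y = [2.0, 1.0]                 # row 0 should be ranked ahead of row 1
w = (0.5, 0.0)
ls = pair_losses(w, X, y)
l_avg = sum(ls) / len(ls)
l_max = max(ls)
assert l_avg == l_max == 0.5   # [(2 - 1) - 0.5]_+ = 0.5
```

With a single ordered pair the two aggregations coincide; they differ once several pairs are violated by different amounts.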
Finally, let us define two loss functions for ranking, both of which generalize the hinge-loss used in binary classification problems. Denote by E_t the set {(i, j) : y_{t,i} > y_{t,j}}. For all (i, j) ∈ E_t we define a pair-based hinge-loss

ℓ_{i,j}(w; (X_t, y_t)) = [ (y_{t,i} − y_{t,j}) − ⟨w, x_{t,i} − x_{t,j}⟩ ]_+ ,

where [a]_+ = max{a, 0} and x_{t,i}, x_{t,j} are respectively the i'th and j'th rows of X_t. Note that ℓ_{i,j} is zero if w ranks x_{t,i} higher than x_{t,j} with a sufficient confidence. Ideally, we would like ℓ_{i,j}(w_t; (X_t, y_t)) to be zero for all (i, j) ∈ E_t. If this is not the case, we are penalized according to some combination of the pair-based losses ℓ_{i,j}. For example, we can set ℓ(w; (X_t, y_t)) to be the average over the pair losses,

ℓ_avg(w; (X_t, y_t)) = (1/|E_t|) Σ_{(i,j)∈E_t} ℓ_{i,j}(w; (X_t, y_t)) .

This loss was suggested by several authors (see for example [18]). Another popular approach (see for example [5]) penalizes according to the maximal loss over the individual pairs,

ℓ_max(w; (X_t, y_t)) = max_{(i,j)∈E_t} ℓ_{i,j}(w; (X_t, y_t)) .

We can apply the algorithmic framework given in Sec. 4 to ranking, using for g_t(w) either ℓ_avg(w; (X_t, y_t)) or ℓ_max(w; (X_t, y_t)). The following theorem provides a sufficient condition under which the regret bound of Thm. 1 holds for ranking as well.

Theorem 2 Let f be a σ-strongly convex function over S with respect to a norm ∥·∥. Denote by L_t the maximum over (i, j) ∈ E_t of ∥x_{t,i} − x_{t,j}∥_⋆^2. Then, for both g_t(w) = ℓ_avg(w; (X_t, y_t)) and g_t(w) = ℓ_max(w; (X_t, y_t)), the following regret bound holds:

∀u ∈ S,  (1/T) Σ_{t=1}^T g_t(w_t) − (1/T) Σ_{t=1}^T g_t(u) ≤ ( f(u) + (1/T) Σ_{t=1}^T L_t/(2σ) ) / √T .

7 The Boosting Game

In this section we describe the applicability of our algorithmic framework to the analysis of boosting algorithms. A boosting algorithm uses a weak learning algorithm that generates weak hypotheses, whose performance is only slightly better than random guessing, to build a strong hypothesis which can attain an arbitrarily low error.
The AdaBoost algorithm, proposed by Freund and Schapire [6], receives as input a training set of examples {(x_1, y_1), . . . , (x_m, y_m)} where for all i ∈ [m], x_i is taken from an instance domain X, and y_i is a binary label, y_i ∈ {+1, −1}. The boosting process proceeds in a sequence of consecutive trials. At trial t, the booster first defines a distribution, denoted w_t, over the set of examples. Then, the booster passes the training set along with the distribution w_t to the weak learner. The weak learner is assumed to return a hypothesis h_t : X → {+1, −1} whose average error is slightly smaller than 1/2. That is, there exists a constant γ > 0 such that

ϵ_t := Σ_{i=1}^m w_{t,i} (1 − y_i h_t(x_i))/2 ≤ 1/2 − γ .   (13)

The goal of the boosting algorithm is to invoke the weak learner several times with different distributions, and to combine the hypotheses returned by the weak learner into a final, so-called strong, hypothesis whose error is small. The final hypothesis combines linearly the T hypotheses returned by the weak learner with coefficients α_1, . . . , α_T, and is defined to be the sign of h_f(x) where h_f(x) = Σ_{t=1}^T α_t h_t(x). The coefficients α_1, . . . , α_T are determined by the booster. In AdaBoost, the initial distribution is set to be the uniform distribution, w_1 = (1/m, . . . , 1/m). At iteration t, the value of α_t is set to be (1/2) log((1 − ϵ_t)/ϵ_t). The distribution is updated by the rule w_{t+1,i} = w_{t,i} exp(−α_t y_i h_t(x_i))/Z_t, where Z_t is a normalization factor. Freund and Schapire [6] have shown that under the assumption given in Eq. (13), the error of the final strong hypothesis is at most exp(−2γ²T). Several authors [15, 13, 8, 4] have proposed to view boosting as a coordinate-wise greedy optimization process. To do so, note first that h_f errs on an example (x, y) iff y h_f(x) ≤ 0. Therefore, the exp-loss function, defined as exp(−y h_f(x)), is a smooth upper bound of the zero-one error, which equals 1 if y h_f(x) ≤ 0 and 0 otherwise.
Thus, we can restate the goal of boosting as minimizing the average exp-loss of h_f over the training set with respect to the variables α_1, . . . , α_T. To simplify our derivation in the sequel, we prefer to say that boosting maximizes the negation of the loss, that is,

max_{α_1, ..., α_T}  −(1/m) Σ_{i=1}^m exp( −y_i Σ_{t=1}^T α_t h_t(x_i) ) .   (14)

In this view, boosting is an optimization procedure which iteratively maximizes Eq. (14) with respect to the variables α_1, . . . , α_T. This view of boosting enables the hypotheses returned by the weak learner to be general functions into the reals, h_t : X → R (see for instance [15]). In this paper we view boosting as a convex repeated game between a booster and a weak learner. To motivate our construction, we note that boosting algorithms define weights in two different domains: the vectors w_t ∈ R^m which assign weights to examples, and the weights {α_t : t ∈ [T]} over weak hypotheses. In the terminology used throughout this paper, the weights w_t ∈ R^m are primal vectors while (as we show in the sequel) each weight α_t of the hypothesis h_t is related to a dual vector λ_t. In particular, we show that Eq. (14) is exactly the Fenchel dual of a primal problem for a convex repeated game; thus the algorithmic framework described thus far for playing games naturally fits the problem of iteratively solving Eq. (14). To derive the primal problem whose Fenchel dual is the problem given in Eq. (14), let us first denote by v_t the vector in R^m whose i'th element is v_{t,i} = y_i h_t(x_i). For all t, we set g_t to be the function g_t(w) = [⟨w, v_t⟩]_+. Intuitively, g_t penalizes vectors w which assign large weights to examples which are predicted accurately, that is, examples for which y_i h_t(x_i) > 0. In particular, if h_t(x_i) ∈ {+1, −1} and w_t is a distribution over the m examples (as is the case in AdaBoost), g_t(w_t) reduces to 1 − 2ϵ_t (see Eq. (13)). In this case, minimizing g_t is equivalent to maximizing the error of the individual hypothesis h_t over the examples.
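The AdaBoost updates quoted above (α_t = (1/2) log((1 − ϵ_t)/ϵ_t) and exponential reweighting) can be exercised end to end on a toy one-dimensional dataset with threshold stumps. The interval-shaped labels below cannot be matched by any single stump, yet the boosted combination classifies the training set perfectly. This is our own minimal sketch, not code from the paper.

```python
import math

xs = [1.0, 2.0, 3.0, 4.0]
ys = [-1, +1, +1, -1]          # interval labels: no single stump fits them

def stumps():
    # threshold stumps x -> s * sign(x - theta), both orientations
    for theta in (0.5, 1.5, 2.5, 3.5, 4.5):
        for s in (+1, -1):
            yield lambda x, t=theta, s=s: s if x > t else -s

def weighted_error(h, w):
    return sum(wi for xi, yi, wi in zip(xs, ys, w) if h(xi) != yi)

w = [1.0 / len(xs)] * len(xs)  # uniform initial distribution w_1
ensemble = []                  # pairs (alpha_t, h_t)
for _ in range(5):
    h, eps = min(((h, weighted_error(h, w)) for h in stumps()),
                 key=lambda p: p[1])
    if eps <= 0.0 or eps >= 0.5:
        break                  # weak-learnability assumption violated
    alpha = 0.5 * math.log((1 - eps) / eps)
    ensemble.append((alpha, h))
    # exponential reweighting followed by normalization (the Z_t step)
    w = [wi * math.exp(-alpha * yi * h(xi)) for xi, yi, wi in zip(xs, ys, w)]
    Z = sum(w)
    w = [wi / Z for wi in w]

H = lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1
assert all(H(xi) == yi for xi, yi in zip(xs, ys))  # zero training error
```

Tracing the run by hand, the booster picks a right-sided stump, a left-sided stump, and a constant-negative stump in the first three rounds, after which the weighted vote already fits the interval exactly.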
Consider the problem of minimizing c f(w) + Σ_{t=1}^T g_t(w), where f(w) is the relative entropy given in Example 2 and c = 1/(2γ) (see Eq. (13)). To derive its Fenchel dual, we note that g_t⋆(λ_t) = 0 if there exists β_t ∈ [0, 1] such that λ_t = β_t v_t, and otherwise g_t⋆(λ_t) = ∞ (see [16]). In addition, let us define α_t = 2γβ_t. Since our goal is to maximize the dual, we can restrict λ_t to take the form λ_t = β_t v_t = (α_t/(2γ)) v_t, and get that

D(λ_1, . . . , λ_T) = −c f⋆( −(1/c) Σ_{t=1}^T β_t v_t ) = −(1/(2γ)) log( (1/m) Σ_{i=1}^m e^{−Σ_{t=1}^T α_t y_i h_t(x_i)} ) .   (15)

Minimizing the exp-loss of the strong hypothesis is therefore the dual problem of the following primal minimization problem: find a distribution over the examples whose relative entropy to the uniform distribution is as small as possible, while the correlation of the distribution with each v_t is as small as possible. Since the correlation of w with v_t is inversely proportional to the error of h_t with respect to w, we obtain that in the primal problem we are trying to maximize the error of each individual hypothesis, while in the dual problem we minimize the global error of the strong hypothesis. The intuition of finding distributions which in retrospect result in large error rates of individual hypotheses was also alluded to in [15, 8]. We can now apply our algorithmic framework from Sec. 4 to boosting. We describe the game with the parameters α_t, where α_t ∈ [0, 2γ], and underscore that in our case, λ_t = (α_t/(2γ)) v_t. At the beginning of the game the booster sets all dual variables to zero, ∀t α_t = 0. At trial t of the boosting game, the booster first constructs a primal weight vector w_t ∈ R^m, which assigns importance weights to the examples in the training set. The primal vector w_t is constructed as in Eq. (6), that is, w_t = ∇f⋆(θ_t), where θ_t = −Σ_i α_i v_i. Then, the weak learner responds by presenting the loss function g_t(w) = [⟨w, v_t⟩]_+. Finally, the booster updates the dual variables so as to increase the dual objective function.
It is possible to show that if the range of h_t is {+1, −1} then the update given in Eq. (10) is equivalent to the update α_t = min{2γ, (1/2) log((1 − ϵ_t)/ϵ_t)}. We have thus obtained a variant of AdaBoost in which the weights α_t are capped above by 2γ. A disadvantage of this variant is that we need to know the parameter γ. We would like to note in passing that this limitation can be lifted by a different definition of the functions g_t. We omit the details due to the lack of space. To analyze our game of boosting, we note that the conditions given in Lemma 3 hold, and therefore the left-hand side inequality given in Lemma 3 tells us that

Σ_{t=1}^T g_t(w_t) − (1/(2c)) Σ_{t=1}^T ∥λ′_t∥_∞^2 ≤ D(λ_1^{T+1}, . . . , λ_T^{T+1}) .

The definition of g_t and the weak learnability assumption given in Eq. (13) imply that ⟨w_t, v_t⟩ ≥ 2γ for all t. Thus, g_t(w_t) = ⟨w_t, v_t⟩ ≥ 2γ, which also implies that λ′_t = v_t. Recall that v_{t,i} = y_i h_t(x_i). Assuming that the range of h_t is [−1, +1] we get that ∥λ′_t∥_∞ ≤ 1. Combining all the above with the left-hand side inequality given in Lemma 3 we get that 2Tγ − T/(2c) ≤ D(λ_1^{T+1}, . . . , λ_T^{T+1}). Using the definition of D (see Eq. (15)), the value c = 1/(2γ), and rearranging terms, we recover the original bound for AdaBoost:

(1/m) Σ_{i=1}^m e^{−y_i Σ_{t=1}^T α_t h_t(x_i)} ≤ e^{−2γ²T} .

8 Related Work and Discussion

We presented a new framework for designing and analyzing algorithms for playing convex repeated games. Our framework was used for the analysis of known algorithms for both online learning and boosting settings. The framework also paves the way to new algorithms. In a previous paper [17], we suggested the use of duality for the design of online algorithms in the context of mistake bound analysis. The contribution of this paper over [17] is threefold, as we now briefly discuss. First, we generalize the applicability of the framework beyond the specific setting of online learning with the hinge-loss to the general setting of convex repeated games.
The setting of convex repeated games was formally termed "online convex programming" by Zinkevich [19] and was first presented by Gordon in [9]. There is a voluminous amount of work on unifying approaches for deriving online learning algorithms; we refer the reader to [11, 12, 3] for work closely related to the content of this paper. By generalizing our previously studied algorithmic framework [17] beyond online learning, we can automatically apply well-known online learning algorithms, such as the EG and p-norm algorithms [12, 11], in the setting of online convex programming. We would like to note that the algorithms presented in [19] can be derived as special cases of our algorithmic framework by setting f(w) = (1/2)∥w∥². In parallel with and independently of this work, Gordon [10] described another algorithmic framework for online convex programming that is closely related to the potential-based algorithms described by Cesa-Bianchi and Lugosi [3]. Gordon also considered the problem of defining appropriate potential functions. Our work generalizes some of the theorems in [10] while providing a somewhat simpler analysis. Second, the usage of generalized Fenchel duality rather than the Lagrange duality given in [17] enables us to analyze boosting algorithms based on the framework. Many authors have derived unifying frameworks for boosting algorithms [13, 8, 4]. Nonetheless, our general framework and the connection between game playing and Fenchel duality underscore an interesting perspective on both online learning and boosting. We believe that this viewpoint has the potential of yielding new algorithms in both domains. Last, despite the generality of the framework introduced in this paper, the resulting analysis is more distilled than the earlier analysis given in [17], for two reasons. (i) The usage of Lagrange duality in [17] is somewhat restricted, while the notion of generalized Fenchel duality is more appropriate for the general and broader problems we consider in this paper.
(ii) The strong convexity property we employ both simplifies the analysis and enables more intuitive conditions in our theorems. There are various possible extensions of this work that we did not pursue here due to the lack of space. For instance, our framework can naturally be used for the analysis of other settings such as repeated games (see [7, 19]). The applicability of our framework to online learning can also be extended to other prediction problems such as regression and sequence prediction. Last, we conjecture that our primal-dual view of boosting will lead to new methods for regularizing boosting algorithms, thus improving their generalization capabilities.

References
[1] J. Borwein and A. Lewis. Convex Analysis and Nonlinear Optimization. Springer, 2006.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[4] M. Collins, R.E. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 2002.
[5] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive aggressive algorithms. JMLR, 7, Mar 2006.
[6] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT, 1995.
[7] Y. Freund and R.E. Schapire. Game theory, on-line prediction and boosting. In COLT, 1996.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28(2), 2000.
[9] G. Gordon. Regret bounds for prediction problems. In COLT, 1999.
[10] G. Gordon. No-regret algorithms for online convex programs. In NIPS, 2006.
[11] A.J. Grove, N. Littlestone, and D. Schuurmans. General convergence results for linear discriminant updates. Machine Learning, 43(3), 2001.
[12] J. Kivinen and M. Warmuth. Relative loss bounds for multidimensional regression problems.
Machine Learning, 45(3), 2001.
[13] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Functional gradient techniques for combining hypotheses. In Advances in Large Margin Classifiers. MIT Press, 1999.
[14] Y. Nesterov. Primal-dual subgradient methods for convex problems. Technical report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain (UCL), 2005.
[15] R.E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):1–40, 1999.
[16] S. Shalev-Shwartz and Y. Singer. Convex repeated games and Fenchel duality. Technical report, The Hebrew University, 2006.
[17] S. Shalev-Shwartz and Y. Singer. Online learning meets optimization in the dual. In COLT, 2006.
[18] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. In ESANN, April 1999.
[19] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
Towards a general independent subspace analysis

Fabian J. Theis
Max Planck Institute for Dynamics and Self-Organisation & Bernstein Center for Computational Neuroscience
Bunsenstr. 10, 37073 Göttingen, Germany
fabian@theis.name

Abstract

The increasingly popular independent component analysis (ICA) may only be applied to data following the generative ICA model in order to guarantee algorithm-independent and theoretically valid results. Subspace ICA models generalize the assumption of component independence to independence between groups of components. They are attractive candidates for dimensionality reduction methods; however, they are currently limited by the assumption of equal group sizes or by less general semi-parametric models. By introducing the concept of irreducible independent subspaces or components, we present a generalization to a parameter-free mixture model. Moreover, we relieve the condition of at most one Gaussian by including previous results on non-Gaussian component analysis. After introducing this general model, we discuss joint block diagonalization with unknown block sizes, on which we base a simple extension of JADE to algorithmically perform the subspace analysis. Simulations confirm the feasibility of the algorithm.

1 Independent subspace analysis

A random vector Y is called an independent component of the random vector X if there exists an invertible matrix A and a decomposition X = A(Y, Z) such that Y and Z are stochastically independent. The goal of a general independent subspace analysis (ISA) or multidimensional independent component analysis is the decomposition of an arbitrary random vector X into independent components. If X is to be decomposed into one-dimensional components, this coincides with ordinary independent component analysis (ICA). Similarly, if the independent components are required to be of the same dimension k, then this is denoted by multidimensional ICA of fixed group size k, or simply k-ISA. So 1-ISA is equivalent to ICA.
1.1 Why extend ICA?

An important structural aspect in the search for decompositions is the knowledge of the number of solutions, i.e. the indeterminacies of the problem. Without it, the result of any ICA or ISA algorithm cannot be compared with other solutions, so for instance blind source separation (BSS) would be impossible. Clearly, given an ISA solution, invertible transforms in each component (scaling matrices L) as well as permutations of components of the same dimension (permutation matrices P) give again an ISA of X. And indeed, in the special case of ICA, scaling and permutation are already all indeterminacies, given that at most one Gaussian is contained in X [6]. This is one of the key theoretical results in ICA, allowing the usage of ICA for solving BSS problems and hence stimulating many applications. It has been shown that also for k-ISA, scalings and permutations as above are the only indeterminacies [11], given some additional rather weak restrictions to the model. However, a serious drawback of k-ISA (and hence of ICA) lies in the fact that the requirement of a fixed group size k does not allow us to apply this analysis to an arbitrary random vector. Indeed,

[Figure 1: Applying ICA to a random vector X = AS that does not fulfill the ICA model; here S is chosen to consist of a two-dimensional and a one-dimensional irreducible component. Shown are the statistics over 100 runs of the Amari (crosstalking) error of the random original and the reconstructed mixing matrix using the three ICA algorithms FastICA, JADE and Extended Infomax. Clearly, the original mixing matrix could not be reconstructed in any of the experiments. However, interestingly, the latter two algorithms do indeed find an ISA up to permutation, which will be explained in section 3.]
theoretically speaking, it may only be applied to random vectors following the k-ISA blind source separation model, which means that they have to be mixtures of a random vector consisting of independent groups of size k. If this is the case, uniqueness up to permutation and scaling holds as noted above; however, if k-ISA is applied to an arbitrary random vector, a decomposition into groups that are only 'as independent as possible' cannot be unique and depends on the contrast and the algorithm. In the literature, ICA is often applied to find representations fulfilling the independence condition as well as possible; however, care has to be taken: the strong uniqueness result is no longer valid, and the results may depend on the algorithm, as illustrated in figure 1. This work aims at finding an ISA model that is applicable to any random vector. After reviewing previous approaches, we will provide such a model together with a corresponding uniqueness result and a preliminary algorithm.

1.2 Previous approaches to ISA for dependent component analysis

Generalizations of the ICA model that include dependencies of multiple one-dimensional components have been studied for quite some time. ISA in the terminology of multidimensional ICA was first introduced by Cardoso [4] using geometrical motivations. His model, as well as the related but independently proposed factorization of multivariate function classes [9], is quite general; however, no identifiability results were presented, and applicability to an arbitrary random vector was unclear. Later, in the special case of equal group sizes (k-ISA), uniqueness results were extended from the ICA theory [11]. Algorithmic enhancements in this setting have recently been studied by [10]. Moreover, if the observations contain additional structures such as spatial or temporal structures, these may be used for the multidimensional separation [13].
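The crosstalking error reported in Figure 1 is commonly computed as the Amari index of P = W A, which vanishes exactly when P is a product of a permutation and a scaling, i.e. when the mixing matrix is recovered up to the ICA indeterminacies. The following is a minimal sketch of ours; normalization conventions for this index vary across papers.

```python
def amari_error(P):
    # Index over P = W A: zero iff P is a scaled permutation matrix.
    n = len(P)
    A = [[abs(v) for v in row] for row in P]
    row_term = sum(sum(row) / max(row) - 1 for row in A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    col_term = sum(sum(c) / max(c) - 1 for c in cols)
    return (row_term + col_term) / (2 * n * (n - 1))

assert amari_error([[0.0, 2.0], [-3.0, 0.0]]) == 0.0   # scaled permutation
assert amari_error([[1.0, 1.0], [1.0, 1.0]]) > 0.0     # genuine mixing
```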
Hyvärinen and Hoyer presented a special case of k-ISA by combining it with invariant feature subspace analysis [7]. They model the dependence within a k-tuple explicitly and are therefore able to propose more efficient algorithms without having to resort to problematic multidimensional density estimation. A related relaxation of the ICA assumption is given by topographic ICA [8], where dependencies between all components are assumed and modelled along a topographic structure (e.g. a 2-dimensional grid). Bach and Jordan [2] formulate ISA as a component clustering problem, which necessitates a model for inter-cluster independence and intra-cluster dependence. For the latter, they propose to use a tree structure as employed by their tree-dependent component analysis. Together with inter-cluster independence, this implies a search for a transformation of the mixtures into a forest, i.e. a set of disjoint trees. However, the above models are all semi-parametric and hence not fully blind. In the following, no additional structures are necessary for the separation.

1.3 General ISA

Definition 1.1. A random vector S is said to be irreducible if it contains no lower-dimensional independent component. An invertible matrix W is called a (general) independent subspace analysis of X if WX = (S_1, . . . , S_k) with pairwise independent, irreducible random vectors S_i.

Note that in this case, the S_i are independent components of X. The idea behind this definition is that, in contrast to ICA and k-ISA, we do not fix the size of the groups S_i in advance. Of course, some restriction is necessary, otherwise no decomposition would be enforced at all. This restriction is realized by allowing only irreducible components. The advantage of this formulation is that it can clearly be applied to any random vector, although of course a trivial decomposition might be the result in the case of an irreducible random vector.
Obvious indeterminacies of an ISA of X are, as mentioned above, scalings, i.e. invertible transformations within each S_i, and permutations of S_i of the same dimension.¹ These are already all indeterminacies, as shown by the following theorem, which extends previous results in the case of ICA [6] and k-ISA [11], where also the additional slight assumptions on square-integrability, i.e. on existing covariance, have been made.

Theorem 1.2. Given a random vector X with existing covariance and no Gaussian independent component, an ISA of X exists and is unique except for scaling and permutation.

Existence holds trivially, but uniqueness is not obvious. Due to the limited space, we only give a short sketch of the proof in the following. The uniqueness result can easily be formulated as a subspace extraction problem, and theorem 1.2 follows readily from

Lemma 1.3. Let S = (S_1, . . . , S_k) be a square-integrable decomposition of S into irreducible independent components S_i. If X is an irreducible component of S, then X ∼ S_i for some i.

Here the equivalence relation ∼ denotes equality except for an invertible transformation. The following two lemmata each give a simplification of lemma 1.3 by ordering the components S_i according to their dimensions. Some care has to be taken when showing that lemma 1.5 implies lemma 1.4.

Lemma 1.4. Let S and X be defined as in lemma 1.3. In addition assume that dim S_i = dim X for i ≤ l and dim S_i < dim X for i > l. Then X ∼ S_i for some i ≤ l.

Lemma 1.5. Let S and X be defined as in lemma 1.4, and let l = 1 and k = 2. Then X ∼ S_1.

In order to prove lemma 1.5 (and hence the theorem), it is sufficient to show the following lemma:

Lemma 1.6. Let S = (S_1, S_2) with S_1 irreducible and m := dim S_1 > dim S_2 =: n. If X = AS is again irreducible for some m × (m + n)-matrix A, then (i) the left m × m-submatrix of A is invertible, and (ii) if X is an independent component of S, the right m × n-submatrix of A vanishes.
(i) follows after some linear algebra, and is necessary to show the more difficult part (ii). For this, we follow the ideas presented in [12], using factorization of the joint characteristic function of S.

1.4 Dealing with Gaussians

In the previous section, Gaussians had to be excluded (or at most one was allowed) in order to avoid additional indeterminacies. Indeed, any orthogonal transformation of two decorrelated, hence independent, Gaussians is again independent, so clearly such a strong identification result would not be possible. Recently, a general decomposition model dealing with Gaussians was proposed in the form of the so-called non-Gaussian subspace analysis (NGSA) [3]. It tries to detect a whole non-Gaussian subspace within the data, and no assumption of independence within the subspace is made. More precisely, given a random vector X, a factorization X = AS with an invertible matrix A, S = (S_N, S_G) and S_N a square-integrable m-dimensional random vector is called an m-decomposition of X if S_N and S_G are stochastically independent and S_G is Gaussian. In this case, X is said to be m-decomposable. X is said to be minimally n-decomposable if X is not (n−1)-decomposable. According to our previous notation, S_N and S_G are independent components of X. It has been shown that the subspaces of such decompositions are unique [12]:

Theorem 1.7 (Uniqueness of NGSA). The mixing matrix A of a minimal decomposition is unique except for transformations in each of the two subspaces.

Moreover, explicit algorithms can be constructed for identifying the subspaces [3]. This result enables us to generalize theorem 1.2 and to get a general decomposition theorem, which characterizes solutions of ISA.

^1 Note that scaling here implies a basis change in the component S_i, so for example in the case of a two-dimensional source component, this might be a rotation and shearing. In the example later in figure 3, these indeterminacies can easily be seen by comparing true and estimated sources.
Theorem 1.8 (Existence and Uniqueness of ISA). Given a random vector X with existing covariance, an ISA of X exists and is unique except for permutation of components of the same dimension and invertible transformations within each independent component and within the Gaussian part.

Proof. Existence is obvious. Uniqueness follows after first applying theorem 1.7 to X and then theorem 1.2 to the non-Gaussian part.

2 Joint block diagonalization with unknown block-sizes

Joint diagonalization has become an important tool in ICA-based BSS (used for example in JADE) or in BSS relying on second-order temporal decorrelation. The task of (real) joint diagonalization (JD) of a set of symmetric real n × n matrices M := {M_1, ..., M_K} is to find an orthogonal matrix E such that E^T M_k E is diagonal for all k = 1, ..., K, i.e. to minimize

f(Ê) := Σ_{k=1}^K ||Ê^T M_k Ê − diagM(Ê^T M_k Ê)||_F^2

with respect to the orthogonal matrix Ê, where diagM(M) produces a matrix in which all off-diagonal elements of M have been set to zero, and ||M||_F^2 := tr(MM^T) denotes the squared Frobenius norm. The Frobenius norm is invariant under conjugation by an orthogonal matrix, so minimizing f is equivalent to maximizing

g(Ê) := Σ_{k=1}^K ||diag(Ê^T M_k Ê)||^2,

where now diag(M) := (m_ii)_i denotes the diagonal of M. For the actual minimization of f, respectively maximization of g, we will use the common approach of Jacobi-like optimization by iterative application of Givens rotations in two coordinates [5].

2.1 Generalization to blocks

In the following we will use a generalization of JD in order to solve ISA problems. Instead of fully diagonalizing all n × n matrices M_k ∈ M, in joint block diagonalization (JBD) of M we want to determine E such that E^T M_k E is block-diagonal. Depending on the application, we either fix the block structure in advance or try to determine it from M. We are not interested in the order of the blocks, so the block structure is uniquely specified by fixing a partition of n, i.e.
a way of writing n as a sum of positive integers, where the order of the addends is not significant. So let^2 n = m_1 + ... + m_r with m_1 ≤ m_2 ≤ ... ≤ m_r, and set m := (m_1, ..., m_r) ∈ N^r. An n × n matrix is said to be m-block diagonal if it is of the form

    [ D_1   ...   0  ]
    [  .     .    .  ]
    [  0    ...  D_r ]

with arbitrary m_i × m_i matrices D_i.

As a generalization of JD to the case of known block structure, we can formulate the joint m-block diagonalization (m-JBD) problem as the minimization of

f^m(Ê) := Σ_{k=1}^K ||Ê^T M_k Ê − diagM^m(Ê^T M_k Ê)||_F^2

with respect to the orthogonal matrix Ê, where diagM^m(M) produces an m-block diagonal matrix by setting all other elements of M to zero. In practice, due to estimation errors, such an E will not exist, so we speak of approximate JBD and imply minimizing some error measure on non-block-diagonality. Indeterminacies of any m-JBD are m-scaling, i.e. multiplication by an m-block diagonal matrix from the right, and m-permutation, defined by a permutation matrix that only swaps blocks of the same size.

Finally, we speak of general JBD if we search for a JBD but no block structure is given; instead, it is to be determined from the matrix set. For this it is necessary to require a block structure of maximal length, otherwise trivial solutions or 'in-between' solutions could exist (and obviously contain high indeterminacies). Formally, E is said to be a (general) JBD of M if

(E, m) = argmax_{m : ∃E with f^m(E) = 0} |m|.

In practice, due to errors, a true JBD would always result in the trivial decomposition m = (n), so we define an approximate general JBD by requiring f^m(E) < ε for some fixed constant ε > 0 instead of f^m(E) = 0.

^2 We do not use the convention from Ferrers graphs of specifying partitions in decreasing order, as a visualization of increasing block-sizes seems to be preferable in our setting.

2.2 JBD by JD

A few algorithms to actually perform JBD have been proposed, see [1] and references therein.
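As a concrete illustration of the cost f^m, the following short sketch (ours, not part of the paper; the names `block_mask` and `jbd_cost` are our own) builds the diagM^m mask from a partition m and measures the Frobenius mass outside the diagonal blocks. Ordinary JD is recovered with the all-ones partition m = (1, ..., 1), and the trivial partition m = (n) always gives zero cost.

```python
import numpy as np

def block_mask(m):
    """0/1 mask whose 1-blocks are the diagonal blocks of the partition m."""
    n = sum(m)
    mask = np.zeros((n, n))
    start = 0
    for size in m:
        mask[start:start + size, start:start + size] = 1.0
        start += size
    return mask

def jbd_cost(E, Ms, m):
    """f^m(E): squared Frobenius mass of E^T M_k E outside the diagonal blocks."""
    mask = block_mask(m)
    return sum(np.sum(((E.T @ M @ E) * (1.0 - mask))**2) for M in Ms)
```

If the set Ms was generated by conjugating m-block-diagonal matrices with an orthogonal E, then `jbd_cost(E, Ms, m)` is zero up to machine precision, while a wrong diagonalizer or a finer partition generically gives a positive cost.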
In the following we will simply perform joint diagonalization and then permute the columns of E to achieve block-diagonality — in experiments this turns out to be an efficient solution to JBD [1]. This idea has been formulated as a conjecture [1], essentially claiming that a minimum of the JD cost function f already is a JBD, i.e. a minimum of the function f^m up to a permutation matrix. In the conjecture it is required to use the Jacobi-update algorithm from [5], but this is not necessary, and we can prove the conjecture partially:

We want to show that JD implies JBD up to permutation, i.e. if E is a minimum of f, then there exists a permutation P such that f^m(EP) = 0 (given existence of a JBD of M). But of course f(EP) = f(E), so we will instead show why (certain) JBD solutions are minima of f. However, JD might have additional minima. First note that clearly not every JBD minimizes f, but only those such that in each block of size m_k, f(E) restricted to the block is maximal over E ∈ O(m_k). We call such a JBD block-optimal in the following.

Theorem 2.1. Any block-optimal JBD of M (zero of f^m) is a local minimum of f.

Proof. Let E ∈ O(n) be block-optimal with f^m(E) = 0. We have to show that E is a local minimum of f, or equivalently a local maximum of the squared diagonal sum g. After substituting each M_k by E^T M_k E, we may already assume that each M_k is m-block diagonal, so we have to show that E = I is a local maximum of g.

Consider the elementary Givens rotation G_ij(ε), defined for i < j and ε ∈ (−1, 1) as the orthogonal matrix in which all diagonal elements are 1 except for the two elements √(1−ε²) in rows i and j, and all off-diagonal elements are 0 except for the two elements ε and −ε at positions (i, j) and (j, i), respectively. It can be used to construct local coordinates of the d := n(n−1)/2-dimensional manifold O(n) at I, simply by the embedding

ι(ε_12, ε_13, ..., ε_{n−1,n}) := Π_{i<j} G_ij(ε_ij).

This is an embedding with ι(0) = I, so we only have to show that h(ε) := g(ι(ε)) has a local maximum at ε = 0. We do this by considering h partially in each coordinate. Let i < j. If i and j are in the same block of m, then h is locally maximal, i.e. negative semi-definite at 0, in the direction ε_ij because of block-optimality.

Now assume i and j are from different blocks. After a possible permutation, we may assume that j = i + 1, so that each matrix M_k ∈ M has (M_k)_ij = (M_k)_ji = 0; write a_k := (M_k)_ii and b_k := (M_k)_jj. Then G_ij(ε)^T M_k G_ij(ε) can easily be calculated at the coordinates (i, i) to (j, j), and entries on the diagonal other than at indices (i, i) and (j, j) are not changed, so

||diag(G_ij(ε)^T M_k G_ij(ε))||² − ||diag(M_k)||² = −2a_k(a_k − b_k)ε² + 2b_k(a_k − b_k)ε² + 2(a_k − b_k)²ε⁴ = −2(a_k − b_k)²ε² + 2(a_k − b_k)²ε⁴.

Hence h(0, ..., 0, ε_ij, 0, ..., 0) − h(0) = −c ε_ij² + d ε_ij⁴ with c = d = 2 Σ_{k=1}^K (a_k − b_k)². Now either c = 0; then also d = 0 and h is constant zero in the direction ε_ij. Or, more interestingly, c ≠ 0; then c > 0 and therefore h is negative definite in the direction ε_ij. Altogether we get a negative definite h at 0 except for 'trivial directions', and hence a local maximum at 0.

2.3 Recovering the permutation

In order to perform JBD, we therefore only have to find a JD E of M. What is left, according to the above theorem, is to find a permutation matrix P such that EP block-diagonalizes M. In the case of known block order m, we can employ techniques similar to those used in [1, 10], which essentially find P by some combinatorial optimization.
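The change of the squared diagonal sum under a cross-block Givens rotation is easy to check numerically. The short script below (ours, for illustration only) conjugates diag(a, b) by a Givens rotation with off-diagonal entry ε and compares the result against the closed form −2(a−b)²ε² + 2(a−b)²ε⁴, which the middle expression in the proof simplifies to; for a ≠ b this quantity is strictly negative on ε ∈ (−1, 1) \ {0}, exactly the negative definiteness used in the proof.

```python
import numpy as np

def diag_gain(a, b, eps):
    """Change in squared diagonal sum when diag(a, b) is conjugated by a
    Givens rotation with off-diagonal entry eps."""
    c = np.sqrt(1.0 - eps**2)
    G = np.array([[c, eps], [-eps, c]])
    Mr = G.T @ np.diag([a, b]) @ G
    return Mr[0, 0]**2 + Mr[1, 1]**2 - (a**2 + b**2)

def closed_form(a, b, eps):
    # -2(a-b)^2 eps^2 + 2(a-b)^2 eps^4
    return -2 * (a - b)**2 * eps**2 + 2 * (a - b)**2 * eps**4
```

Note that the identity is exact for the 2×2 diagonal case, not merely a second-order approximation, and that it vanishes identically when a = b, the 'trivial direction' of the proof.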
Figure 2: Performance of the proposed general JBD algorithm in the case of the (unknown) block-partition 40 = 1 + 2 + 2 + 3 + 3 + 5 + 6 + 6 + 6 + 6 in the presence of noise with an SNR of 5dB. Panels: (a) (unknown) block diagonal M_1; (b) Ê^T E without the recovered permutation; (c) Ê^T E. The product Ê^T E of the inverse of the estimated block diagonalizer and the original one is an m-block diagonal matrix except for permutation within groups of the same sizes, as claimed in section 2.2.

In the case of unknown block size, we propose to use the following simple permutation-recovery algorithm: consider the mean diagonalized matrix D := K^{−1} Σ_{k=1}^K E^T M_k E. Due to the assumption that M is m-block-diagonalizable (with unknown m), each E^T M_k E and hence also D must be m-block-diagonal except for a permutation P, so it must have the corresponding number of zeros in each column and row. In the approximate JBD case, thresholding with a threshold θ is necessary, and its choice is non-trivial. We propose using algorithm 1 to recover the permutation; we denote its resulting permuted matrix by P(D) when applied to the input D. P(D) is constructed from the (possibly thresholded) D by iteratively permuting columns and rows in order to guarantee that all non-zeros of D are clustered along the diagonal as closely as possible. This recovers the permutation as well as the partition m of n.
Algorithm 1: Block-diagonality permutation finder
Input: (n × n)-matrix D
Output: block-diagonal matrix P(D) := D' such that D' = P D P^T for a permutation matrix P

    D' ← D
    for i ← 1 to n do
        repeat
            if (j0 ← min{j | j ≥ i and d'_ij = 0 and d'_ji = 0}) exists then
                if (k0 ← min{k | k > j0 and (d'_ik ≠ 0 or d'_ki ≠ 0)}) exists then
                    swap column j0 of D' with column k0
                    swap row j0 of D' with row k0
        until no swap has occurred

We illustrate the performance of the proposed JBD algorithm as follows: we generate a set of K = 100 m-block-diagonal matrices D_k of dimension 40 × 40 with m = (1, 2, 2, 3, 3, 5, 6, 6, 6, 6). They have been generated in blocks of size m with coefficients chosen uniformly at random from [−1, 1], and symmetrized by D_k ← (D_k + D_k^T)/2. After that, they have been mixed by a random orthogonal mixing matrix E ∈ O(40), i.e. M_k := E D_k E^T + N, where N is a noise matrix with independent Gaussian entries such that the resulting signal-to-noise ratio is 5dB. Application of the JBD algorithm from above to {M_1, ..., M_K} with threshold θ = 0.1 correctly recovers the block sizes, and the estimated block diagonalizer Ê equals E up to m-scaling and permutation, as illustrated in figure 2.
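A direct transcription of Algorithm 1 into Python might look as follows (our sketch, not the authors' code; the `tol` argument plays the role of the threshold θ applied before clustering). It returns both the permuted matrix P(D) and the permutation itself.

```python
import numpy as np

def block_permutation(D, tol=0.0):
    """Permute rows/columns of D so that non-zeros cluster along the diagonal.
    Returns (Dp, perm) with Dp = D[perm][:, perm]."""
    Dp = D.copy()
    n = Dp.shape[0]
    perm = np.arange(n)
    for i in range(n):
        while True:  # repeat ... until no swap has occurred
            swapped = False
            # first column j >= i with (thresholded) zeros at (i, j) and (j, i)
            zeros = [j for j in range(i, n)
                     if abs(Dp[i, j]) <= tol and abs(Dp[j, i]) <= tol]
            if zeros:
                j0 = zeros[0]
                # first later column with a non-zero in row or column i
                nonzeros = [k for k in range(j0 + 1, n)
                            if abs(Dp[i, k]) > tol or abs(Dp[k, i]) > tol]
                if nonzeros:
                    k0 = nonzeros[0]
                    Dp[:, [j0, k0]] = Dp[:, [k0, j0]]   # swap columns j0, k0
                    Dp[[j0, k0], :] = Dp[[k0, j0], :]   # swap rows j0, k0
                    perm[[j0, k0]] = perm[[k0, j0]]
                    swapped = True
            if not swapped:
                break
    return Dp, perm
```

For example, scattering the blocks of a block-diagonal matrix by a permutation and running `block_permutation` on the result re-clusters the non-zeros along the diagonal, from which the partition m can be read off.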
The indeterminacies allow scaling transformations in the sources, so without loss of generality let also Cov(S) = I. Then I = Cov(X) = A Cov(S) A^T = AA^T, so A is orthogonal. Due to the ISA assumptions, the fourth-order cross cumulants of the sources have to be trivial between different groups and within the Gaussians. In order to find transformations of the mixtures fulfilling this property, we follow the idea of the JADE algorithm, but now in the ISA setting. We perform JBD of the (whitened) contracted quadricovariance matrices, defined by

C_ij(X) := E[(X^T E_ij X) XX^T] − E_ij − E_ij^T − tr(E_ij) I.

Here R_X := Cov(X) = I after whitening, and E_ij, 1 ≤ i, j ≤ n, is a set of eigen-matrices of C_ij. One simple choice is to use the n² matrices E_ij with zeros everywhere except for a 1 at index (i, j). More elaborate choices of eigen-matrices (with only n(n+1)/2 or even n entries) are possible. The resulting algorithm, subspace-JADE (SJADE), not only performs NGCA by grouping Gaussians as one-dimensional components with trivial C_ii's, but also automatically finds the subspace partition m using the general JBD algorithm from section 2.3.

Figure 3: Example application of general ISA for unknown sizes m = (1, 2, 2, 2, 3). Shown are the scatter plots, i.e. densities, of the source components and the mixing-separating map Â^{−1}A. Panels: (a) S_2; (b) S_3; (c) S_4; (d) S_5; (e) Â^{−1}A; (f) (Ŝ_1, Ŝ_2); (g) histogram of Ŝ_3; (h) Ŝ_4; (i) Ŝ_5; (j) Ŝ_6.
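To make the definition concrete, here is a small sample-based estimator of C_ij for whitened data (our illustration, not the authors' code; we use the simple choice of E_ij as single-entry matrices, and the reconstruction of the formula assumes whitened, zero-mean data so that R_X = I). For whitened Gaussian data the matrices vanish, while the diagonal entry C_ii(X)_{ii} recovers the excess kurtosis E[x_i^4] − 3 of the i-th component.

```python
import numpy as np

def quadricov(X, i, j):
    """Sample estimate of C_ij(X) = E[(x^T E_ij x) x x^T] - E_ij - E_ij^T - tr(E_ij) I
    for whitened, zero-mean samples X of shape (N, n), with E_ij the
    single-entry matrix carrying a 1 at index (i, j)."""
    N, n = X.shape
    Eij = np.zeros((n, n))
    Eij[i, j] = 1.0
    w = X[:, i] * X[:, j]            # x^T E_ij x reduces to x_i x_j
    C = (X * w[:, None]).T @ X / N   # E[(x^T E_ij x) x x^T]
    return C - Eij - Eij.T - np.trace(Eij) * np.eye(n)
```

For a uniform source on [−√3, √3] (unit variance), `quadricov(U, 0, 0)[0, 0]` is close to the excess kurtosis −1.2, matching the kurtoses reported for the uniform component in section 4.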
4 Experimental results

In a first example, we consider a general ISA problem in dimension n = 10 with the unknown partition m = (1, 2, 2, 2, 3). In order to generate 2- and 3-dimensional irreducible random vectors, we decided to follow the nice visual ideas from [10] and to draw samples from a density following a known shape — in our case 2d letters or 3d geometrical shapes. The chosen source densities are shown in figure 3(a-d). An additional 1-dimensional source following a uniform distribution was constructed. Altogether 10^4 samples were used. The sources S were mixed by a mixing matrix A with coefficients sampled uniformly at random from [−1, 1] to give mixtures X = AS. The mixing matrix Â was then estimated using the above block-JADE algorithm with unknown block size; we observed that the method is quite sensitive to the choice of the threshold (here θ = 0.015). Figure 3(e) shows the composed mixing-separating system Â^{−1}A; clearly the matrices are equal except for block permutation and scaling, which experimentally confirms theorem 1.8. The algorithm found a partition m̂ = (1, 1, 1, 2, 2, 3), so one 2d source was misinterpreted as two 1d sources, but by using prior knowledge, combining the two corresponding 1d sources yields the original 2d source. The resulting recovered sources Ŝ := Â^{−1}X, figures 3(f-j), then equal the original sources except for permutation and scaling within the sources — which in the higher-dimensional cases implies transformations such as rotation of the underlying images or shapes.

When applying ICA (1-ISA) to the above mixtures, we cannot expect to recover the original sources, as explained in figure 1; however, some algorithms might recover the sources up to permutation. Indeed, SJADE equals JADE with additional permutation recovery, because the joint block diagonalization is performed using joint diagonalization. This explains why JADE retrieves meaningful components even in this non-ICA setting, as observed in [4].
In a second example, we illustrate how the algorithm deals with Gaussian sources, i.e. how subspace-JADE also includes NGCA. For this we consider the case n = 5, m = (1, 1, 1, 2), with two Gaussian sources, one uniform source, and a 2-dimensional irreducible component as before; 10^5 samples were drawn. We perform 100 Monte-Carlo simulations with random mixing matrix A and apply SJADE with θ = 0.01. The recovered mixing matrix Â is compared with A by the ad-hoc measure ι(P) := Σ_{i=1}^3 Σ_{j=1}^2 (p_ij² + p_ji²) for P := Â^{−1}A. Indeed, we get nearly perfect recovery in 99 out of 100 runs; the median of ι(P) is very low at 0.0083. A single run diverges with ι(P) = 3.48. In order to show that the algorithm really separates the Gaussian part from the other components, we compare the recovered source kurtoses. The median kurtoses are −0.0006 ± 0.02, −0.003 ± 0.3, −1.2 ± 0.3, −1.2 ± 0.2 and −1.6 ± 0.2. The first two components have kurtoses close to zero, so they are the two Gaussians, whereas the third component has a kurtosis of around −1.2, which equals the kurtosis of a uniform density. This confirms the applicability of the algorithm in the general, noisy ISA setting.

5 Conclusion

Previous approaches to independent subspace analysis were restricted either to fixed group sizes or to semi-parametric models. In neither case was general applicability to any kind of mixture data set guaranteed, so blind source separation might fail. In the present contribution we introduce the concept of irreducible independent components and give an identifiability result for this general, parameter-free model, together with a novel arbitrary-subspace-size algorithm based on joint block diagonalization. As in ICA, the main uniqueness theorem is an asymptotic result (but includes the noisy case via NGCA). In practice, however, in the finite-sample case, the general joint block diagonality only holds approximately due to estimation errors.
Our simple solution in this contribution was to choose appropriate thresholds. But this choice is non-trivial, and adaptive methods are to be developed in future work.

References
[1] K. Abed-Meraim and A. Belouchrani. Algorithms for joint block diagonalization. In Proc. EUSIPCO 2004, pages 209–212, Vienna, Austria, 2004.
[2] F.R. Bach and M.I. Jordan. Finding clusters in independent component analysis. In Proc. ICA 2003, pages 891–896, 2003.
[3] G. Blanchard, M. Kawanabe, M. Sugiyama, V. Spokoiny, and K.-R. Müller. In search of non-Gaussian components of a high-dimensional distribution. JMLR, 7:247–282, 2006.
[4] J.F. Cardoso. Multidimensional independent component analysis. In Proc. ICASSP '98, Seattle, 1998.
[5] J.F. Cardoso and A. Souloumiac. Jacobi angles for simultaneous diagonalization. SIAM J. Mat. Anal. Appl., 17(1):161–164, January 1995.
[6] P. Comon. Independent component analysis - a new concept? Signal Processing, 36:287–314, 1994.
[7] A. Hyvärinen and P.O. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705–1720, 2000.
[8] A. Hyvärinen, P.O. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1525–1558, 2001.
[9] J.K. Lin. Factorizing multivariate function classes. In Advances in Neural Information Processing Systems, volume 10, pages 563–569, 1998.
[10] B. Poczos and A. Lörincz. Independent subspace analysis using k-nearest neighborhood distances. In Proc. ICANN 2005, volume 3696 of LNCS, pages 163–168, Warsaw, Poland, 2005. Springer.
[11] F.J. Theis. Uniqueness of complex and multidimensional independent component analysis. Signal Processing, 84(5):951–956, 2004.
[12] F.J. Theis and M. Kawanabe. Uniqueness of non-Gaussian subspace analysis. In Proc. ICA 2006, pages 917–925, Charleston, USA, 2006.
[13] R. Vollgraf and K. Obermayer. Multi-dimensional ICA to separate correlated sources. In Proc. NIPS 2001, pages 993–1000, 2001.
In-Network PCA and Anomaly Detection

Ling Huang, University of California, Berkeley, CA 94720, hling@cs.berkeley.edu
XuanLong Nguyen, University of California, Berkeley, CA 94720, xuanlong@cs.berkeley.edu
Minos Garofalakis, Intel Research, Berkeley, CA 94704, minos.garofalakis@intel.com
Michael I. Jordan, University of California, Berkeley, CA 94720, jordan@cs.berkeley.edu
Anthony Joseph, University of California, Berkeley, CA 94720, adj@cs.berkeley.edu
Nina Taft, Intel Research, Berkeley, CA 94704, nina.taft@intel.com

Abstract

We consider the problem of network anomaly detection in large distributed systems. In this setting, Principal Component Analysis (PCA) has been proposed as a method for discovering anomalies by continuously tracking the projection of the data onto a residual subspace. This method was shown to work well empirically in highly aggregated networks, that is, those with a limited number of large nodes and at coarse time scales. This approach, however, has scalability limitations. To overcome these limitations, we develop a PCA-based anomaly detector in which adaptive local data filters send to a coordinator just enough data to enable accurate global detection. Our method is based on a stochastic matrix perturbation analysis that characterizes the tradeoff between the accuracy of anomaly detection and the amount of data communicated over the network.

1 Introduction

The area of distributed computing systems provides a promising domain for applications of machine learning methods. One of the most interesting aspects of such applications is that learning algorithms that are embedded in a distributed computing infrastructure are themselves part of that infrastructure and must respect its inherent local computing constraints (e.g., constraints on bandwidth, latency, reliability, etc.), while attempting to aggregate information across the infrastructure so as to improve system performance (or availability) in a global sense.
Consider, for example, the problem of detecting anomalies in a wide-area network. While it is straightforward to embed learning algorithms at local nodes to attempt to detect node-level anomalies, these anomalies may not be indicative of network-level problems. Indeed, in recent work, [8] demonstrated a useful role for Principal Component Analysis (PCA) in detecting network anomalies. They showed that the minor components of PCA (the subspace obtained after removing the components with largest eigenvalues) revealed anomalies that were not detectable in any single node-level trace. This work assumed an environment in which all the data is continuously pushed to a central site for off-line analysis. Such a solution can scale neither to networks with a large number of monitors nor to networks seeking to track and detect anomalies at very small time scales. Designing scalable solutions presents several challenges. Viable solutions need to process data “in-network” to intelligently control the frequency and size of data communications. The key underlying problem is that of developing a mathematical understanding of how to trade off the quantization arising from local data filtering against the fidelity of the detection analysis. We also need to understand how this tradeoff impacts overall detection accuracy. Finally, the implementation needs to be simple if it is to have impact on developers. In this paper, we present a simple algorithmic framework for network-wide anomaly detection that relies on distributed tracking combined with approximate PCA analysis, together with supporting theoretical analysis. In brief, the architecture involves a set of local monitors that maintain parameterized sliding filters. These sliding filters yield quantized data streams that are sent to a coordinator. The coordinator makes global decisions based on these quantized data streams.
We use stochastic matrix perturbation theory both to assess the impact of quantization on the accuracy of anomaly detection, and to design a method that selects filter parameters in a way that bounds the detection error. The combination of our theoretical tools and local filtering strategies results in an in-network tracking algorithm that can achieve high detection accuracy with low communication overhead; for instance, our experiments show that, by choosing a relative eigen-error of 1.5% (yielding, approximately, a 4% missed-detection rate and a 6% false-alarm rate), we can filter out more than 90% of the traffic from the original signal.

Prior Work. The original work on a PCA-based method by Lakhina et al. [8] has been extended by [17], who show how to infer network anomalies in both the spatial and temporal domains. As with [8], this work is completely centralized. [14] and [1] propose PCA algorithms that distribute computation across blocks of rows or columns of the data matrix; however, these methods are not applicable to our case. Furthermore, neither [14] nor [1] addresses the issue of continuously tracking principal components within a given error tolerance, or the issue of implementing a communication/accuracy tradeoff; these issues are the main focus of our work. Other initiatives in distributed monitoring, profiling and anomaly detection aim to share information and foster collaboration between widely distributed monitoring boxes to offer improvements over isolated systems [12, 16]. Work in [2, 10] posits the need for scalable detection of network attacks and intrusions. In the setting of simpler statistics such as sums and counts, in-network detection methods related to ours have been explored by [6]. Finally, recent work in the machine learning literature considers distributed constraints in learning algorithms such as kernel-based classification [11] and graphical model inference [7]. (See [13] for a survey.)
2 Problem description and background

We consider a monitoring system comprising a set of local monitor nodes M_1, ..., M_n, each of which collects a locally-observed time-series data stream (Fig. 1(a)). For instance, the monitors may collect information on the number of TCP connection requests per second, the number of DNS transactions per minute, or the volume of traffic at port 80 per second. A central coordinator node aims to continuously monitor the global collection of time series and make global decisions, such as those concerning matters of network-wide health. Although our methodology is generally applicable, in this paper we focus on the particular application of detecting volume anomalies. A volume anomaly refers to unusual traffic load levels in a network that are caused by events such as worms, distributed denial-of-service attacks, device failures, misconfigurations, and so on. Each monitor collects a new data point at every time step and, assuming a naive, “continuous push” protocol, sends the new point to the coordinator. Based on these updates, the coordinator keeps track of a sliding time window of size m (i.e., the m most recent data points) for each monitor time series, organized into a matrix Y of size m × n (where the i-th column Y_i captures the data from monitor i; see Fig. 1(a)). The coordinator then makes its decisions based solely on this (global) Y matrix. In the network-wide volume anomaly detection algorithm of [8], the local monitors measure the total volume of traffic (in bytes) on each network link and periodically (e.g., every 5 minutes) centralize the data by pushing all recent measurements to the coordinator. The coordinator then performs PCA on the assembled Y matrix to detect volume anomalies. This method has been shown to work remarkably well, presumably due to the inherently low-dimensional nature of the underlying data [9].
However, such a “periodic push” approach suffers from inherent limitations: to ensure fast detection, the update periods should be relatively small; unfortunately, small periods also imply increased monitoring communication overheads, which may very well be unnecessary (e.g., if there are no significant local changes across periods). Instead, in our work, we study how the monitors can effectively filter their time-series updates, sending as little data as possible, yet enough to allow the coordinator to make global decisions accurately. We provide analytical bounds on the errors that occur because decisions are made with incomplete data, and explore the tradeoff between reducing data transmissions (communication overhead) and decision accuracy.

Figure 1: (a) The distributed monitoring system: monitors M_1, ..., M_n stream data into the global matrix Y and its approximation Ŷ at the coordinator; (b) Data sample (||y||_2) collected over one week (top, “State Vector”) and its projection in the residual subspace (bottom, “Residual Vector”). The dashed line represents a threshold for anomaly detection.

Using PCA for centralized volume anomaly detection. As observed by Lakhina et al. [8], due to the high level of traffic aggregation on ISP backbone links, volume anomalies can often go unnoticed by being “buried” within normal traffic patterns (e.g., the circle dots shown in the top plot of Fig. 1(b)). On the other hand, they observe that, although the measured data is of seemingly high dimensionality (n = number of links), normal traffic patterns actually lie in a very low-dimensional subspace; furthermore, separating out this normal traffic subspace using PCA (to find the principal traffic components) makes it much easier to identify volume anomalies in the remaining subspace (bottom plot of Fig. 1(b)).
As before, let Y be the global m × n time-series data matrix, centered to have zero mean, and let y = y(t) denote the n-dimensional vector of measurements (for all links) from a single time step t. Formally, PCA is a projection method that maps a given set of data points onto principal components ordered by the amount of data variance that they capture. The set of n principal components, {v_i}_{i=1}^n, is defined as

v_i = argmax_{||x||=1} ||(Y − Σ_{j=1}^{i−1} Y v_j v_j^T) x||,

and these are the n eigenvectors of the estimated covariance matrix A := (1/m) Y^T Y. As shown in [9], PCA reveals that the Origin-Destination (OD) flow matrices of ISP backbones have low intrinsic dimensionality: for the Abilene network with 41 links, most data variance can be captured by the first k = 4 principal components. Thus, the underlying normal OD flows effectively reside in a (low) k-dimensional subspace of R^n. This subspace is referred to as the normal traffic subspace S_no. The remaining (n − k) principal components constitute the abnormal traffic subspace S_ab.

Detecting volume anomalies relies on the decomposition of link traffic y = y(t) at any time step into normal and abnormal components, y = y_no + y_ab, such that (a) y_no corresponds to modeled normal traffic (the projection of y onto S_no), and (b) y_ab corresponds to residual traffic (the projection of y onto S_ab). Mathematically, y_no(t) and y_ab(t) can be computed as

y_no(t) = P P^T y(t) = C_no y(t) and y_ab(t) = (I − P P^T) y(t) = C_ab y(t),

where P = [v_1, v_2, ..., v_k] is formed by the first k principal components, which capture the dominant variance in the data. The matrix C_no = P P^T represents the linear operator that performs projection onto the normal subspace S_no, and C_ab projects onto the abnormal subspace S_ab. As observed in [8], a volume anomaly typically results in a large change to y_ab; thus, a useful metric for detecting abnormal traffic patterns is the squared prediction error (SPE): SPE ≡ ||y_ab||² = ||C_ab y||² (essentially, a quadratic residual function).
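The subspace method above amounts to a few lines of linear algebra. The sketch below (our illustration, not the authors' code; the name `spe_detector` is ours) computes the SPE for every time step of a centered data matrix using the top-k principal components of the estimated covariance.

```python
import numpy as np

def spe_detector(Y, k):
    """Squared prediction error SPE(t) = ||C_ab y(t)||^2 for each row of the
    centered m x n data matrix Y, using the top-k principal components."""
    m, n = Y.shape
    A = Y.T @ Y / m                        # estimated covariance matrix
    eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    P = eigvecs[:, ::-1][:, :k]            # top-k principal components
    C_ab = np.eye(n) - P @ P.T             # projector onto abnormal subspace
    return np.sum((Y @ C_ab)**2, axis=1)   # SPE per time step
```

On synthetic low-rank "normal" traffic, a time step perturbed in a direction outside the normal subspace stands out with an SPE far above the rest, which is exactly the signal thresholded by the Q-statistic test.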
More formally, their proposed algorithm signals a volume anomaly if SPE > Q_α, where Q_α denotes the threshold statistic for the SPE residual function at the 1 − α confidence level. Such a statistical test for the SPE residual function, known as the Q-statistic [4], can be computed as a function Q_α = Q_α(λ_{k+1}, ..., λ_n) of the (n − k) non-principal eigenvalues of the covariance matrix A.

Figure 2: Our in-network tracking and detection framework. Distributed monitors filter/predict their streams Y_i(t) with slacks δ_i and send summaries R_i(t) to the coordinator, which runs the adaptive subspace method, perturbation analysis (driven by the input tolerance ε), and anomaly analysis, and feeds the slacks δ_1, ..., δ_n back to the monitors.

3 In-network PCA for anomaly detection

We now describe our version of an anomaly detector that uses distributed tracking and approximate PCA analysis. A key idea is to curtail the amount of data each monitor sends to the coordinator. Because our job is to catch anomalies, rather than to track ongoing state, we point out that the coordinator only needs a good approximation of the state when an anomaly is near. It need not track global state very precisely when conditions are normal. This observation makes it intuitive that a reduction in data sharing between monitors and the coordinator should be possible. We curtail the amount of data flow from monitors to the coordinator by installing local filters at each monitor. These filters maintain a local constraint, and a monitor only sends the coordinator an update of its data when the constraint is violated. The coordinator thus receives an approximate, or “perturbed,” view of the data stream at each monitor and hence of the global state. We use stochastic matrix perturbation theory to analyze the effect on our PCA-based anomaly detector of using a perturbed global matrix.
Based on this analysis, we can choose the filtering parameters (i.e., the local constraints) so as to limit the effect of the perturbation on the PCA analysis and on any deterioration in the anomaly detector's performance. All of these ideas are combined into a simple, adaptive distributed protocol.

3.1 Overview of our approach

Fig. 2 illustrates the overall architecture of our system. We now describe the functionality at the monitors and the coordinator. The goal of a monitor is to track its local raw time-series data and to decide when the coordinator needs an update. Intuitively, if the time series does not change much, or does not change in a way that affects the global condition being tracked, then the monitor sends nothing to the coordinator, and the coordinator assumes that the most recently received update is still approximately valid. The update message can be the current value of the time series, a summary of the most recent values, or any function of the time series. The update serves as a prediction of the future data: should the monitor send nothing in subsequent time intervals, the coordinator uses the most recently received update to predict the missing values. For our anomaly detection application, we filter as follows. Each monitor i maintains a filtering window $F_i(t)$ of size $2\delta_i$ centered at a value $R_i$ (i.e., $F_i(t) = [R_i(t) - \delta_i,\, R_i(t) + \delta_i]$). At each time t, the monitor sends both $Y_i(t)$ and $R_i(t)$ to the coordinator only if $Y_i(t) \notin F_i$; otherwise it sends nothing. The window parameter $\delta_i$ is called the slack; it captures how far the time series can drift before an update must be sent to the coordinator. The center parameter $R_i(t)$ denotes the approximate representation, or summary, of $Y_i(t)$. In our implementation, we set $R_i(t)$ equal to the average of the last five signal values observed locally at monitor i. Let $t^*$ denote the time of the most recent update.
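The per-monitor filtering rule can be sketched concretely as follows. This is an illustration under the assumptions above, not the paper's implementation; the class name and update format are ours, and the summary $R_i$ is the five-value average mentioned in the text.

```python
class MonitorFilter:
    """Local filter at one monitor: send an update only when the new
    observation leaves the window [R - delta, R + delta]."""

    def __init__(self, delta, history=5):
        self.delta = delta          # slack: tolerated drift before an update
        self.history = history      # how many recent values form the summary R
        self.recent = []
        self.R = None               # last summary sent to the coordinator

    def observe(self, y):
        """Process one local measurement Y_i(t). Returns None if the value
        is inside the filtering window, else the update (Y_i(t), R_i(t))."""
        self.recent = (self.recent + [y])[-self.history:]
        if self.R is not None and abs(y - self.R) <= self.delta:
            return None                              # inside window: silence
        self.R = sum(self.recent) / len(self.recent)  # new summary R_i(t)
        return (y, self.R)
```

The coordinator interprets silence as "the last $R_i$ still predicts $Y_i(t)$ to within $\delta_i$."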
The monitor needs to send both $Y_i(t^*)$ and $R_i(t^*)$ to the coordinator when it does an update, because the coordinator will use $Y_i(t^*)$ at time $t^*$ and $R_i(t^*)$ for all $t > t^*$ until the next update arrives. For any subsequent $t > t^*$ at which the coordinator receives no update from that monitor, it will use $R_i(t^*)$ as the prediction for $Y_i(t)$. The role of the coordinator is twofold. First, it makes global anomaly-detection decisions based upon the received updates from the monitors. Second, it computes the filtering parameters (i.e., the slacks $\delta_i$) for all the monitors based on its view of the global state and the condition for triggering an anomaly. It gives the monitors their slacks initially and updates the slack parameters when needed; our protocol is thus adaptive. Due to lack of space, we do not discuss here the method for deciding when slack updates are needed. The global detection task is the same as in the centralized scheme. In contrast to the centralized setting, however, the coordinator does not have an exact version of the raw data matrix Y; it has the approximation $\hat{Y}$ instead. The PCA analysis, including the computation of $S_{ab}$, is done on the perturbed covariance matrix $\hat{A} := A - \Delta$. The magnitude of the perturbation matrix $\Delta$ is determined by the slack variables $\delta_i$ ($i = 1, \ldots, n$).

3.2 Selection of filtering parameters

A key ingredient of our framework is a practical method for choosing the slack parameters $\delta_i$. This choice is critical because these parameters balance the tradeoff between the savings in data communication and the loss of detection accuracy. Clearly, the larger the slack, the less the monitor needs to send, leading to a greater reduction in communication overhead but potentially more information loss at the coordinator. We employ stochastic matrix perturbation theory to quantify the effects of the perturbation of a matrix on key quantities such as its eigenvalues and eigen-subspaces, which in turn affect the detection accuracy.
Our approach is as follows. We measure the size of a perturbation using a norm on $\Delta$. We derive an upper bound on the changes to the eigenvalues $\lambda_i$ and the residual subspace $C_{ab}$ as a function of $\|\Delta\|$. We choose the $\delta_i$ to ensure that an approximation to this upper bound on $\Delta$ is not exceeded; this in turn ensures that the changes to $\lambda_i$ and $C_{ab}$ do not exceed their upper bounds. By controlling these latter terms, we are able to bound the false alarm probability. Recall that the coordinator's view of the global data matrix is the perturbed matrix $\hat{Y} = Y + W$, where all elements of the column vector $W_i$ are bounded within the interval $[-\delta_i, \delta_i]$. Let $\lambda_i$ and $\hat{\lambda}_i$ ($i = 1, \ldots, n$) denote the eigenvalues of the covariance matrix $A = \frac{1}{m} Y^T Y$ and its perturbed version $\hat{A} := \frac{1}{m} \hat{Y}^T \hat{Y}$. Applying the classical theorems of Mirsky and Weyl [15], we obtain bounds on the eigenvalue perturbation in terms of the Frobenius norm $\|\cdot\|_F$ and the spectral norm $\|\cdot\|_2$ of $\Delta := A - \hat{A}$, respectively:

$$\epsilon_{eig} := \sqrt{\frac{1}{n} \sum_{i=1}^{n} (\hat{\lambda}_i - \lambda_i)^2} \;\le\; \|\Delta\|_F / \sqrt{n} \qquad \text{and} \qquad \max_i |\hat{\lambda}_i - \lambda_i| \;\le\; \|\Delta\|_2 \tag{1}$$

Applying the $\sin\theta$ theorem and results on bounding the angle of projections to subspaces [15] (see [3] for more details), we can bound the perturbation of the residual subspace $C_{ab}$ in terms of the Frobenius norm of $\Delta$:

$$\|C_{ab} - \hat{C}_{ab}\|_F \;\le\; \frac{\sqrt{2}\, \|\Delta\|_F}{\nu} \tag{2}$$

where $\nu$ denotes the eigengap between the kth and (k+1)th eigenvalues of the estimated covariance matrix $\hat{A}$. To obtain practical (i.e., computable) bounds on the norms of $\Delta$, we derive expectation bounds instead of worst-case bounds. We make the following assumptions on the error matrix W:

1. The column vectors $W_1, \ldots, W_n$ are independent and radially symmetric m-vectors.
2. For each $i = 1, \ldots, n$, all elements of column vector $W_i$ are i.i.d. random variables with mean 0, variance $\sigma_i^2 := \sigma_i^2(\delta_i)$ and fourth moment $\mu_i^4 := \mu_i^4(\delta_i)$.

Note that the independence assumption is imposed only on the error; it by no means implies that the signals received by different monitors are statistically independent.
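The Mirsky and Weyl bounds of Eqn. (1) are easy to check numerically. The helper below is our illustration (names are ours): it compares the actual eigenvalue perturbation of a symmetric matrix against both bounds.

```python
import numpy as np

def eig_perturbation(A, A_hat):
    """Return the rms and max eigenvalue errors between symmetric matrices
    A and A_hat, together with the Mirsky (Frobenius) and Weyl (spectral)
    upper bounds from Eqn. (1)."""
    lam = np.sort(np.linalg.eigvalsh(A))
    lam_hat = np.sort(np.linalg.eigvalsh(A_hat))
    D = A - A_hat                                  # the perturbation Delta
    n = A.shape[0]
    eps_eig = np.sqrt(np.mean((lam_hat - lam) ** 2))   # rms eigenvalue error
    max_err = np.max(np.abs(lam_hat - lam))            # worst eigenvalue error
    mirsky = np.linalg.norm(D, 'fro') / np.sqrt(n)     # bounds eps_eig
    weyl = np.linalg.norm(D, 2)                        # bounds max_err
    return eps_eig, max_err, mirsky, weyl
```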
Under the above assumptions, we can show that $\|\Delta\|_F / \sqrt{n}$ is upper bounded in expectation by the following quantity:

$$\mathrm{Tol}_F = 2 \sqrt{\frac{1}{mn} \sum_{i=1}^{n} \lambda_i \cdot \sum_{i=1}^{n} \sigma_i^2} \;+\; \sqrt{\left(\frac{1}{m} + \frac{1}{n}\right) \sum_{i=1}^{n} \sigma_i^4 + \frac{1}{mn} \sum_{i=1}^{n} (\mu_i^4 - \sigma_i^4)}. \tag{3}$$

Similar results can be obtained for the spectral norm as well. In practice, these upper bounds are very tight because $\sigma_1, \ldots, \sigma_n$ tend to be small compared to the top eigenvalues. Given the tolerable perturbation $\mathrm{Tol}_F$, we can use Eqn. (3) to select the slack variables. For example, we can divide the overall tolerance across monitors either uniformly or in proportion to their observed local variance.

3.3 Guarantee on false alarm probability

Because our approximation perturbs the eigenvalues, it also impacts the accuracy with which the trigger is fired. Since the trigger condition is $\|C_{ab}\, y\|^2 > Q_\alpha$, we must assess the impact on both of these terms. We can compute an upper bound on the perturbation of the SPE statistic, $\mathrm{SPE} = \|C_{ab}\, y\|^2$, as follows. First, note that

$$\big|\, \|\hat{C}_{ab}\, \hat{y}\| - \|C_{ab}\, y\| \,\big| \;\le\; \|(\hat{C}_{ab} - C_{ab})\, \hat{y}\| + \|C_{ab}(y - \hat{y})\| \;\le\; \frac{\sqrt{2}\, \|\Delta\|_F \|\hat{y}\|}{\nu} + \|C_{ab}\|_2 \sqrt{\sum_{i=1}^{n} \delta_i^2} \;\le\; \frac{\sqrt{2}\, \|\Delta\|_F \|\hat{y}\|}{\nu} + \left( \|\hat{C}_{ab}\| + \frac{\sqrt{2}\, \|\Delta\|_F}{\nu} \right) \sqrt{\sum_{i=1}^{n} \delta_i^2} \;=:\; \eta_1(\hat{y}),$$

$$\big|\, \|\hat{C}_{ab}\, \hat{y}\|^2 - \|C_{ab}\, y\|^2 \,\big| \;\le\; \eta_1(\hat{y}) \big( 2 \|\hat{C}_{ab}\, \hat{y}\| + \eta_1(\hat{y}) \big) \;=:\; \eta_2(\hat{y}). \tag{4}$$

The dependency of the threshold $Q_\alpha$ on the eigenvalues $\lambda_{k+1}, \ldots, \lambda_n$ can be expressed as [4]:

$$Q_\alpha = \phi_1 \left[ \frac{c_\alpha \sqrt{2 \phi_2 h_0^2}}{\phi_1} + 1 + \frac{\phi_2 h_0 (h_0 - 1)}{\phi_1^2} \right]^{1/h_0}, \tag{5}$$

where $c_\alpha$ is the $(1 - \alpha)$-percentile of the standard normal distribution, $h_0 = 1 - \frac{2 \phi_1 \phi_3}{3 \phi_2^2}$, and $\phi_i = \sum_{j=k+1}^{n} \lambda_j^i$ for $i = 1, 2, 3$. To assess the perturbation in false alarm probability, we start by considering the following random variable c derived from Eqn. (5):

$$c = \frac{\phi_1 \left[ (\mathrm{SPE}/\phi_1)^{h_0} - 1 - \phi_2 h_0 (h_0 - 1)/\phi_1^2 \right]}{\sqrt{2 \phi_2 h_0^2}}. \tag{6}$$

The random variable c essentially normalizes the random quantity $\|C_{ab}\, y\|^2$ and is known to approximately follow a standard normal distribution [5].
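The Q-statistic of Eqn. (5) can be computed directly from the residual eigenvalues. The sketch below is ours, not the authors' implementation; it uses the standard-library normal quantile for $c_\alpha$.

```python
import numpy as np
from statistics import NormalDist

def q_statistic(eigvals, k, alpha=0.005):
    """Jackson-Mudholkar threshold Q_alpha (Eqn. (5)), computed from the
    (n - k) non-principal eigenvalues lambda_{k+1}, ..., lambda_n."""
    residual = np.sort(np.asarray(eigvals))[::-1][k:]     # lambda_{k+1..n}
    phi1, phi2, phi3 = (float(np.sum(residual ** p)) for p in (1, 2, 3))
    h0 = 1.0 - 2.0 * phi1 * phi3 / (3.0 * phi2 ** 2)
    c_alpha = NormalDist().inv_cdf(1.0 - alpha)  # (1-alpha)-percentile of N(0,1)
    bracket = (c_alpha * np.sqrt(2.0 * phi2 * h0 ** 2) / phi1
               + 1.0 + phi2 * h0 * (h0 - 1.0) / phi1 ** 2)
    return phi1 * bracket ** (1.0 / h0)
```

A stricter confidence level (smaller α) yields a larger threshold, hence fewer triggers.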
The false alarm probability in the centralized system is expressed as $\Pr\big[\|C_{ab}\, y\|^2 > Q_\alpha\big] = \Pr[c > c_\alpha] = \alpha$, where the left-hand term of this equation is conditioned upon the SPE statistic being inside the normal range. In our distributed setting, the anomaly detector fires a trigger if $\|\hat{C}_{ab}\, \hat{y}\|^2 > \hat{Q}_\alpha$; we thus only observe a perturbed version $\hat{c}$ of the random variable c. Let $\eta_c$ denote the bound on $|\hat{c} - c|$. The deviation of the false alarm probability in our approximate detection scheme can then be approximated as $P(c_\alpha - \eta_c < U < c_\alpha + \eta_c)$, where U is a standard normal random variable.

4 Evaluation

We implemented our algorithm and developed a trace-driven simulator to validate our methods. We used a one-week trace collected from the Abilene network.1 The trace contains per-link traffic loads measured every 10 minutes for all 41 links of the Abilene network; with a time unit of 10 minutes, data was collected for 1008 time units. This data was used to feed the simulator. There are 7 anomalies in the data that were detected by the centralized algorithm (and verified by hand to be true anomalies). We also injected 70 synthetic anomalies into this dataset using the method described in [8], so that we would have sufficient data to compute error rates. We used a threshold $Q_\alpha$ corresponding to a $1 - \alpha = 99.5\%$ confidence level. Due to space limitations, we present results only for the case of uniform monitor slack, $\delta_i = \delta$. The input parameter for our algorithm is the tolerable relative error of the eigenvalues ("relative eigen-error" for short), which acts as a tuning knob. (Precisely, it is $\mathrm{Tol}_F / \sqrt{\frac{1}{n} \sum_i \lambda_i^2}$, where $\mathrm{Tol}_F$ is defined in Eqn. (3).) Given this parameter and the input data, we can compute the filtering slack $\delta$ for the monitors using Eqn. (3).
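The slack computation just described can be sketched as follows. This is our sketch, not the paper's code, and it assumes the uniform filtering-error model used in the experiments: errors uniform on $[-\delta, \delta]$, so $\sigma_i^2 = \delta^2/3$ and (a property of the uniform distribution) $\mu_i^4 = \delta^4/5$. Since Eqn. (3) is increasing in δ, a simple bisection inverts it.

```python
import numpy as np

def tol_f(delta, eigvals, m):
    """TolF from Eqn. (3) under the uniform filtering-error model
    (sigma_i^2 = delta^2/3, mu_i^4 = delta^4/5, identical across monitors)."""
    n = len(eigvals)
    s2 = delta ** 2 / 3.0
    m4 = delta ** 4 / 5.0
    term1 = 2.0 * np.sqrt(np.sum(eigvals) * n * s2 / (m * n))
    term2 = np.sqrt((1.0 / m + 1.0 / n) * n * s2 ** 2 + n * (m4 - s2 ** 2) / (m * n))
    return term1 + term2

def slack_for_tolerance(tol, eigvals, m, hi=1e6):
    """Bisect for the slack delta whose TolF equals the requested tolerance
    (TolF is monotonically increasing in delta)."""
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tol_f(mid, eigvals, m) < tol:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```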
We then feed in the data to run our protocol in the simulator with the computed $\delta$.

1 Abilene is an Internet2 high-performance backbone network that interconnects a large number of universities as well as a few other research institutes.

[Figure 3: In all plots the x-axis is the relative eigen-error. (a) The filtering slack. (b) Actual accrued eigen-error. (c) Relative error of detection threshold. (d) False alarm rates (upper bound vs. actual accrued). (e) Missed detection rates. (f) Communication overhead.]

The simulator outputs a set of results including: 1) the actual relative eigen-errors and the relative errors on the detection threshold $Q_\alpha$; and 2) the missed detection rate, false alarm rate, and communication cost achieved by our method. The missed-detection rate is defined as the fraction of missed detections over the total number of real anomalies, and the false-alarm rate as the fraction of false alarms over the total number of anomalies detected by our protocol; the latter is $\alpha$ (defined in Sec. 3.3) rescaled as a rate rather than a probability. The communication cost is computed as the fraction of messages that actually get through the filtering window to the coordinator. The results are shown in Fig. 3; in all plots, the x-axis is the relative eigen-error. In Fig. 3(a) we plot the relationship between the relative eigen-error and the filtering slack $\delta$ when assuming filtering errors are uniformly distributed on the interval $[-\delta, \delta]$. With this model, the relationship between the relative eigen-error and the slack is determined by a simplified version of Eqn.
(3) (with all $\sigma_i^2 = \delta^2/3$). The results make intuitive sense: as we increase our error tolerance, we can filter more at the monitor and send less to the coordinator. The slack increases almost linearly with the relative eigen-error because the first term on the right-hand side of Eqn. (3) dominates all other terms. In Fig. 3(b) we compare the tolerable relative eigen-error to the actual accrued relative eigen-error (defined as $\epsilon_{eig} / \sqrt{\frac{1}{n} \sum_i \lambda_i^2}$, where $\epsilon_{eig}$ is defined in Eqn. (1)). These were computed using the slack parameters $\delta$ as computed by our coordinator. We can see that the real accrued eigen-errors are always less than the tolerable eigen-errors. The plot shows a tight upper bound, indicating that it is safe to use our model's derived filtering slack $\delta$. In other words, the achieved eigen-error always remains below the requested tolerable error specified as input, and the slack chosen given the tolerable error is close to optimal. Fig. 3(c) shows the relationship between the relative eigen-error and the relative error of the detection threshold $Q_\alpha$.2 We see that the threshold for detecting anomalies decreases as we tolerate more and more eigen-error. In these experiments, an error of 2% in the eigenvalues leads to an error of approximately 6% in our estimate of the appropriate cutoff threshold. We now examine the false alarm rates achieved. In Fig. 3(d) the curve with triangles represents the upper bound on the false alarm rate as estimated by the coordinator; the curve with circles is the actual accrued false alarm rate achieved by our scheme. Note that the upper bound on the false alarm rate is fairly close to the true values, especially when the slack is small. The false alarm rate increases with increasing eigen-error because as the eigen-error increases, the corresponding detection threshold $Q_\alpha$ decreases, which in turn causes the protocol to raise an alarm more often.
(If we had plotted $\hat{Q}$ rather than the relative threshold difference, we would obviously see a decreasing $\hat{Q}$ with increasing eigen-error.)

2 Precisely, it is $1 - \hat{Q}_\alpha / Q_\alpha$, where $\hat{Q}_\alpha$ is computed from $\hat{\lambda}_{k+1}, \ldots, \hat{\lambda}_n$.

We see in Fig. 3(e) that the missed detection rates remain below 4% for various levels of communication overhead. The communication overhead is depicted in Fig. 3(f): clearly, the larger the errors we can tolerate, the more the overhead can be reduced. Considering these last three plots (d, e, f) together, we observe several tradeoffs. For example, when the relative eigen-error is 1.5%, our algorithm reduces the data sent through the network by more than 90%. This gain is achieved at the cost of approximately a 4% missed detection rate and a 6% false alarm rate. This is a large reduction in communication for a small increase in detection error. These initial results illustrate that our in-network solution can dramatically lower the communication overhead while still achieving high detection accuracy.

5 Conclusion

We have presented a new algorithmic framework for network anomaly detection that combines distributed tracking with PCA analysis to detect anomalies with far less data than previous methods. The distributed tracking consists of local filters, installed at each monitoring site, whose parameters are selected based upon global criteria. The idea is to track the local monitoring data only enough to enable accurate detection. The local filtering reduces the amount of data transmitted through the network, but it also means that anomaly detection must be done with limited or partial views of the global state. Using methods from stochastic matrix perturbation theory, we provided an analysis of the tradeoff between the detection accuracy and the data communication overhead. We were able to control the amount of data overhead using the relative eigen-error as a tuning knob.
To the best of our knowledge, this is the first result in the literature that provides upper bounds on the false alarm rate of network anomaly detection.

References

[1] Bai, Z.-J., Chan, R. and Luk, F. Principal component analysis for distributed data sets with updating. In Proceedings of the International Workshop on Advanced Parallel Processing Technologies (APPT), 2005.
[2] Dreger, H., Feldmann, A., Paxson, V. and Sommer, R. Operational experiences with high-volume network intrusion detection. In Proceedings of the ACM Conference on Computer and Communications Security (CCS), 2004.
[3] Huang, L., Nguyen, X., Garofalakis, M., Jordan, M., Joseph, A. and Taft, N. In-network PCA and anomaly detection. Technical Report No. UCB/EECS-2007-10, EECS Department, UC Berkeley.
[4] Jackson, J. E. and Mudholkar, G. S. Control procedures for residuals associated with principal component analysis. Technometrics, 21(3):341-349, 1979.
[5] Jensen, D. R. and Solomon, H. A Gaussian approximation for the distribution of definite quadratic forms. Journal of the American Statistical Association, 67(340):898-902, 1972.
[6] Keralapura, R., Cormode, G. and Ramamirtham, J. Communication-efficient distributed monitoring of thresholded counts. In Proceedings of the ACM International Conference on Management of Data (SIGMOD), 2006.
[7] Kreidl, P. O. and Willsky, A. Inference with minimal communication: A decision-theoretic variational approach. In Proceedings of Neural Information Processing Systems (NIPS), 2006.
[8] Lakhina, A., Crovella, M. and Diot, C. Diagnosing network-wide traffic anomalies. In Proceedings of the ACM SIGCOMM Conference, 2004.
[9] Lakhina, A., Papagiannaki, K., Crovella, M., Diot, C., Kolaczyk, E. D. and Taft, N. Structural analysis of network traffic flows. In Proceedings of ACM SIGMETRICS, 2004.
[10] Levchenko, K., Paturi, R. and Varghese, G.
On the difficulty of scalably detecting network attacks. In Proceedings of the ACM Conference on Computer and Communications Security (CCS), 2004.
[11] Nguyen, X., Wainwright, M. and Jordan, M. Nonparametric decentralized detection using kernel methods. IEEE Transactions on Signal Processing, 53(11):4053-4066, 2005.
[12] Padmanabhan, V. N., Ramabhadran, S. and Padhye, J. NetProfiler: Profiling wide-area networks using peer cooperation. In Proceedings of the International Workshop on Peer-to-Peer Systems, 2005.
[13] Predd, J. B., Kulkarni, S. B. and Poor, H. V. Distributed learning in wireless sensor networks. IEEE Signal Processing Magazine, 23(4):56-69, 2006.
[14] Qu, Y., Ostrouchov, G., Samatova, N. and Geist, A. Principal component analysis for dimension reduction in massive distributed data sets. In Proceedings of the IEEE International Conference on Data Mining (ICDM), 2002.
[15] Stewart, G. W. and Sun, J.-G. Matrix Perturbation Theory. Academic Press, 1990.
[16] Yegneswaran, V., Barford, P. and Jha, S. Global intrusion detection in the DOMINO overlay system. In Proceedings of the Network and Distributed System Security Symposium (NDSS), 2004.
[17] Zhang, Y., Ge, Z.-H., Greenberg, A. and Roughan, M. Network anomography. In Proceedings of the Internet Measurement Conference (IMC), 2005.
Robotic Grasping of Novel Objects

Ashutosh Saxena, Justin Driemeyer, Justin Kearns, Andrew Y. Ng
Computer Science Department
Stanford University, Stanford, CA 94305
{asaxena,jdriemeyer,jkearns,ang}@cs.stanford.edu

Abstract

We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at which to grasp the object. Our algorithm is trained via supervised learning, using synthetic images for the training set. We demonstrate on a robotic manipulation platform that this approach successfully grasps a wide variety of objects, such as wine glasses, duct tape, markers, a translucent box, jugs, knife-cutters, cellphones, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set.

1 Introduction

In this paper, we address the problem of grasping novel objects that a robot is perceiving for the first time through vision. Modern-day robots can be carefully hand-programmed or "scripted" to carry out many complex manipulation tasks, ranging from using tools to assemble complex machinery, to balancing a spinning top on the edge of a sword [15]. However, autonomously grasping a previously unknown object still remains a challenging problem. If the object is previously known, or if we are able to obtain a full 3-d model of it, then various approaches can be applied, for example ones based on friction cones [5], form- and force-closure [1], pre-stored primitives [7], or other methods. However, in practical scenarios it is often very difficult to obtain a full and accurate 3-d reconstruction of an object seen for the first time through vision.
This is particularly true if we have only a single camera; for stereo systems, 3-d reconstruction is difficult for objects without texture, and even when stereopsis works well, it would typically reconstruct only the visible portions of the object. Finally, even if more specialized sensors such as laser scanners (or active stereo) are used to estimate the object's shape, we would still have only a 3-d reconstruction of the front face of the object. In contrast to these approaches, we propose a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at which to grasp the object. Informally, the algorithm takes two or more pictures of the object, and then tries to identify a point within each 2-d image that corresponds to a good point at which to grasp the object. (For example, if trying to grasp a coffee mug, it might try to identify the mid-point of the handle.) Given these 2-d points in each image, we use triangulation to obtain a 3-d position at which to actually attempt the grasp. Thus, rather than trying to triangulate every single point within each image in order to estimate depths (as in dense stereo), we only attempt to triangulate one (or at most a small number of) points corresponding to the 3-d point where we will grasp the object. This allows us to grasp an object without ever needing to obtain its full 3-d shape, and applies even to textureless, translucent or reflective objects on which standard stereo 3-d reconstruction fares poorly. To the best of our knowledge, our work represents the first algorithm capable of grasping novel objects (ones where a 3-d model is not available), including ones from novel object classes, that we are perceiving for the first time using vision.

[Figure 1: Examples of objects on which the grasping algorithm was tested.]

In prior work, a few others have also applied learning to robotic grasping [1]. For example, Jebara et al.
[8] used a supervised learning algorithm to learn grasps, for settings where a full 3-d model of the object is known. Hsiao and Lozano-Perez [4] also apply learning to grasping, but again assuming a fully known 3-d model of the object. Piater's algorithm [9] learned to position single fingers given a top-down view of an object, but considered only very simple objects (specifically, square, triangle and round "blocks"). Platt et al. [10] learned to sequence together manipulation gaits, but again assumed a specific, known object. There is also extensive literature on recognition of known object classes (such as cups, mugs, etc.) [14], but this seems unlikely to apply directly to grasping objects from novel object classes. To pick up an object, we need to identify the grasping point, or more formally, a position for the robot's end-effector. This paper focuses on the task of grasp identification, and thus we will consider only objects that can be picked up without performing complex manipulation.1 We will attempt to grasp a number of common office and household objects such as toothbrushes, pens, books, mugs, martini glasses, jugs, keys, duct tape, and markers. (See Fig. 1.) The remainder of this paper is structured as follows. In Section 2, we describe our learning approach, as well as our probabilistic model for inferring the grasping point. In Section 3, we describe the motion planning/trajectory planning (on our 5-degree-of-freedom arm) for moving the manipulator to the grasping point. In Section 4, we report the results of extensive experiments performed to evaluate our algorithm, and Section 5 concludes.

2 Learning the Grasping Point

Because even very different objects can have similar sub-parts, there are certain visual features that indicate good grasps and that remain consistent across many different objects. For example, jugs, cups, and coffee mugs all have handles; and pens, white-board markers, toothbrushes, screwdrivers, etc.
are all long objects that can be grasped roughly at their mid-point. We propose a learning approach that uses visual features to predict good grasping points across a large range of objects. Given two (or more) images of an object taken from different camera positions, we will predict the 3-d position of a grasping point. An image is a projection of the three-dimensional world onto an image plane, and does not have depth information. In our approach, we will predict the 2-d location of the grasp in each image; more formally, we will try to identify the projection of a good grasping point onto the image plane. If each of these points can be perfectly identified in each image, we can then easily "triangulate" from these images to obtain the 3-d grasping point. (See Fig. 4a.) In practice it is difficult to identify the projection of a grasping point onto the image plane (and, if there are multiple grasping points, then the correspondence problem, i.e., deciding which grasping point in one image corresponds to which point in another image, must also be solved). On our robotic platform, this problem is further exacerbated by uncertainty in the position of the camera when the images were taken.

1 For example, picking up a heavy book lying flat on a table might require a sequence of complex manipulations, such as first sliding the book slightly past the edge of the table so that the manipulator can place its fingers around the book.

[Figure 2: Examples of different edge and texture filters used to calculate the features.]

[Figure 3: Synthetic images of the objects used for training. The classes of objects used for training were martini glasses, mugs, teacups, pencils, whiteboard erasers, and books.]
To address all of these issues, we develop a probabilistic model over possible grasping points, and apply it to infer a good position at which to grasp an object.2

2.1 Features

In our approach, we begin by dividing the image into small rectangular patches, and for each patch predict whether it is a projection of a grasping point onto the image plane. For this prediction problem, we chose features that represent three types of local cues: edges, textures, and color [11, 13]. We compute features representing edges by convolving the intensity channel3 with 6 oriented edge filters (Fig. 2). Texture information is mostly contained within the image intensity channel, so we apply 9 Laws masks to this channel to compute the texture energy. For the color channels, low-frequency information is most useful for identifying grasps; our color features are computed by applying a local averaging filter (the first Laws mask) to the 2 color channels. We then compute the sum-squared energy of each of these filter outputs. This gives us an initial feature vector of dimension 17. To predict whether a patch contains a grasping point, local image features centered on the patch are insufficient, and one has to use more global properties of the object. We attempt to capture this information by using image features extracted at multiple spatial scales (3 in our experiments) for the patch. Objects exhibit different behaviors across different scales, and using multi-scale features allows us to capture these variations. In detail, we compute the 17 features described above from that patch as well as the 24 neighboring patches (in a 5x5 window centered around the patch of interest). This gives us a feature vector x of dimension $1 \cdot 17 \cdot 3 + 24 \cdot 17 = 459$.

2.2 Synthetic Data for Training

We apply supervised learning to learn to identify patches that contain grasping points.
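The 17-feature computation of Section 2.1 can be sketched as follows. The specific kernels are our assumptions: the text names 6 oriented edge filters and 9 Laws masks without listing them, so this sketch uses standard 3x3 Laws masks and steerable Sobel-style derivatives.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def laws_masks():
    """Nine 3x3 Laws masks: outer products of L3 (average), E3 (edge), S3 (spot)."""
    L3, E3, S3 = np.array([1., 2., 1.]), np.array([-1., 0., 1.]), np.array([-1., 2., -1.])
    return [np.outer(a, b) for a in (L3, E3, S3) for b in (L3, E3, S3)]

def filter_energy(img, kernel):
    """Sum-squared response of a 2-d filter over an image patch."""
    windows = sliding_window_view(img, kernel.shape)
    resp = np.einsum('ijkl,kl->ij', windows, kernel)
    return float(np.sum(resp ** 2))

def patch_features(y, cb, cr):
    """17 features for one patch (equal-shaped 2-d arrays): 9 texture energies
    (Laws masks on intensity), 6 oriented-edge energies, 2 color energies."""
    feats = [filter_energy(y, m) for m in laws_masks()]
    gx = np.outer([1., 2., 1.], [-1., 0., 1.])   # horizontal Sobel derivative
    gy = gx.T                                     # vertical Sobel derivative
    for theta in np.arange(6) * np.pi / 6:        # 6 orientations, 30-degree steps
        feats.append(filter_energy(y, np.cos(theta) * gx + np.sin(theta) * gy))
    avg = laws_masks()[0] / 16.0                  # first Laws mask = local average
    feats += [filter_energy(cb, avg), filter_energy(cr, avg)]
    return np.array(feats)
```

Stacking these 17 energies over 3 scales and the 24 neighboring patches yields the 459-dimensional vector used by the classifier.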
To do so, we require a labeled training set, i.e., a set of images of objects labeled with the 2-d location of the grasping point in each image. Collecting real-world data of this sort is cumbersome, and manual labeling is prone to errors. Thus, we instead chose to generate, and learn from, synthetic data that is automatically labeled with the correct grasps. In detail, we generate synthetic images along with correct grasps (Fig. 3) using a computer graphics ray tracer,4 as this produces more realistic images than other, simpler rendering methods.5 The advantages of using synthetic images are multi-fold. First, once a synthetic model of an object has been created, a large number of training examples can be automatically generated by rendering the object under different (randomly chosen) lighting conditions, camera positions and orientations, etc.

2 An earlier version of this work, without the probabilistic model and using simpler learning/inference, was described in [12].
3 We use the YCbCr color space, where Y is the intensity channel, and Cb and Cr are color channels.
4 Ray tracing [3] is a standard image rendering method from computer graphics. It handles many real-world optical phenomena such as multiple specular reflections, textures, soft shadows, smooth curves, and caustics. We used PovRay, an open source ray tracer.
5 There is a relation between the quality of the synthetically generated images and the accuracy of the algorithm: the better the quality and graphical realism of the synthetic images, the better the accuracy of the algorithm. Therefore, we use a ray tracer instead of faster, but cruder, openGL-style graphics. Michels, Saxena and Ng [6] used synthetic openGL images to learn distances in natural scenes. However, because openGL-style graphics have less realism, the learning performance sometimes decreased with added complexity in the scenes.
In addition, to increase the diversity of the training data generated, we randomized different properties of the objects such as color, scale, and text (e.g., on the face of a book). The time-consuming part of synthetic data generation is the creation of the mesh models of the objects. However, there are many objects for which models are available on the internet, and can be used with only minor modifications. We generated 2500 examples from synthetic data, comprising objects from six object classes (see Figure 3). Using synthetic data also allows us to generate perfect labels for the training set with the exact location of a good grasp for each object. In contrast, collecting and manually labeling a comparably sized set of real images would have been extremely time-consuming.

2.3 Probabilistic Model

On our manipulation platform, we have a camera mounted on the robotic arm. (See Fig. 6.) When asked to grasp an object, we command the arm to move the camera to two or more positions, so as to acquire images of the object from these positions. However, there are inaccuracies in the physical positioning of the arm, and hence some slight uncertainty in the position of the camera when the images are acquired. We now describe how we model these position errors. Formally, let C be the image that would have been taken if the robot had moved exactly to the commanded position and orientation. However, due to robot positioning error, instead an image $\hat{C}$ is taken from a slightly different location. Let $(u, v)$ be a 2-d position in image C, and let $(\hat{u}, \hat{v})$ be the corresponding image position in $\hat{C}$. Thus $C(u, v) = \hat{C}(\hat{u}, \hat{v})$, where $C(u, v)$ is the pixel value at $(u, v)$ in image C. The errors in camera position/pose should usually be small,6 and we model the difference between $(u, v)$ and $(\hat{u}, \hat{v})$ using an additive Gaussian model: $\hat{u} = u + \epsilon_u$, $\hat{v} = v + \epsilon_v$, where $\epsilon_u, \epsilon_v \sim \mathcal{N}(0, \sigma^2)$.
For each location $(u, v)$ in an image C, we define the class label to be $z(u, v) = 1\{(u, v)$ is the projection of a grasping point onto the image plane$\}$. (Here, $1\{\cdot\}$ is the indicator function; $1\{\mathrm{True}\} = 1$, $1\{\mathrm{False}\} = 0$.) For a corresponding location $(\hat{u}, \hat{v})$ in image $\hat{C}$, we similarly define $\hat{z}(\hat{u}, \hat{v})$ to indicate whether position $(\hat{u}, \hat{v})$ represents a grasping point in the image $\hat{C}$. Since $(u, v)$ and $(\hat{u}, \hat{v})$ are corresponding pixels in C and $\hat{C}$, we assume $\hat{z}(\hat{u}, \hat{v}) = z(u, v)$. Thus:

$$P(z(u,v) = 1 \mid C) = P(\hat{z}(\hat{u}, \hat{v}) = 1 \mid \hat{C}) \tag{1}$$
$$= \int_{\epsilon_u} \int_{\epsilon_v} P(\epsilon_u, \epsilon_v)\, P(\hat{z}(u + \epsilon_u,\, v + \epsilon_v) = 1 \mid \hat{C})\, d\epsilon_u\, d\epsilon_v \tag{2}$$

Here, $P(\epsilon_u, \epsilon_v)$ is the (Gaussian) density over $\epsilon_u$ and $\epsilon_v$. Further, we use logistic regression to model the probability of a 2-d position $(u + \epsilon_u, v + \epsilon_v)$ in $\hat{C}$ being a good grasping point:

$$P(\hat{z}(u + \epsilon_u,\, v + \epsilon_v) = 1 \mid \hat{C}) = P(\hat{z}(u + \epsilon_u,\, v + \epsilon_v) = 1 \mid x; w) = 1/(1 + e^{-w^T x}) \tag{3}$$

where $x \in \mathbb{R}^{459}$ are the features for the rectangular patch centered at $(u + \epsilon_u, v + \epsilon_v)$ in image $\hat{C}$ (described in Section 2.1). The parameter of this model, $w \in \mathbb{R}^{459}$, is learned using standard maximum likelihood for logistic regression: $w = \arg\max_{w'} \prod_i P(z_i \mid x_i; w')$, where $(x_i, z_i)$ are the synthetic training examples (image patches and labels), as described in Section 2.2. Fig. 5 shows the result of applying the learned logistic regression model to some real (non-synthetic) images. Given two or more images of a new object from different camera positions, we want to infer the 3-d position of the grasping point. (See Fig. 4.) Because logistic regression may have predicted multiple grasping points per image, there is usually ambiguity in the correspondence problem (i.e., which grasping point in one image corresponds to which grasping point in another). To address this while also taking into account the uncertainty in camera position, we propose a probabilistic model over possible grasping points in 3-d space.
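Discretely, the marginalization over camera-position error in Eqn. (2) amounts to smoothing the per-pixel logistic outputs with a Gaussian. A sketch of that step (ours, not the authors' code; it uses a truncated, separable kernel as an approximation):

```python
import numpy as np

def grasp_probability_map(prob_hat, sigma, radius=3):
    """Approximate Eqn. (2): convolve the per-pixel logistic outputs
    P(z_hat = 1 | C_hat) with a truncated separable 2-d Gaussian over the
    position error (eps_u, eps_v) ~ N(0, sigma^2 I)."""
    xs = np.arange(-radius, radius + 1)
    g = np.exp(-xs ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()                                  # normalized 1-d Gaussian weights
    padded = np.pad(prob_hat, radius, mode='edge')
    # separable convolution: smooth along rows, then along columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode='valid'), 0, tmp)
```

A sharp logistic response thus spreads over the pixels the true grasping point could project to under the position error.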
In detail, we discretize the 3-d workspace of the robotic arm into a regular 3-d grid G ⊂ R³, and associate with each grid element j a random variable y_j = 1{grid cell j is a grasping point}. From each camera location i = 1, ..., N, one image is taken. In image C_i, let the ray passing through (u, v) be denoted R_i(u, v), let G_i(u, v) ⊂ G be the set of grid cells through which the ray R_i(u, v) passes, and let r_1, ..., r_K ∈ G_i(u, v) be the indices of those grid cells.

⁶The robot position/orientation error is typically small (position is usually accurate to within 1 mm), but it is still important to model this error. From our experiments (see Section 4), if we set σ² = 0, the triangulation is highly inaccurate, with an average error in predicting the grasping point of 15.40 cm, as compared to 1.85 cm when an appropriate σ² is chosen.

Figure 4: (a) Diagram illustrating rays from two images C1 and C2 intersecting at a grasping point (shown in dark blue). (b) Actual plot in 3-d showing multiple rays from 4 images intersecting at the grasping point. All grid cells with at least one ray passing nearby are colored using a light blue–green–dark blue colormap, where dark blue represents those grid cells which have many rays passing near them. (Best viewed in color.)

We know that if any of the grid cells r_j along the ray is a grasping point, then its projection is a grasping point in the image. More formally, z_i(u, v) = 1 if and only if y_{r_1} = 1 or y_{r_2} = 1 or ... or y_{r_K} = 1. For simplicity, we use an (arguably unrealistic) naive-Bayes-like assumption of independence, and model the relation between P(z_i(u, v) = 1 | C_i) and P(y_{r_1} = 1 or ...
or y_{r_K} = 1 | C_i) as

    P(z_i(u, v) = 0 | C_i) = P(y_{r_1} = 0, ..., y_{r_K} = 0 | C_i) = ∏_{j=1..K} P(y_{r_j} = 0 | C_i)   (4)

Assuming that any grid cell along a ray is equally likely to be a grasping point, this therefore gives

    P(y_{r_j} = 1 | C_i) = 1 − (1 − P(z_i(u, v) = 1 | C_i))^(1/K)   (5)

Next, using another naive-Bayes-like independence assumption, we estimate the probability of a particular grid cell y_j ∈ G being a grasping point as:

    P(y_j = 1 | C_1, ..., C_N) = P(y_j = 1) P(C_1, ..., C_N | y_j = 1) / P(C_1, ..., C_N)
                               = [P(y_j = 1) / P(C_1, ..., C_N)] ∏_{i=1..N} P(C_i | y_j = 1)   (6)
                               = [P(y_j = 1) / P(C_1, ..., C_N)] ∏_{i=1..N} [P(y_j = 1 | C_i) P(C_i) / P(y_j = 1)]
                               ∝ ∏_{i=1..N} P(y_j = 1 | C_i)   (7)

where P(y_j = 1) is the prior probability of a grid cell being a grasping point (set to a constant value in our experiments). Using Equations 2, 3, 5, and 7, we can now compute (up to a constant of proportionality that does not depend on the grid cell) the probability of any grid cell y_j being a valid grasping point, given the images.

2.4 MAP Inference

We infer the best grasping point by choosing the 3-d position (grid cell) that is most likely to be a valid grasping point. More formally, using Eq. 5 and 7, we choose:

    arg max_j log P(y_j = 1 | C_1, ..., C_N) = arg max_j log ∏_{i=1..N} P(y_j = 1 | C_i)   (8)
                                             = arg max_j Σ_{i=1..N} log[1 − (1 − P(z_i(u, v) = 1 | C_i))^(1/K)]   (9)

where P(z_i(u, v) = 1 | C_i) is given by Eq. 2 and 3. A straightforward implementation that explicitly computes the sum above for every single grid cell would give good grasping performance, but would be extremely inefficient (over 110 seconds). For real-time manipulation, we therefore used a more efficient search algorithm in which we explicitly consider only grid cells y_j that at least one ray R_i(u, v) intersects. Further, the counting operation in Eq. 9 is implemented using an efficient counting algorithm that accumulates the sums over all such grid cells by iterating over the N images and their rays R_i(u, v).⁷ This results in an algorithm that identifies a grasping position in 1.2 sec.
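A small sketch of the triangulation in Eqs. (5), (7), and (9), restricted (as in the efficient implementation above) to grid cells hit by at least one ray. The data layout and the floor probability assigned when an image has no ray near a cell are our own assumptions, not details from the paper.

```python
import math
from collections import defaultdict

def map_grasp_cell(rays_per_image, floor=1e-6):
    """rays_per_image: one list per image; each ray is (cell_indices, p_z),
    where p_z = P(z_i(u,v) = 1 | C_i) for the pixel the ray passes through.
    Returns the grid cell maximizing the log-sum of Eq. (9)."""
    log_score = defaultdict(float)
    images_seen = defaultdict(int)
    for image_rays in rays_per_image:
        per_image = {}
        for cells, p_z in image_rays:
            K = len(cells)
            p_cell = 1.0 - (1.0 - p_z) ** (1.0 / K)   # Eq. (5)
            for j in cells:
                per_image[j] = max(per_image.get(j, 0.0), p_cell)
        for j, p in per_image.items():
            log_score[j] += math.log(p)               # one term of Eq. (9)
            images_seen[j] += 1
    n_images = len(rays_per_image)
    # images with no ray near cell j contribute log(floor) (cf. footnote 7)
    for j in log_score:
        log_score[j] += (n_images - images_seen[j]) * math.log(floor)
    return max(log_score, key=log_score.get)
```

A cell seen by rays from several images accumulates several moderate log terms, while a cell seen in only one image is heavily penalized by the floor terms, so rays must agree across views to win.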
⁷Since there are only a few places in an image where P(z_i(u, v) = 1 | C_i) > 0, the counting algorithm is computationally much less expensive than enumerating over all grid cells. In practice, we found that restricting attention to areas where P(z_i(u, v) = 1 | C_i) > 0.1 allows us to further reduce the number of rays to be considered, with no noticeable degradation in performance.

Figure 5: Grasping point classification. The red points in each image show the most likely locations, predicted to be candidate grasping points by our logistic regression model. (Best viewed in color.)

Figure 6: The robotic arm picking up various objects: box, screwdriver, duct tape, wine glass, a solder tool holder, powerhorn, cellphone, martini glass, and a cereal bowl from a dishwasher.

3 Control

Having identified a grasping point, we have to move the end-effector of the robotic arm to it and pick up the object. In detail, our algorithm plans a trajectory in joint-angle space [5] to take the end-effector to an approach position,⁸ and then moves the end-effector in a straight line forward towards the grasping point. Our robotic arm uses two classes of grasps: downward grasps and outward grasps. These arise as a direct consequence of the shape of the workspace of our 5-dof robotic arm (Fig. 6). A "downward" grasp is used for objects that are close to the base of the arm, which the arm grasps by reaching in a downward direction. An "outward" grasp is used for objects further away from the base, for which the arm is unable to reach in a downward direction. The class is determined based on the position of the object and the grasping point.

4 Experiments

4.1 Hardware Setup

Our experiments used a mobile robotic platform called STAIR (STanford AI Robot), on which are mounted a robotic arm as well as other equipment such as our web camera, microphones, etc.
STAIR was built as part of a project whose long-term goal is to create a robot that can navigate home and office environments, pick up and interact with objects and tools, and intelligently converse with and help people in these environments. Our algorithms for grasping novel objects represent a first step towards achieving some of these goals. The robotic arm we used is the Harmonic Arm made by Neuronics: a 4 kg, 5-dof arm equipped with a parallel-plate gripper and having a positioning accuracy of ±1 mm. Our vision system used a low-quality webcam mounted near the end-effector.

⁸The approach position is set to be a fixed distance away from the predicted grasp point.

Table 1: Average absolute error in locating the grasp point for different objects, as well as grasp success rate for picking up the different objects using our robotic arm. (Although training was done on synthetic images, testing was done on the real robotic arm and real objects.)

    OBJECTS SIMILAR TO ONES TRAINED ON                NOVEL OBJECTS
    Tested on           Mean error (cm)  Grasp rate   Tested on             Mean error (cm)  Grasp rate
    Mugs                2.4              75%          Duct tape             1.8              100%
    Pens                0.9              100%         Keys                  1.0              100%
    Wine glass          1.2              100%         Markers/screwdriver   1.1              100%
    Books               2.9              75%          Toothbrush/cutter     1.1              100%
    Eraser/cellphone    1.6              100%         Jug                   1.7              75%
                                                      Translucent box       3.1              75%
                                                      Powerhorn             3.6              50%
                                                      Coiled wire           1.4              100%
    OVERALL             1.80             90%          OVERALL               1.85             87.5%

4.2 Results and Discussion

We first evaluated the predictive accuracy of the algorithm on synthetic images (not contained in the training set); see Fig. 5. The average accuracy for classifying whether a 2-d image patch is a projection of a grasping point was 94.2% (evaluated on a balanced test set). The accuracy in predicting 3-d grasping points was higher, because the probabilistic model for inferring a 3-d grasping point automatically aggregates data from multiple images and therefore "fixes" some of the errors of the individual classifiers. We then tested the algorithm on the physical robotic arm.
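As a quick sanity check, the OVERALL rows of Table 1 are consistent with unweighted means of the per-object entries. The grouping of entries into the two columns below is our reading of the table, not data beyond what Table 1 reports.

```python
# Per-object (mean error in cm, grasp success rate in %) entries from Table 1.
similar = {"mugs": (2.4, 75), "pens": (0.9, 100), "wine glass": (1.2, 100),
           "books": (2.9, 75), "eraser/cellphone": (1.6, 100)}
novel = {"duct tape": (1.8, 100), "keys": (1.0, 100),
         "markers/screwdriver": (1.1, 100), "toothbrush/cutter": (1.1, 100),
         "jug": (1.7, 75), "translucent box": (3.1, 75),
         "powerhorn": (3.6, 50), "coiled wire": (1.4, 100)}

def overall(rows):
    """Unweighted mean error (rounded to 2 decimals) and mean grasp rate."""
    errs = [e for e, _ in rows.values()]
    rates = [r for _, r in rows.values()]
    return round(sum(errs) / len(errs), 2), sum(rates) / len(rates)
```

Under this grouping, overall(similar) reproduces (1.80, 90) and overall(novel) reproduces (1.85, 87.5), matching the table's OVERALL rows.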
Here, the task was to use input from a web camera, mounted on the robot, to pick up an object placed in front of the robot. Recall that the parameters of the vision algorithm were trained from synthetic images of a small set of six object classes, namely books, martini glasses, white-board erasers, coffee mugs, tea cups, and pencils. We performed experiments on coffee mugs, wine glasses (partially filled with water), pencils, books, and erasers, all of different dimensions and appearance than the ones in the training set, as well as a large set of objects from novel object classes, such as rolls of duct tape, markers, a translucent box, jugs, knife-cutters, cellphones, pens, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, etc. (See Fig. 1.) We note that many of these objects are translucent, textureless, and/or reflective, making 3-d reconstruction difficult for standard stereo systems. (Indeed, a carefully calibrated Point Grey stereo system, the Bumblebee BB-COL20, with higher-quality cameras than our web camera, fails to accurately reconstruct the visible portions of 9 out of 12 objects.) In extensive experiments, the algorithm for predicting grasps in images appeared to generalize very well. Despite being tested on images of real (rather than synthetic) objects, including many very different from ones in the training set, it was usually able to identify correct grasp points. We note that the test-set error (in terms of average absolute error in the predicted position of the grasp point) on the real images was only somewhat higher than the error on synthetic images; this shows that the algorithm trained on synthetic images transfers well to real images. (Over all 5 object types used in the synthetic data, the average absolute error was 0.81 cm in the synthetic images; over all 13 real test objects, the average error was 1.83 cm.)
For comparison, neonate humans can grasp simple objects with an average accuracy of 1.5 cm [2]. Table 1 shows the errors in the predicted grasping points on the test set. The table presents results separately for objects which were similar to those we trained on (e.g., coffee mugs) and those which were very dissimilar to the training objects (e.g., duct tape). In addition to reporting errors in grasp positions, we also report the grasp success rate, i.e., the fraction of times the robotic arm was able to physically pick up the object (out of 4 trials). On average, the robot picked up the novel objects 87.5% of the time. For simple objects such as cellphones, wine glasses, keys, toothbrushes, etc., the algorithm performed perfectly (100% grasp success rate). However, grasping objects such as mugs or jugs (by the handle) allows only a narrow trajectory of approach, in which one "finger" is inserted into the handle, so that even a small error in the grasping point identification causes the arm to hit and move the object, resulting in a failed grasp attempt. Although it may be possible to improve the algorithm's accuracy, we believe that these problems can best be solved by using a more advanced robotic arm that is capable of haptic (touch) feedback. In many instances, the algorithm was able to pick up completely novel objects (a strangely shaped power horn, duct tape, a solder tool holder, etc.; see Fig. 1 and 6). Perceiving a transparent wine glass is a difficult problem for standard vision (e.g., stereopsis) algorithms because of reflections, etc. However, as shown in Table 1, our algorithm successfully picked it up 100% of the time; the same success rate holds even if the glass is 2/3 filled with water. Videos showing the robot grasping the objects are available at http://ai.stanford.edu/∼asaxena/learninggrasp/. We also applied our learning algorithm to the task of unloading items from a dishwasher.⁹ Fig.
5 demonstrates that the algorithm correctly identifies the grasp on multiple objects, even in the presence of clutter and occlusion. Fig. 6 shows our robot unloading some items from a dishwasher.

5 Conclusions

We proposed an algorithm to enable a robot to grasp an object that it has never seen before. Our learning algorithm neither tries to build, nor requires, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at which to grasp the object. In our experiments, the algorithm generalizes very well to novel objects and environments, and our robot successfully grasped a wide variety of objects, such as wine glasses, duct tape, markers, a translucent box, jugs, knife-cutters, cellphones, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set.

Acknowledgment

We give warm thanks to Anya Petrovskaya, Morgan Quigley, and Jimmy Zhang for help with the robotic arm control driver software. This work was supported by the DARPA transfer learning program under contract number FA8750-05-2-0249.

References

[1] A. Bicchi and V. Kumar. Robotic grasping and contact: a review. In ICRA, 2000.
[2] T. G. R. Bower, J. M. Broughton, and M. K. Moore. Demonstration of intention in the reaching behaviour of neonate humans. Nature, 228:679–681, 1970.
[3] A. S. Glassner. An Introduction to Ray Tracing. Morgan Kaufmann Publishers, Inc., San Francisco, 1989.
[4] K. Hsiao and T. Lozano-Perez. Imitation learning of whole-body grasps. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2006.
[5] M. T. Mason and J. K. Salisbury. Manipulator grasping and pushing operations. In Robot Hands and the Mechanics of Manipulation. The MIT Press, Cambridge, MA, 1985.
[6] J. Michels, A. Saxena, and A. Y. Ng. High speed obstacle avoidance using monocular vision and reinforcement learning. In ICML, 2005.
[7] A. T. Miller et al.
Automatic grasp planning using shape primitives. In ICRA, 2003.
[8] R. Pelossof et al. An SVM learning approach to robotic grasping. In ICRA, 2004.
[9] J. H. Piater. Learning visual features to predict hand orientations. In ICML Workshop on Machine Learning of Spatial Knowledge, 2002.
[10] R. Platt, A. H. Fagg, and R. Grupen. Reusing schematic grasping policies. In IEEE-RAS International Conference on Humanoid Robots, Tsukuba, Japan, 2005.
[11] A. Saxena, S. H. Chung, and A. Y. Ng. Learning depth from single monocular images. In NIPS 18, 2005.
[12] A. Saxena, J. Driemeyer, J. Kearns, C. Osondu, and A. Y. Ng. Learning to grasp novel objects using vision. In 10th International Symposium on Experimental Robotics (ISER), 2006.
[13] A. Saxena, J. Schulte, and A. Y. Ng. Depth estimation using monocular and stereo cues. In 20th International Joint Conference on Artificial Intelligence (IJCAI), 2007.
[14] H. Schneiderman and T. Kanade. Probabilistic modeling of local appearance and spatial relationships for object recognition. In CVPR, 1998.
[15] T. Shin-ichi and M. Satoshi. Living and working with robots. Nipponia, 2000.

⁹To improve performance, we also used depth-based features: we applied our texture-based features to the depth image obtained from a stereo camera and appended them to the feature vector used in classification. We also appended some hand-labeled real examples of dishwasher images to the training set, to prevent the algorithm from identifying grasping points on background clutter such as dishwasher prongs.
Bayesian Detection of Infrequent Differences in Sets of Time Series with Shared Structure

Jennifer Listgarten†, Radford M. Neal†, Sam T. Roweis†, Rachel Puckrin‡, and Sean Cutler‡
†Department of Computer Science, ‡Department of Botany, University of Toronto, Toronto, Ontario, M5S 3G4
{jenn,radford,roweis}@cs.toronto.edu, rachel puckrin@hotmail.com, cutler@botany.utoronto.ca

Abstract

We present a hierarchical Bayesian model for sets of related, but different, classes of time series data. Our model performs alignment simultaneously across all classes, while detecting and characterizing class-specific differences. During inference the model produces, for each class, a distribution over a canonical representation of the class. These class-specific canonical representations are automatically aligned to one another, preserving common sub-structures and highlighting differences. We apply our model to compare and contrast solenoid valve current data, and also liquid-chromatography–ultraviolet-diode-array data from a study of the plant Arabidopsis thaliana.

1 Aligning Time Series From Different Classes

Many practical problems over a wide range of domains require synthesizing information from several noisy examples of one or more categories in order to build a model which captures common structure and also learns the patterns of variability between categories. In time series analysis, these modeling goals manifest themselves in the tasks of alignment and difference detection. These tasks have diverse applicability, spanning speech & music processing, equipment & industrial plant diagnosis/monitoring, and analysis of biological time series such as microarray & liquid/gas chromatography-based laboratory data (including mass spectrometry and ultraviolet diode arrays).
Although alignment and difference detection have been extensively studied as separate problems in the signal processing and statistical pattern recognition communities, to our knowledge, no existing model performs both tasks in a unified way. Single class alignment algorithms attempt to align a set of time series all together, assuming that variability across different time series is attributable purely to noise. In many real-world situations, however, we have time series from multiple classes (categories) and our prior belief is that there is both substantial shared structure between the class distributions and, simultaneously, systematic (although often rare) differences between them. While in some circumstances (if differences are small and infrequent), single class alignment can be applied to multi-class data, it is much more desirable to have a model which performs true multi-class alignment in a principled way, allowing for more refined and accurate modeling of the data. In this paper, we introduce a novel hierarchical Bayesian model which simultaneously solves the multi-class alignment and difference detection tasks in a unified manner, as illustrated in Figure 1. The single-class alignment shown in this figure coerces the feature in region A for class 1 to be inappropriately collapsed in time, and the overall width of the main broad peak in class 2 to be inappropriately narrowed. In contrast, our multi-class model handles these features correctly. Furthermore, because our algorithm does inference for a fully probabilistic model, we are able to obtain quantitative measures of the posterior uncertainty in our results, which, unlike the point estimates produced by most current approaches, allow us to assess our relative confidence in differences learned by the model. Our basic setup for multi-class alignment assumes the class labels are known for each time series, as is the case for most difference detection problems. 
However, as we discuss at the end of the paper, our model can be extended to the completely unsupervised case.

Figure 1: Nine time series from the NASA valve solenoid current data set [4]. Four belong to a 'normal' class, and five to an 'abnormal' class. On all figures, the horizontal axis is time (or latent time, for figures of latent traces and of observed time series aligned to latent traces), and the vertical axis is current amplitude. Top left: the raw, unaligned data. Middle left: the average of the unaligned data within each class (thick line), with the thin lines showing one standard deviation on either side. Bottom left: the average of the aligned data (over MCMC samples) within each class, using the single-class alignment version of the model (no child traces), again with one-standard-deviation lines shown in the thinner line style. Right: mean and one standard deviation over MCMC samples using the HB-CPM. Top right: parent trace. Middle right: class-specific energy impulses, with the topmost showing the class impulses for the less smooth class. Bottom right: child traces superimposed. Note that if one generates more HB-CPM MCMC samples, the parent cycles between the two classes, since the model has no preference for which class is seen as a modification of the other; the child classes remain stable, however.

2 A Hierarchical Bayesian Continuous Profile Model

Building on our previous Continuous Profile Model (CPM) [7], we propose a Hierarchical Bayesian Continuous Profile Model (HB-CPM) to address the problems of multi-class alignment and difference detection, together, for sets of sibling time series data, that is, replicate time series from several distinct, but related, classes.
The HB-CPM is a generative model that allows simultaneous alignment of time series and also provides aligned canonical representations of each class along with measures of uncertainty on these representations. Inference in the model can be used, for example, to detect and quantify similarities and differences in class composition. The HB-CPM extends the basic CPM in two significant ways: i) it addresses the multi-class rather than the single-class alignment problem, and ii) it uses a fully Bayesian framework rather than a maximum likelihood approach, allowing us to estimate uncertainty in both the alignments and the canonical representations. Our model, depicted in Figure 2, assumes that each observed time series is generated as a noisy transformation of a single, class-specific latent trace. Each latent trace is an underlying, noiseless representation of the set of replicated, observable time series belonging to a single class. An observed time series is generated from this latent trace exactly as in the original CPM, by moving through a sequence of hidden states in a Markovian manner and emitting an observable value at each step, as with an HMM. Each hidden state corresponds to a ‘latent time’ in the latent trace. Thus different choices of hidden state sequences result in different nonlinear transformations of the underlying trace. The HB-CPM uses a separate latent trace for each class, which we call child traces. Crucially, each of these child traces is generated from a single parent trace (also unobserved), which captures the common structure among all of the classes. The joint prior distribution for the child traces in the HB-CPM model can be realized by first sampling a parent trace, and then, for each class, sampling a sparse ‘difference vector’ which dictates how and where each child trace should differ from the common parent. 
Figure 2: Core elements of the HB-CPM, illustrated with two-class data (hidden and observed) drawn from the model's prior: the parent trace and parent impulses, the two child traces with their child impulses, and the class 1 and class 2 observed time series.

2.1 The Prior on Latent Traces

Let the vector x^k = (x^k_1, x^k_2, ..., x^k_N) represent the kth observed scalar time series, and let w_k ∈ 1..C be the class label of this time series. Also, let z = (z_1, z_2, ..., z_M) be the parent trace, and z^c = (z^c_1, z^c_2, ..., z^c_M) be the child trace for the cth class. During inference, posterior samples of z^c form a canonical representation of the observed time series in class c, and z contains their common sub-structure. Ideally, the length of the latent traces, M, would be very large relative to N, so that any experimental data could be mapped precisely to the correct underlying trace point. Aside from the computational impracticalities this would pose, great care would have to be taken to avoid overfitting. Thus in practice we have used M = (2 + ϵ)N (double the resolution, plus some slack on each end) in our experiments, and found this to be sufficient with ϵ < 0.2. Because the resolution of the latent traces is higher than that of the observed time series, experimental time can be made to effectively speed up or slow down by advancing along the latent trace in larger or smaller jumps. As mentioned previously, the child traces in the HB-CPM inherit most of their structure from a common parent trace. The differences between child and parent are encoded in a difference vector for each class, d^c = (d^c_1, d^c_2, ..., d^c_M); normally, most elements of d^c are close to zero. Child traces are obtained by adding this difference vector to the parent trace: z^c = z + d^c.
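The child-trace construction z^c = z + d^c is just an elementwise sum over classes; a minimal sketch (the function name is our own):

```python
import numpy as np

def child_traces(z, d):
    """z: parent trace, shape (M,). d: difference vectors, shape (C, M).
    Returns the C child traces z^c = z + d^c (Section 2.1)."""
    z = np.asarray(z, dtype=float)
    d = np.atleast_2d(np.asarray(d, dtype=float))
    return z[None, :] + d
```

Broadcasting the parent over the class axis keeps the common structure in every child; only the (mostly near-zero) difference vectors move the children apart.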
We model both the parent trace and the class-specific difference vectors with what we call an energy impulse chain: an undirected Markov chain in which neighbouring nodes are encouraged to be similar (i.e., smooth), and in which this smoothness is perturbed by a set of marginally independent energy impulse nodes, one attached to each node in the chain. For the difference vector of the cth class, the corresponding energy impulses are denoted r^c = (r^c_1, r^c_2, ..., r^c_M), and for the parent trace the energy impulses are denoted r = (r_1, r_2, ..., r_M). Conditioned on the energy impulses, the probability of a difference vector is

    p(d^c | r^c, α^c, ρ^c) = (1/Z_{r^c}) exp( −(1/2) [ Σ_{i=1..M−1} (d^c_i − d^c_{i+1})² / α^c + Σ_{i=1..M} (d^c_i − r^c_i)² / ρ^c ] ).   (1)

Here, Z_{r^c} is the normalizing constant for this probability density, α^c controls the smoothness of the chain, and ρ^c controls the influence of the energy impulses. Together, α^c and ρ^c also control the overall tightness of the distribution for d^c. Presently, we set all α^c = α′, and similarly ρ^c = ρ′; that is, these do not differ between classes. Similarly, the conditional probability of the parent trace is

    p(z | r, α, ρ) = (1/Z_r) exp( −(1/2) [ Σ_{i=1..M−1} (z_i − z_{i+1})² / α + Σ_{i=1..M} (z_i − r_i)² / ρ ] ).   (2)

These probability densities are each multivariate Gaussian with tridiagonal precision matrices (corresponding to the Markov nature of the interactions). Each energy impulse for the parent, r_j, is drawn independently from a single univariate Gaussian, N(r_j | μ_par, s_par), whose mean and variance are in turn drawn from a Gaussian and an inverse-gamma, respectively. The class-specific difference vector impulses, however, are drawn from a mixture of two zero-mean Gaussians: one 'no difference' (inlier) Gaussian, and one 'class-difference' (outlier) Gaussian. The means are zero so as to encourage difference vectors to be near zero (and thus child traces to be similar to the parent trace).
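The densities in Eqs. (1) and (2) are Gaussians whose tridiagonal precision matrix follows directly from the two quadratic penalty terms. A small numerical check of that correspondence for the parent-trace case (Eq. 2); the function names and the test values of α and ρ are ours:

```python
import numpy as np

def chain_precision(M, alpha, rho):
    """Tridiagonal precision implied by Eq. (2): diagonal 2/alpha + 1/rho
    (1/alpha + 1/rho at the two ends), off-diagonal -1/alpha."""
    P = np.zeros((M, M))
    for i in range(M):
        P[i, i] = (2.0 if 0 < i < M - 1 else 1.0) / alpha + 1.0 / rho
        if i + 1 < M:
            P[i, i + 1] = P[i + 1, i] = -1.0 / alpha
    return P

def log_density_unnorm(z, r, alpha, rho):
    """Exponent of Eq. (2): -(1/2)[sum (z_i - z_{i+1})^2/alpha
    + sum (z_i - r_i)^2/rho]."""
    z, r = np.asarray(z), np.asarray(r)
    return -0.5 * (np.sum(np.diff(z) ** 2) / alpha + np.sum((z - r) ** 2) / rho)
```

Expanding the quadratic shows the z-dependent part of the exponent equals −(1/2) zᵀPz + zᵀr/ρ up to a z-independent constant, which is exactly the Gaussian form the paper exploits.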
Letting δ^c_j denote the binary latent mixture component indicator variable for each r^c_j,

    p(δ^c_j) = Multinomial(δ^c_j | m^c_in, m^c_out) = (m^c_in)^{δ^c_j} (m^c_out)^{1−δ^c_j}   (3)

    p(r^c_j | δ^c_j) = { N(r^c_j | 0, s²_in),   if δ^c_j = 1
                         N(r^c_j | 0, s²_out),  if δ^c_j = 0 .   (4)

Each Gaussian mixture variance has an inverse-gamma prior, which for the 'no difference' variance, s²_in, is set to have a very low mean (and to be not overly dispersed), so that 'no difference' regions truly have little difference from the parent class; for the 'class-difference' variance, s²_out, the prior is set to have a larger mean, so as to model our belief that substantial class-specific differences do occasionally exist. The priors for α^c, ρ^c, α, ρ are each log-normal (inverse-gamma priors would not be conjugate in this model, so we use log-normals, which are easier to specify). Additionally, the mixing proportions, m^c_in and m^c_out, have a Dirichlet prior, which typically encodes our belief that the proportion of 'class differences' is likely to be small.

2.2 The HMM Portion of the Model

Each observed x^k is modeled as being generated by an HMM conditioned on the appropriate child trace, z^{w_k}. The probability of an observed time series conditioned on a path of hidden time states, τ^k, and the child trace is given by

    p(x^k | z^{w_k}, τ^k) = ∏_{i=1..N} N(x^k_i | z^{w_k}_{τ^k_i} u^k, ξ^k),

where ξ^k is the emission variance for time series k, and the scale factor u^k allows for constant, global, multiplicative rescaling. The HMM transition probabilities T^k(τ^k_{i−1} → τ^k_i) are multinomial within a limited range, with

    p^k(τ_i = a | τ_{i−1} = b) = κ^k_{a−b} for (a − b) ∈ [1, J_τ], and p^k(τ_i = a | τ_{i−1} = b) = 0 for (a − b) < 1 or (a − b) > J_τ,

where J_τ is the maximum allowable number of consecutive time states that can be advanced in a single transition. (Of course, Σ_{i=1..J_τ} κ^k_i = 1.) This multinomial distribution, in turn, has a Dirichlet prior. The HMM emission variances, ξ^k, have an inverse-gamma prior.
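Sampling the class-specific impulses from the prior of Eqs. (3)–(4) is straightforward; the mixing proportion and the two standard deviations below are illustrative values, not the fitted hyperparameters.

```python
import numpy as np

def sample_impulses(M, m_in=0.95, s_in=0.01, s_out=1.0, seed=0):
    """Draw delta_j ~ Bernoulli(m_in) (Eq. 3), then r_j from the inlier
    N(0, s_in^2) or the outlier N(0, s_out^2) Gaussian (Eq. 4).
    s_in and s_out are standard deviations."""
    rng = np.random.default_rng(seed)
    delta = rng.random(M) < m_in          # True -> 'no difference' component
    r = np.where(delta,
                 rng.normal(0.0, s_in, M),
                 rng.normal(0.0, s_out, M))
    return delta, r
```

Most impulses come out tiny (so the child stays close to the parent), with an occasional large 'class-difference' impulse, which is exactly the sparsity the prior is meant to encode.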
Additionally, the prior over the first hidden time state is a uniform distribution over a constant number of states, 1..Q, where Q defines how large a shift can exist between any two observed time series. The prior over each global scaling parameter, u^k, is a log-normal with fixed variance and a mean of zero, which encourages the scaling factors to remain near unity.

3 Posterior Inference of Alignments and Parameters by MCMC

Given a set of observed time series (and their associated class labels), the main computational operation to be performed in the HB-CPM is inference of the latent traces, alignment state paths, and other model parameters. Exact inference is analytically intractable, but we are able to use Markov chain Monte Carlo (MCMC) methods to create an iterative algorithm which, if run for sufficiently long, produces samples from the correct posterior distribution. This posterior provides simultaneous alignments of all observed time series in all classes, and also, crucially, aligned canonical representations of each class, along with error bars on these representations, allowing for a principled approach to difference detection in time series data from different classes. We may also wish to obtain a posterior estimate of some of our parameters conditioned on the data, marginalized over the other parameters. In particular, we might be interested in obtaining the posterior over the hidden time state vectors for each time series, τ^k, which together provide a simultaneous, multi-class alignment of our data. We may, in addition or alternatively, be interested in the posterior of the child traces, z^c, which together characterize how the classes agree and disagree. The former may be of more interest for visualizing aligned observed time series, or for expanding aligned scalar time series out to a related vector time series, while the latter would be of more interest when looking to characterize differences in multi-class, scalar time series data.
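The banded transition distribution of Section 2.2 can be written as a small helper; here kappa is the vector (κ^k_1, ..., κ^k_{J_τ}), and the jump limit J_τ is implied by its length (our own encoding, not the paper's).

```python
def transition_prob(kappa, a, b):
    """p^k(tau_i = a | tau_{i-1} = b): kappa_{a-b} for jumps in 1..J_tau, else 0."""
    jump = a - b
    return kappa[jump - 1] if 1 <= jump <= len(kappa) else 0.0
```

With kappa = [0.5, 0.3, 0.2] (summing to 1, as required), advancing by one latent-time step has probability 0.5, while staying put or jumping backwards has probability 0, which is what forces the alignment to move monotonically forward.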
We group our parameters into blocks, and sample these blocks conditioned on the values of the other parameters (as in Gibbs sampling); when certain conditional distributions are not amenable to direct sampling, we use slice sampling [8]. The scalar conditional distributions for each of μ_par, s_par, m^c_in, m^c_out, δ^c_j, κ^k_i are known distributions, amenable to direct sampling. The conditional distributions for the scalars α^c, ρ^c, α, ρ, and u^k are not tractable, and for each of these we use slice sampling (doubling out and shrinking). The conditional distribution for each of r and r^c is multivariate Gaussian, and we sample directly from each using a Cholesky decomposition of the covariance matrix:

    p(r | z, α, ρ) = (1/Z) p(z | r, α, ρ) p(r) = N(r | c, C)   (5)
    p(r^c | d^c, α^c, ρ^c) = (1/Z) p(d^c | r^c, α^c, ρ^c) p(r^c) = N(r^c | b, B),   (6)

where, using I to denote the identity matrix,

    C = ( S/ρ² + I s_par^{−1} )^{−1},   c = C ( z/ρ + I μ_par/s_par )   (7)
    B = ( S†/(ρ^c)² + (v^c)^{−1} )^{−1},   b = B d^c/ρ^c.   (8)

The diagonal matrix v^c consists of the mixture component variances (s²_in or s²_out). S^{−1} [or S†^{−1}] is the tridiagonal precision matrix of the multivariate normal distribution p(z | r, α, ρ) [or p(d^c | r^c, α^c, ρ^c)], and has entries S^{−1}_{j,j} = 2/α + 1/ρ for j = 2..(M−1), S^{−1}_{j,j} = 1/α + 1/ρ for j = 1, M, and S^{−1}_{j,j+1} = S^{−1}_{j+1,j} = −1/α [or analogously for S†^{−1}]. The computation of C and B can be made more efficient by using the Sherman–Morrison–Woodbury matrix inversion lemma; for example, B = (1/(ρ^c)²) ( S†^{−1} − S†^{−1} (v^c + S†^{−1})^{−1} S†^{−1} ), and we have S^{−1} [or S†^{−1}] almost for free, so we no longer need to invert S [or S†] to obtain it. The conditional distributions of z and z^c are also multivariate Gaussians; however, because of the underlying Markov dependencies, their precision matrices are tridiagonal, and hence we can use belief propagation, in the style of Kalman filtering, followed by a stochastic traceback, to sample from them efficiently.
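Because the relevant precision matrices are tridiagonal, their Cholesky factor is bidiagonal, so both the mean computation and the stochastic draw cost O(M) rather than O(M³). A sketch in our own notation (Q is a tridiagonal precision, h the linear term, so the target is N(Q⁻¹h, Q⁻¹)); this is one standard way to realize the kind of efficient Gaussian sampling the paper describes, not the paper's exact belief-propagation sampler.

```python
import numpy as np

def sample_tridiag_gaussian(diag, off, h, rng):
    """Draw x ~ N(Q^{-1} h, Q^{-1}) for tridiagonal Q (main diagonal `diag`,
    off-diagonal `off`) via Q = L L^T: x solves L^T x = L^{-1} h + eps."""
    M = len(diag)
    l = np.empty(M)          # main diagonal of the Cholesky factor L
    m = np.empty(M - 1)      # sub-diagonal of L
    l[0] = np.sqrt(diag[0])
    for i in range(1, M):
        m[i - 1] = off[i - 1] / l[i - 1]
        l[i] = np.sqrt(diag[i] - m[i - 1] ** 2)
    y = np.empty(M)          # forward solve L y = h
    y[0] = h[0] / l[0]
    for i in range(1, M):
        y[i] = (h[i] - m[i - 1] * y[i - 1]) / l[i]
    eps = rng.standard_normal(M)
    x = np.empty(M)          # back-substitute L^T x = y + eps
    x[-1] = (y[-1] + eps[-1]) / l[-1]
    for i in range(M - 2, -1, -1):
        x[i] = (y[i] + eps[i] - m[i] * x[i + 1]) / l[i]
    return x
```

With eps = 0 the routine returns exactly the conditional mean Q⁻¹h, which gives a convenient deterministic check; the stochastic part adds L⁻ᵀeps, whose covariance is (L Lᵀ)⁻¹ = Q⁻¹ as required.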
Thus each can be sampled in time proportional to M, rather than the M³ required for a general multivariate Gaussian. Lastly, to sample from the conditional distribution of the hidden time vector for each sample, τ^k, we run belief propagation (analogous to the HMM forward–backward algorithm) followed by a stochastic traceback. In our experiments, the parent trace was initialized by averaging one smoothed example from each class. The child traces were initialized to the initial parent trace. The HMM states were initialized by a Viterbi decoding with respect to the initial values of the other parameters. The scaling factors were initialized to unity, and the child energy impulses to zero. MCMC was run for 5000 iterations, with convergence generally realized in less than 1000 iterations.

4 Experiments and Results

We demonstrate use of the HB-CPM on two data sets. The first data set is part of the NASA shuttle valve data [4], which measures valve solenoid current against time for some 'normal' runs and some 'abnormal' runs. Measurements were taken at a rate of 1 ms per sample, with 1000 samples per time series. We subsampled the data by a factor of 7 in time, since it was extremely dense. The results of performing posterior inference in our model on this two-class data set are shown in Figure 1; they nicely match our intuition of what makes a good solution. In our experiments, we also compared our model to a simple 'single-class' version of the HB-CPM, in which we remove the child trace level of the model, letting all observed data in both classes depend directly on one single parent trace. The single-class alignment, while doing a reasonable job, does so by coercing the two classes to look more similar than they should. This is evident in one particular region labeled on the graph and discussed in the legend. Essentially, a single-class alignment causes us to lose class-specific fine detail, precisely the information we seek to retain for difference detection.
The second data set is from a botany study which uses reverse-phase HPLC (high performance liquid chromatography) as a high-throughput screening method to identify genes involved in xenobiotic uptake and metabolism in the model plant Arabidopsis thaliana. Liquid-chromatography (LC) techniques are currently being developed and refined with the aim of providing a robust platform with which to detect differences in biological organisms — be they plants, animals or humans. Detected differences can reveal new fundamental biological insight, or can be applied in more clinical settings. LC-mass spectrometry technology has recently undergone explosive growth in tackling the problem of biomarker discovery — for example, detecting biological markers that can predict treatment outcome or severity of disease, thereby providing the potential for improved health care and better understanding of the mechanisms of drug and disease. In botany, LC-UV data is used to help understand the uptake and metabolism of compounds in plants by looking for differences across experimental conditions, and it is this type of data that we use here. LC separates mixtures of analytes on the basis of some chemical property — hydrophobicity, for reverse-phase LC, used to generate our data. Components of the analyte in our data set were detected as they came off the LC column with a Diode Array Detector (DAD), yielding UV-visible spectra collected at 540 time points (we used the 280 nm band, which is informative for these experiments). We performed background subtraction [2] and then subsampled this data by a factor of four. This is a three-class data set, where the first class is untreated plant extract, followed by two classes consisting of this same plant treated with compounds that were identified as possessing robust uptake in vivo, and, hence, when metabolized, provide a differential LC-UV signal of interest. 
Figure 3 gives an overview of the LC-UV results, while Figure 4 zooms in on a particular area of interest to highlight how subtle differences can be detected by the HB-CPM but not by a single-class alignment scheme. As with the NASA data set, a single-class alignment coerces features that are in fact different across classes to look the same, thereby preventing us from detecting them. Recall that this data set consists of a 'no treatment' plant extract and two 'treatments' of this same plant. Though our model was not informed of these special relationships, it nevertheless elegantly captures this structure by giving almost no energy impulses to the 'no treatment' class, meaning that this class is essentially the parent trace, and allowing the 'treatment' classes to diverge from it, thereby nicely matching the reality of the situation. All averaging over MCMC runs shown is over 4000 samples, after a 1000-sample burn-in period; this took around 3 hours for the NASA data and 5 hours for the LC data set, on machines with dual 3 GHz Pentium 4 processors.

5 Related Work

While much work has been done on time series alignment, and on comparison and clustering of time series, none of this work, to our knowledge, directly addresses the problem presented in this paper: simultaneously aligning and comparing sets of related time series in order to characterize how they differ from one another. The classical algorithm for aligning time series is Dynamic Time Warping (DTW) [10]. DTW works on pairs of time series, aligning one series to a specified reference in a non-probabilistic way, without explicit allowance for differences between related time series. More recently, Gaffney et al. [5] jointly clustered and aligned time series data from different classes. However, their model does not attempt to put time series from different classes into correspondence with one another; only time series within a class are aligned to one another.
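For reference, the classical DTW recursion just mentioned can be sketched in a few lines. This is an illustrative sketch only; the symmetric step pattern and absolute-difference cost used here are common conventions, not something either paper prescribes:

```python
import numpy as np

def dtw_distance(x, y):
    """Classical dynamic time warping cost between two 1-D series.

    Implements the standard recursion
    D[i, j] = |x_i - y_j| + min(D[i-1, j], D[i, j-1], D[i-1, j-1]),
    which non-probabilistically warps one series onto the other.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Note that DTW aligns exactly two series and returns only a cost, which is precisely the limitation the HB-CPM addresses: it has no notion of a shared latent trace or of class-specific differences.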
Bar-Joseph et al. [1] use a similar approach to cluster and align microarray time series data. Ramsay et al. [9] have introduced a curve clustering model, in which a time warping function, h(t), for each time series is learned by way of learning its relative curvature, parameterized with order-one B-spline coefficients. This model accounts for systematic changes in the range and domain of time series in a way that aligns curves with the same fundamental shape. However, their method does not allow for class-specific differences between shapes to be taken into account.

[Figure 3: Seven time series from each of three classes of LC-UV data. On all figures, the horizontal axis is time (or latent time, for figures of latent traces and of observed time series aligned to latent traces); the vertical axis is the log of UV absorbance. Top left: the raw, unaligned data. Middle left: average of the unaligned data within each class in a thick line, with thin lines showing one standard deviation on either side. Bottom left: average of the aligned data within each class, using the single-class alignment version of the model (no child traces), again with one-standard-deviation lines shown in the thinner style. Right: mean and one standard deviation over MCMC samples using the HB-CPM model. Top right: parent trace. Middle right: class-specific energy impulses, with the topmost showing the class impulses for the 'no treatment' class. Bottom right: child traces superimposed. See Figure 4 for a zoom-in around the arrow.]

The anomaly detection (AD) literature deals with related, yet distinct, problems. For example, Chan et al. [3] build a model of one class of time series data (they use the same NASA valve data as in this paper), and then match test data, possibly belonging to another class (e.g. 'abnormal' shuttle valve data), to this model to obtain an anomaly score.
Emphasis in the AD community is on detecting abnormal events relative to a normal baseline, in an on-line manner, rather than on comparing and contrasting two or more classes from a dataset containing examples of all classes. The problem of 'elastic curve matching' is addressed in [6], where the target time series that best matches a query series is found by mapping the problem of finding the best matching subsequence to the problem of finding the cheapest path in a DAG (directed acyclic graph).

6 Discussion and Conclusion

We have introduced a hierarchical Bayesian model to perform detection of rare differences between sets of related time series, a problem which arises across a wide range of domains. By training our model, we obtain the posterior distribution over a set of class-specific canonical representations, which are aligned in a way that preserves their common sub-structures yet retains and highlights important differences. This model can be extended in several interesting and useful ways. One small modification could be useful for the LC-UV data set presented in this paper, in which one of the classes was 'no treatment' while the other two were each a different 'treatment': we might model the 'no treatment' class as the parent trace, and each of the treatments as a child trace, so that the direct comparison of interest would be made more explicit. Another direction would be to apply the HB-CPM in a completely unsupervised setting, where we learn not only the canonical class representations but also obtain the posterior over the class labels by introducing a latent class indicator variable. Lastly, one could use a model with cyclical latent traces to model cyclic data such as electrocardiogram (ECG) and climate data. In such a model, an observed trace generated by the model would be allowed to cycle back to the start of the latent trace, and the smoothness constraints on the trace would be extended to apply to its beginning and end, coercing these to be similar. Such a model would allow one to do anomaly detection in cyclic data, as well as segmentation.

[Figure 4: Left: a zoom-in of the data displayed in Figure 3, from the region of time 100-150 (labeled in that figure in latent time, not observed time). Top left: mean and standard deviation of the unaligned data. Middle left: mean and standard deviation of the single-class alignment. Bottom left: mean and standard deviation of the child traces from the HB-CPM. A case in point of a difference that can be detected with the HB-CPM but not in the raw or single-class aligned data is the difference occurring at time point 127. Right: the mean and standard deviation of the child energy impulses, with dashed lines showing correspondences with the child traces in the bottom left panel.]

Acknowledgments: Thanks to David Ross and Roland Memisevic for useful discussions, and to Ben Marlin for his Matlab slice sampling code.

References
[1] Z. Bar-Joseph, G. Gerber, D. K. Gifford, T. Jaakkola, and I. Simon. A new approach to analyzing gene expression time series data. In RECOMB, pages 39–48, 2002.
[2] H. Boelens, R. Dijkstra, P. Eilers, F. Fitzpatrick, and J. Westerhuis. New background correction method for liquid chromatography with diode array detection, infrared spectroscopic detection and Raman spectroscopic detection. Journal of Chromatography A, 1057:21–30, 2004.
[3] P. K. Chan and M. V. Mahoney. Modeling multiple time series for anomaly detection. In ICDM, 2005.
[4] B. Ferrell and S. Santuro. NASA shuttle valve data. http://www.cs.fit.edu/∼pkc/nasa/data/, 2005.
[5] S. J. Gaffney and P. Smyth. Joint probabilistic curve clustering and alignment. In Advances in Neural Information Processing Systems 17, 2005.
[6] L. Latecki, V. Megalooikonomou, Q. Wang, R. Lakaemper, C. Ratanamahatana, and E. Keogh. Elastic partial matching of time series, 2005.
[7] J. Listgarten, R. M. Neal, S. T. Roweis, and A. Emili. Multiple alignment of continuous time series. In Advances in Neural Information Processing Systems 17, 2005.
[8] R. M. Neal. Slice sampling. Annals of Statistics, 31:705–767, 2003.
[9] J. Ramsay and X. Li. Curve registration. Journal of the Royal Statistical Society (B), 60, 1998.
[10] H. Sakoe and S. Chiba. Dynamic programming algorithm for spoken word recognition. Readings in Speech Recognition, pages 159–165, 1990.
2006
Training Conditional Random Fields for Maximum Labelwise Accuracy

Samuel S. Gross, Computer Science Department, Stanford University, Stanford, CA, USA. ssgross@cs.stanford.edu
Olga Russakovsky, Computer Science Department, Stanford University, Stanford, CA, USA. olga@cs.stanford.edu
Chuong B. Do, Computer Science Department, Stanford University, Stanford, CA, USA. chuongdo@cs.stanford.edu
Serafim Batzoglou, Computer Science Department, Stanford University, Stanford, CA, USA. serafim@cs.stanford.edu

Abstract

We consider the problem of training a conditional random field (CRF) to maximize per-label predictive accuracy on a training set, an approach motivated by the principle of empirical risk minimization. We give a gradient-based procedure for minimizing an arbitrarily accurate approximation of the empirical risk under a Hamming loss function. In experiments with both simulated and real data, our optimization procedure gives significantly better testing performance than several current approaches for CRF training, especially in situations of high label noise.

1 Introduction

Sequence labeling, the task of assigning labels $y = y_1, \ldots, y_L$ to an input sequence $x = x_1, \ldots, x_L$, is a machine learning problem of great theoretical and practical interest that arises in diverse fields such as computational biology, computer vision, and natural language processing. Conditional random fields (CRFs) are a class of discriminative probabilistic models designed specifically for sequence labeling tasks [1]. CRFs define the conditional distribution $P_w(y \mid x)$ as a function of features relating labels to the input sequence. Ideally, training a CRF involves finding a parameter set $w$ that gives high accuracy when labeling new sequences. In some cases, however, simply finding parameters that give the best possible accuracy on training data (known as empirical risk minimization [2]) can be difficult.
In particular, if we wish to minimize Hamming loss, which measures the number of incorrect labels, gradient-based optimization methods cannot be applied directly.¹ Consequently, surrogate optimization problems, such as maximum likelihood or maximum margin training, are solved instead. In this paper, we describe a training procedure that addresses the problem of minimizing empirical per-label risk for CRFs. Specifically, our technique attempts to minimize a smoothed approximation of the Hamming loss incurred by the maximum expected accuracy decoding (i.e., posterior decoding) algorithm on the training set. The degree of approximation is controlled by a parameterized function Q(·) which trades off between the accuracy of the approximation and the smoothness of the objective. In the limit as Q(·) approaches the step function, the optimization objective converges to the empirical risk minimization criterion for Hamming loss.

Footnote 1: The gradient of the optimization objective is everywhere zero (except at points where the objective is discontinuous), because a sufficiently small change in parameters will not change the predicted labeling.

2 Preliminaries

2.1 Definitions

Let $\mathcal{X}^L$ denote an input space of all possible input sequences, and let $\mathcal{Y}^L$ denote an output space of all possible output labels. Furthermore, for a pair of consecutive labels $y_{j-1}$ and $y_j$, an input sequence $x$, and a label position $j$, let $f(y_{j-1}, y_j, x, j) \in \mathbb{R}^n$ be a vector-valued function; we call $f$ the feature mapping of the CRF.
A conditional random field (CRF) defines the conditional probability of a labeling (or parse) $y$ given an input sequence $x$ as
\[
P_w(y \mid x) \;=\; \frac{\exp\bigl(\sum_{j=1}^{L} w^T f(y_{j-1}, y_j, x, j)\bigr)}{\sum_{y' \in \mathcal{Y}^L} \exp\bigl(\sum_{j=1}^{L} w^T f(y'_{j-1}, y'_j, x, j)\bigr)} \;=\; \frac{\exp\bigl(w^T F_{1,L}(x, y)\bigr)}{Z(x)}, \tag{1}
\]
where we define the summed feature mapping $F_{a,b}(x, y) = \sum_{j=a}^{b} f(y_{j-1}, y_j, x, j)$, and where the partition function $Z(x) = \sum_{y'} \exp\bigl(w^T F_{1,L}(x, y')\bigr)$ ensures that the distribution is normalized for any set of model parameters $w$.²

2.2 Maximum a posteriori vs. maximum expected accuracy parsing

Given a CRF with parameters $w$, the sequence labeling task is to determine values for the labels $y$ of a new input sequence $x$. One approach is to choose the most likely, or maximum a posteriori, labeling, $\arg\max_y P_w(y \mid x)$. This can be computed efficiently using the Viterbi algorithm. An alternative approach, which seeks to maximize the per-label accuracy of the prediction rather than the joint probability of the entire parse, chooses the most likely (i.e., highest posterior probability) value for each label separately. Note that
\[
\arg\max_y \sum_{j=1}^{L} P_w(y_j \mid x) \;=\; \arg\max_y \; \mathbb{E}_{y'}\Bigl[\sum_{j=1}^{L} \mathbf{1}\{y'_j = y_j\}\Bigr], \tag{2}
\]
where $\mathbf{1}\{\text{condition}\}$ denotes the usual indicator function whose value is 1 when the condition is true and 0 otherwise, and where the expectation is taken with respect to the conditional distribution $P_w(y' \mid x)$. From this, we see that maximum expected accuracy parsing chooses the parse with the maximum expected number of correct labels. In practice, maximum expected accuracy parsing often yields more accurate results than Viterbi parsing (on a per-label basis) [3, 4, 5]. Here, we restrict our focus to maximum expected accuracy parsing procedures and seek training criteria which optimize the performance of a CRF-based maximum expected accuracy parser.

3 Training conditional random fields

Usually, CRFs are trained in the batch setting, where a complete set $D = \{(x^{(t)}, y^{(t)})\}_{t=1}^{m}$ of training examples is available up front.
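Before turning to training, the two decoding rules of Section 2.2 can be made concrete by brute-force enumeration on a toy chain. This is our own illustrative sketch, directly implementing equations (1) and (2); real decoders use the Viterbi and forward-backward algorithms instead of enumerating all $|\mathcal{Y}|^L$ parses:

```python
import itertools
import numpy as np

def decode_brute_force(w, f, labels, L, x):
    """Compare MAP and maximum expected accuracy (MEA) decoding for a tiny
    chain CRF by enumerating all labelings, per equations (1) and (2).
    `f` is a user-supplied feature mapping f(y_prev, y_j, x, j)."""
    y0 = labels[0]                         # special initial label (footnote 2)
    parses = list(itertools.product(labels, repeat=L))
    scores = np.array([
        np.exp(sum(float(w @ f(prev, yj, x, j))
                   for j, (prev, yj) in enumerate(zip((y0,) + y, y), start=1)))
        for y in parses])
    probs = scores / scores.sum()          # divide by the partition function Z(x)
    map_parse = parses[int(np.argmax(probs))]
    mea_parse = []
    for j in range(L):                     # per-position marginal P_w(y_j | x)
        marginals = {lab: sum(p for y, p in zip(parses, probs) if y[j] == lab)
                     for lab in labels}
        mea_parse.append(max(marginals, key=marginals.get))
    return map_parse, tuple(mea_parse)
```

The MAP parse maximizes the joint probability of the whole labeling, while the MEA parse takes the per-position argmax of the marginals; the two can disagree when probability mass is spread over many parses that agree locally but not globally.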
In this case, training amounts to numerical optimization of a fixed objective function R(w : D). A good objective function is one whose optimal value leads to parameters that perform well, in an application-dependent sense, on previously unseen testing examples. While this can be difficult to achieve without knowing the contents of the testing set, one can, under certain conditions, guarantee that the accuracy of a learned CRF on an unseen testing set is probably not much worse than its accuracy on the training set. In particular, when assuming independently and identically distributed (i.i.d.) training and testing examples, there exists a probabilistic bound on the difference between empirical risk and generalization error [2]. As long as enough training data are available (relative to model complexity), strong training set performance will imply, with high probability, similarly strong testing set performance. Unfortunately, minimizing empirical risk for a CRF is a very difficult task. Loss functions based on usual notions of per-label accuracy (such as Hamming loss) are typically not only nonconvex but also not amenable to optimization by methods that make use of gradient information.

Footnote 2: We assume for simplicity the existence of a special initial label $y_0$.

In this section, we briefly describe three previous approaches for CRF training which optimize surrogate loss functions in lieu of the empirical risk. Then, we consider a new method for gradient-based CRF training oriented more directly toward optimizing predictive performance on the training set. Our method minimizes an arbitrarily accurate approximation of empirical risk, where the loss function is defined as the number of labels predicted incorrectly by maximum expected accuracy parsing.

3.1 Previous objective functions

3.1.1 Conditional log-likelihood

Conditional log-likelihood is the most commonly used objective function for training conditional random fields.
In this criterion, the loss suffered for a training example $(x^{(t)}, y^{(t)})$ is the negative log probability of the true parse according to the model, plus a regularization term:
\[
R_{\mathrm{CLL}}(w : D) \;=\; C\|w\|^2 \;-\; \sum_{t=1}^{m} \log P_w(y^{(t)} \mid x^{(t)}). \tag{3}
\]
The convexity and differentiability of conditional log-likelihood ensure that gradient-based optimization procedures (e.g., conjugate gradient or L-BFGS [6]) will not converge to suboptimal local minima of the objective function. However, there is no guarantee that the parameters obtained by conditional log-likelihood training will lead to the best per-label predictive accuracy, even on the training set. For one, maximum likelihood training explicitly considers only the probability of exact training parses. Other parses, even highly accurate ones, are ignored except insofar as they share common features with the exact parse. In addition, the log-likelihood of a parse is largely determined by the sections which are most difficult to correctly label. This can be a weakness in problems with significant label noise (i.e., incorrectly labeled training examples).

3.1.2 Pointwise conditional log likelihood

Kakade et al. investigated an alternative nonconvex training objective for CRFs [7, 8] which considers separately the posterior label probabilities at each position of each training sequence. In this approach, one maximizes not the probability of an entire parse, but instead the product of the posterior probabilities (or equivalently, the sum of log posteriors) for each predicted label:
\[
R_{\mathrm{pointwise}}(w : D) \;=\; C\|w\|^2 \;-\; \sum_{t=1}^{m} \sum_{j=1}^{L} \log P_w(y^{(t)}_j \mid x^{(t)}). \tag{4}
\]
By using pointwise posterior probabilities, this objective function takes into account suboptimal parses and focuses on finding a model whose posteriors match well with the training labels, even though the model may not provide a good fit for the training data as a whole. Nevertheless, pointwise logloss is fundamentally quite different from Hamming loss.
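A toy numerical contrast between objectives (3) and (4), using a hypothetical joint posterior over two binary labels (the distribution and the dropped regularization term are our own simplifications for illustration):

```python
import math

# Hypothetical joint distribution P_w(y | x) over two binary labels;
# the true parse is y = (0, 0). Numbers are illustrative only.
joint = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}

# Objective (3), without regularization: -log P_w(y | x) of the exact parse.
loss_cll = -math.log(joint[(0, 0)])

# Objective (4): -sum_j log P_w(y_j | x), using the marginal of each true label.
p_y1 = joint[(0, 0)] + joint[(0, 1)]   # P_w(y_1 = 0 | x) = 0.7
p_y2 = joint[(0, 0)] + joint[(1, 0)]   # P_w(y_2 = 0 | x) = 0.6
loss_pointwise = -(math.log(p_y1) + math.log(p_y2))

# The pointwise objective credits the partially correct parses (0, 1) and
# (1, 0) through the marginals, so its loss is smaller here.
assert loss_pointwise < loss_cll
```

This makes concrete how (4) rewards models whose posteriors agree with the labels position by position, even when no single parse carries much mass.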
A training procedure based on pointwise log likelihood, for example, would prefer to reduce the posterior probability for a correct label from 0.6 to 0.4 in return for improving the posterior probability for a hopelessly incorrect label from 0.0001 to 0.01. Thus, the objective retains the difficulties of the regular conditional log likelihood when dealing with difficult-to-classify outlier labels.

3.1.3 Maximum margin training

The notion of Hamming distance is incorporated directly in the maximum margin training procedures of Taskar et al. [9],
\[
R_{\mathrm{max\,margin}}(w : D) \;=\; C\|w\|^2 \;+\; \sum_{t=1}^{m} \max\Bigl(0,\; \max_{y \in \mathcal{Y}^L} \bigl[\Delta(y, y^{(t)}) - w^T \delta F_{1,L}(x^{(t)}, y)\bigr]\Bigr), \tag{5}
\]
and Tsochantaridis et al. [10],
\[
R_{\mathrm{max\,margin}}(w : D) \;=\; C\|w\|^2 \;+\; \sum_{t=1}^{m} \max\Bigl(0,\; \max_{y \in \mathcal{Y}^L} \Delta(y, y^{(t)}) \bigl[1 - w^T \delta F_{1,L}(x^{(t)}, y)\bigr]\Bigr). \tag{6}
\]
Here, $\Delta(y, y^{(t)})$ denotes the Hamming distance between $y$ and $y^{(t)}$, and $\delta F_{1,L}(x^{(t)}, y) = F_{1,L}(x^{(t)}, y^{(t)}) - F_{1,L}(x^{(t)}, y)$. In the former formulation, loss is incurred when the Hamming distance between the correct parse $y^{(t)}$ and a candidate parse $y$ exceeds the obtained classification margin between $y^{(t)}$ and $y$. In the latter formulation, the amount of loss for a margin violation scales linearly with the Hamming distance between $y^{(t)}$ and $y$. Both cases lead to convex optimization problems in which the loss incurred for a particular training example is an upper bound on the Hamming loss between the correct parse and its highest scoring alternative. In practice, however, this upper bound can be quite loose; thus, parameters obtained via a maximum margin framework may be poor minimizers of empirical risk.

3.2 Training for maximum labelwise accuracy

In each of the likelihood-based or margin-based objective functions introduced in the previous subsections, difficulties arose due to the mismatch between the chosen objective function and our notion of empirical risk as defined by Hamming loss.
In this section, we demonstrate how to construct a smooth objective function for maximum expected accuracy parsing which more closely approximates our desired notion of empirical risk.

3.2.1 The labelwise accuracy objective function

Consider the following objective function:
\[
R(w : D) \;=\; \sum_{t=1}^{m} \sum_{j=1}^{L} \mathbf{1}\Bigl\{\, y^{(t)}_j = \arg\max_{y_j} P_w(y_j \mid x^{(t)}) \,\Bigr\}. \tag{7}
\]
Maximizing this objective is equivalent to minimizing empirical risk under the Hamming loss (i.e., the number of mispredicted labels). To obtain a smooth approximation to this objective function, we can express the condition that the algorithm predicts the correct label for $y^{(t)}_j$ in terms of the posterior probabilities of correct and incorrect labels as
\[
P_w(y^{(t)}_j \mid x^{(t)}) \;-\; \max_{y_j \neq y^{(t)}_j} P_w(y_j \mid x^{(t)}) \;>\; 0. \tag{8}
\]
Substituting equation (8) back into equation (7) and replacing the indicator function with a generic function Q(·), we obtain
\[
R_{\mathrm{labelwise}}(w : D) \;=\; \sum_{t=1}^{m} \sum_{j=1}^{L} Q\Bigl( P_w(y^{(t)}_j \mid x^{(t)}) \;-\; \max_{y_j \neq y^{(t)}_j} P_w(y_j \mid x^{(t)}) \Bigr). \tag{9}
\]
When Q(·) is chosen to be the indicator function, $Q(x) = \mathbf{1}\{x > 0\}$, we recover the original objective. By choosing a nicely behaved form for Q(·), however, we obtain a new objective that is easier to optimize. Specifically, we set Q(x) to be sigmoidal with parameter λ (see Figure 2a):
\[
Q(x; \lambda) \;=\; \frac{1}{1 + \exp(-\lambda x)}. \tag{10}
\]
As $\lambda \to \infty$, $Q(x; \lambda) \to \mathbf{1}\{x > 0\}$, so $R_{\mathrm{labelwise}}(w : D)$ approaches the objective function defined in (7). However, $R_{\mathrm{labelwise}}(w : D)$ is smooth for any finite $\lambda > 0$. Because of this, we are free to use gradient-based optimization to maximize our new objective function. As λ gets larger, the quality of our approximation of the ideal Hamming loss objective improves; however, the approximation itself also becomes less smooth and perhaps more difficult to optimize as a result.
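The behavior of the sigmoid surrogate in equation (10) can be checked directly. A minimal sketch; the margin and λ values below are arbitrary illustrations:

```python
import math

def Q(x, lam):
    """Sigmoid surrogate for the step function 1{x > 0}, equation (10)."""
    return 1.0 / (1.0 + math.exp(-lam * x))

# For a fixed positive margin x = P(correct) - max P(incorrect), Q approaches
# 1 as lambda grows; for a negative margin it approaches 0. At x = 0, Q = 0.5.
margin = 0.1
values = [Q(margin, lam) for lam in (1, 15, 100)]
assert values[0] < values[1] < values[2] < 1.0
assert Q(-margin, 100) < 0.01 and Q(0.0, 15) == 0.5
```

The monotone approach to the 0/1 indicator is exactly the sense in which objective (9) converges to the exact accuracy count (7) as λ grows.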
Thus, the value of λ controls the trade-off between the accuracy of the approximation and the ease of optimization.³

Footnote 3: In particular, note that the method of using Q(x; λ) to approximate the step function is analogous to the log-barrier method used in convex optimization for approximating inequality constraints using a smooth function as a surrogate for the infinite-height barrier. As with log-barrier optimization, performing the maximization of $R_{\mathrm{labelwise}}(w : D)$ using a small value of λ, and gradually increasing λ while using the previous solution as a starting point for the new optimization, provides a viable technique for maximizing the labelwise accuracy objective.

3.2.2 The labelwise accuracy objective gradient

We now present an algorithm for efficiently calculating the gradient of the approximate accuracy objective. For a fixed parameter set $w$, let $\tilde{y}^{(t)}_j$ denote the label other than $y^{(t)}_j$ that has the maximum posterior probability at position $j$. Also, for notational convenience, let $y_{1:j}$ denote the variables $y_1, \ldots, y_j$. Differentiating equation (9), we compute $\nabla_w R_{\mathrm{labelwise}}(w : D)$ to be⁴
\[
\sum_{t=1}^{m} \sum_{j=1}^{L} Q'\Bigl( P_w(y^{(t)}_j \mid x^{(t)}) - P_w(\tilde{y}^{(t)}_j \mid x^{(t)}) \Bigr) \, \nabla_w \Bigl[ P_w(y^{(t)}_j \mid x^{(t)}) - P_w(\tilde{y}^{(t)}_j \mid x^{(t)}) \Bigr]. \tag{11}
\]

Footnote 4: Technically, the max function is not differentiable. One could replace the max with a softmax function; assuming unique probabilities for each candidate label, the gradient approaches (11) as the softmax function approaches the max. As noted in [11], the approximation used here does not cause problems in practice.

Using equation (1), the inner term, $P_w(y^{(t)}_j \mid x^{(t)}) - P_w(\tilde{y}^{(t)}_j \mid x^{(t)})$, is equal to
\[
\frac{1}{Z(x^{(t)})} \sum_{y_{1:L}} \Bigl( \mathbf{1}\{y_j = y^{(t)}_j\} - \mathbf{1}\{y_j = \tilde{y}^{(t)}_j\} \Bigr) \cdot \exp\bigl( w^T F_{1,L}(x^{(t)}, y) \bigr). \tag{12}
\]
Applying the quotient rule allows us to compute the gradient of equation (12), whose complete form we omit for lack of space. Most of the terms involved in the gradient are easy to compute using the standard forward and backward matrices used for regular CRF inference, which we define here as
\[
\alpha(i, j) \;=\; \sum_{y_{1:j}} \mathbf{1}\{y_j = i\} \cdot \exp\bigl( w^T F_{1,j}(x^{(t)}, y) \bigr), \tag{13}
\]
\[
\beta(i, j) \;=\; \sum_{y_{j:L}} \mathbf{1}\{y_j = i\} \cdot \exp\bigl( w^T F_{j+1,L}(x^{(t)}, y) \bigr). \tag{14}
\]
The two difficult terms that do not follow from the forward and backward matrices have the form
\[
\sum_{k=1}^{L} \sum_{y_{1:L}} Q'_k(w) \cdot \mathbf{1}\{y_k = y^{\star}_k\} \cdot F_{1,L}(x^{(t)}, y) \cdot \exp\bigl( w^T F_{1,L}(x^{(t)}, y) \bigr), \tag{15}
\]
where $Q'_j(w) = Q'\bigl( P_w(y^{(t)}_j \mid x^{(t)}) - P_w(\tilde{y}^{(t)}_j \mid x^{(t)}) \bigr)$ and $y^{\star}$ is either $y^{(t)}$ or $\tilde{y}^{(t)}$. To efficiently compute terms of this type, we define
\[
\alpha^{\star}(i, j) \;=\; \sum_{k=1}^{j} \sum_{y_{1:j}} \mathbf{1}\{y_k = y^{\star}_k \wedge y_j = i\} \cdot Q'_k(w) \cdot \exp\bigl( w^T F_{1,j}(x^{(t)}, y) \bigr), \tag{16}
\]
\[
\beta^{\star}(i, j) \;=\; \sum_{k=j+1}^{L} \sum_{y_{j:L}} \mathbf{1}\{y_k = y^{\star}_k \wedge y_j = i\} \cdot Q'_k(w) \cdot \exp\bigl( w^T F_{j+1,L}(x^{(t)}, y) \bigr). \tag{17}
\]
Like the forward and backward matrices, $\alpha^{\star}(i, j)$ and $\beta^{\star}(i, j)$ may be calculated via dynamic programming. In particular, we have the base cases $\alpha^{\star}(i, 1) = \mathbf{1}\{i = y^{\star}_1\} \cdot \alpha(i, 1) \cdot Q'_1(w)$ and $\beta^{\star}(i, L) = 0$. The remaining entries are given by the following recurrences:
\[
\alpha^{\star}(i, j) \;=\; \sum_{i'} \Bigl( \alpha^{\star}(i', j-1) + \mathbf{1}\{i = y^{\star}_j\} \cdot \alpha(i', j-1) \cdot Q'_j(w) \Bigr) \cdot e^{w^T f(i', i, x^{(t)}, j)}, \tag{18}
\]
\[
\beta^{\star}(i, j) \;=\; \sum_{i'} \Bigl( \beta^{\star}(i', j+1) + \mathbf{1}\{i' = y^{\star}_{j+1}\} \cdot \beta(i', j+1) \cdot Q'_{j+1}(w) \Bigr) \cdot e^{w^T f(i, i', x^{(t)}, j+1)}. \tag{19}
\]
It follows that equation (15) is equal to
\[
\sum_{j=1}^{L} \sum_{i'} \sum_{i} f(i', i, x^{(t)}, j) \cdot \exp\bigl( w^T f(i', i, x^{(t)}, j) \bigr) \cdot (A + B), \tag{20}
\]
where
\[
A \;=\; \alpha^{\star}(i', j-1) \cdot \beta(i, j) + \alpha(i', j-1) \cdot \beta^{\star}(i, j), \tag{21}
\]
\[
B \;=\; \mathbf{1}\{i = y^{\star}_j\} \cdot \alpha(i', j-1) \cdot \beta(i, j) \cdot Q'_j(w). \tag{22}
\]
Thus, the algorithm above computes the gradient in $O(|\mathcal{Y}|^2 \cdot L)$ time and $O(|\mathcal{Y}| \cdot L)$ space. Since $\alpha^{\star}(i, j)$ and $\beta^{\star}(i, j)$ must be computed for both $y^{\star} = y^{(t)}$ and $y^{\star} = \tilde{y}^{(t)}$, the resulting total gradient computation takes approximately three times as long and uses twice the memory of the analogous computation for the log likelihood gradient.⁵
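The standard matrices (13)-(14) can be sketched for a generic chain with potentials ψ_j(i', i) = exp(w^T f(i', i, x, j)). This is our own minimal implementation; the α★/β★ recurrences (18)-(19) follow the same dynamic-programming pattern with the extra Q′-weighted terms. A useful sanity check is that Σ_i α(i, j)·β(i, j) equals the partition function Z(x) at every position j:

```python
import numpy as np

def forward_backward(psi):
    """Unnormalized forward/backward matrices, equations (13)-(14).

    psi[j - 1][i_prev, i] = exp(w^T f(i_prev, i, x, j)) for j = 1..L; row 0 of
    psi[0] plays the role of transitions out of the special initial label y_0.
    """
    L, K = len(psi), psi[0].shape[1]
    alpha = np.zeros((L, K))
    beta = np.ones((L, K))                  # beta(i, L) sums the empty suffix
    alpha[0] = psi[0][0]                    # length-1 prefixes from y_0
    for j in range(1, L):
        alpha[j] = alpha[j - 1] @ psi[j]    # forward recursion
    for j in range(L - 2, -1, -1):
        beta[j] = psi[j + 1] @ beta[j + 1]  # backward recursion
    return alpha, beta

# Posterior marginals follow as P(y_j = i | x) = alpha[j, i] * beta[j, i] / Z(x),
# with Z(x) = alpha[L - 1].sum().
```

Each position's α·β product sums over all complete parses passing through that label, which is why the per-position check against Z(x) holds.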
Footnote 5: We note that the "trick" used in the formulation of approximate accuracy is applicable to a variety of other forms and arguments for Q(·). In particular, if we change its argument to $P_w(y^{(t)}_j \mid x^{(t)})$, letting Q(x) = log(x) gives the pointwise logloss formulation of Kakade et al. (see Section 3.1.2), while letting Q(x) = x gives an objective function equal to expected accuracy. Computing the gradient for these objectives involves straightforward modifications of the recurrences presented here.

[Figure 1: Panel (a) shows the state diagram for the hidden Markov model used for the simulation experiments. The HMM consists of two states ('C' and 'I') with transition probabilities labeled on the arrows, and emission probabilities (over the alphabet {A, C, G, T}) written inside each state. Panel (b) shows the proportion of state labels correctly predicted by the learned models at varying levels of label noise (noise parameter p on the horizontal axis). The error bars show 95% confidence intervals on the mean generalization performance.]

4 Results

4.1 Simulation experiments

To test the performance of the approximate labelwise accuracy objective function, we first ran simulation experiments in order to assess the robustness of several different learning algorithms in problems with a high degree of label noise. In particular, we generated sequences of length 1,000,000 from a simple two-state hidden Markov model (see Figure 1a). Given a fixed noise parameter p ∈ [0, 1], we generated training sequence labels by flipping each run of consecutive 'C' hidden state labels to 'I' with probability p. After learning parameters, we then tested each algorithm on an uncorrupted testing sequence generated by the original HMM.
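The label-noise process used in the simulations (flipping whole runs of 'C' labels) can be sketched as follows. This is our own illustrative helper, not the authors' code:

```python
import random

def corrupt_labels(labels, p, rng):
    """Flip each maximal run of consecutive 'C' labels to 'I' with
    probability p, as in the simulation's label-noise process."""
    out = list(labels)
    i = 0
    while i < len(out):
        if out[i] == 'C':
            j = i
            while j < len(out) and out[j] == 'C':
                j += 1                      # [i, j) is a maximal run of 'C'
            if rng.random() < p:
                out[i:j] = ['I'] * (j - i)  # flip the whole run
            i = j
        else:
            i += 1
    return ''.join(out)
```

Flipping entire runs, rather than individual positions, produces structured noise: whole segments of the training labeling become systematically wrong, which is the regime in which likelihood-based training suffers most.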
Figure 1b indicates the proportion of labels correctly identified by four different methods at varying noise levels: a generative model trained with joint log-likelihood, a CRF trained with conditional log-likelihood, the maximum-margin method of Taskar et al. [9] as implemented in the SVMstruct package [10],⁶ and a CRF trained with maximum labelwise accuracy. No method outperforms maximum labelwise accuracy at any noise level, and for noise levels above 0.05, maximum labelwise accuracy performs significantly better than the other methods. For each method, we used the decoding algorithm (Viterbi or MEA) that led to the best performance: the maximum margin method performed best when Viterbi decoding was used, while the other three methods had better performance with MEA decoding. Interestingly, with no noise present, maximum margin training with Viterbi decoding performed significantly better than generative training with Viterbi decoding (0.749 vs. 0.710), but this was still much worse than generative training with MEA decoding (0.796).

Footnote 6: We were unable to get SVMstruct to converge on our test problem when using the Tsochantaridis et al. maximum margin formulation.

4.2 Gene prediction experiments

To test the performance of maximum labelwise accuracy training on a large-scale, real-world problem, we trained a CRF to predict protein-coding genes in the genome of the fruit fly Drosophila melanogaster. The CRF labeled each base pair of a DNA sequence according to its predicted functional category: intergenic, protein coding, or intronic. The features used in the model were of two types: transitions between labels and trimer composition. The CRF was trained on approximately 28 million base pairs labeled according to annotations from the FlyBase database [12]. The predictions were evaluated on a separate testing set of the same size. Three separate training runs were performed, using three different objective functions: maximum
likelihood, maximum pointwise likelihood, and maximum labelwise accuracy. Each run was started from an initial guess calculated using HMM-style generative parameter estimation.⁷ Figures 2b, 2c, and 2d show the value of the objective function and the average label accuracy at each iteration of the three training runs. Here, maximum accuracy training improves upon the accuracy of the original generative parameters and outperforms the other two training objectives. In contrast, maximum likelihood training and maximum pointwise likelihood training both give worse performance than the simple generative parameter estimates.

[Figure 2: Panel (a) compares three pointwise loss functions in the special case where a label has two possible values: the green curve (f(x) = −log((1−x)/2)) depicts pointwise logloss, the red curve represents the ideal zero-one loss, and the blue curve gives the sigmoid approximation with parameter 15. Panels (b), (c), and (d) show gene prediction learning curves (objective value, training accuracy, and testing accuracy per iteration) using three training objective functions: (b) maximum labelwise (approximate) accuracy, (c) maximum conditional log-likelihood, and (d) maximum pointwise conditional log-likelihood, respectively. In each case, parameters were initialized to their generative model estimates.]
Evidently, for this problem the likelihood-based functions are poor surrogate measures for per-label accuracy: Figures 2c and 2d show declines in training and testing set accuracy, despite increases in the objective function.

5 Discussion and related work

In contrast to most previous work describing alternative objective functions for CRFs, the method described in this paper optimizes a direct approximation of the Hamming loss. A few notable papers have also dealt with the problem of minimizing empirical risk directly. For binary classifiers, Jansche showed that an algorithm designed to optimize F-measure performance of a logistic regression model for information extraction outperforms maximum likelihood training [14]. For parsing tasks, Och demonstrated that a statistical machine translation system choosing between a small finite collection of candidate parses achieves better accuracy when it is trained to minimize error rate instead of optimizing the more traditional maximum mutual information criterion [15]. (Footnote 7: We did not include maximum margin methods in this comparison; existing software packages for maximum margin training, based on the cutting plane algorithm [10] or decomposition techniques such as SMO [9, 13], are not easily parallelizable and scale poorly for large datasets, such as those encountered in gene prediction.) Unlike Och's algorithm, our method does not require one to provide a small set of candidate parses, instead relying on efficient dynamic programming recurrences for all computations. After this work was submitted for consideration, a Minimum Classification Error (MCE) method for training CRFs to minimize empirical risk was independently proposed by Suzuki et al. [11]. This technique minimizes the loss incurred by maximum a posteriori, rather than maximum expected accuracy, parsing on the training set.
In practice, Viterbi parsers often achieve worse per-label accuracy than maximum expected accuracy parsers [3, 4, 5]; we are currently exploring whether a similar relationship also exists between MCE methods and our proposed training objective. The training method described in this work is theoretically attractive, as it addresses the goal of empirical risk minimization in a very direct way. In addition to its theoretical appeal, we have shown that it performs much better than maximum likelihood and maximum pointwise likelihood training on a large-scale, real-world problem. Furthermore, our method is efficient, having time complexity approximately three times that of maximum likelihood training, and easily parallelizable, as each training example can be considered independently when evaluating the objective function or its gradient. The chief disadvantage of our formulation is its nonconvexity. In practice, this can be combated by initializing the optimization with a parameter vector obtained by a convex training method. At present, the extent of the effectiveness of our method and the characteristics of problems for which it performs well are not clear. Further work applying our method to a variety of sequence labeling tasks is needed to investigate these questions.

6 Acknowledgments

SSG and CBD were supported by NDSEG fellowships. We thank Andrew Ng for useful discussions.

References
[1] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[2] V. Vapnik. Statistical Learning Theory. Wiley, 1998.
[3] C. B. Do, M. S. P. Mahabhashyam, M. Brudno, and S. Batzoglou. ProbCons: probabilistic consistency-based multiple sequence alignment. Genome Research, 15(2):330–340, 2005.
[4] C. B. Do, D. A. Woods, and S. Batzoglou. CONTRAfold: RNA secondary structure prediction without physics-based models. Bioinformatics, 22(14):e90–e98, 2006.
[5] P. Liang, B. Taskar, and D.
Klein. Alignment by agreement. In HLT-NAACL, 2006.
[6] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[7] S. Kakade, Y. W. Teh, and S. Roweis. An alternate objective function for Markovian fields. In ICML, 2002.
[8] Y. Altun, M. Johnson, and T. Hofmann. Investigating loss functions and optimization methods for discriminative learning of label sequences. In EMNLP, 2003.
[9] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[10] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[11] J. Suzuki, E. McDermott, and H. Isozaki. Training conditional random fields with multivariate evaluation measures. In ACL, 2006.
[12] G. Grumbling, V. Strelets, and The FlyBase Consortium. FlyBase: anatomical data, images and queries. Nucleic Acids Research, 34:D484–D488, 2006.
[13] J. Platt. Using sparseness and analytic QP to speed training of support vector machines. In NIPS, 1999.
[14] M. Jansche. Maximum expected F-measure training of logistic regression models. In EMNLP, 2005.
[15] F. J. Och. Minimum error rate training in statistical machine translation. In ACL, 2003.
2006
Modeling Dyadic Data with Binary Latent Factors

Edward Meeds, Department of Computer Science, University of Toronto, ewm@cs.toronto.edu
Zoubin Ghahramani, Department of Engineering, Cambridge University, zoubin@eng.cam.ac.uk
Radford Neal, Department of Computer Science, University of Toronto, radford@cs.toronto.edu
Sam Roweis, Department of Computer Science, University of Toronto, roweis@cs.toronto.edu

Abstract

We introduce binary matrix factorization, a novel model for unsupervised matrix decomposition. The decomposition is learned by fitting a non-parametric Bayesian probabilistic model with binary latent variables to a matrix of dyadic data. Unlike bi-clustering models, which assign each row or column to a single cluster based on a categorical hidden feature, our binary feature model reflects the prior belief that items and attributes can be associated with more than one latent cluster at a time. We provide simple learning and inference rules for this new model and show how to extend it to an infinite model in which the number of features is not a priori fixed but is allowed to grow with the size of the data.

1 Distributed representations for dyadic data

One of the major goals of probabilistic unsupervised learning is to discover underlying or hidden structure in a dataset by using latent variables to describe a complex data generation process. In this paper we focus on dyadic data: our domains have two finite sets of objects/entities and observations are made on dyads (pairs with one element from each set). Examples include sparse matrices of movie-viewer ratings, word-document counts or product-customer purchases. A simple way to capture structure in this kind of data is to do “bi-clustering” (possibly using mixture models) by grouping the rows and (independently or simultaneously) the columns [6, 13, 9].
The modelling assumption in such a case is that movies come in K types and viewers in L types and that knowing the type of movie and type of viewer is sufficient to predict the response. Clustering or mixture models are quite restrictive – their major disadvantage is that they do not admit a componential or distributed representation because items cannot simultaneously belong to several classes. (A movie, for example, might be explained as coming from a cluster of “dramas” or “comedies”; a viewer as a “single male” or as a “young mother”.) We might instead prefer a model (e.g. [10, 5]) in which objects can be assigned to multiple latent clusters: a movie might be a drama and have won an Oscar and have subtitles; a viewer might be single and female and a university graduate. Inference in such models falls under the broad area of factorial learning (e.g. [7, 1, 3, 12]), in which multiple interacting latent causes explain each observed datum. In this paper, we assume that both data items (rows) and attributes (columns) have this kind of componential structure: each item (row) has associated with it an unobserved vector of K binary features; similarly each attribute (column) has a hidden vector of L binary features. Knowing the features of the item and the features of the attribute are sufficient to generate (before noise) the response at that location in the matrix. In effect, we are factorizing a real-valued data (response) matrix X into (a distribution defined by) the product U W Vᵀ, where U and V are binary feature matrices, and W is a real-valued weight matrix.

[Figure 1 graphic omitted; it shows plates of sizes I, J, K, L over the nodes x_ij, u_ik, v_jl, and w_kl, with hyperparameter nodes for the biases, concentrations, w_o, and the precisions.] Figure 1: (A) The graphical model representation of the linear-Gaussian BMF model. The concentration parameter and Beta weights for the columns of X are represented by analogous symbols (subscripted by l). (B) BMF shown pictorially as X = f(U W Vᵀ).

Below, we develop this binary matrix factorization
(BMF) model using Bayesian non-parametric priors over the number and values of the unobserved binary features and the unknown weights.

2 BMF model description

Binary matrix factorization is a model of an I × J dyadic data matrix X with exchangeable rows and columns. The entries of X can be real-valued, binary, or categorical; BMF models suitable for each type are described below. Associated with each row is a latent binary feature vector u_i; similarly each column has an unobserved binary vector v_j. The primary parameters are represented by a matrix W of interaction weights. X is generated by a fixed observation process f(·) applied (elementwise) to the linear inner product of the features and weights, which is the “factorization” or approximation of the data:

    X | U, V, W ∼ f(U W Vᵀ, Θ)    (1)

where Θ are extra parameters specific to the model variant. Three possible parametric forms for the noise (observation) distribution f are: Gaussian, with mean U W Vᵀ and covariance (1/σ_x) I; logistic, with mean 1/(1 + exp(−U W Vᵀ)); and Poisson, with mean (and variance) U W Vᵀ. Other parametric forms are also possible. For illustrative purposes, we will use the linear-Gaussian model throughout this paper; this can be thought of as a two-sided version of the linear-Gaussian model found in [5]. To complete the description of the model, we need to specify prior distributions over the feature matrices U, V and the weights W. We adopt the same priors over binary matrices as previously described in [5]. For finite sized matrices U with I rows and K columns, we generate a bias π_k independently for each column k using a Beta prior (denoted B) and then, conditioned on this bias, generate the entries in column k independently from a Bernoulli with mean π_k:

    π_k | α, K ∼ B(α/K, β)
    α | a_α, b_α ∼ G(a_α, b_α)
    U | π ∼ ∏_{i=1..I} ∏_{k=1..K} π_k^{u_ik} (1 − π_k)^{1 − u_ik} = ∏_{k=1..K} π_k^{n_k} (1 − π_k)^{I − n_k}

where n_k = Σ_i u_ik.
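The finite generative process just described can be sketched end to end. Everything below (dimensions, hyperparameter values, a zero prior mean for the weights) is an illustrative assumption rather than the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

I, J, K, L = 30, 20, 4, 3        # data and feature dimensions (illustrative)
alpha_u = alpha_v = 2.0          # concentration parameters (assumed values)
beta = 1.0                       # second Beta parameter, set to 1 in the paper
sigma_x, sigma_w = 2.0, 1.0      # output and weight precisions (assumed)

# Column biases and binary feature matrices for rows (U) and columns (V).
pi_u = rng.beta(alpha_u / K, beta, size=K)
U = (rng.random((I, K)) < pi_u).astype(float)
pi_v = rng.beta(alpha_v / L, beta, size=L)
V = (rng.random((J, L)) < pi_v).astype(float)

# Interaction weights with prior mean W_o = 0 and covariance (1/sigma_w) I.
W = rng.normal(0.0, 1.0 / np.sqrt(sigma_w), size=(K, L))

# Linear-Gaussian observation process: X | U, V, W ~ N(U W V^T, (1/sigma_x) I).
X = U @ W @ V.T + rng.normal(0.0, 1.0 / np.sqrt(sigma_x), size=(I, J))
print(X.shape)
```

Swapping the last line for a logistic or Poisson link gives the other two model variants named in the text.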
The hyperprior on the concentration α is a Gamma distribution (denoted G), whose shape and scale hyperparameters control the expected fraction of zeros/ones in the matrix. The biases π are easily integrated out, which creates dependencies between the rows, although they remain exchangeable. The resulting prior depends only on the number n_k of active features in each column. An identical prior is used on V, with J rows and L columns, but with a different concentration prior. The second Beta parameter β was set to 1 for all experiments. The appropriate prior distribution over weights depends on the observation distribution f(·). For the linear-Gaussian variant, a convenient prior on W is a matrix normal with prior mean W_o and covariance (1/σ_w) I. The scale σ_w of the weights and output precision σ_x (if needed) have Gamma hyperpriors:

    W | W_o, σ_w ∼ N(W_o, (1/σ_w) I)
    σ_w | a_w, b_w ∼ G(a_w, b_w)
    σ_x | a_x, b_x ∼ G(a_x, b_x)

In certain cases, when the prior on the weights is conjugate to the output distribution model f, the weights may be analytically integrated out, expressing the marginal distribution of the data X | U, V only in terms of the binary features. This is true, for example, when we place a Gaussian prior on the weights and use a linear-Gaussian output process. Remarkably, the Beta-Bernoulli prior distribution over U (and similarly V) can easily be extended to the case where K → ∞, creating a distribution over binary matrices with a fixed number I of exchangeable rows and a potentially infinite number of columns (although the expected number of columns which are not entirely zero remains finite). Such a distribution, the Indian Buffet Process (IBP), was described by [5] and is analogous to the Dirichlet process and the associated Chinese restaurant process (CRP) [11]. Fortunately, as we will see, inference with this infinite prior is not only tractable, but is also nearly as efficient as the finite version.
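For β = 1, the K → ∞ limit can be simulated directly with the usual IBP "customers and dishes" construction: row i reuses an existing column k with probability n_k/i and then activates Poisson(α/i) brand-new columns. A minimal forward-sampling sketch (our own, not the paper's code):

```python
import numpy as np

def sample_ibp(alpha, I, rng):
    """Forward-sample a binary matrix from the one-parameter IBP (beta = 1)."""
    counts = []                 # n_k: how many previous rows use column k
    rows = []                   # active column indices for each row
    for i in range(1, I + 1):
        active = set()
        # Reuse an existing feature k with probability n_k / i ...
        for k in range(len(counts)):
            if rng.random() < counts[k] / i:
                active.add(k)
                counts[k] += 1
        # ... then draw Poisson(alpha / i) brand-new features.
        for _ in range(rng.poisson(alpha / i)):
            counts.append(1)
            active.add(len(counts) - 1)
        rows.append(active)
    Z = np.zeros((I, len(counts)))
    for i, active in enumerate(rows):
        Z[i, list(active)] = 1.0
    return Z

rng = np.random.default_rng(1)
Z = sample_ibp(alpha=3.0, I=50, rng=rng)
print(Z.shape)   # the number of instantiated (non-zero) columns stays finite
```

Although the prior puts mass on infinitely many columns, only the finitely many columns that some row has activated are ever represented, which is what makes inference under the infinite prior tractable.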
3 Inference of features and parameters

As with many other complex hierarchical Bayesian models, exact inference of the latent variables U and V in the BMF model is intractable (i.e., there is no efficient way to sample exactly from the posterior nor to compute its exact marginals). However, as with many other non-parametric Bayesian models, we can employ Markov Chain Monte Carlo (MCMC) methods to create an iterative procedure which, if run for sufficiently long, will produce correct posterior samples.

3.1 Finite binary latent feature matrices

The posterior distribution of a single entry in U (or V) given all other model parameters is proportional to the product of the conditional prior and the data likelihood. The conditional prior comes from integrating out the biases π in the Beta-Bernoulli model and is proportional to the number of active entries in other rows of the same column plus a term for new activations. Gibbs sampling for single entries of U (or V) can be done using the following updates:

    P(u_ik = 1 | U_−ik, V, W, X) = C (α/K + n_−i,k) P(X | U_−ik, u_ik = 1, V, W)    (2)
    P(u_ik = 0 | U_−ik, V, W, X) = C (β + (I − 1) − n_−i,k) P(X | U_−ik, u_ik = 0, V, W)    (3)

where n_−i,k = Σ_{h≠i} u_hk, U_−ik excludes entry ik, and C is a normalizing constant. (Conditioning on α, β, and K is implicit.) When conditioning on W, we only need to calculate the ratio of likelihoods corresponding to row i. (Note that this is not the case when the weights are integrated out.) This ratio is a simple function of the model’s predictions x̂⁺_ij = Σ_hl u_ih v_jl w_hl (when u_ik = 1) and x̂⁻_ij = Σ_hl u_ih v_jl w_hl (when u_ik = 0). In the linear-Gaussian case:

    log [ P(u_ik = 1 | U_−ik, V, W, X) / P(u_ik = 0 | U_−ik, V, W, X) ]
        = log [ (α/K + n_−i,k) / (β + (I − 1) − n_−i,k) ] − (σ_x/2) Σ_j [ (x_ij − x̂⁺_ij)² − (x_ij − x̂⁻_ij)² ]

In the linear-Gaussian case, we can easily derive analogous Gibbs sampling updates for the weights W and hyperparameters.
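A minimal sketch (ours, not the authors' code) of the single-entry Gibbs update in the linear-Gaussian case, turning the log-odds above into a resampling step; the data are synthetic and the hyperparameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
I, J, K, L = 8, 6, 3, 2
alpha, beta, sigma_x = 1.5, 1.0, 2.0   # assumed hyperparameter values

U = (rng.random((I, K)) < 0.5).astype(float)
V = (rng.random((J, L)) < 0.5).astype(float)
W = rng.normal(size=(K, L))
X = U @ W @ V.T + rng.normal(0.0, 1.0 / np.sqrt(sigma_x), size=(I, J))

def gibbs_update_u(i, k):
    """Resample u_ik from its conditional using the log-odds formula."""
    n_minus = U[:, k].sum() - U[i, k]               # n_{-i,k}
    log_prior_odds = (np.log(alpha / K + n_minus)
                      - np.log(beta + (I - 1) - n_minus))
    row = U[i].copy()
    row[k] = 1.0
    x_plus = row @ W @ V.T                          # predictions when u_ik = 1
    row[k] = 0.0
    x_minus = row @ W @ V.T                         # predictions when u_ik = 0
    # Only row i's likelihood terms matter, as noted in the text.
    log_like_odds = -(sigma_x / 2.0) * (((X[i] - x_plus) ** 2).sum()
                                        - ((X[i] - x_minus) ** 2).sum())
    p1 = 1.0 / (1.0 + np.exp(-(log_prior_odds + log_like_odds)))
    U[i, k] = float(rng.random() < p1)
    return p1

probs = [gibbs_update_u(i, k) for i in range(I) for k in range(K)]
print(min(probs), max(probs))
```

One full sweep visits every entry of U once; the analogous update for V swaps the roles of rows and columns.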
To simplify the presentation, we consider a “vectorized” representation of our variables. Let x be an IJ × 1 column vector taken column-wise from X, w be a KL × 1 column vector taken column-wise from W, and A be an IJ × KL binary matrix given by the Kronecker product V ⊗ U. (In “Matlab notation”, x = X(:), w = W(:), and A = kron(V, U).) In this notation, the data distribution is written as x | A, w, σ_x ∼ N(Aw, (1/σ_x) I). Given values for U and V, samples can be drawn for w, σ_w, and σ_x using the following posterior distributions (where conditioning on w_o and the hyperparameters a_w, b_w, a_x, b_x is implicit):

    w | x, A ∼ N( (σ_x AᵀA + σ_w I)⁻¹ (σ_x Aᵀx + σ_w w_o), (σ_x AᵀA + σ_w I)⁻¹ )
    σ_w | w ∼ G( a_w + KL/2, b_w + ½ (w − w_o)ᵀ(w − w_o) )
    σ_x | x, A, w ∼ G( a_x + IJ/2, b_x + ½ (x − Aw)ᵀ(x − Aw) )

Note that we do not have to explicitly compute the matrix A. For computing the posterior of linear-Gaussian weights, the matrix AᵀA can be computed as AᵀA = kron(VᵀV, UᵀU). Similarly, the expression Aᵀx is constructed by computing UᵀXV and taking the elements column-wise.

3.2 Infinite binary latent feature matrices

One of the most elegant aspects of non-parametric Bayesian modeling is the ability to use a prior which allows a countably infinite number of latent features. The number of instantiated features is automatically adjusted during inference and depends on the amount of data and how many features it supports. Remarkably, we can do MCMC sampling using such infinite priors with essentially no computational penalty over the finite case. To derive these updates (e.g. for row i of the matrix U), it is useful to consider partitioning the columns of U into two sets as shown below. Let set A have at least one non-zero entry in rows other than i. Let set B be all other columns, including the set of columns where the only non-zero entries are found in row i and the countably infinite number of all-zero columns.
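The two shortcuts in this passage are instances of standard Kronecker-product identities, and they can be checked numerically. This sketch uses small arbitrary matrices and column-major ("Matlab-style") vectorization:

```python
import numpy as np

rng = np.random.default_rng(3)
I, J, K, L = 5, 4, 3, 2
U = (rng.random((I, K)) < 0.5).astype(float)
V = (rng.random((J, L)) < 0.5).astype(float)
W = rng.normal(size=(K, L))
X = rng.normal(size=(I, J))

def vec(M):
    # Column-wise ("Matlab-style") vectorization: M(:)
    return M.flatten(order="F")

A = np.kron(V, U)                                  # the IJ x KL design matrix

ok1 = np.allclose(A @ vec(W), vec(U @ W @ V.T))    # vec(U W V^T) = (V kron U) vec(W)
ok2 = np.allclose(A.T @ A, np.kron(V.T @ V, U.T @ U))
ok3 = np.allclose(A.T @ vec(X), vec(U.T @ X @ V))
print(ok1, ok2, ok3)   # True True True
```

Avoiding the explicit IJ × KL matrix A matters in practice: the right-hand sides of the last two identities cost only small matrix products, while forming A scales with the full size of the data matrix.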
Sampling values for elements in row i of set A given everything else is straightforward, and involves Gibbs updates almost identical to those in the finite case handled by equations (2) and (3); as K → ∞, for k in set A we get:

    P(u_ik = 1 | U_−ik, V, W) = C · n_−i,k · P(X | U_−ik, u_ik = 1, V, W)    (4)
    P(u_ik = 0 | U_−ik, V, W) = C · (β + I − 1 − n_−i,k) · P(X | U_−ik, u_ik = 0, V, W)    (5)

[Illustration omitted: a binary matrix U with its columns partitioned into set A (columns with at least one non-zero entry in a row other than i) and set B (the remaining columns, whose only non-zero entries, if any, are in row i, plus the countably infinite all-zero columns), with row i marked.]

When sampling new values for set B, the columns are exchangeable, and so we are really only interested in the number of entries n*_B in set B which will be turned on in row i. Sampling the number of entries set to 1 can be done with Metropolis-Hastings updates. Let J(n*_B | n_B) = Poisson(n*_B | α/(β + I − 1)) be the proposal distribution for a move which replaces the current n_B active entries with n*_B active entries in set B. The reverse proposal is J(n_B | n*_B). The acceptance probability is min(1, r_{n_B → n*_B}), where r_{n_B → n*_B} is

    [P(n*_B | X) J(n_B | n*_B)] / [P(n_B | X) J(n*_B | n_B)]
        = [P(X | n*_B) Poisson(n*_B | α/(β + I − 1)) J(n_B | n*_B)] / [P(X | n_B) Poisson(n_B | α/(β + I − 1)) J(n*_B | n_B)]
        = P(X | n*_B) / P(X | n_B)    (6)

This assumes a conjugate situation in which the weights W are explicitly integrated out of the model to compute the marginal likelihood P(X | n*_B). In the non-conjugate case, a more complicated proposal is required. Instead of proposing n*_B, we jointly propose n*_B and associated feature parameters w*_B from their prior distributions. In the linear-Gaussian model, where w*_B is a set of weights for features in set B, the proposal distribution is:

    J(n*_B, w*_B | n_B, w_B) = Poisson(n*_B | α/(β + I − 1)) · Normal(w*_B | n*_B, σ_w)    (7)

We actually need to sample only the finite portion of w*_B where u_ik = 1.
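The conjugate-case Metropolis-Hastings step of equation (6) can be sketched in isolation: propose n*_B from the Poisson prior over new features, independently of the current state, and accept with probability min(1, P(X | n*_B)/P(X | n_B)). The snippet below is our own illustration, with an arbitrary stand-in for the log marginal likelihood (the real quantity requires integrating out the weights) and assumed values for α, β, and I:

```python
import math
import random

random.seed(4)
alpha, beta, I = 3.0, 1.0, 25
rate = alpha / (beta + I - 1)        # Poisson rate of the prior/proposal

def poisson_sample(lam):
    # Knuth's multiplication method (fine for small rates).
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def log_marginal_lik(n_b):
    # Stand-in for log P(X | n_B); in the real sampler this comes from
    # integrating the weights out of the linear-Gaussian model.
    return -0.5 * (n_b - 2.0) ** 2

def mh_step(n_b):
    n_star = poisson_sample(rate)
    # The proposal is the prior and is independent of the current state,
    # so the acceptance ratio collapses to the likelihood ratio (eq. 6).
    log_r = log_marginal_lik(n_star) - log_marginal_lik(n_b)
    if log_r >= 0 or random.random() < math.exp(log_r):
        return n_star
    return n_b

n_b, samples = 0, []
for _ in range(2000):
    n_b = mh_step(n_b)
    samples.append(n_b)
print(sum(samples) / len(samples))
```

Because proposal and prior cancel exactly, no Poisson densities ever need to be evaluated in the acceptance test, which is what makes this move cheap.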
As in the conjugate case, the acceptance ratio reduces to the ratio of data likelihoods:

    r_{(n_B, w_B) → (n*_B, w*_B)} = P(X | n*_B, w*_B) / P(X | n_B, w_B)    (8)

3.3 Faster mixing transition proposals

The Gibbs updates described above for the entries of U, V and W are the simplest moves we could make in a Markov Chain Monte Carlo inference procedure for the BMF model. However, these limited local updates may result in extremely slow mixing. In practice, we often implement larger moves in indicator space using, for example, Metropolis-Hastings proposals on multiple features for row i simultaneously. For example, we can propose new values for several columns in row i of matrix U by sampling feature values independently from their conditional priors. To compute the reverse proposal, we imagine forgetting the current configuration of those features for row i and compute the probability under the conditional prior of proposing the current configuration. The acceptance probability of such a proposal is (the minimum of unity and) the ratio of likelihoods between the new proposed configuration and the current configuration. Split-merge moves may also be useful for efficiently sampling from the posterior distribution of the binary feature matrices. Jain and Neal [8] describe split-merge algorithms for Dirichlet process mixture models with non-conjugate component distributions. We have developed and implemented similar split-merge proposals for binary matrices with IBP priors. Due to space limitations, we present here only a sketch of the procedure. Two nonzero entries in U are selected uniformly at random. If they are in the same column, we propose splitting that column; if they are in different columns, we propose merging their columns. The key difference between this algorithm and the Jain and Neal algorithm is that the binary features are not constrained to sum to unity in each row.
Our split-merge algorithm also performs restricted Gibbs scans on columns of U to increase acceptance probability.

3.4 Predictions

A major reason for building generative models of data is to be able to impute missing data values given some observations. In the linear-Gaussian model, the predictive distribution at each iteration of the Markov chain is a Gaussian distribution. The interaction weights can be analytically integrated out at each iteration, also resulting in a Gaussian posterior, removing sampling noise contributed by having the weights explicitly represented. Computing the exact predictive distribution, however, conditional only on the model hyperparameters, is analytically intractable: it requires integrating over all binary matrices U and V, and all other nuisance parameters (e.g., the weights and precisions). Instead we integrate over these parameters implicitly by averaging predictive distributions from many MCMC iterations. This posterior, which is conditional only on the observed data and hyperparameters, is a highly complex, potentially multimodal, non-linear function of the observed variables. By averaging predictive distributions, our algorithm implicitly integrates over U and V. In our experiments, we show samples from the posteriors of U and V to help explain what the model is doing, but we stress that the posterior may have significant mass on many possible binary matrices. The number of features and their degrees of overlap will vary over MCMC iterations. Such variation will depend, for example, on the current values of the concentration parameters (higher values will result in more features) and on the precision values (higher weight precision results in less variation in weights).

4 Experiments

4.1 Modified “bars” problem

A toy problem commonly used to illustrate additive feature or multiple cause models is the bars problem ([2, 12, 1]). Vertical and horizontal bars are combined in some way to generate data samples.
The goal of the illustration is to show recovery of the latent structure in the form of bars. We have modified the typical usage of bars to accommodate the linear-Gaussian BMF with infinite features. Data consists of I vectors of size 8² = 64, where each vector can be reshaped into an 8 × 8 square image. The generation process is as follows: since V has the same number of rows as the dimension of the images, V is fixed to be a set of vertical and horizontal bars (when reshaped into an image). U is sampled from the IBP, and the global precisions σ_x and σ_w are set to 1/2. The weights W are sampled from zero mean Gaussians. Model estimates of U and V were initialized from an IBP prior. In Figure 2 we demonstrate the performance of the linear-Gaussian BMF on the bars data. We train the BMF with 200 training examples of the type shown in the top row in Figure 2. Some examples have their bottom halves labeled missing and are shown in the Figure with constant grey values. To handle this, we resample their values at each iteration of the Markov chain. The bottom row shows the expected reconstruction using MCMC samples of U, V, and W. Despite the relatively high noise levels in the data, the model is able to capture the complex relationships between bars and weights. The reconstruction of vertical bars is very good. The reconstruction of horizontal bars is good as well, considering that the model has no information regarding the existence of horizontal bars on the bottom half.

Figure 2: Bars reconstruction. (A) Bars randomly sampled from the complete dataset. The bottom half of these bars were removed and labeled missing during learning. (B) Noise-free versions of the same data. (C) The initial reconstruction. The missing values have been set to their expected value, 0, to highlight the missing region. (D) The average MCMC reconstruction of the entire image.
(E) Based solely on the information in the top-half of the original data, these are the noise-free nearest neighbours in pixel space.

Figure 3: Bars features. The top row shows values of V and W Vᵀ used to generate the data. The second row shows a sample of V and W Vᵀ from the Markov chain. W Vᵀ can be thought of as a set of basis images which can be added together with binary coefficients (U) to create images.

By examining the features captured by the model, we can understand the performance just described. In Figure 3 we show the generating, or true, values of V and W Vᵀ along with one sample of those features from the Markov chain. Because the model is generated by adding multiple W Vᵀ basis images shown on the right of Figure 3, multiple bars are used in each image. This is reflected in the captured features. The learned W Vᵀ are fairly similar to the generating W Vᵀ, but the former are composed of overlapping bar structure (learned V).

4.2 Digits

In Section 2 we briefly stated that BMF can be applied to data models other than the linear-Gaussian model. We demonstrate this with a logistic BMF applied to binarized images of handwritten digits. We train logistic BMF with 100 examples each of digits 1, 2, and 3 from the USPS dataset. In the first five rows of Figure 4 we again illustrate the ability of BMF to impute missing data values. The top row shows all 16 samples from the dataset which had their bottom halves labeled missing. Missing values are filled in at each iteration of the Markov chain. In the third and fourth rows we show the mean and mode (P(x_ij = 1) > 0.5) of the BMF reconstruction. In the bottom row we have shown the nearest neighbors, in pixel space, to the training examples based only on the top halves of the original digits. In the last three rows of Figure 4 we show the features captured by the model. In row F, we show the average image of the data which have each feature in U on.
It is clear that some row features have distinct digit forms and others are overlapping. In row G, the basis images W Vᵀ are shown. By adjusting the features that are non-zero in each row of U, images are composed by adding basis images together. Finally, in row H we show V. These pixel features mask out different regions in pixel space, which are weighted together to create the basis images. Note that there are K features in rows F and G, and L features in row H.

Figure 4: Digits reconstruction. (A) Digits randomly sampled from the complete dataset. The bottom half of these digits were removed and labeled missing during learning. (B) The data shown to the algorithm. The top half is the original data value. (C) The mean of the reconstruction for the bottom halves. (D) The mode reconstruction of the bottom halves. (E) The nearest neighbours of the original data are shown in the bottom half, and were found based solely on the information from the top halves of the images. (F) The average of all digits for each U feature. (G) The features W Vᵀ reshaped in the form of digits. By adding these features together, which the U features do, reconstructions of the digits are possible. (H) V reshaped into the form of digits. The first image represents a bias feature.

4.3 Gene expression data

Gene expression data is able to exhibit multiple and overlapping clusters simultaneously; finding models for such complex data is an interesting and active research area ([10], [13]). The plaid model [10], originally introduced for analysis of gene expression data, can be thought of as a non-Bayesian special case of our model in which the matrix W is diagonal and the number of binary features is fixed. Our goal in this experiment is merely to illustrate qualitatively the ability of BMF to find multiple clusters in gene expression data, some of which are overlapping, others non-overlapping.
The data in this experiment consists of rows corresponding to genes and columns corresponding to patients; the patients suffer from one of two types of acute leukemia [4]. In Figure 5 we show the factorization produced by the final state in the Markov chain. The rows and columns of the data and its expected reconstruction are ordered such that contiguous regions in X were observable. Some of the many feature pairings are highlighted. The BMF clusters consist of broad, overlapping clusters, and small, non-overlapping clusters. One of the interesting possibilities of using BMF to model gene expression data would be to fix certain columns of U or V with knowledge gained from experiments or literature, and to allow the model to add new features that help explain the data in more detail.

Figure 5: Gene expression results. (A) The top-left is X sorted according to contiguous features in the final U and V in the Markov chain. The bottom-left is Vᵀ and the top-right is U. The bottom-right is W. (B) The same as (A), but the expected value of X, X̂ = U W Vᵀ. We have highlighted regions that have both u_ik and v_jl on. For clarity, we have only shown the (at most) two largest contiguous regions for each feature pair.

5 Conclusion

We have introduced a new model, binary matrix factorization, for unsupervised decomposition of dyadic data matrices. BMF makes use of non-parametric Bayesian methods to simultaneously discover binary distributed representations of both rows and columns of dyadic data. The model explains each row and column entity using a componential code composed of multiple binary latent features along with a set of parameters describing how the features interact to create the observed responses at each position in the matrix. BMF is based on a hierarchical Bayesian model and can be naturally extended to make use of a prior distribution which permits an infinite number of features, at very little extra computational cost. We have given MCMC algorithms for posterior inference of both the binary factors and the interaction parameters conditioned on some observed data, and demonstrated the model’s ability to capture overlapping structure and model complex joint distributions on a variety of data. BMF is fundamentally different from bi-clustering algorithms because of its distributed latent representation and from factorial models with continuous latent variables which interact linearly to produce the observations. This allows a much richer latent structure, which we believe makes BMF useful for many applications beyond the ones we outlined in this paper.

References
[1] P. Dayan and R. S. Zemel. Competition and multiple cause models. Neural Computation, 7(3), 1995.
[2] P. Foldiak. Forming sparse representations by local anti-Hebbian learning. Biological Cybernetics, 64, 1990.
[3] Z. Ghahramani. Factorial learning and the EM algorithm. In NIPS, volume 7. MIT Press, 1995.
[4] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, C. D. Bloomfield, and E. S. Lander. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, 286(5439), 1999.
[5] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS, volume 18. MIT Press, 2005.
[6] J. A. Hartigan. Direct clustering of a data matrix. Journal of the American Statistical Association, 67, 1972.
[7] G. Hinton and R. S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. In NIPS, volume 6. Morgan Kaufmann, 1994.
[8] S. Jain and R. M. Neal. Splitting and merging for a nonconjugate Dirichlet process mixture model. To appear in Bayesian Analysis.
[9] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model.
Proceedings of the Twenty-First National Conference on Artificial Intelligence, 2006. [10] L. Lazzeroni and A. Owen. Plaid models for gene expression data. Statistica Sinica, 12, 2002. [11] J. Pitman. Combinatorial stochastic processes. Lecture Notes for St. Flour Course, 2002. [12] E. Saund. A multiple cause mixture model for unsupervised learning. Neural Computation, 7(1), 1994. [13] R. Tibshirani, T. Hastie, M. Eisen, D. Ross, D. Botstein, and P. Brown. Clustering methods for the analysis of DNA microarray data. Technical report, Stanford University, 1999. Department of Statistics.
Reducing Calibration Time For Brain-Computer Interfaces: A Clustering Approach Matthias Krauledat1,2, Michael Schröder2, Benjamin Blankertz2, Klaus-Robert Müller1,2 1Technical University Berlin, Str. des 17. Juni 135, 10 623 Berlin, Germany 2 Fraunhofer FIRST.IDA, Kekuléstr. 7, 12 489 Berlin, Germany {kraulem,schroedm,blanker,klaus}@first.fhg.de Abstract Up to now even subjects that are experts in the use of machine learning based BCI systems still have to undergo a calibration session of about 20-30 min. From this data their (movement) intentions are so far inferred. We now propose a new paradigm that allows us to completely omit such calibration and instead transfer knowledge from prior sessions. To achieve this goal we first define normalized CSP features and distances between them. Second, we derive prototypical features across sessions: (a) by clustering or (b) by feature concatenation methods. Finally, we construct a classifier based on these individualized prototypes and show that, indeed, classifiers can be successfully transferred to a new session for a number of subjects. 1 Introduction BCI systems typically require training on the subject side and on the decoding side (e.g. [1, 2, 3, 4, 5, 6, 7]). While some approaches rely on operant conditioning with extensive subject training (e.g. [2, 1]), others, such as the Berlin Brain-Computer Interface (BBCI), put more emphasis on the machine side (e.g. [4, 8, 9]). But when following our philosophy of ’letting the machines learn’, a calibration session of approximately 20-30 min was so far required, even for subjects that are beyond the status of BCI novices. The present contribution studies to what extent we can omit this brief calibration period. In other words, is it possible to successfully transfer information from prior BCI sessions of the same subject that may have taken place days or even weeks ago?
While this question is of high practical importance to the BCI field, it has so far only been addressed in [10], in the context of transferring channel selection results from subject to subject. In contrast to this prior approach, we will focus on the more general question of transferring whole classifiers, or rather individualized representations, between sessions. Note that EEG (electroencephalogram) patterns typically vary strongly from one session to another, due to different psychological pre-conditions of the subject. A subject might for example show different states of fatigue and attention, or use diverse strategies for movement imagination across sessions. A successful session-to-session transfer should thus capture generic ’invariant’ discriminative features of the BCI task. For this we first transform the EEG feature set from each prior session into a ’standard’ format (section 2) and normalize it. This allows us to define a consistent measure that can quantify the distance between representations. We use CSP-based classifiers (see section 3.1 and e.g. [11]) for the discrimination of brain states; note that the line of thought presented here can also be pursued for other feature sets or classifiers. Once a distance function (section 3.2) is established in CSP filter space, we can cluster existing CSP filters in order to obtain the most salient prototypical CSP-type filters for a subject across sessions (section 3.3). To this end, we use the IBICA algorithm [12, 13] for computing prototypes by a robust ICA decomposition (section 3.3). We will show that these new CSP prototypes are physiologically meaningful and furthermore are highly robust representations which are less easily distorted by noise artifacts. 2 Experiments and Data Our BCI system uses Event-Related (De-)Synchronization (ERD/ERS) phenomena [3] in EEG signals related to hand and foot imagery as classes for control.
The term refers to decreasing or increasing band power in specific frequency bands of the EEG signal during the imagination of movements. These phenomena are well-studied and consistently reproducible features in EEG recordings, and are used as the basis of many BCI systems (e.g. [11, 14]). For the present study we investigate data from experiments with 6 healthy subjects: aw (13 sessions), al (8 sessions), cm (4 sessions), ie (4 sessions), ay (5 sessions) and ch (4 sessions). These are all the subjects that participated in at least 4 BCI sessions. Each session started with the recording of calibration data, followed by a machine learning phase and a feedback phase of varying duration. All following retrospective analyses were performed on the calibration data only. During the experiments the subjects were seated in a comfortable chair with arm rests. For the recording of the calibration data, every 4.5–6 seconds one of 3 different visual stimuli was presented, indicating a motor imagery task the subject should perform during the following 3–3.5 seconds. The randomized and balanced motor imagery tasks investigated for all subjects except ay were left hand (l), right hand (r), and right foot (f). Subject ay only performed left and right hand tasks. Between 120 and 200 trials were performed during the calibration phase of one session for each motor imagery class. Brain activity was recorded from the scalp with multi-channel EEG amplifiers using at least 64 channels. Besides EEG channels, we recorded the electromyogram (EMG) from both forearms and the right lower leg as well as horizontal and vertical electrooculogram (EOG) from the eyes. The EMG and EOG channels were exclusively used to ensure that the subjects performed no real limb or eye movements correlated with the mental tasks. As their activity can directly (via artifacts) or indirectly (via afferent signals from muscles and joint receptors) be reflected in the EEG channels, it could otherwise be picked up by the classifier.
Controlling EMG and EOG ensured that the classifier operated on true EEG signals only. Data preprocessing and Classification The time series data of each trial was windowed from 0.5 seconds after cue to 3 seconds after cue. The data of this interval was band-pass filtered between either 9 Hz – 25 Hz or 10 Hz – 25 Hz, depending on the signal characteristics of the subject. In any case the chosen spectral interval comprised the subject-specific frequency bands that contained motor-related activity. For each subject a subset of EEG channels was determined that had been recorded for all of the subject’s sessions. These subsets typically contained 40 to 45 channels which were densely located (according to the international 10-20 system) over the more central areas of the scalp (see scalp maps in following sections). The EEG channels of each subject were reduced to the determined subset before proceeding with the calculation of Common Spatial Patterns (CSP) for different (subject-specific) binary classification tasks. After projection onto the CSP filters, the band power was estimated by taking the log-variance over time. Finally, a linear discriminant analysis (LDA) classifier was applied to the best discriminable two-class combination. 3 A closer look at the CSP parameter space 3.1 Introduction of Common Spatial Patterns (CSP) The common spatial pattern (CSP) algorithm is very useful for calculating spatial filters that detect ERD/ERS effects ([15]) and can be applied to ERD-based BCIs, see [11]. It has been extended to multi-class problems in [14], and further extensions and robustifications concerning a simultaneous optimization of spatial and frequency filters were presented in [16, 17, 18]. Given two distributions in a high-dimensional space, the (supervised) CSP algorithm finds directions (i.e., spatial filters) that maximize variance for one class and simultaneously minimize variance for the other class.
After having band-pass filtered the EEG signals to the rhythms of interest, high variance reflects a strong rhythm and low variance a weak (or attenuated) rhythm. Let us take the example of discriminating left hand vs. right hand imagery. The filtered signal corresponding to the desynchronization of the left hand motor cortex is characterized by a strong motor rhythm during imagination of right hand movements (left hand is in idle state), and by an attenuated motor rhythm during left hand imagination. This criterion is exactly what the CSP algorithm optimizes: maximizing variance for the class of right hand trials and at the same time minimizing variance for left hand trials.

Figure 1: Left: Non-euclidean distance matrix for 78 CSP filters of imagined left hand and foot movement. Right: Scatterplot of the first vs. second dimension of CSP filters after Multi-Dimensional Scaling (MDS). Filters that minimize the variance for the imagined left hand are plotted as red crosses, foot movement imagery filters are shown as blue dots. Cluster centers detected by IBICA are marked with magenta circles. Both figures show data from al.

Furthermore the CSP algorithm calculates the dual filter that will focus on the area of the right hand, and it will even calculate several filters for both optimizations by considering the remaining orthogonal subspaces. Let Σi be the covariance matrix of the trial-concatenated matrix of dimension [channels × concatenated time-points] belonging to the respective class i ∈ {1,2}. The CSP analysis consists of calculating a matrix Q and a diagonal matrix D with elements in [0,1] such that

QΣ1Q⊤ = D and QΣ2Q⊤ = I − D.   (1)

This can be solved as a generalized eigenvalue problem.
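As a sketch of how equation (1) can be solved in practice (not the authors' code; the function name and array conventions are my own choices), one can pose it as a symmetric-definite generalized eigenvalue problem with matrix pair (Σ1, Σ1 + Σ2) and let SciPy's `eigh` do the work:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=3):
    """CSP spatial filters for two classes of band-pass filtered EEG.

    X1, X2: trial-concatenated arrays of shape (channels, time-points).
    Returns Q (rows are filters) and eigenvalues d in [0, 1] such that
    Q @ cov(X1) @ Q.T = diag(d) and Q @ cov(X2) @ Q.T = diag(1 - d).
    """
    S1, S2 = np.cov(X1), np.cov(X2)
    # Generalized eigenproblem S1 w = d (S1 + S2) w. eigh normalizes the
    # eigenvectors so that w.T @ (S1 + S2) @ w = 1, which gives exactly
    # Q S1 Q^T = D and Q S2 Q^T = I - D from equation (1).
    d, W = eigh(S1, S1 + S2)          # eigenvalues returned in ascending order
    keep = np.r_[np.arange(n_pairs),  # smallest d: filters for class 2
                 np.arange(len(d) - n_pairs, len(d))]  # largest d: class 1
    return W[:, keep].T, d[keep]
```

Per-trial features would then be the log-variance of the projected signals, e.g. `np.log(np.var(Q @ trial, axis=1))`, fed into LDA as described in section 2.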
The projection that is given by the i-th row of matrix Q has a relative variance of di (the i-th element of D) for trials of class 1 and relative variance 1 − di for trials of class 2. If di is near 1 the filter given by the i-th row of Q maximizes variance for class 1, and since 1 − di is near 0, minimizes variance for class 2. Typically one would retain projections corresponding to the three highest eigenvalues di, i.e., CSP filters for class 1, and projections corresponding to the three lowest eigenvalues, i.e., CSP filters for class 2. 3.2 Comparison of CSP filters The results of the CSP algorithm are solutions of a generalized eigenvalue problem, and every multiple of an eigenvector is again a solution. If we want to compare different CSP filters, we must therefore keep in mind that all points on the line through a CSP filter point and the origin (except the origin itself) are identified with each other. More precisely, it is sufficient to consider only normalized CSP vectors on the (#channels − 1)-dimensional hypersphere. This suggests that the CSP space is inherently non-euclidean. As a more appropriate metric between two points c1 and c2 in this space, we calculated the angle between the two lines corresponding to these points:

m(c1, c2) = arccos( (c1 · c2) / (|c1| |c2|) )

When applying this measure to a set of CSP filters (ci)i≤n, one can generate the distance matrix D = (m(ci,cj))i,j≤n, which can then be used to find prototypical examples of CSP filters. Fig.1 shows an example of a distance matrix for 78 CSP filters for the discrimination of the variance during imagined left hand movement and foot movement.
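A minimal implementation of this metric and the resulting distance matrix might look as follows (function names are mine; the clip only guards against rounding slightly outside [−1, 1]):

```python
import numpy as np

def csp_angle(c1, c2):
    """Angle m(c1, c2) between two CSP filter vectors, as in section 3.2."""
    cos_m = np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))
    return np.arccos(np.clip(cos_m, -1.0, 1.0))

def distance_matrix(filters):
    """D = (m(ci, cj)) for a sequence of CSP filter vectors."""
    n = len(filters)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = csp_angle(filters[i], filters[j])
    return D
```

Because the metric normalizes by the vector lengths, any positive rescaling of a filter leaves all distances unchanged, which is exactly the identification of points along a line through the origin described above.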
Based on the left hand signals, three CSP filters showing the lowest eigenvalues were chosen for each of the 13 sessions. The same number of 3×13 filters were chosen for the foot signals. The filters are arranged in groups according to the relative magnitude of their eigenvalues, i.e., filters with the largest eigenvalues are grouped together, then filters with the second largest eigenvalues etc. The distance matrix in Fig.1 shows a block structure which reveals that the filters of each group have low distances amongst each other as compared to the distances to members of other groups. This is especially true for filters for the minimization of variance in left hand trials.

Figure 2: Dendrogram of a hierarchical cluster tree for the CSP filters of left hand movement imagery (dashed red lines) and foot movement imagery (solid blue lines). Cluster centers detected by IBICA are used as CSP prototypes. They are marked with magenta arrows.

3.3 Finding Clusters in CSP space The idea to find CSP filters that recur in the processing of different sessions of a single subject is very appealing, since these filters can be re-used for efficient classification of unseen data. As an example of clustered parameters, Fig.2 shows a hierarchical clustering tree (see [19]) of CSP filters of different sessions for subject al. Single branches of the tree form distinct clusters, which are also clearly visible in a projection onto the first Multi-Dimensional Scaling components in Fig.1 (for MDS, see [20]). The proposed metric of section 3.2 coincides with the metric used for Inlier-Based Independent Component Analysis (IBICA, see [12, 13]).
This method was originally intended to find estimators of the super-Gaussian source signals from a mixture of signals. By projecting the data onto the hypersphere and using the angle distance, it has been demonstrated that the correct source signals can be found even in high-dimensional data. The key ingredient of this method is the robust identification of inlier points, as can be done with the γ-index (see [21]), which is defined as follows: Let z ∈ {c1,...,cn} be a point in CSP-space, and let nn_1(z),...,nn_k(z) be the k nearest neighbors of z, according to the distance m. We then call the average distance of z to its neighbors the γ-index of z, i.e.

γ(z) = (1/k) ∑_{j=1}^{k} m(z, nn_j(z)).

If z lies in a densely populated region of the hypersphere, then the average distance to its neighbors is small, whereas if it lies in a sparse region, the average distance is high. The data points with the smallest γ are good candidates for prototypical CSP filters since they are similar to other filters in the comparison set. This suggests that these filters are good solutions in a number of experiments and are therefore robust against changes in the data such as outliers, variations in background noise etc. 4 Competing analysis methods: How much training is needed? Fig.3 shows an overview of the validation methods used for the algorithms under study. The left part shows validation methods which mimic the following BCI scenario: a new session starts and no data has been collected yet.

Figure 3: Overview of the presented training and testing modes for the example of four available sessions. The left part shows a comparison of ordinary CSP with three methods that do not require calibration. The validation scheme in the right part compares CSP with three adaptive methods. See text for details.

The top row represents data of all sessions in original order. Later rows describe different data splits for the training of the CSP filters and LDA (both depicted in blue solid lines) and for the testing of the trained algorithms on unseen data (green dashed lines). The ordinary CSP method does not take any historical data from prior sessions into account (second row). It uses training data only from the first half of the current session. This serves as a baseline to show the general quality of the data, since half of the session data is generally enough to train a classifier that is well adapted to the second half of the session. Note that this evaluation only corresponds to a real BCI scenario where many calibration trials of the same day are available. 4.1 Zero training methods This is contrasted with the following rows, which show the exclusive use of historic data in order to calculate LDA and one single set of CSP filters from the collected data of all prior sessions (third row), or calculate one set of CSP filters for each historic session and derive prototypical filters from this collection as described in section 3.3 (fourth row), or use a combination of rows three and four that results in a concatenation of CSP filters and derived CSP prototypes (fifth row). Feature concatenation is an effective method that has been shown to improve CSP-based classifiers considerably (see [22]). 4.2 Adaptive training methods The right part of Fig.3 expands the training sets of rows three, four and five by the first 10, 20 or 30 trials per class of the data of the new session. In the methods of rows 4 and 5, only LDA profits from the new data, whereas CSP prototypes are calculated exclusively on historic data as before.
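Looking back at section 3.3, the γ-index and the resulting prototype selection can be sketched as follows (a minimal illustration working from a precomputed distance matrix; function names are mine, not the paper's):

```python
import numpy as np

def gamma_index(D, k):
    """gamma(z) = average distance of each point to its k nearest neighbours,
    computed from a symmetric distance matrix D with zero diagonal."""
    n = D.shape[0]
    g = np.empty(n)
    for i in range(n):
        others = np.delete(D[i], i)      # exclude the self-distance
        g[i] = np.sort(others)[:k].mean()
    return g

def select_prototypes(D, k, n_proto):
    """Points with the smallest gamma lie in dense regions -> CSP prototypes."""
    return np.argsort(gamma_index(D, k))[:n_proto]
```

Filters sitting in dense regions of the hypersphere recur across sessions and therefore receive a small γ, which is why sorting by γ surfaces the robust prototypes.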
This approach is compared against the ordinary CSP approach that now only uses the same small amount of training data from the new session. This scheme, as well as the one presented in section 4.1, has been cross-validated such that each available session was used as a test session instead of the last one. 5 Results The underlying question of this paper is whether information gathered from previous experimental sessions can prove its value in a new session. In an ideal case existing CSP filters and LDA classifiers could be used to start the feedback phase of the new session immediately, without the need to collect new calibration data.

Subjects       aw    al    cm    ie    ay    ch
Classes        LF    RF    LF    LR    LR    LR
Ordinary CSP   5.0   2.7   11.8  16.2  11.7  6.2
HIST           10.1  2.9   23.0  26.0  13.3  6.9
PROTO          9.9   3.1   21.5  26.2  10.0  11.4
CONCAT         8.9   2.7   19.5  23.7  12.4  7.4
Sessions       13    7     4     4     5     4

Table 1: Results of Zero-Training modes. All classification errors are given in %. While the ordinary CSP method uses half of the new session for training, the three methods HIST, PROTO and CONCAT exclusively use historic data for the calculation of CSP filters and LDA (as described on the left side of Fig.3). Amongst them, CONCAT performs best in four of the six subjects. For subjects al, ay and ch its result is even comparable to that of ordinary CSP.

Figure 4: Incorporating more and more data from the current session (10, 20 or 30 trials per class), the classification error decreases for all of the four methods described on the right side of Fig.3. The three methods HIST, PROTO and CONCAT clearly outperform ordinary CSP. Interestingly, the best zero-training method CONCAT is only outperformed by ordinary CSP if the latter has a head start of 30 trials per class.

We checked for the validity of this scenario based on the data described in section 2.
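The session-wise cross-validation described above can be sketched as a leave-one-session-out loop (the train/test callables are placeholders for any of the methods, e.g. HIST, PROTO or CONCAT; this is not the authors' code):

```python
def session_transfer_error(sessions, train, test):
    """Each session takes a turn as the 'new' session; the classifier is
    built from the remaining (historic) sessions only, as in section 4.1."""
    errors = []
    for i, new_session in enumerate(sessions):
        historic = sessions[:i] + sessions[i + 1:]
        model = train(historic)          # e.g. compute CSP prototypes + LDA
        errors.append(test(model, new_session))
    return sum(errors) / len(errors)
```

For the adaptive variants of section 4.2, `train` would additionally receive the first 10, 20 or 30 trials per class of the held-out session.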
Table 1 shows the classification results for the different classification methods under the Zero-training validation scheme. For subjects al, ay and ch, the classification error of CONCAT is of the same magnitude as that of the ordinary (training-based) CSP approach. For the other three subjects, CONCAT outperforms the methods HIST and PROTO. Although the ideal case is not reached for every subject, the table shows that our proposed methods provide a decent step towards the goal of Zero-training for BCI. Another way to at least reduce the necessary preparation time for a new experimental session is to record only very few new trials and combine them with data from previous sessions in order to get a quicker start. We simulate this strategy by allowing the new methods HIST, PROTO and CONCAT to also use the first 10, 20 or 30 trials per class of the new session. The baseline to compare their performance against would be a BCI system trained only on these initial trials. This comparison is depicted in Fig. 4. Here the influence of the number of initial training trials becomes visible. If no new data is available, the ordinary classification approach of course cannot produce any output, whereas the history-based methods, e.g. CONCAT, already generate a stable estimation of the class labels. All methods gain performance in terms of smaller test errors as more and more trials are added. Only after training on at least 30 trials per class does ordinary CSP reach the classification level that CONCAT had already shown without any training data of the current session. Fig.5 shows some prototypical CSP filters as detected by IBICA clustering for subject al and left hand vs. foot motor imagery. All filters have small support (i.e., many entries are close to 0), and the few large entries are located over neurophysiologically important areas: Filters 1–2 and 4–6 cover the motor cortices corresponding to imagined hand movements, while filter 3 focuses on the central foot area.
This shows that the cluster centers are spatial filters that meet our neurophysiological expectations, since they are able to capture the frequency power modulations over relevant electrodes, while masking out unimportant or noisy channels.

Figure 5: First six CSP prototype filters determined by IBICA for al.

6 Discussion and Conclusion Advanced BCI systems (e.g. BBCI) have recently acquired the ability to dispense with extensive subject training and now allow us to infer a blueprint of the subject’s volition from a short calibration session of approximately 30 min. This became possible through the use of modern machine learning technology. The next step along this line to make BCI more practical is to strive for zero calibration time. Certainly it will not be realistic to achieve this goal for arbitrary BCI novices; rather, in this study we have concentrated on experienced BCI users (with 4 and more sessions) and discussed algorithms to re-use their classifiers from prior sessions. Note that the construction of a classifier that is invariant against session-to-session changes, say, due to different vigilance, focus or motor imagination across sessions, is a hard task. Our contribution shows that experienced BCI subjects do not necessarily need to perform a new calibration period in a new experiment. By analyzing the CSP parameter space, we could reveal an appropriate characterization of CSP filters. By finding clusters of CSP parameters from old sessions, novel prototypical CSP filters can be derived, whose neurophysiological validity could be shown exemplarily. The concatenation of these prototype filters with some CSP filters trained on the same amount of data results in a classifier that not only performs comparably to the presented ordinary CSP approach (trained on a large amount of data from the same session) in half of the subjects, but also outperforms ordinary CSP considerably when only few data points are at hand.
This means that experienced subjects are predictable to the extent that they do not require calibration anymore. We expect that these results can be even further optimized by e.g. hand selecting the filters for PROTO, by adjusting for the distribution changes in the new session, e.g. by adapting the LDA as presented in [23], or by applying advanced covariate-shift compensation methods like [24]. Future work will aim to extend the presented zero training idea towards BCI novices. References [1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain-computer interfaces for communication and control”, Clin. Neurophysiol., 113: 767–791, 2002. [2] N. Birbaumer, A. Kübler, N. Ghanayim, T. Hinterberger, J. Perelmouter, J. Kaiser, I. Iversen, B. Kotchoubey, N. Neumann, and H. Flor, “The Thought Translation Device (TTD) for Completely Paralyzed Patients”, IEEE Trans. Rehab. Eng., 8(2): 190–193, 2000. [3] G. Pfurtscheller and F. H. L. da Silva, “Event-related EEG/MEG synchronization and desynchronization: basic principles”, Clin. Neurophysiol., 110(11): 1842–1857, 1999. [4] B. Blankertz, G. Curio, and K.-R. Müller, “Classifying Single Trial EEG: Towards Brain Computer Interfacing”, in: T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Inf. Proc. Systems (NIPS 01), vol. 14, 157–164, 2002. [5] L. Trejo, K. Wheeler, C. Jorgensen, R. Rosipal, S. Clanton, B. Matthews, A. Hibbs, R. Matthews, and M. Krupka, “Multimodal Neuroelectric Interface Development”, IEEE Trans. Neural Sys. Rehab. Eng., (11): 199–204, 2003. [6] L. Parra, C. Alvino, A. C. Tang, B. A. Pearlmutter, N. Yeung, A. Osman, and P. Sajda, “Linear spatial integration for single trial detection in encephalography”, NeuroImage, 7(1): 223–230, 2002. [7] W. D. Penny, S. J. Roberts, E. A. Curran, and M. J. Stokes, “EEG-Based Communication: A Pattern Recognition Approach”, IEEE Trans. Rehab. Eng., 8(2): 214–215, 2000. [8] B. Blankertz, G. Dornhege, M. Krauledat, K.-R.
Müller, V. Kunzmann, F. Losch, and G. Curio, “The Berlin Brain-Computer Interface: EEG-based communication without subject training”, IEEE Trans. Neural Sys. Rehab. Eng., 14(2), 2006, in press. [9] G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, H. Ramoser, A. Schlögl, B. Obermaier, and M. Pregenzer, “Current Trends in Graz Brain-computer Interface (BCI)”, IEEE Trans. Rehab. Eng., 8(2): 216–219, 2000. [10] M. Schröder, T. N. Lal, T. Hinterberger, M. Bogdan, N. J. Hill, N. Birbaumer, W. Rosenstiel, and B. Schölkopf, “Robust EEG Channel Selection Across Subjects for Brain Computer Interfaces”, EURASIP Journal on Applied Signal Processing, Special Issue: Trends in Brain Computer Interfaces, 19: 3103–3112, 2005. [11] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, “Optimal spatial filtering of single trial EEG during imagined hand movement”, IEEE Trans. Rehab. Eng., 8(4): 441–446, 2000. [12] F. C. Meinecke, S. Harmeling, and K.-R. Müller, “Robust ICA for Super-Gaussian Sources”, in: C. G. Puntonet and A. Prieto, eds., Proc. Int. Workshop on Independent Component Analysis and Blind Signal Separation (ICA2004), 2004. [13] F. C. Meinecke, S. Harmeling, and K.-R. Müller, “Inlier-based ICA with an application to super-imposed images”, Int. J. of Imaging Systems and Technology, 2005. [14] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, “Boosting bit rates in non-invasive EEG single-trial classifications by feature combination and multi-class paradigms”, IEEE Trans. Biomed. Eng., 51(6): 993–1002, 2004. [15] Z. J. Koles and A. C. K. Soong, “EEG source localization: implementing the spatio-temporal decomposition approach”, Electroencephalogr. Clin. Neurophysiol., 107: 343–352, 1998. [16] G. Dornhege, B. Blankertz, M. Krauledat, F. Losch, G. Curio, and K.-R. Müller, “Combined optimization of spatial and temporal filters for improving Brain-Computer Interfacing”, IEEE Trans. Biomed. Eng., 2006, accepted. [17] S. Lemm, B. Blankertz, G. Curio, and K.-R.
Müller, “Spatio-Spectral Filters for Improved Classification of Single Trial EEG”, IEEE Trans. Biomed. Eng., 52(9): 1541–1548, 2005. [18] R. Tomioka, G. Dornhege, G. Nolte, K. Aihara, and K.-R. Müller, “Optimizing Spectral Filter for Single Trial EEG Classification”, in: Lecture Notes in Computer Science, Springer-Verlag Heidelberg, 2006, to be presented at 28th Annual Symposium of the German Association for Pattern Recognition (DAGM 2006). [19] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley & Sons, 2nd edn., 2001. [20] T. Cox and M. Cox, Multidimensional Scaling, Chapman & Hall, London, 2001. [21] S. Harmeling, G. Dornhege, D. Tax, F. C. Meinecke, and K.-R. Müller, “From outliers to prototypes: ordering data”, Neurocomputing, 2006, in press. [22] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, “Combining Features for BCI”, in: S. Becker, S. Thrun, and K. Obermayer, eds., Advances in Neural Inf. Proc. Systems (NIPS 02), vol. 15, 1115–1122, 2003. [23] P. Shenoy, M. Krauledat, B. Blankertz, R. P. N. Rao, and K.-R. Müller, “Towards Adaptive Classification for BCI”, J. Neural Eng., 3: R13–R23, 2006. [24] M. Sugiyama and K.-R. Müller, “Input-Dependent Estimation of Generalization Error under Covariate Shift”, Statistics and Decisions, 2006, to appear.
Theory and Dynamics of Perceptual Bistability Paul R. Schrater Departments of Psychology and Computer Sci. & Eng. University of Minnesota Minneapolis, MN 55455 schrater@umn.edu Rashmi Sundareswara Department of Computer Sci. & Eng. University of Minnesota sundares@cs.umn.edu Abstract Perceptual Bistability refers to the phenomenon of spontaneously switching between two or more interpretations of an image under continuous viewing. Although switching behavior is increasingly well characterized, the origins remain elusive. We propose that perceptual switching naturally arises from the brain’s search for best interpretations while performing Bayesian inference. In particular, we propose that the brain explores a posterior distribution over image interpretations at a rapid time scale via a sampling-like process and updates its interpretation when a sampled interpretation is better than the discounted value of its current interpretation. We formalize the theory, explicitly derive switching rate distributions and discuss qualitative properties of the theory including the effect of changes in the posterior distribution on switching rates. Finally, predictions of the theory are shown to be consistent with measured changes in human switching dynamics to Necker cube stimuli induced by context. 1 Introduction Our visual system is remarkably good at producing consistent, crisp percepts of the world around us, in the process hiding interpretation uncertainty. Perceptual bistability is one of the few circumstances where ambiguity in the visual processing is exposed to conscious awareness. Spontaneous switching of perceptual states frequently occurs during continuous viewing of an ambiguous image, and when a new interpretation of a previously stable stimulus is revealed (as in the sax/girl in figure 1a), spontaneous switching begins to occur [?]. Moreover, although perceptual switching can be modulated by conscious effort [?, ?], it cannot be completely controlled.
Figure 1: Examples of ambiguous figures: (a) can be interpreted as a woman’s face or a saxophone player. (b) can be interpreted as a cube viewed from two different viewpoints. Stimuli that produce bistability are characterized by having several distinct interpretations that are in some sense equally plausible. Given the successes of Bayesian inference as a model of perception (for instance [?, ?, ?]), these observations suggest that bistability is intimately connected with making perceptual decisions in the presence of a multi-modal posterior distribution, as previously noted by several authors [?, ?]. However, typical Bayesian models of perceptual inference have no dynamics, and probabilistic inference per se provides no reason for spontaneous switching, raising the possibility that switching stems from idiosyncrasies in the brain’s implementation of probabilistic inference, rather than from general principles. In fact, most explanations of bistability have been historically rooted in proposals about the nature of neural processing of visual stimuli, involving low-level visual processes like retinal adaptation and neural fatigue [?, ?, ?]. However, the abundance of behavioral and brain imaging data that show high-level influences on switching (like intentional control which can produce 3-fold changes in alternation rate [?]) has revised current views toward neural hypotheses involving combinations of both sensory and higher order cortical processing [?]. The goal of this paper is to provide a simple explanation for the origins of bistability based on general principles that can potentially handle both top-down and bottom-up effects. 2 Basic theory The basic ideas that constitute our theory are simple and partly form standard assumptions about perceptual processing. The core assumptions are: 1. Perception performs Bayesian inference by exploring and updating the posterior distribution across time by a kind of sampling process (e.g.
[?]). 2. Conscious percepts result from a decision process that picks the interpretations by finding sample interpretations with the highest posterior probability (possibly weighted by the cost of making errors). 3. The results of these decisions and their associated posterior probabilities are stored in memory until a better interpretation is sampled. 4. The posterior probability associated with the interpretation in memory decays with time. The intuition behind the model is that most percepts of objects in a scene are built up across a series of fixations. When an object previously fixated is eccentrically viewed or occluded, the brain should store the previous interpretation in memory until better data comes along or the memory becomes too old to be trusted. Finally, the interpretation space required for direct Bayesian inference is too large for even simple images, but sampling schemes may provide a simple way to perform approximate inference. The theory provides a natural interface to interpret both high-level and low-level effects on bistability, because any event that has an impact on the relative heights or positions of the modes in the posterior can potentially influence durations. For example, patterns of eye fixations have long been known to influence the dominant percept[?]. Because eye movement events create sudden changes in image information, it is natural that they should be associated with changes in the dominant mode. Similarly, control of information via selective attention and changes in decision thresholds offer concrete loci for intentional effects on bistability. 3 Analysis To analyze the proposed theory, we need to develop temporal distributions for the maxima of a multimodal posterior based on a sampling process and describe circumstances under which a current sample will produce an interpretation better than the one in memory. We proceed as follows. 
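Before the formal analysis, the four core assumptions can be illustrated with a minimal simulation. This is our own sketch, not the authors' code: the posterior is a made-up bimodal mixture over a one-dimensional interpretation variable, and the memory discount is a constant decrement per sample.

```python
import math
import random

# Minimal sketch of assumptions 1-4 (our illustration, not the authors'
# code): a made-up bimodal posterior over a 1-D interpretation variable,
# a sampling process, a best-sample memory, and a constant memory decay.

def log_posterior(theta):
    # Toy bimodal posterior: two narrow modes at -1 and +1.
    p = 0.5 * math.exp(-(theta - 1.0) ** 2 / 0.02) + \
        0.5 * math.exp(-(theta + 1.0) ** 2 / 0.02)
    return math.log(p + 1e-300)

def simulate(n_steps=20000, decay=0.001, seed=0):
    rng = random.Random(seed)
    mem_theta, mem_logp = 0.0, float("-inf")
    percept, switches = None, 0
    for _ in range(n_steps):
        # Assumption 1: explore the posterior by a sampling-like process.
        theta = rng.gauss(rng.choice([-1.0, 1.0]), 0.2)
        # Assumption 4: the stored posterior value decays with time.
        mem_logp -= decay
        # Assumptions 2-3: keep the best interpretation seen so far.
        if log_posterior(theta) > mem_logp:
            mem_theta, mem_logp = theta, log_posterior(theta)
            new_percept = 1 if mem_theta > 0 else 0
            if percept is not None and new_percept != percept:
                switches += 1  # a perceptual switch
            percept = new_percept
    return switches
```

With continual decay, the stored interpretation is eventually beaten by a sample from the opposite mode, so the reported percept alternates; this is the qualitative behavior the analysis makes precise.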
First we develop a general approximation to multi-modal posterior distributions that can vary over time, and analyze the probability that a sample from the posterior is close to maximal. We then describe how samples close to the max interact with a sample in memory with decay. A tractable approximation to a multi-modal distribution can be formed using a mixture of uni-modal distributions centered at each maximum:

P(θ_t | D_{0:t}) = P(D_t | θ_t) P(θ_t | D_{0:t−Δt}) / P(D_t | D_{0:t−Δt}) ≈ Σ_{i=1}^{#maxima} p_i(D_t | θ_t; θ*_{t,i}) P_i(θ | D_{0:t−Δt}; θ*_{t,i})   (1)

where θ_t is the vector of unknown parameters (e.g. shape for the Necker cube) at time t, θ*_{t,i} is the location of the maximum of the ith mode, D_t is the most recent data, D_{0:t−Δt} is the data history, and P_i(θ | D_{0:t−Δt}; θ*_{t,i}) is the predictive distribution (prior) for the current data based on recent experience¹. Near the maxima, the negative log of the uni-modal distributions can be expanded in a second-order Taylor series:

−L_i(θ_t | D_t) ≈ d_i² + k_i   (2)
  = (θ_t − θ*_{t,i})ᵀ I_i(θ*_{t,i} | D_{0:t}) (θ_t − θ*_{t,i}) + ½ log(|I_i^{−1}|) + c_i   (3)

where I_i(θ*_{t,i} | D_{0:t}) = ∂² log P(θ_t | D_{0:t}) / ∂θ∂θᵀ |_{θ*_{t,i}} is the observed information matrix and c_i = log P(θ*_{t,i} | D_{0:t}) represents the effect of the predictive prior on the posterior height at the ith mode. Thus, samples from a posterior mode will be approximately χ² distributed near the maximum, with effective degrees of freedom n given by the number of significant eigenvalues of I_i^{−1}. Essentially, n encodes the effective degrees of freedom in interpretation space².

3.1 Distribution of transition times

We assume that the perceptual interpretation is selected by a decision process that updates the interpretation in memory m_θ(t) whenever the posterior probability of the most recent sample exceeds both a decision threshold and the discounted probability of the sample in memory. Given these assumptions, we can approximate the probability distribution for update events.
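The χ² claim above can be checked numerically. The following is an illustrative toy check (not from the paper): for Gaussian samples around a mode with information matrix I = σ⁻² times the identity, d² = (θ − θ*)ᵀ I (θ − θ*) is χ² with n degrees of freedom, so its Monte Carlo mean should be close to n.

```python
import random

# Toy check (not from the paper): for Gaussian samples around a mode with
# information matrix I = (1/sigma^2) * identity, the squared distance
# d^2 = (theta - theta*)^T I (theta - theta*) is chi-square with n degrees
# of freedom, so its Monte Carlo mean should be close to n.
def mean_sq_distance(n, sigma=0.5, trials=50000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += sum((rng.gauss(0.0, sigma) / sigma) ** 2 for _ in range(n))
    return total / trials
```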
Assuming the sampling forms a locally stationary process d_i²(t), update events involving entry into mode i are first-passage times T_i of d_i²(t) below both the minimum of the current memory sample ω_t and the decision threshold ξ:

T_i(ξ, ω_t) = min{ t : δ^i_t ≤ min{ξ, ω_t} }

where δ^i_t = d_i²(t) − k_i, time t is the duration since the last update event, and ω_t = log P(m_θ(t) | D_{0:t}) is the log posterior of the sample in memory at time t. Let M^i_t = inf_{0≤s≤t} δ^i_s. The probability of waiting at least t for an update event is related to the minima of the process by:

P(T_i(ξ, ω_t) < t) = P(M^i_t < min{ξ, ω_t})

This probability can be expressed as:

P(M^i_t < min{ξ, ω_t}) = ∫₀ᵗ p(δ^i_τ < ω_t) p(ω_t < ξ) P(i|τ) dτ + ∫₀ᵗ p(δ^i_τ < ξ) (1 − P(ω_t < ξ)) P(i|τ) dτ   (4)

where P(i|t) = P(θ_t ∈ S_i) denotes the probability that a sample drawn between times 0 and t is in the support S_i of the ith mode. To generate tractable expressions from equation (4), we make the following assumptions.

Memory distribution. Assume that the memory decay process is slow relative to the sampling events, and that the decay process can be modeled as a random walk in the interpretation space, m_θ(t) = m_θ(0) + Σ_{0≤τ_i≤t} ε_θ(τ_i), where τ_i are sample times and ε_θ are small disturbances with zero mean and variance σ, which we assume to be small. Because variances add, the average effect on the distance ω_t is a linear increase: ω_t = ω_0 + ρσt, where ρ is the sampling rate. These disturbances could represent changes in the location of the maxima of the posterior due to the incorporation of new data, neural noise, or even discounting (note that a linearly increasing ω_t corresponds to exponential or multiplicative discounting in probability).

¹Because time is critical to our arguments, we assume that the posterior is updated across time (and hence new data) using a process that resembles Bayesian updating.
²For the Necker cube, the interpretation space can be thought of as the depths of the vertices.
A strong prior assumption that world angles between vertices are close to 90° produces two dominant modes in the posterior that correspond to the typical interpretations. Within a mode, the brain must still decide whether the vertices conform exactly to a cube. Thus for the Necker cube, n might be as high as 8 (one depth value per vertex) or as low as 1 (all vertices fixed once the front corner depth is determined).

To understand the behavior of this memory process, notice that every m_θ(0) must be within distance ξ of the maximum of the posterior for an update to occur. Due to the properties of extrema of distributions of χ² random variables, an m_θ(0) will be (in expectation) a characteristic distance µ_m(ξ) below ξ and for t > 0 drifts with linear dynamics³. This suggests the approximation p(ω_t < ξ) ≈ δ(µ_m + ρσt − ω_t), which can be formally justified because p(ω_t < ξ) will be highly peaked with respect to the distribution of the sampling process p(δ^i_τ). Finally, assuming slow drift means (1 − P(ω_t < ξ)) ≈ 0 on the time scale at which transitions occur⁴. Under these assumptions, equation (4) reduces to:

P(M^i_t < min{ξ, ω_t}) = ∫₀ᵗ p(δ^i_τ < ω_t) δ(µ_m + ρσt − ω_t) P(i|τ) dτ   (5)
  ≈ P(M^i_t < µ_m + ρσt) P(i)   (6)

where P(i) is the average frequency of sampling from the ith mode.

Extrema of the posterior sampling process. If the sampling process has no long-range temporal dependence, then under mild assumptions the distribution of extrema converges in distribution⁵ to one of three characteristic forms that depend only on the domain of the random variable[?]. For χ² samples, the distribution of minima converges to

P(M^i_t ≤ b) = 1 − exp(−cNb^{a−1})

where N is the number of samples, c(n) = 2^{−a+1}/Γ(a), and a(n) = n/2. Set N = ρt and let ρ = 1 for convenience, where ρ is the effective sampling rate, and equation (6)
can be written as:

P(T_i < t) = P(M^i_t ≤ min{ξ, ω_t}) ≈ P(M^i_t < µ_m + σt) P(i)   (7)
  = [1 − exp(−c t (µ_m + σt)^{a−1})] P(i)   (8)

The probability distribution shows a range of behavior depending on the values of a = n/2 and µ_m(ξ). In particular, for n > 4 and µ_m(ξ) relatively small, the distribution has gamma-like behavior, in which new memory update transitions are suppressed near recent transitions. For n = 2, or for µ_m(ξ) large, the above equation reduces to an exponential. This behavior shows the effect of the decision threshold: without a decision threshold, the asymptotic behavior of simple sampling schemes will generate approximately exponentially distributed update event times, as a consequence of extreme value theory. Finally, for n = 1 and small µ_m(ξ), the distribution becomes Cauchy-like with extremely long tails. See figure 2 for example distributions. Note that the time scale of events can be arbitrarily controlled by appropriately selecting ρ (which controls the time scale of the sampling process) and σ (which controls the time scale of the memory decay process).

Effects of posterior parameters on update events. The memory update distributions are affected primarily by two factors: the log posterior heights k_i and their differences ∆k_ij = k_i − k_j, and the effective number of degrees of freedom per mode, n.

Effect of k_i, ∆k_ij. The variable ∆k_ij has possible effects both on the probability that a mode is sampled and on the temporal distributions. When the modes are strongly peaked (and the sampling procedure is unbiased), log P(i) ≈ ∆k_ij. Secondly, ∆k_ij effectively sets different thresholds for each mode, because memory update events occur when:

δ^i_t = d_i²(t) − k_i < min{ω_t, ξ}

Increasing the effective threshold for mode i makes updates of type i more frequent, and should drive the temporal dynamics of the dominant mode toward exponential.
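A short numerical sketch of equation (8) and the first-passage construction behind it (all constants here are illustrative, not fitted values): the closed-form CDF uses a = n/2 and c = 2^{1−a}/Γ(a), and the Monte Carlo draws a χ²(n) sample distance each step until it falls below the drifting memory bound µ_m + σt.

```python
import math
import random

# Illustrative constants, not fitted values.  (i) Closed-form CDF from
# equation (8), up to the mode frequency P(i), with a = n/2 and
# c = 2**(1-a) / Gamma(a).  (ii) Monte Carlo of the first-passage
# construction: a chi-square(n) sample distance must fall below the
# drifting memory bound mu_m + sigma * t.
def cdf_update_time(t, n, mu_m=0.5, sigma=0.01):
    a = n / 2.0
    c = 2.0 ** (1.0 - a) / math.gamma(a)
    return 1.0 - math.exp(-c * t * (mu_m + sigma * t) ** (a - 1.0))

def mean_update_time(n, mu_m=0.1, sigma=0.01, trials=300, seed=2):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        t = 0
        while sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n)) >= mu_m + sigma * t:
            t += 1
        total += t
    return total / trials
```

For n = 2 (a = 1) the closed form is exactly exponential; larger n suppresses short update times, producing gamma-like behavior and longer mean durations between updates.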
Finally, if the posterior becomes more peaked while the threshold remains fixed, the update rates should increase and the temporal distributions will move toward exponential. If we assume increased viewing time makes the posterior more peaked, then our model predicts the common finding of increased transition rates with viewing duration.

³In the simulations, µ_m is chosen as the expected value of the set of events below the threshold ξ.
⁴Conversely, fast drift in the limit means P(ω_t < ξ) ≈ 0, which results in transitions entirely determined by the minima of the sampling process and ξ.
⁵This corresponds to the limit assertion sup_b |P(M_t ≥ b) − exp(−λ(b, t)t)| → 0 as t → ∞.

Figure 2: Examples of cumulative distribution functions of memory update times (probability that the time to the next update exceeds t, in number of samples) for effective degrees of freedom n = 2, 4, 8. Solid curves are generated by simulating the sampling-with-decision process described in the text. Dashed lines represent theoretical curves based on the approximation in equation (8), showing the quality of the approximation.

Effect of n. One of the surprising aspects of the theory above is the strong dependence on the effective number of degrees of freedom. The theory makes a strong prediction that stimuli with more interpretation degrees of freedom will have longer durations between transitions, which appears to be qualitatively true across both rivalry and bistability experiments[?, ?] (depending, of course, on how one interprets the number of degrees of freedom).

Relating theory to behavioral data via an induced semi-Markov renewal process. Assuming that memory update events involving transitions to the same mode are not perceptually accessible, only update events that switch modes are potentially measurable. However, the process described above fits the description of a generator for a semi-Markov renewal process.
A semi-Markov renewal process involves Markov transitions between discrete states i and j determined by a matrix with entries P_ij, coupled with random durations spent in each state sampled from time distributions F_ij(t). The product of these distributions, Q_ij(t) = P_ij F_ij(t), is the generator of the process: it describes the conditional probability of a first transition between states i → j in time less than t, given that first entry into state i occurs at time t = 0. In the theory above, F_ij(t) = F_ii(t) = P(T_i < t), while P_ij = P_jj = P(j)⁶.

The main reason for introducing the notion of a renewal process is that it can be used to express the relationship between the theoretical distributions and observable quantities. The most commonly collected data are times between transitions and (possibly contingent) percept frequencies. Here we present results found in Ross[?]. Let the state s(t) = i refer to times when the memory process is in the support of mode i: m_θ(t) ∈ S_i at time t. The distribution of first transition times from state s = i can be expressed formally as a cumulative probability of first transition:

G_ij(t) = P(N_j(t) > 0 | s(0) = i) = P(T_j < t | s(0) = i)

where N_j(t) is the number of transitions into state j in time ≤ t and T_j is the time until the first memory update of type j. For two-state processes, only G_01(t) and G_10(t) are measurable. Let P(0) denote the probability of sampling from mode 0. The relationship between the generating process and the distribution of first transitions is given by:

G_01(t) = ∫₀ᵗ G_01(t − τ) dQ_00(τ) + Q_01(t)   (9)
G_01(t) = P(0) ∫₀ᵗ G_01(t − τ) [dP(T_0 < τ)/dτ] dτ + P(1) P(T_0 < t)   (10)

which appears to be solvable only numerically for the general form of our memory update transition functions; however, for the case in which P(T_0 < t) is exponential, G_01(t) is as well. Moreover, for gamma-like distributions, the convolution integral tends to increase the shape parameter, which means that gamma parameter estimates produced by fitting transition durations will overestimate the amount of 'memory' in the process⁷. Finally, note the limiting behavior as P(0) → 0: G_01(t) = P(T_0 < t), so that direct measurement of the temporal distributions is possible, but only for the (almost) suppressed perceptual state. Similar relationships exist for survival probabilities, defined as S_ij(t) = P(s(t) = j | s(0) = i).

⁶The independence relations are a consequence of an assumption of independence in the sampling procedure, and relaxing that assumption can produce state contingencies in Q_ij(t). Therefore, we do not consider this to be a prediction of the theory. For example, mild temporal dependence (e.g. MCMC-like sampling with large steps) can create contingencies in the frequency of sampling from the ith mode that will produce a non-independent transition matrix P_ij = P(θ_t ∈ S_i | θ_{t−ρΔt} ∈ S_j).

4 Experiments

In this section we investigate simple qualitative predictions of the theory: biasing perception toward one of the interpretations should produce a coupled set of changes in both percept frequencies and durations, under the assumption that perceptual biases result from differences in posterior heights. To bias perception of a bistable stimulus, we had observers view a Necker cube flanked by 'fields of cubes' that are perceptually unambiguous and match one of the two percepts (see figure 3). Subjects are typically biased toward seeing the Necker cube in the "looking down" state (65–70% response rates), and the context stimuli shown in figure 3a have little effect on Necker cube reversals. We found that the "looking up" context boosts "looking up" response rates from 30% to 55%.

4.1 Methods

Subjects' perceptual states while viewing the stimuli in figure 3 were collected using the methods described in [?]. Eye movement effects[?] were controlled by having observers focus on a tiny sphere in the center of the Necker cube, and attention was controlled using catch trials.
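The renewal relation (10) can be solved numerically by discretizing the convolution. The sketch below is our illustration, with made-up constants (P(0) = 0.7 and exponential durations with rate λ = 1); in this exponential case, standard renewal arguments give the closed form G01(t) = 1 − exp(−P(1)·λ·t), which makes a convenient check on the discretization.

```python
import math

# Numerical solution of the renewal relation (10) by discretizing the
# convolution (our sketch; constants are illustrative).  Durations are
# exponential, F0(t) = 1 - exp(-lam * t), the one case with a simple
# closed form to check against.
def g01(t_max, p0=0.7, lam=1.0, dt=0.01):
    n = int(round(t_max / dt))
    f0 = [1.0 - math.exp(-lam * k * dt) for k in range(n + 1)]
    w = [f0[j] - f0[j - 1] for j in range(1, n + 1)]  # increments of F0
    g = [0.0] * (n + 1)
    for k in range(1, n + 1):
        # G01(t) = p0 * int G01(t - tau) dF0(tau) + (1 - p0) * F0(t)
        conv = sum(g[k - j] * w[j - 1] for j in range(1, k + 1))
        g[k] = p0 * conv + (1.0 - p0) * f0[k]
    return g[n]
```

The discretization reproduces 1 − exp(−0.3·t) to within the step error, consistent with the statement that exponential durations yield an exponential G01.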
Base rates for reversals were established for each observer (18 total) in a training phase. Each observer then viewed 100 randomly generated context stimuli, and each stimulus was viewed long enough to acquire 10 responses (taking 10–12 sec on average). For ease of notation, we represent the "Looking down" condition as state 0 and the "Looking up" condition as state 1.

Figure 3: Examples of the two context conditions: (a) an instance of the "Looking down" context with the Necker cube in the middle; (b) an instance of the "Looking up" context with the Necker cube in the middle.

4.2 Results

We measured the effect of context on estimates of perceptual switching rates R_i = P(s(t) = i), first transition durations G_ij, and survival probabilities P_ii = P(s(t) = i | s(0) = i) by counting the number of events of each type. Additionally, we fit a semi-Markov renewal process Q_ij(t) = P_ij F_ij(t) to the data using a sampling-based procedure. The procedure is too complex to describe fully in this paper, so we give only a brief description. For ease of sampling, the F_ij(t) were gamma distributions with separate parameters for each of the four conditionals {00, 01, 10, 11}, resulting in 10 parameters overall. The process was fit by iteratively choosing parameter values for Q_ij(t), simulating response data, and measuring the mismatch between the simulated and human G_ij and P_ii distributions. The effect of context on G_ij and P_ii is shown in Fig. 4 and Fig. 5 for the "Looking down" and "Looking up" contexts respectively. The figures also show the maximum-likelihood fitted gamma functions.

⁷Gamma shape parameters are frequently interpreted as the number of events in some abstract Poisson process that must occur before transition.
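The paper's fitting procedure is sampling-based and more elaborate; as a simple stand-in (our sketch), the snippet below recovers gamma shape and scale from simulated durations by the method of moments (shape = mean²/variance, scale = variance/mean).

```python
import random

# Simple stand-in for the gamma-fitting step (the paper's procedure is
# sampling-based and more elaborate): recover gamma shape and scale from
# durations via the method of moments.
def fit_gamma_moments(durations):
    m = sum(durations) / len(durations)
    v = sum((d - m) ** 2 for d in durations) / (len(durations) - 1)
    return m * m / v, v / m  # (shape, scale)

# Example: simulate durations from a known gamma and recover its parameters.
rng = random.Random(3)
durations = [rng.gammavariate(2.5, 1.2) for _ in range(20000)]
shape_hat, scale_hat = fit_gamma_moments(durations)
```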
Testable predictions generated by simulating the memory process described above were verified, including changes in mean durations of about 2 sec, coupling of the duration distributions, and an increase in the underlying renewal process shape parameters when the percepts are closer to equally probable.

Figure 4: Data pooled across subjects for the "Looking down" context condition. (a) Probability of first transition and survival probability of the "Looking down" percept. (b) Probability of first transition and conditional survival probability of the "Looking up" percept. A semi-Markov renewal process with transition parameters P_ij, gamma means m_ij, and gamma variances v_ij was fit to all the data via maximum likelihood. The best-fit curves are superimposed on the data.

Figure 5: Same as figure 4, but for the "Looking up" context condition.

5 Discussion/Conclusions

Although [?] also presents a theory of average transition times in bistability based on random processes and a multi-modal posterior distribution, their theory is fundamentally different, as it derives switching events from tunneling probabilities that arise from input noise. Moreover, their theory predicts increasing transition times as the posterior becomes increasingly peaked, exactly opposite to our predictions.
In conclusion, we have presented a novel theory of perceptual bistability based on simple assumptions about how the brain makes perceptual decisions. In addition, results from a simple experiment show that manipulations which change the dominance of a percept produce coupled changes in the probability of transition events, as predicted by the theory. However, we do not regard the experiment as a strong test of the theory. We believe the strength of the theory is that it can make a large set of qualitative predictions about the distribution of transition events by coupling transition times to simple properties of the posterior distribution. Our theory suggests that the basic descriptive model sufficient to capture perceptual bistability is a semi-Markov renewal process, which we showed could successfully simulate the temporal dynamics of human data for the Necker cube.

References

[1] Aldous, D. (1989). Probability approximations via the Poisson clumping heuristic. Applied Math. Sci. 77. Springer-Verlag, New York.
[2] Bialek, W. & DeWeese, M. (1995). Random switching and optimal processing in the perception of ambiguous signals. Physical Review Letters 74(15), 3077-3080.
[3] Brascamp, J. W., van Ee, R., Pestman, W. R. & van den Berg, A. V. (2005). Distributions of alternation rates in various forms of bistable perception. Journal of Vision 5(4), 287-298.
[4] Einhauser, W., Martin, K. A. & Konig, P. (2004). Are switches in perception of the Necker cube related to eye position? European Journal of Neuroscience 20(10), 2811-2818.
[5] Freeman, W. T. (1994). The generic viewpoint assumption in a framework for visual perception. Nature 368, April 1994.
[6] von Grunau, M. W., Wiggin, S. & Reed, M. (1984). The local character of perspective organization. Perception and Psychophysics 35(4), 319-324.
[7] Kersten, D., Mamassian, P. & Yuille, A. (2004). Object perception as Bayesian inference. Annual Review of Psychology 55, 271-304.
[8] Lee, T. S. & Mumford, D. (2003). Hierarchical Bayesian inference in the visual cortex. Journal of the Optical Society of America A 20(7).
[9] Leopold, D. & Logothetis, N. (1999). Multistable phenomena: changing views in perception. Trends in Cognitive Sciences 3(7), 254-264.
[10] Long, G., Toppino, T. & Mondin, G. (1992). Prime time: fatigue and set effects in the perception of reversible figures. Perception and Psychophysics 52(6), 609-616.
[11] Mamassian, P. & Goutcher, R. (2005). Temporal dynamics in bistable perception. Journal of Vision 5, 361-375.
[12] Rock, I. & Mitchener, K. (1992). Further evidence of the failure of reversal of ambiguous figures by uninformed subjects. Perception 21, 39-45.
[13] Ross, S. M. (1970). Applied Probability Models with Optimization Applications. Holden-Day.
[14] Stocker, A. & Simoncelli, E. (2006). Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience 9(4), 578-585.
[15] Toppino, T. C. (2003). Reversible-figure perception: mechanisms of intentional control. Perception and Psychophysics 65(8), 1285-1295.
[16] Toppino, T. C. & Long, G. M. (1987). Selective adaptation with reversible figures: don't change that channel. Perception and Psychophysics 42(1), 37-48.
[17] van Ee, R., Adams, W. J. & Mamassian, P. (2003). Bayesian modeling of cue interaction: bi-stability in stereoscopic slant perception. Journal of the Optical Society of America A 20, 1398-1406.
[18] van Ee, R., van Dam, L. C. J. & Brouwer, G. J. (2005). Dynamics of perceptual bi-stability for stereoscopic slant rivalry. Vision Research 45, 29-40.
2006
Game theoretic algorithms for Protein-DNA binding Luis Pérez-Breva CSAIL, MIT lpbreva@csail.mit.edu Luis E. Ortiz CSAIL, MIT leortiz@csail.mit.edu Chen-Hsiang Yeang UCSC chyeang@soe.ucsc.edu Tommi Jaakkola CSAIL, MIT tommi@csail.mit.edu

Abstract

We develop and analyze game-theoretic algorithms for predicting the coordinate binding of multiple DNA-binding regulators. The allocation of proteins to local neighborhoods and to sites is carried out under resource constraints while explicating competing and coordinate binding relations among proteins with affinity to the site or region. The focus of this paper is on the mathematical foundations of the approach. We also briefly demonstrate the approach in the context of the λ-phage switch.

1 Introduction

Transcriptional control relies in part on the coordinate operation of DNA-binding regulators and their interactions with various co-factors. We believe game theory and economic models provide an appropriate modeling framework for understanding interacting regulatory processes. In particular, the problem of understanding coordinate binding of regulatory proteins has many game-theoretic properties. Resource constraints, for example, are critical to understanding who binds where. At low nuclear concentrations, regulatory proteins may occupy only high-affinity sites, while filling weaker sites with increasing concentration. Overlapping or close binding sites create explicit competition for the sites, the resolution of which is guided by the available concentrations around the binding sites. Similarly, explicit coordination such as the formation of larger protein complexes may be required for binding or, alternatively, binding may be facilitated by the presence of another protein. The key advantage of games as models of binding is that they can provide causally meaningful predictions (binding arrangements) in response to various experimental perturbations or disruptions.
Our approach deviates from an already substantial body of computational methods used for resolving transcriptional regulation (see, e.g., [3, 10]). From a biological perspective our work is closest in spirit to more detailed reaction equation models [5, 1], while narrower in scope. The mathematical approach is nevertheless substantially different.

2 Protein-DNA binding

We decompose the binding problem into transport and local binding. By transport, we refer to the mechanism that transports proteins to the neighborhood of sites to which they have affinity. The biological processes underlying the transport are not well understood, although several hypotheses exist [12, 4]. We abstract the process initially by assuming separate affinities for proteins to explore neighborhoods of specific sites, modulated by whether the sites are available. This abstraction does not address the dynamics of the transport process and therefore does not distinguish among (nor stand in contradiction to) underlying mechanisms that may or may not involve diffusion as a major component. We aim to capture the differentiated manner in which proteins may accumulate in the neighborhoods of sites depending on the overall nuclear concentrations, regardless of the time involved. Local binding, on the other hand, captures which proteins bind to each site as a consequence of local accumulations or concentrations around the site or a larger region. In a steady state, the local environment of the site is assumed to be closed and well-mixed. We therefore model the binding as being governed by chemical equilibria: for a type of protein i around site j,

{free protein i} + {free site j} ⇌ {bound ij},

where concentrations involving the site should be thought of as time averages or averages across a population of cells, depending on the type of predictions sought.
The concentrations of the various molecular species around and bound to the sites, as well as the rate at which the sites are occupied, are then governed by the law of mass action at chemical equilibrium:

[bound ij] / ([free protein i][free site j]) = K_ij,

where i ranges over proteins with affinity to site j and K_ij is a positive equilibrium constant characterizing protein i's ability to bind to site j in the absence of other proteins. Broadly speaking, the combination of transport and local binding results in an arrangement of proteins along the possible DNA binding sites. This is what we aim to predict with our game-theoretic models, not how such arrangements are reached. The predictions should be viewed as functions of the overall (nuclear) concentrations of proteins, the affinities of proteins to explore neighborhoods of individual sites, and the equilibrium constants characterizing the ability of proteins to bind to specific sites when in close proximity. Any perturbation of these parameters leads to a potentially different arrangement that we can predict.

3 Game Theoretic formulation

There are two types of players in our game: proteins and sites. A protein-player refers to a type of protein, not an individual protein, and decides how its nuclear concentration is allocated to the proximity of sites (the transport process). The protein-players are assumed non-cooperative and rational. In other words, their allocations are based on the transport affinities and the availability of sites rather than on some negotiation process involving multiple proteins. The non-cooperative nature of the protein allocations does not, however, preclude the formation of protein complexes or binding facilitated by other proteins. Such extensions can be incorporated at the sites. Each possible binding site is associated with a site-player. Site-players choose the fraction of time (or fraction of cells in a population) that a specific type of protein is bound to the site.
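As a worked example of the mass-action relation above (our sketch), consider a single protein species at a single site, with allocated amount f and bound fraction b, in the "numbers of available molecules" units used later in the text. The equilibrium condition K(f − b)(1 − b) = b is a quadratic with one physically meaningful root:

```python
import math

# Worked example (our sketch) of the mass-action relation for a single
# protein species at a single site: with allocated amount f and bound
# fraction b, K = b / ((f - b) * (1 - b)), i.e. K*(f - b)*(1 - b) = b.
def bound_fraction(f, K):
    # Quadratic K*b^2 - (K*f + K + 1)*b + K*f = 0; the smaller root is
    # the physical one (it lies in [0, min(f, 1)]).
    q = K * f + K + 1.0
    return (q - math.sqrt(q * q - 4.0 * K * K * f)) / (2.0 * K)
```

For f = 1 and K = 1 this gives b = (3 − √5)/2 ≈ 0.382; as K grows the site saturates toward full occupancy.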
The site may also remain empty. The strategies of the site-players are guided by local chemical equilibria. Indeed, the site-players are introduced merely to reproduce this physical understanding of the binding process in a game theoretic context. The site-players are non-cooperative and self-interested, always aiming (and succeeding) at reproducing the local chemical equilibria. The binding game has no global objective function that serves to guide how the players choose their strategies. The players' choices are instead guided by their own utilities, which depend on the choices of other players. For example, a protein-player allocates its nuclear concentration to the proximity of the sites based on how occupied the sites are, i.e., in a manner that depends on the strategies of the site-players. Similarly, the site-players reproduce the chemical equilibrium at the sites on the basis of the available local protein concentrations, i.e., depending on the choices of the protein-players. The predictions we can make based on the game theoretic formulation are equilibria of the game (not to be confused with the local chemical equilibria at the sites). At an equilibrium, no reallocation of proteins to sites is required and, conversely, the sites have reproduced the local chemical equilibria based on the current allocations of proteins. While games need not have equilibria in pure strategies (actions available to the players), our game will always have one.

4 The binding game

To specify the game more formally, we proceed to define players' strategies, their utilities, and the notion of an equilibrium of the game. To this end, let f^i represent the (nuclear) concentration of protein i. This is the amount of protein available to be allocated to the neighborhoods of sites. The fraction of protein i allocated to site j is specified by p^i_j, where Σ_j p^i_j = 1.
The numerical values of p^i_j, where j ranges over the possible sites, define a possible strategy for the ith protein-player. The set of such strategies is denoted by P_i. The choices of which strategies to play are guided by the parameters E_ij, the affinity of protein i to explore the neighborhood of site j (we will generally index proteins with i and sites with j). The utility for protein i, defined below, provides a numerical ranking of possible strategy choices and is parameterized by E_ij. Each player aims to maximize its own utility over the set of possible strategy choices. The strategy for site-player j specifies the fraction of time that each type of protein is actually bound to the site. The strategy is denoted by s^j_i, where i ranges over proteins with affinity to the site. Note that the values of s^j_i are in principle observable from binding assays (cf. [9]). Σ_i s^j_i ≤ 1, since there is only one site and it may remain empty part of the time. The availability of site j is 1 − Σ_i s^j_i, i.e., the fraction of time that nothing is bound. We will also use α_j = Σ_i s^j_i to denote how occupied the site is. The utilities of the site-players depend on K_ij, the chemical equilibrium constants characterizing the local binding reaction between protein i and site j.

Utilities. The utility function for protein-player i is formally defined as

u_i(p^i, s) ≡ Σ_j p^i_j E_ij (1 − Σ_{i′} s^j_{i′}) + β H(p^i)   (1)

where H(p^i) = −Σ_j p^i_j log p^i_j is the Shannon entropy of the strategy p^i and j ranges over possible sites. The utility of the protein-player essentially states that protein i "prefers" to be around sites that are unbound and for which it has high affinity. The parameter β ≥ 0 balances how much protein allocations are guided by the differentiated process, characterized by the exploration affinities E_ij, as opposed to being allocated uniformly (maximizing the entropy function).
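The entropy term gives the protein-player's best response against fixed site occupancies a closed form. Maximizing Σ_j p_j E_j(1 − α_j) + βH(p) subject to Σ_j p_j = 1 yields a softmax over occupancy-discounted affinities, p_j ∝ exp(E_j(1 − α_j)/β). This derivation is ours (it follows from the Lagrangian of the stated utility) and is not spelled out in the text:

```python
import math

# Best response of a protein-player to fixed occupancies alpha_j (our
# derivation from the stated utility, via its Lagrangian): maximizing
# sum_j p_j * E_j * (1 - alpha_j) + beta * H(p) with sum_j p_j = 1 gives
# p_j proportional to exp(E_j * (1 - alpha_j) / beta).
def protein_best_response(E, alpha, beta):
    scores = [e * (1.0 - a) / beta for e, a in zip(E, alpha)]
    mx = max(scores)                      # subtract max for stability
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    return [x / z for x in exps]
```

Raising β flattens the allocation toward uniform, matching the role of the entropy term; raising a site's occupancy α_j diverts allocation away from it.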
Since the overall scaling of the utilities is immaterial, only the ratios $E_{ij}/\beta$ are relevant for guiding the protein-players. Note that since the utility depends on the strategies of the site-players through $(1 - \sum_{i'} s^j_{i'})$, one cannot find the equilibrium strategy for proteins by considering $s^j_i$ to be fixed; the sites will respond to any $p^i_j$ chosen by the protein-player. As discussed earlier, the site-players always reproduce the chemical equilibrium between the site and the protein species allocated to the neighborhood of the site. The utility for site-player $j$ is defined such that the maximizing strategy corresponds to the chemical equilibrium:
$$s^j_i \Big/ \Big[ (p^i_j f^i - s^j_i)\Big(1 - \sum_{i'} s^j_{i'}\Big) \Big] = K_{ij} \qquad (2)$$
where $s^j_i$ specifies how much protein $i$ is bound, the first term in the denominator, $(p^i_j f^i - s^j_i)$, specifies the amount of free protein $i$, and the second term, $(1 - \sum_{i'} s^j_{i'})$, the fraction of time the site is available. The equilibrium equation holds for all protein species around the site and for the same strategy $\{s^j_i\}$ of the site-player. The units of each "concentration" in the above equation should be interpreted as numbers of available molecules (e.g., there is only one site). The utility function that reproduces this chemical equilibrium when maximized over possible strategies is given by
$$v_j(s^j, p) \equiv \sum_i \Big[ s^j_i - K_{ij}(p^i_j f^i - s^j_i)\Big(1 - \sum_{i'} s^j_{i'}\Big) \Big] \qquad (3)$$
subject to $s^j_i \le K_{ij}(p^i_j f^i - s^j_i)(1 - \sum_{i'} s^j_{i'})$, $s^j_i \le p^i_j f^i$, and $\sum_{i'} s^j_{i'} \le 1$. These constraints guarantee that the utility is always non-positive and zero exactly when the chemical equilibrium holds. The constraint $s^j_i \le p^i_j f^i$ ensures that we cannot have more protein bound than is allocated to the proximity of the site. These constraints define the set of strategies available to site-player $j$, denoted $S^j(p)$. Note that the available strategies for the site-player depend on the current strategies of the protein-players. The set of strategies $S^j(p)$ is not convex.
4.1 The game and equilibria

The protein-DNA binding game is now fully specified by the set of parameters $\{E_{ij}/\beta\}$, $\{K_{ij}\}$, and $\{f^i\}$, along with the utility functions $\{u_i\}$ and $\{v_j\}$ and the allocation constraints $\{P^i\}$ and $\{S^j\}$. We assume that the biological system being modeled reaches a steady state, at least momentarily, preserving the average allocations. In terms of our game theoretic model, this corresponds to what we call an equilibrium of the game. Informally, an equilibrium of a game is a strategy for each player such that no individual has any incentive to unilaterally deviate from their strategy. Formally, if the allocations $(\bar p, \bar s)$ are such that for each protein $i$ and each site $j$,
$$\bar p^i \in \arg\max_{p^i \in P^i} u_i(p^i, \bar s), \quad \text{and} \quad \bar s^j \in \arg\max_{s^j \in S^j(\bar p)} v_j(s^j, \bar p), \qquad (4)$$
then we call $(\bar p, \bar s)$ an equilibrium of the protein-DNA binding game. Put another way, at an equilibrium the current strategies of the players must be among the strategies that maximize their utilities, assuming the strategies of the other players are held fixed. Does the protein-DNA binding game always have an equilibrium? While we have already stated this in the affirmative, we emphasize that there is no a priori reason to believe that an equilibrium in pure strategies exists, especially since the sets of possible strategies for the site-players are non-convex (cf. [2]). The existence is guaranteed by the following theorem:

Theorem 1. Every protein-DNA binding game has an equilibrium.

A constructive proof is provided by the algorithm discussed below. The theorem guarantees that at least one equilibrium exists, but there may be more than one. At any such equilibrium of the game, all the protein species around each site are at a chemical equilibrium; that is, if $(\bar p, \bar s)$ is an equilibrium of the game, then for all sites $j$ and proteins $i$, $\bar s^j$ and $\bar p^i_j$ satisfy (2). Consequently, the site utilities $v_j(\bar s^j, \bar p)$ are all zero for the equilibrium strategies.
4.2 Computing equilibria

The equilibria of the binding game represent predicted binding arrangements. Our game has special structure and properties that permit us to find an equilibrium efficiently through a simple iterative algorithm. The algorithm monotonically fills the sites up to the equilibrium levels, starting with all sites empty. We begin by expressing any joint equilibrium strategy of the game as a function of how filled the sites are, reducing the problem of finding equilibria to finding fixed points of a monotone function. To this end, let $\alpha^j = \sum_{i'} s^j_{i'}$ denote the occupancy of site $j$, the fraction of time it is bound by any protein. The $\alpha^j$ are real numbers in the interval $[0, 1]$. If we fix $\alpha = (\alpha^1, \ldots, \alpha^m)$, i.e., the occupancies of all $m$ sites, then we can readily obtain the maximizing strategies for the proteins expressed as a function of the site occupancies: $p^i_j(\alpha) \propto \exp(E_{ij}(1 - \alpha^j)/\beta)$. Similarly, at the equilibrium, each site-player achieves the local chemical equilibrium specified in (2). By substituting $\alpha^j = \sum_{i'} s^j_{i'}$ and solving for $s^j_i$ in (2), we get
$$s^j_i(\alpha) = \frac{K_{ij}(1 - \alpha^j)}{1 + K_{ij}(1 - \alpha^j)} \, p^i_j(\alpha) f^i \qquad (5)$$
So, for example, the fraction of time the site is bound by a specific protein is proportional to the amount of that protein in the neighborhood of the site, modulated by the equilibrium constant. Note that $s^j_i(\alpha)$ depends not only on how filled site $j$ is, but also on how occupied the other sites are, through $p^i_j(\alpha)$. The equilibrium condition can now be expressed solely in terms of $\alpha$ and reduces to a simple consistency constraint: the overall occupancy should equal the fraction of time any protein is bound, or
$$\alpha^j = \sum_i s^j_i(\alpha) = \sum_i \frac{K_{ij}(1 - \alpha^j)}{1 + K_{ij}(1 - \alpha^j)} \, p^i_j(\alpha) f^i = G_j(\alpha) \qquad (6)$$
We have therefore reduced the problem of finding equilibria of the game to finding fixed points of the mapping $G_j(\alpha) = \sum_i s^j_i(\alpha)$.
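This reduction translates directly into code. The sketch below (Python; the parameter values used for checking are illustrative, not from the paper) computes $p^i_j(\alpha)$ and the mapping $G_j$, and finds an equilibrium by the monotone filling scheme described in the text below, solving each one-dimensional fixed-point equation by bisection:

```python
import numpy as np

def protein_alloc(E, beta, alpha):
    # p[i, j] proportional to exp(E[i, j] * (1 - alpha[j]) / beta), eq. before (5)
    logits = E * (1.0 - alpha)[None, :] / beta
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def G(j, aj, alpha, E, K, f, beta):
    # occupancy G_j predicted for site j when its own occupancy is set to aj
    a = alpha.copy()
    a[j] = aj
    p = protein_alloc(E, beta, a)
    kj = K[:, j] * (1.0 - aj)
    return np.sum(kj / (1.0 + kj) * p[:, j] * f)  # eq. (6)

def solve_site(j, alpha, E, K, f, beta, tol=1e-10):
    # aj - G_j(aj, .) is increasing (G_j strictly decreasing in aj), so the
    # unique root in [0, 1] is found by bisection
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid < G(j, mid, alpha, E, K, f, beta):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def binding_equilibrium(E, K, f, beta, tol=1e-8, max_iter=1000):
    # monotone filling: start with all sites empty, update all sites per pass
    m = E.shape[1]
    alpha = np.zeros(m)
    for _ in range(max_iter):
        new = np.array([solve_site(j, alpha, E, K, f, beta) for j in range(m)])
        if np.max(np.abs(new - alpha)) < tol:
            return new
        alpha = new
    return alpha
```

With a single protein and a single site, $E = K = f = \beta = 1$, the fixed point $\alpha = (1-\alpha)/(2-\alpha)$ has the closed-form solution $\alpha = (3 - \sqrt 5)/2 \approx 0.382$, which the solver recovers.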
The mapping $G_j$, written explicitly in (6), has a simple but powerful monotonicity property that forms the basis for our iterative algorithm. Specifically,

Lemma 1. Let $\alpha^{-j}$ denote all components $\alpha^k$ except $\alpha^j$. Then for each $j$, $G_j(\alpha) \equiv G_j(\alpha^j, \alpha^{-j})$ is a strictly decreasing function of $\alpha^j$ for any fixed $\alpha^{-j}$.

We omit the proof as it is straightforward. This lemma, together with the fact that $G_j(1, \alpha^{-j}) = 0$, immediately guarantees that there is a unique solution to $\alpha^j = G_j(\alpha^j, \alpha^{-j})$ for any fixed and valid $\alpha^{-j}$. The solution $\alpha^j$ also lies in the interval $[0, 1]$ and can be found efficiently via binary search.

The algorithm. Let $\alpha(t)$ denote the site occupancies at the $t$th iteration of the algorithm. $\alpha^j(t)$ specifies the $j$th component of this vector, while $\alpha^{-j}(t)$ contains all but the $j$th component. The algorithm proceeds as follows:

• Set $\alpha^j(0) = 0$ for all $j = 1, \ldots, m$.
• Find each new component $\alpha^j(t+1)$, $j = 1, \ldots, m$, on the basis of the corresponding $\alpha^{-j}(t)$, such that $\alpha^j(t+1) = G_j(\alpha^j(t+1), \alpha^{-j}(t))$.
• Stop when $\alpha^j(t+1) \approx \alpha^j(t)$ for all $j = 1, \ldots, m$.

Note that the inner loop of the algorithm, i.e., finding $\alpha^j(t+1)$ on the basis of $\alpha^{-j}(t)$, reduces to a simple binary search as discussed earlier. The algorithm generates a monotonically increasing sequence of $\alpha$'s that converges to a fixed point (equilibrium) solution. We also provide a formal convergence analysis of the algorithm. To this end, we begin with the following critical lemma.

Lemma 2. Let $\alpha_1$ and $\alpha_2$ be two possible assignments to $\alpha$. If $\alpha^k_1 \le \alpha^k_2$ for all $k \ne j$, then $G_j(\alpha^j, \alpha^{-j}_1) \le G_j(\alpha^j, \alpha^{-j}_2)$ for all $\alpha^j$.

The proof is straightforward and essentially based on the fact that $\alpha^{-j}_1$ and $\alpha^{-j}_2$ appear only in the normalization terms of the protein allocations. We omit further details for brevity. On the basis of this lemma, we can show that the algorithm indeed generates a monotonically increasing sequence of $\alpha$'s.

Theorem 2. $\alpha^j(t+1) \ge \alpha^j(t)$ for all $j$ and $t$.

Proof. By induction.
Since $\alpha^j(0) = 0$ and the range of $G_j(\alpha^j, \alpha^{-j}(0))$ lies in $[0, 1]$, clearly $\alpha^j(1) \ge \alpha^j(0)$ for all $j$. Assume then that $\alpha^j(t) \ge \alpha^j(t-1)$ for all $j$. We establish the induction step by contradiction. Suppose $\alpha^j(t+1) < \alpha^j(t)$ for some $j$. Then
$$\alpha^j(t+1) < \alpha^j(t) = G_j(\alpha^j(t), \alpha^{-j}(t-1)) \le G_j(\alpha^j(t), \alpha^{-j}(t)) < G_j(\alpha^j(t+1), \alpha^{-j}(t)) = \alpha^j(t+1)$$
which is a contradiction. The "$\le$" follows from the induction hypothesis and Lemma 2, and the last "$<$" derives from Lemma 1 and $\alpha^j(t+1) < \alpha^j(t)$. Since $\alpha^j(t)$ for any $t$ always lies in the interval $[0, 1]$, and because of the continuity of $G_j(\alpha^j, \alpha^{-j})$ in both arguments, the algorithm is guaranteed to converge to a fixed point solution. More formally, the Monotone Convergence Theorem for sequences and the continuity of the $G_j$'s imply that

Theorem 3. The algorithm converges to a fixed point $\bar\alpha$ such that $\bar\alpha^j = G_j(\bar\alpha^j, \bar\alpha^{-j})$ for all $j$.

4.3 The λ-phage binding game

We use the well-known λ-phage viral infection [11, 1] to illustrate the game theoretic approach. A genetic two-state control switch specifies whether the infection remains dormant (lysogeny) or whether the viral DNA is aggressively replicated (lysis). The components of the λ-switch are 1) two adjacent genes, cI and Cro, that encode the cI2 and Cro proteins, respectively; 2) the promoter regions PRM and PR of these genes; and 3) an operator (OR) with three binding sites OR1, OR2, and OR3. We focus on lysogeny, in which cI2 dominates over Cro. There are two relevant protein-players, RNA-polymerase and cI2, and three sites, OR1, OR2, and OR3 (arranged close together in this order). Since the presence of cI2 in either OR1 or OR3 blocks the access of RNA-polymerase to the promoter region PR or PRM, respectively, we can safely restrict ourselves to the operator sites as the site-players. There are three phases of operation, depending on the concentration of cI2:

1. cI2 binds to OR1 first and blocks the Cro promoter PR.
2. Slightly higher concentrations of cI2 lead to binding at OR2, which in turn facilitates RNA-polymerase initiating transcription at PRM.
3. At sufficiently high levels, cI2 also binds to OR3 and inhibits its own transcription.

[Figure 1: Predicted protein binding to sites OR3 (a), OR2 (b), and OR1 (c) for increasing amounts of cI2; each panel plots the probability of binding for cI2 and RNA-polymerase against f_cI2/f_RNA-p. The rightmost panel illustrates a comparison with [1]: the shaded area indicates the range of concentrations of cI2 at which stochastic simulation predicts a decline in transcription from OR1. Our model predicts that cI2 begins to occupy OR1 at the same concentration.]

Game parameters. The game requires three sets of parameters: chemical equilibrium constants, affinities, and protein concentrations. To use constants derived from experiment, we assign units to these quantities. We define $f^i$ as the total number of proteins $i$ available, and arrange the units of $K_{ij}$ accordingly:
$$f^i \equiv \tilde f^i V_T N_A, \qquad K_{ij} \equiv \tilde K_{ij}/(N_A V_S), \qquad \tilde K_{ij} = e^{-\Delta G/RT} \qquad (7)$$
where $V_T$ and $V_S$ are the volumes of the cell and of the site neighborhood, respectively, $N_A$ is Avogadro's number, $R$ is the universal gas constant, $T$ is the temperature, $\tilde f^i$ is the concentration of protein $i$ in the cell, and $\tilde K_{ij}$ is the equilibrium constant in units of ℓ/mol. As we show in [6], these definitions are consistent with our previous derivation. Note that when game parameters are learned from data, any dependence on the volumes will be implicit.
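As a small worked example of the unit bookkeeping in (7), the snippet below (Python; the site-neighborhood volume argument is an illustrative placeholder, not a value from the paper) converts a tabulated Gibbs free energy into the equilibrium constants used by the game:

```python
import math

R_KCAL = 1.9872e-3        # gas constant R, kcal/(mol*K)
N_A = 6.02214076e23       # Avogadro's number, 1/mol

def k_tilde(delta_g, temperature=298.0):
    # K~_ij = exp(-dG / (R T)), in l/mol, for dG tabulated in kcal/mol
    return math.exp(-delta_g / (R_KCAL * temperature))

def game_constant(delta_g, v_site_liters, temperature=298.0):
    # K_ij = K~_ij / (N_A * V_S): dimensionless, per-molecule units (eq. 7)
    return k_tilde(delta_g, temperature) / (N_A * v_site_liters)
```

For instance, an illustrative binding free energy of $-12.5$ kcal/mol at room temperature yields $\tilde K \approx 1.5 \times 10^9$ ℓ/mol; more negative $\Delta G$ gives a larger game constant.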
For a typical Escherichia coli (about 2 µm in length) at room temperature, the Gibbs free energies $\Delta G$ tabulated by [11] yield the equilibrium constants shown below; in addition, we set the transport affinities in accordance with the qualitative description in [7, 8]:

K_ij      OR3     OR2     OR1
cI2       .0020   .0020   .0296
RNA-p     .0212   0       .1134

E_ij      OR3     OR2     OR1
cI2       .1      .1      1
RNA-p     .2      .01     1

Note that the overall scaling of the affinities is immaterial; only their relative values will guide the protein-players. Note also that we have chosen not to incorporate any protein-protein interactions in the affinities. Finally, we set $\tilde f_{RNA\text{-}p} = 30$ nM (cf. [11]), i.e., around $f_{RNA\text{-}p} \simeq 340$ copies for a typical E. coli, and varied $f_{cI_2}$ from 1 to 10,000 copies to study the dynamical behavior of the lysogeny cycle. The results are reported as a function of the ratio $f_{cI_2}/f_{RNA\text{-}p}$. We set $\beta = 10^{-5}$.

Simulation results. The predictions from the game theoretic model exactly mirror the known behavior. Here we summarize the main results and refer the reader to [6] for a thorough analysis. Figure 1 illustrates how the binding at the different sites changes as a function of increasing $f_{cI_2}$. The simulation mirrors the behavior of the lysogeny cycle discussed earlier. Although our model does not capture dynamics, and Figure 1 does not involve time, it is nevertheless useful for assessing quantitative changes and the order of events as a function of increasing $f_{cI_2}$. Note, for example, that the levels at which cI2 occupies OR1 and OR2 rise much faster than at OR3. While this result is expected, the behavior is typically attributed to protein-protein interactions, which are not encoded in our model. Similarly, RNA-polymerase occupation at OR3 bumps up as the probability that OR2 is bound by cI2 increases. In [6] we further discuss the implications of the simultaneous occupancy of OR1 and OR2, via simulation of OR1 knockout experiments. Finally, Figure 1(c) shows a comparison with stochastic simulation (cf. [1]).
Our model predicts that cI2 begins binding OR1 at the same level at which [1] predicts a decline in the transcription of Cro. While consistent, we emphasize that the methods differ in their goals: stochastic simulation focuses on the dynamics of transcription, while we study the strategic allocation of proteins as a function of their concentration.

4.4 A structured extension

The game theoretic formulation of the binding problem described previously involves a transport mechanism that is specific to individual sites. In other words, proteins are allocated to the proximity of sites based on parameters $E_{ij}$ and occupancies $\alpha^j$ associated with individual sites. We generalize the game further here by assuming that the transport mechanism has a coarser spatial structure, e.g., specific to promoters (regulatory regions of genes) rather than sites. In this extension the amount of protein allocated to any promoter is shared by the sites it contains. The sharing creates specific challenges for the algorithms for finding the equilibria, and we address those challenges here. Let $R$ represent the possible promoter regions, each of which may be bound by multiple proteins (at distinct or overlapping sites). Let $p^i = \{p^i_r\}_{r \in R}$ represent an allocation of protein $i$ into these regions in a manner that is not specific to the possible sites within each promoter. The utility for protein $i$ is given by
$$u_i(p^i) = \sum_{r \in R} p^i_r E_{ir}(a_r) + \beta H(p^i)$$
where $N(r)$ is the set of possible binding sites within promoter region $r$ and $a_r = \sum_{j \in N(r)} \alpha^j$ is the overall occupancy of the promoter (how many proteins are bound). As before, $\alpha^j = \sum_{i \in P} s^j_i$, where the summation is over proteins. $N(r) \cap N(r') = \emptyset$ whenever $r \ne r'$ (promoters do not share sites). We assume only that $E_{ir}(a_r)$ is a decreasing and differentiable function of $a_r$. The protein utility is based on the assumption that the attraction to the promoter decreases with the number of proteins already bound at the promoter.
The maximizing strategy for protein $i$, given $a_r = \sum_{j \in N(r)} \alpha^j$ for all $r$, is $p^i_r(a) \propto \exp(E_{ir}(a_r)/\beta)$, where $a = \{a_r\}_{r \in R}$. The sites $j \in N(r)$ within a promoter region $r$ reproduce the following chemical equilibrium:
$$s^j_i \Big/ \Big[ \Big(f^i p^i_r(a) - \sum_{k \in N(r)} s^k_i\Big)(1 - \alpha^j) \Big] = K_{ij} \quad \text{for all proteins } i \in P.$$
Note the shared protein resource within the promoter. We can find this chemical equilibrium by solving the following fixed point equations:
$$\alpha^j = \sum_{i \in P} \frac{K_{ij}(1 - \alpha^j)}{1 + \sum_{k \in N(r)} K_{ik}(1 - \alpha^k)} \, f^i p^i_r(a) = G^j_r(\alpha, a_{-r})$$
The site occupancies $\alpha^j$ are now tied together within the promoter, as well as influencing the overall allocation of proteins across different promoters through $a = \{a_r\}_{r \in R}$. The following theorem provides the basis for solving the coupled fixed point equations:

Theorem 4. Let $\{\hat\alpha^j_1\}$ be the fixed point solution of $\alpha^j_1 = G^j_r(\alpha_1, a^{-r}_1)$ and $\{\hat\alpha^j_2\}$ the solution of $\alpha^j_2 = G^j_r(\alpha_2, a^{-r}_2)$. If $a^l_1 \le a^l_2$ for all $l \ne r$, then $\hat a_{r,1} \le \hat a_{r,2}$ (where $\hat a_r = \sum_{j \in N(r)} \hat\alpha^j$).

The proof is not straightforward, but we omit it for brevity (it runs to two pages). The result guarantees that if we can solve the fixed point equations within each promoter, then the overall occupancies $\{a_r\}_{r \in R}$ have the same monotonicity property as in the simpler version of the game, where each region consisted of a single site. In other words, any algorithm that successively solves the fixed point equations within promoters will result in a monotone, and therefore convergent, filling of the promoters, beginning with all promoters empty. We slightly redefine the notation to illustrate the algorithm for finding the solution of $\alpha^j = G^j_r(\alpha, a_{-r})$ for $j \in N(r)$, where $a_{-r}$ is fixed. Specifically, let
$$G^j_r(\alpha^j, \alpha^{-j}, \bar\alpha^{-j}, a_{-r}) = \sum_{i \in P} \frac{K_{ij}(1 - \alpha^j)}{1 + K_{ij}(1 - \alpha^j) + \sum_{k \ne j} K_{ik}(1 - \alpha^k)} \, f^i p^i_r(\alpha^j, \bar\alpha^{-j}, a_{-r})$$
In other words, the first argument refers to $\alpha^j$ anywhere on the right hand side, the second argument refers to $\alpha^{-j}$ in the denominator of the first expression in the sum, and the third argument refers to $\alpha^{-j}$ in $p^i_r(\cdot)$.
The algorithm is now defined as follows: initialize by setting $\alpha^j(0) = 0$ and $\bar\alpha^j(0) = 1$ for all $j \in N(r)$; then

Iteration $t$, upper bounds: Find $\hat\alpha^j = G^j_r(\hat\alpha^j, \bar\alpha^{-j}(t), \alpha^{-j}(t), a_{-r})$ separately for each $j \in N(r)$. Update $\bar\alpha^j(t+1) = \hat\alpha^j$, $j \in N(r)$.

Iteration $t$, lower bounds: Find $\hat\alpha^j = G^j_r(\hat\alpha^j, \alpha^{-j}(t), \bar\alpha^{-j}(t+1), a_{-r})$ separately for each $j \in N(r)$. Update $\alpha^j(t+1) = \hat\alpha^j$, $j \in N(r)$.

The iterative optimization proceeds until¹ $\bar\alpha^j(t) - \alpha^j(t) \le \epsilon$ for all $j \in N(r)$. The algorithm successively narrows the gap between the upper and lower bounds. Specifically, $\bar\alpha^j(t+1) \le \bar\alpha^j(t)$ and $\alpha^j(t+1) \ge \alpha^j(t)$. The fact that these indeed remain upper and lower bounds follows directly from the fact that $G^j_r(\cdot, \alpha^{-j}, \bar\alpha^{-j}, a_{-r})$, viewed as a function of its first argument, increases uniformly as we increase the components of the second argument. Similarly, it decreases uniformly as a function of the third argument.

5 Discussion

We have presented a game theoretic approach to predicting protein arrangements along the DNA. The model is complete with convergent algorithms for finding equilibria on a genome-wide scale. The results from the small scale application are encouraging. Our model successfully reproduces the known behavior of the λ-switch on the basis of molecular level competition and resource constraints, without the need to assume protein-protein interactions between cI2 dimers, or between cI2 and RNA-polymerase. Even in the context of this well-known sub-system, however, few quantitative experimental results are available about binding (see the comparison above). Proper validation and use of our model therefore relies on estimating the game parameters from available protein-DNA binding data. This will be addressed in subsequent work.

Acknowledgments. This work was supported in part by NIH grant GM68762 and by NSF ITR grant 0428715. Luis Pérez-Breva is a "Fundación Rafael del Pino" Fellow.

References

[1] Adam Arkin, John Ross, and Harley H. McAdams.
Stochastic kinetic analysis of developmental pathway bifurcation in phage λ-infected Escherichia coli cells. Genetics, 149:1633–1648, August 1998.
[2] Kenneth J. Arrow and Gerard Debreu. Existence of an equilibrium for a competitive economy. Econometrica, 22(3):265–290, July 1954.
[3] Z. Bar-Joseph, G. Gerber, T. Lee, N. Rinaldi, J. Yoo, B. Gordon, F. Robert, E. Fraenkel, T. Jaakkola, R. Young, and D. Gifford. Computational discovery of gene modules and regulatory networks. Nature Biotechnology, 21(11):1337–1342, 2003.
[4] Otto G. Berg, Robert B. Winter, and Peter H. von Hippel. Diffusion-driven mechanisms of protein translocation on nucleic acids. 1. Models and theory. Biochemistry, 20(24):6929–6948, November 1981.
[5] Harley H. McAdams and Adam Arkin. Stochastic mechanisms in gene expression. PNAS, 94(3):814–819, 1997.
[6] Luis Pérez-Breva, Luis Ortiz, Chen-Hsiang Yeang, and Tommi Jaakkola. DNA binding and games. Technical Report MIT-CSAIL-TR-2006-018, Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, March 2006.
[7] Mark Ptashne. A Genetic Switch: Gene Control and Phage λ. Cell Press and Blackwell Scientific Publications, 3rd edition, 1987.
[8] Mark Ptashne and Alexander Gann. Genes and Signals. Cold Spring Harbor Laboratory Press, 1st edition, 2002.
[9] Bing Ren, François Robert, John J. Wyrick, Oscar Aparicio, Ezra G. Jennings, Itamar Simon, Julia Zeitlinger, Jörg Schreiber, Nancy Hannett, Elenita Kanin, Thomas L. Volkert, Christopher J. Wilson, Stephen P. Bell, and Richard A. Young. Genome-wide location and function of DNA-binding proteins. Science, 290:2306, December 2000.
[10] E. Segal, M. Shapira, A. Regev, D. Pe'er, D. Botstein, D. Koller, and N. Friedman. Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nature Genetics, 34(2):166–176, 2003.
[11] Madeline A. Shea and Gary K. Ackers. The OR control system of bacteriophage lambda:
A physical-chemical model for gene regulation. Journal of Molecular Biology, 181:211–230, 1985.
[12] Neil P. Stanford, Mark D. Szczelkun, John F. Marko, and Stephen E. Halford. One- and three-dimensional pathways for proteins to reach specific DNA sites. EMBO Journal, 19(23):6546–6557, December 2000.

¹ In the case of multiple equilibria the bounds might converge but leave a finite gap. The algorithm will identify those cases, as the monotone convergence of the bounds can be assessed separately.
Aggregating Classification Accuracy across Time: Application to Single Trial EEG

Steven Lemm∗ Intelligent Data Analysis Group, Fraunhofer Institute FIRST, Kekulestr. 7, 12489 Berlin, Germany

Christin Schäfer Intelligent Data Analysis Group, Fraunhofer Institute FIRST, Kekulestr. 7, 12489 Berlin, Germany

Gabriel Curio Neurophysics Group, Dept. of Neurology, Campus Benjamin Franklin, Charité, University Medicine Berlin, Hindenburgdamm 20, 12200 Berlin, Germany

Abstract

We present a method for binary on-line classification of triggered but temporally blurred events that are embedded in noisy time series, in the context of on-line discrimination between left and right imaginary hand movements. In particular, the goal of the binary classification problem is to obtain the decision, as fast and as reliably as possible, from the recorded EEG single trials. To provide a probabilistic decision at every time point t, the presented method gathers information from two distinct sequences of features across time. In order to incorporate decisions from prior time points, we suggest an appropriate weighting scheme that emphasizes time instances providing a higher discriminatory power between the instantaneous class distributions of each feature, where the discriminatory power is quantified in terms of the Bayes error of misclassification. The effectiveness of this procedure is verified by its successful application in the 3rd BCI competition. Disclosure of the data after the competition revealed this approach to be superior, with single trial error rates as low as 10.7, 11.5 and 16.7% for the three different subjects under study.

1 Introduction

The ultimate goal of brain-computer interfacing (BCI) is to translate human intentions into a control signal for a device, such as a computer application, a wheelchair, or a neuroprosthesis (e.g., [20]).
Most of the pursued approaches utilize the accompanying EEG-rhythm perturbations in order to distinguish between single trials (STs) of left and right hand imaginary movements, e.g., [8,11,14,21]. Up to now there are just a few published approaches utilizing additional features, such as slow cortical potentials, e.g., [3,4,9]. This paper describes the algorithm that was successfully applied in the 2005 international data analysis competition on BCI tasks [2] (data set IIIb) for the on-line discrimination between imagined left and right hand movement (corresponding author: steven.lemm@first.fhg.de). The objective of the competition was to detect the respective motor intention as early and as reliably as possible. Consequently, the competing algorithms have to solve the on-line discrimination task based on information about the event onset. Thus it is not within the scope of the competition to solve the problem of detecting the event onset itself. We approach this problem by applying an algorithm that combines the different characteristics of two features: the modulations of the ongoing rhythmic activity and the slow cortical Movement Related Potential (MRP). Both features are differently pronounced over time, exhibit a large trial-to-trial variability, and can therefore be considered as temporally blurred. Consequently, the proposed method combines, on the one hand, the MRP with the oscillatory feature and, on the other hand, gathers information across time, as introduced in [8,16]. More precisely, at each time point we estimate probabilistic models on the labeled training data (one for each class and feature), yielding a sequence of weak instantaneous classifiers, i.e., posterior class distributions. The classification of an unlabeled ST is then derived by a weighted combination of these weak probabilistic classifiers, using a linear combination according to their instantaneous discriminatory power.
The paper is organized as follows: Section 2 describes the features and their extraction; Section 3 introduces the probabilistic model as well as the framework for gathering information from the different features across time; Section 4 presents the results on the competition data, followed by a brief conclusion.

2 Features

2.1 Neurophysiology

The human perirolandic sensorimotor cortices show rhythmic macroscopic EEG oscillations (µ-rhythm) [6], with spectral peak energies around 10 Hz (localized predominantly over the postcentral somatosensory cortex) and 20 Hz (over the precentral motor cortex). Modulations of the µ-rhythm have been reported for different physiological manipulations, e.g., by motor activity, both actual and imagined [7, 13, 18], as well as by somatosensory stimulation [12]. Standard trial averages of µ-rhythm power show a sequence of attenuation, termed event-related desynchronization (ERD) [13], followed by a rebound (event-related synchronization, ERS) which often overshoots the pre-event baseline level [15]. In the case of sensorimotor cortical processes accompanying finger movements, Babiloni et al. [1] demonstrated that movement related potentials (MRPs) and ERD indeed show up with different spatio-temporal activation patterns across the primary (sensori-)motor cortex (M1), the supplementary motor area (SMA) and the posterior parietal cortex (PP). Most importantly, the ERD response magnitude did not correlate with the amplitude of the negative MRP slope. In the following we will combine both features. Thus, in order to extract the rhythmic information, we map the EEG to the time-frequency domain by means of Morlet wavelets [19], whereas the slow cortical MRPs are extracted by the application of a low-pass filter, in the form of a simple moving average filter.

2.2 Extraction

Let $X = [x[1], \ldots, x[T]]$ denote the EEG signal of one single trial (ST) of length $T$, recorded from the two bipolar channels C3 and C4, i.e., $x[t] = [C3[t], C4[t]]^T$.
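Before the formal definitions, the extraction pipeline can be sketched as follows (Python; the number of wavelet cycles and the use of a truncated full convolution to enforce causality are our assumptions, since these details are not fixed in the text). It produces the 4-dimensional wavelet-envelope (ERD) feature and the 2-dimensional moving-average (MRP) feature defined in (1) and (2) below:

```python
import numpy as np

FS = 128.0  # sampling rate of the competition data, Hz

def morlet(fc, fs=FS, cycles=7.0):
    # complex Morlet wavelet centered at fc Hz; `cycles` is an assumed
    # time-frequency trade-off, not a value stated in the paper
    sigma = cycles / (2.0 * np.pi * fc)
    t = np.arange(-3.0 * sigma, 3.0 * sigma, 1.0 / fs)
    return np.exp(2j * np.pi * fc * t - t ** 2 / (2.0 * sigma ** 2))

def causal_filter(x, w):
    # truncated full convolution: sample t only uses x[0..t]
    # (causality at the price of a filter delay)
    return np.convolve(x, w)[: len(x)]

def erd_feature(c3, c4, f_alpha=10.0, f_beta=20.0):
    # 4-dim envelope feature: |C3*Psi_a|, |C4*Psi_a|, |C3*Psi_b|, |C4*Psi_b|
    cols = [np.abs(causal_filter(ch, morlet(f)))
            for f in (f_alpha, f_beta) for ch in (c3, c4)]
    return np.stack(cols, axis=1)

def mrp_feature(c3, c4, n=11):
    # moving-average MA(11) low-pass of both bipolar channels
    ma = np.ones(n) / n
    return np.stack([causal_filter(c3, ma), causal_filter(c4, ma)], axis=1)
```

On a pure 10 Hz sinusoid, the alpha-band envelope of this feature dominates the beta-band envelope, as expected.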
The label information about the corresponding motor intention of an ST is denoted by $Y \in \{L, R\}$. For information obtained from observations until time $s \le T$, we will make use of the subscript $|s$ throughout this paper; e.g., $X_{|s}$ refers to $[x[1], \ldots, x[s]]$. This observational horizon becomes important with respect to the causality of the feature extraction process; in particular, in order to ensure the causality of filter operations we have to restrict the algorithm to a certain observational horizon. Note that $X_{|T}$ denotes a completely observed ST; however, for notational convenience we will omit the index $|T$ in the case of complete observations. Considering ERD as a feature for ST classification, we model the hand-specific time course of absolute µ-rhythm amplitudes over both sensorimotor cortices. Therefore we utilize the time-frequency representations of the ST at two different frequency bands (α, β), obtained by convolution of the EEG signal with complex Morlet wavelets [19]. Using the notation $\Psi_\alpha$ and $\Psi_\beta$ for a wavelet centered at the individual spectral peak in the alpha (8-12 Hz) and the beta (16-24 Hz) frequency domain, the ERD feature of an ST observed until time $s$ is calculated as:
$$ERD_{|s} = \big[\mathrm{erd}_{|s}[1], \ldots, \mathrm{erd}_{|s}[s]\big], \quad \text{with} \quad \mathrm{erd}_{|s}[t] = \begin{pmatrix} |(C3_{|s} * \Psi_\alpha)[t]| \\ |(C4_{|s} * \Psi_\alpha)[t]| \\ |(C3_{|s} * \Psi_\beta)[t]| \\ |(C4_{|s} * \Psi_\beta)[t]| \end{pmatrix} \qquad (1)$$
In a similar manner we define the ST feature for the MRP by convolution with a moving average filter of length 11, abbreviated as MA(11):
$$MRP_{|s} = \big[\mathrm{mrp}_{|s}[1], \ldots, \mathrm{mrp}_{|s}[s]\big], \quad \text{with} \quad \mathrm{mrp}_{|s}[t] = \begin{pmatrix} (C3_{|s} * MA(11))[t] \\ (C4_{|s} * MA(11))[t] \end{pmatrix} \qquad (2)$$
According to (1) and (2), the $k$th labeled ST observed for training, i.e., $(X^{(k)}, Y^{(k)})$, maps to an ST in feature space, namely $(MRP^{(k)}, ERD^{(k)})$.

3 Probabilistic Classification Model

Before we start with the model description, we briefly introduce two concepts from Bayesian decision theory.
To this end, let $p(x|\mu_y, \Sigma_y)$, $y \in \{L, R\}$, denote the PDFs of two multivariate Gaussian distributions with different means and covariance matrices $(\mu_y, \Sigma_y)$ for the two classes, denoted by $L$ and $R$. Given the two class-conditional distribution models, under the assumption of a class prior of $P(y) = \frac{1}{2}$, $y \in \{L, R\}$, and given an observation $x$, the posterior class distribution according to Bayes' formula is given by
$$p(y|x, \mu_L, \Sigma_L, \mu_R, \Sigma_R) = \frac{p(x|\mu_y, \Sigma_y)}{p(x|\mu_L, \Sigma_L) + p(x|\mu_R, \Sigma_R)}. \qquad (3)$$
Furthermore, the discriminative power between these two distributions can be estimated using the Bayes error of misclassification [5]. In the case of distinct class covariance matrices, the Bayes error cannot be calculated directly. However, by using the Chernoff bound [5] we can derive an upper bound and finally approximate the discriminative power $w$ between the two distributions by
$$2w \cong 1 - \min_{0 \le \gamma \le 1} \int p(x|\mu_L, \Sigma_L)^\gamma \, p(x|\mu_R, \Sigma_R)^{1-\gamma} \, dx. \qquad (4)$$
In the case of Gaussian distributions the above integral can be expressed in closed form [5], such that the minimum solution can be easily obtained (see also [16]). Based on these two necessary concepts, we now introduce our probabilistic classification method. We first model the class-conditional distribution of each feature at each time instance as a multivariate Gaussian distribution. Hence, at each time instance we estimate the class means and the class covariance matrices in the feature space, based on the mapped training STs, i.e., $ERD^{(k)}$ and $MRP^{(k)}$. Thus from $\mathrm{erd}^{(k)}[t]$ we obtain the following two class-conditional sets of parameters:
$$\mu_y[t] = E\big[\mathrm{erd}^{(k)}[t]\big]_{Y^{(k)}=y} \qquad (5)$$
$$\Sigma_y[t] = \mathrm{Cov}\big[\mathrm{erd}^{(k)}[t]\big]_{Y^{(k)}=y}, \quad y \in \{L, R\}. \qquad (6)$$
For convenience we summarize the estimated model parameters for the ERD feature as $\Theta[t] := (\mu_L[t], \Sigma_L[t], \mu_R[t], \Sigma_R[t])$, whereas $\Xi[t] := (\eta_L[t], \Gamma_L[t], \eta_R[t], \Gamma_R[t])$ denotes the class means and covariance matrices obtained in the same manner from $\mathrm{mrp}^{(k)}[t]$.
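For Gaussian class-conditional densities the integral in (4) has the classical closed form $e^{-k(\gamma)}$ [5], so the discriminative power can be computed as sketched below (Python; the grid search over $\gamma$ is a simple stand-in for the exact one-dimensional minimization):

```python
import numpy as np

def chernoff_exponent(mu1, S1, mu2, S2, g):
    # closed-form exponent k(g) with  int p1^g p2^(1-g) dx = exp(-k(g))
    # for two Gaussians (the classic result used around eq. (4))
    d = mu2 - mu1
    Sg = g * S1 + (1.0 - g) * S2
    quad = 0.5 * g * (1.0 - g) * d @ np.linalg.solve(Sg, d)
    logdet = 0.5 * (np.linalg.slogdet(Sg)[1]
                    - g * np.linalg.slogdet(S1)[1]
                    - (1.0 - g) * np.linalg.slogdet(S2)[1])
    return quad + logdet

def discriminative_power(mu1, S1, mu2, S2, n_grid=999):
    # eq. (4): 2w = 1 - min_g int p1^g p2^(1-g) dx
    gs = np.linspace(1e-3, 1.0 - 1e-3, n_grid)
    vals = [np.exp(-chernoff_exponent(mu1, S1, mu2, S2, g)) for g in gs]
    return 0.5 * (1.0 - min(vals))
```

Identical class distributions give $w = 0$ (no discriminatory power), while well-separated classes approach the maximum $w = \frac{1}{2}$.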
Given an arbitrary observation $x$ from the appropriate domain, applying Bayes' formula as introduced in (3) yields a posterior distribution for each feature:
$$p\big(y \mid \mathrm{erd}, \Theta[t]\big), \quad \mathrm{erd} \in \mathbb{R}^4 \qquad (7)$$
$$p\big(y \mid \mathrm{mrp}, \Xi[t]\big), \quad \mathrm{mrp} \in \mathbb{R}^2. \qquad (8)$$
Additionally, according to (4), we obtain approximations of the discriminative power $w[t]$ and $v[t]$ of the ERD resp. MRP feature at every time instance. In order to finally derive the classification of an unlabeled single trial at a certain time $s \le T$, we incorporate knowledge from all preceding samples $t \le s$, i.e., we make the classification based on the causally extracted features $ERD_{|s}$ and $MRP_{|s}$. Therefore we first apply (7) and (8), given the observations $\mathrm{erd}_{|s}[t]$ resp. $\mathrm{mrp}_{|s}[t]$, in order to obtain the class posteriors based on observations until $s \le T$. Secondly, we combine these class posteriors with one another across time by taking the expectation under the distributions $w$ and $v$, i.e.,
$$c(y, s) = \frac{\sum_{t \le s} w[t] \cdot p\big(y \mid \mathrm{erd}_{|s}[t], \Theta[t]\big) + v[t] \cdot p\big(y \mid \mathrm{mrp}_{|s}[t], \Xi[t]\big)}{\sum_{t \le s} w[t] + v[t]}. \qquad (9)$$
As described in [16], this yields an evidence accumulation over time about the decision process. Strictly speaking, Eq. (9) gives the expectation value that the ST, observed until time $s$, is generated by either one of the class models ($L$ or $R$). Due to the submission requirements of the competition, the final decision at time $s$ is
$$C[s] = 1 - 2 \cdot c(L, s), \qquad (10)$$
where a positive or negative sign refers to right or left movement, while the magnitude indicates the confidence in the decision on a scale between 0 and 1.

4 Application

4.1 Competition data

The EEG from two bipolar channels (C3, C4) was provided with bandpass filter settings of 0.5 to 30 Hz and sampled at 128 Hz. The data consist of recordings from three different healthy subjects. Except for the first data set, each contains 540 labeled (for training) and 540 unlabeled trials (for competition) of imaginary hand movements, with an equal number of left and right hand trials (the first data set provides just 320 trials each).
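The accumulation rule in (9) and (10) above amounts to a running weighted average of the instantaneous posteriors and can be sketched in a few lines (Python; the input arrays are hypothetical):

```python
import numpy as np

def accumulated_output(p_erd_L, p_mrp_L, w, v):
    # p_erd_L[t], p_mrp_L[t]: instantaneous posteriors P(y=L | .) of the two
    # features; w[t], v[t]: their discriminative powers from eq. (4).
    # Returns C[s] for every s: the sign encodes the class (negative = left),
    # the magnitude the confidence.
    num = np.cumsum(w * p_erd_L + v * p_mrp_L)
    den = np.cumsum(w + v)
    c_L = num / den            # eq. (9): running weighted mean over t <= s
    return 1.0 - 2.0 * c_L     # eq. (10)
```

A sanity check: posteriors that are certain of class L at every instant yield $C[s] = -1$ throughout, uninformative posteriors of $\frac{1}{2}$ yield $C[s] = 0$, and a high-weight confident instant dominates subsequent low-weight noise.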
Each trial has a duration of 7 s: after a 3 s preparation period, a visual cue is presented for one second, indicating the demanded motor intention. This is followed by another 3 s for performing the imagination task (for details see [2]). The competition data was provided by the Dept. of Medical Informatics, Inst. for Biomedical Eng., Univ. of Technology Graz. The specific competition task is to provide an on-line discrimination between left and right movements for the unlabeled STs of each subject, based on the information obtained from the labeled trials. More precisely, at every time instance in the interval from 3 to 7 seconds, a strictly causal decision about the intended motor action and its confidence must be supplied. After the competition deadline, based on the disclosure of the labels Y(k) for the previously unlabeled STs, the outputs C(k)[t] of the methods were evaluated using the time course of the mutual information (MI) [17]:

  MI[t] = (1/2) log2( SNR[t] + 1 )   (11)

  SNR[t] = ( E[C(k)[t] | Y(k)=L] − E[C(k)[t] | Y(k)=R] )^2 / ( 2 ( Var[C(k)[t] | Y(k)=L] + Var[C(k)[t] | Y(k)=R] ) )   (12)

More precisely, since the general objective of the competition was to obtain the single-trial classification as fast and as accurately as possible, the maximum steepness of the MI was used as the final evaluation criterion, i.e.

  max_{t≥3.5s} MI[t] / (t − 3s).   (13)

Note that the feature extraction relies on a few hyperparameters, i.e. the center frequency and the width of the wavelets, as well as the length of the MA filter. All these parameters were obtained by model selection, using a leave-one-out cross-validation scheme on the classification performance on the training data.

4.2 Results and Discussion

As proposed in Section 3, we estimated the class-conditional Gaussian distributions, cf. (5)–(8). The resulting posterior distributions were then combined according to (9) in order to obtain the final classification of the unlabeled STs.
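The evaluation criterion of eqs. (11)–(13) depends only on the class-conditional means and variances of the classifier outputs. A small illustrative sketch; the helper names are ours, with labels coded 0 = L, 1 = R:

```python
import numpy as np

def mutual_information(C, y):
    # eqs. (11)-(12): C has shape (trials, time), y holds 0 (L) or 1 (R)
    mL, mR = C[y == 0].mean(axis=0), C[y == 1].mean(axis=0)
    vL, vR = C[y == 0].var(axis=0), C[y == 1].var(axis=0)
    snr = (mL - mR) ** 2 / (2.0 * (vL + vR))
    return 0.5 * np.log2(snr + 1.0)

def steepness_criterion(mi, times, t_min=3.5):
    # eq. (13): maximum over t >= 3.5 s of MI[t] / (t - 3 s)
    mask = times >= t_min
    return np.max(mi[mask] / (times[mask] - 3.0))
```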
After disclosure of the label information, our method turned out to succeed with an MI steepness (cf. (13)) of 0.17, 0.44 and 0.35 for the individual subjects. Table 1 summarizes the results in terms of the achieved minimum binary classification error, the maximum MI, and the maximum steepness of the MI for each subject and each competitor in the competition.

       min. error rate [%]       max. MI [bit]              max. MI/t [bit/s]
       O3     S4     X11        O3      S4      X11        O3      S4      X11
  1.   10.69  11.48  16.67      0.6027  0.6079  0.4861     0.1698  0.4382  0.3489
  2.   14.47  22.96  22.22      0.4470  0.2316  0.3074     0.1626  0.4174  0.1719
  3.   13.21  17.59  16.48      0.5509  0.3752  0.4675     0.2030  0.0936  0.1173
  4.   23.90  24.44  24.07      0.2177  0.2387  0.2173     0.1153  0.1218  0.1181
  5.   11.95  21.48  18.70      0.4319  0.3497  0.3854     0.1039  0.1490  0.0948
  6.   10.69  13.52  25.19      0.5975  0.5668  0.2437     0.1184  0.1516  0.0612
  7.   34.28  38.52  28.70      0.0431  0.0464  0.1571     0.0704  0.0229  0.0489

Table 1: Overall ranked results of the competing algorithms (the first row corresponds to the proposed method) on the competition test data. For three different subjects (O3, S4 and X11) the table states different performance measures of classification accuracy (min. error rate, max. MI, steepness of MI), where the steepness of the MI was used as the objective in the competition. For a description of algorithms 2.–7. please refer to [2].

The resulting time courses of the MI and of the steepness of the MI are presented in the left panel of Fig. 1. For subjects two and three, during the first 3.5 seconds (0.5 seconds after cue presentation) the classification is essentially at chance level; after 3.5 seconds a steep ascent in classification accuracy can be observed, reflected by the rising MI. The maximum steepness for these two subjects is obtained quite early, between 3.6 and 3.8 s. In contrast, for subject one the maximum is achieved at 4.9 seconds, yielding a low steepness value. However, a low value is also found for the submissions of all other competitors.
Nevertheless, the MI constantly increases up to 0.64 bit per trial at 7 seconds, which might indicate a delayed performance of subject one. The right panel in Fig. 1 shows the weights w[t] and v[t], reflecting the Bayes error of misclassification (cf. (4)), that were used for the temporal integration process. For subject two, one can clearly observe a switch in the regime between the ERD and the MRP feature at 5 seconds, indicated by a crossing of the two weighting functions. From this we conclude that the steep increase in MI for this subject between 3 and 5 seconds is mainly due to the MRP feature, whereas the further improvement in the MI relies primarily on the ERD feature. Subject one provides nearly no discriminative MRP, and the classification is almost exclusively based on the ERD feature. For subject three, the constantly low weights at all time instances reveal the weak discriminative power of the estimated class-conditional distributions. However, Fig. 1 clearly shows the advantage of the integration process across time, as the MI is continuously increasing and the steepness of the MI is surprisingly high even for this subject.

Figure 1: Left panel: time courses of the mutual information (light, dashed) and of the competition criterion, the steepness of the mutual information (thin, solid, cf. (13)), for the classification of the unlabeled STs. Right panel: time course of the weights reflecting the discriminative power (cf. (4)) at every time instance for the two different features (ERD: dark, solid; MRP: light, dashed). In each panel the subjects O3, S4, X11 are arranged top down.

A comprehensive comparison of all techniques submitted for data set IIIb of the BCI competition is provided in [2] and is available on the web¹. Basically, this evaluation reveals that the proposed algorithm outperforms all competing approaches.
5 Conclusion

We proposed a general Bayesian framework for the temporal combination of sets of simple classifiers based on different features, which is applicable to any kind of sequential data posing a binary classification problem. Moreover, an arbitrary number of features can be combined in the proposed way of temporal weighting, by utilizing the estimated discriminative power over time. Furthermore, the estimation of the Bayes error of misclassification is not strictly linked to the chosen parametric form of the class-conditional distributions. For arbitrary distributions the Bayes error can be obtained, for instance, by statistical resampling approaches such as Monte Carlo methods. However, for the successful application in the BCI competition 2005 we chose Gaussian distributions for the sake of simplicity concerning two issues: estimating their parameters and obtaining their Bayes error. Note that although the combination of the classifiers across time is linear, the final classification model is non-linear, as the individual classifiers at each time instance are non-linear. For a discussion of linear vs. non-linear methods in the context of BCI, see [10]. More precisely, due to the distinct covariance matrices of the Gaussian distributions, the individual decision boundaries are of quadratic form. In particular, to solve the competition task we combined classifiers based on the temporal evolution of different neurophysiological features, i.e. ERD and MRP. The resulting on-line classification model finally turned out to succeed in the single-trial on-line classification of imagined hand movement in the BCI competition 2005.

Acknowledgement: This work was supported in part by the Bundesministerium für Bildung und Forschung (BMBF) under grant FKZ 01GQ0415 and by the DFG under grant SFB 618-B4. S. Lemm thanks Stefan Harmeling for valuable discussions.

¹ida.first.fhg.de/projects/bci/competition_iii/

References

[1] C. Babiloni, F. Carducci, F. Cincotti, P. M.
Rossini, C. Neuper, G. Pfurtscheller, and F. Babiloni. Human movement-related potentials vs desynchronization of EEG alpha rhythm: A high-resolution EEG study. NeuroImage, 10:658–665, 1999.
[2] Benjamin Blankertz, Klaus-Robert Müller, Dean Krusienski, Gerwin Schalk, Jonathan R. Wolpaw, Alois Schlögl, Gert Pfurtscheller, José del R. Millán, Michael Schröder, and Niels Birbaumer. The BCI competition III: Validating alternative approaches to actual BCI problems. IEEE Trans. Neural Sys. Rehab. Eng., 14(2):153–159, 2006.
[3] Guido Dornhege, Benjamin Blankertz, Gabriel Curio, and Klaus-Robert Müller. Combining features for BCI. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Inf. Proc. Systems (NIPS 02), volume 15, pages 1115–1122, 2003.
[4] Guido Dornhege, Benjamin Blankertz, Gabriel Curio, and Klaus-Robert Müller. Increase information transfer rates in BCI by CSP extension to multi-class. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems, volume 16, pages 733–740. MIT Press, Cambridge, MA, 2004.
[5] R.O. Duda, P.E. Hart, and D.G. Stork. Pattern Classification. John Wiley & Sons, New York, 2nd edition, 2001.
[6] R. Hari and R. Salmelin. Human cortical oscillations: a neuromagnetic view through the skull. Trends in Neuroscience, 20:44–9, 1997.
[7] H. Jasper and W. Penfield. Electrocorticograms in man: Effect of voluntary movement upon the electrical activity of the precentral gyrus. Arch. Psychiatrie Zeitschrift Neurol., 183:163–74, 1949.
[8] Steven Lemm, Christin Schäfer, and Gabriel Curio. Probabilistic modeling of sensorimotor µ rhythms for classification of imaginary hand movements. IEEE Trans. Biomed. Eng., 51(6):1077–1080, 2004.
[9] B.D. Mensh, J. Werfel, and H.S. Seung. Combining gamma-band power with slow cortical potentials to improve single-trial classification of electroencephalographic signals. IEEE Trans. Biomed. Eng., 51(6):1052–6, 2004.
[10] Klaus-Robert Müller, Charles W. Anderson, and Gary E. Birch. Linear and nonlinear methods for brain-computer interfaces. IEEE Trans. Neural Sys. Rehab. Eng., 11(2):165–169, 2003.
[11] C. Neuper, A. Schlögl, and G. Pfurtscheller. Enhancement of left-right sensorimotor EEG differences during feedback-regulated motor imagery. Journal Clin. Neurophysiol., 16:373–82, 1999.
[12] V. Nikouline, K. Linkenkaer-Hansen, H. Wikström, M. Kesäniemi, E. Antonova, R. Ilmoniemi, and J. Huttunen. Dynamics of mu-rhythm suppression caused by median nerve stimulation: a magnetoencephalographic study in human subjects. Neurosci. Lett., 294, 2000.
[13] G. Pfurtscheller and A. Aranibar. Evaluation of event-related desynchronization preceding and following voluntary self-paced movement. Electroencephalogr. Clin. Neurophysiol., 46:138–46, 1979.
[14] G. Pfurtscheller, C. Neuper, D. Flotzinger, and M. Pregenzer. EEG-based discrimination between imagination of right and left hand movement. Electroenceph. clin. Neurophysiol., 103:642–51, 1997.
[15] S. Salenius, A. Schnitzler, R. Salmelin, V. Jousmäki, and R. Hari. Modulation of human cortical rolandic rhythms during natural sensorimotor tasks. NeuroImage, 5:221–8, 1997.
[16] Christin Schäfer, Steven Lemm, and Gabriel Curio. Binary on-line classification based on temporally integrated information. In Claus Weihs and Wolfgang Gaul, editors, Proceedings of the 28th annual conference of the Gesellschaft für Klassifikation, pages 216–223, 2005.
[17] A. Schlögl, C. Keinrath, R. Scherer, and G. Pfurtscheller. Information transfer of an EEG-based brain-computer interface. In Proc. First Int. IEEE EMBS Conference on Neural Engineering, pages 641–644, 2003.
[18] A. Schnitzler, S. Salenius, R. Salmelin, V. Jousmäki, and R. Hari. Involvement of primary motor cortex in motor imagery: a neuromagnetic study. NeuroImage, 6:201–8, 1997.
[19] C. Torrence and G.P. Compo. A practical guide to wavelet analysis. Bull. Am. Meteorol. Soc., 79:61–78, 1998.
[20] Jonathan R. Wolpaw, Niels Birbaumer, Dennis J. McFarland, Gert Pfurtscheller, and Theresa M. Vaughan. Brain-computer interfaces for communication and control. Clin. Neurophysiol., 113:767–791, 2002.
[21] J.R. Wolpaw and D.J. McFarland. Multichannel EEG-based brain-computer communication. Electroenceph. clin. Neurophysiol., 90:444–9, 1994.
Blind Motion Deblurring Using Image Statistics

Anat Levin∗
School of Computer Science and Engineering
The Hebrew University of Jerusalem

Abstract

We address the problem of blind motion deblurring from a single image, caused by a few moving objects. In such situations only part of the image may be blurred, and the scene consists of layers blurred to different degrees. Most existing blind deconvolution research concentrates on recovering a single blurring kernel for the entire image. However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. Our approach relies on the observation that the statistics of derivative filters in images are significantly changed by blur. Assuming the blur results from a constant-velocity motion, we can limit the search to one-dimensional box filter blurs. This enables us to model the expected derivatives distributions as a function of the width of the blur kernel. Those distributions are surprisingly powerful in discriminating regions with different blurs. The approach produces convincing deconvolution results on real-world images with rich texture.

1 Introduction

Motion blur is the result of the relative motion between the camera and the scene during the image exposure time. This includes both camera motion and the motion of objects in the scene. As blurring can significantly degrade the visual quality of images, photographers and camera manufacturers are frequently searching for methods to limit the phenomenon. One solution that reduces the degree of blur is to capture images using shorter exposure intervals. This, however, increases the amount of noise in the image, especially in dark scenes. An alternative approach is to try to remove the blur off-line.
Blur is usually modeled as a linear convolution of an image with a blurring kernel, also known as the point spread function (or PSF). Image deconvolution is the process of recovering the unknown image from its blurred version, given a blurring kernel. In most situations, however, the blurring kernel is unknown as well, and the task also requires the estimation of the underlying blurring kernel. Such a process is usually referred to as blind deconvolution. Most of the existing blind deconvolution research concentrates on recovering a single blurring kernel for the entire image. While the uniform blur assumption is valid for a restricted set of camera motions, it is usually far from satisfactory when the scene contains several objects moving independently. Existing deblurring methods which handle different motions usually rely on multiple frames. In this work, however, we would like to address blind multiple-motion deblurring using a single frame. The suggested approach is fully automatic, under the following two assumptions. The first assumption is that the image consists of a small number of blurring layers with the same blurring kernel within each layer. Most of the examples in this paper include a single blurred object and an unblurred background. Our second simplifying assumption is that the motion is in a single direction and that the motion velocity is constant, such as in the case of a moving vehicle captured by a static camera. As a result, within each blurred layer, the blurring kernel is a simple one-dimensional box filter, so that the only unknown parameters are the blur direction and the width of the blur kernel. Deblurring different motions requires the segmentation of the image into layers with different blurs, as well as the reconstruction of the blurring kernel in each layer.

∗Current address: MIT CSAIL, alevin@csail.mit.edu
While image segmentation is an active and challenging research area which utilizes various low-level and high-level cues, the only segmentation cue used in this work is the degree of blur. In order to discriminate different degrees of blur we use the statistics of natural images. Our observation is that the statistics of derivative responses in images are significantly changed as a result of blur, and that the expected statistics under different blurring kernels can be modeled. Given a model of the derivatives statistics under different blurring kernels, our algorithm searches for a mixture model that best describes the distribution observed in the input image. This results in a set of two (or some other small number of) blurring kernels that were used in the image. In order to segment the image into blurring layers we measure the likelihood of the derivatives in small image windows under each model. We then look for a smooth layer assignment that maximizes the likelihood in each local window.

1.1 Related work

Blind deconvolution is an extensive research area. Research on blind deconvolution given a single image usually concentrates on cases in which the image is uniformly blurred. A summary and analysis of many deconvolution algorithms can be found in [14]. Early deblurring methods treated blurs that can be characterized by a regular pattern of zeros in the frequency domain, such as box filter blurs [26]. This method is known to be very sensitive to noise. Even in the noise-free case, box filter blurs cannot be identified in the frequency domain if different blurs are present. More recent methods make other assumptions about the image model. This includes an autoregressive process [22], spatial isotropy [28], power-law distributions [8, 20], and piecewise-smooth edge modeling [3].
In creative recent research which inspired our approach, Fergus et al. [12] use the statistics of natural images to estimate the blurring kernel (again, assuming a uniform blur). Their approach searches for the max-marginal blurring kernel and a deblurred image, using a prior on the derivatives distribution in an unblurred image. They address more than box filters, and present impressive reconstructions of complex blurring kernels. Our approach also relies on natural image statistics, but it takes the opposite direction: search for a kernel that will bring the unblurred distribution close to the observed distribution. Thus, in addition to handling non-uniform blurs, our approach avoids the need to estimate the unblurred image in every step. In [10], Elder and Zucker propose a scale-space approach for estimating the scale of an edge. As the edge's scale provides some measure of blur, this is used for segmenting an image into in-focus and out-of-focus layers. The approach was demonstrated on a rather piecewise-constant image, unlike the rich texture patterns considered in this paper. In [4], blind restoration of spatially-varying blur was studied in the case of astronomical images, which have statistics quite different from the natural scenes addressed in this paper. Other approaches to motion deblurring include hardware approaches [6, 17, 7], and using multiple frames to estimate blur, e.g. [5, 21, 29]. Another related subject is the research on depth from focus or depth from defocus (see [9, 11] to name a few), in which a scene is captured using multiple focus settings. As a scene point's focus is a function of its depth, the relative blur is used to estimate depth information. Again, most of this research relies on more than a single frame. Recent work in computer vision has applied natural image priors to a variety of applications like denoising [25, 24], super resolution [27], video matting [2], inpainting [16] and reflections decomposition [15].
2 Image statistics and blurring

Figure 1(a) presents an image of an outdoor scene with a passing bus. The bus is blurred horizontally as a result of its motion. In Fig. 1(b) we plot the log histogram of the vertical derivatives of this image, and of the horizontal derivatives within the blurred area (marked with a rectangle).

Figure 1: Blurred versus unblurred derivatives histograms. (a) Input image. (b) Horizontal derivatives within the blurred region versus vertical derivatives in the entire image. (c) Simulating different blurs in the vertical direction (input, 5-tap blur, 21-tap blur). (d) Horizontal derivatives within the blurred region matched with blurred verticals (4-tap blur).

As can be seen, the blur changes the shape of the histogram significantly. This suggests that the statistics of derivative filter responses can be used for detecting blurred image areas. How does the degree of blur affect the derivatives histogram? To answer this question we simulate histograms of different blurs. Let f_k denote the horizontal box kernel of size 1 × k (that is, all entries of f_k equal 1/k). We convolve the image with the kernels f_k^T (where k runs from 1 to 30) and compute the vertical derivatives distributions:

  p_k ∝ hist(d_y ∗ f_k^T ∗ I)   (1)

where d_y = [1 −1]^T. Some of these log histograms are plotted in Fig. 1(c). As the size of the blurring kernel changes the derivatives distribution, we would also like to use the histograms for determining the degree of blur. For example, as illustrated in Fig. 1(d), we can match the distribution of horizontal derivatives in the blurred area with p_4, the distribution of vertical derivatives after blurring with a 4-tap kernel.
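The simulation of eq. (1) is straightforward to reproduce: blur each column with a k-tap box filter and histogram the vertical derivatives. A minimal numpy sketch; the function name and binning are ours, and the paper additionally restricts the histograms to windows around edges, which we omit here:

```python
import numpy as np

def blurred_derivative_hist(img, k, bins):
    # p_k from eq. (1): hist(d_y * f_k^T * I), with f_k a 1/k box filter
    box = np.ones(k) / k
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, box, mode='same'), 0, img)
    dy = np.diff(blurred, axis=0)          # d_y = [1 -1]^T
    counts, _ = np.histogram(dy, bins=bins)
    return counts / counts.sum()           # normalized histogram p_k
```

Wider kernels concentrate the derivative mass around zero, which is exactly the histogram change visible in Fig. 1(c).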
2.1 Identifying blur using image statistics

Given an image, the direction of motion blur can be selected as the direction with minimal derivatives variation, as in [28]. For simplicity of the derivation we assume here that the motion direction is horizontal, and that the image contains a single blurred object plus an unblurred background. Our goal is to determine the size of the blur kernel, that is, to recover the filter f_k which is responsible for the blur observed in the image. For that we compute the histogram of horizontal derivatives in the image. However, not all of the image is blurred. Therefore, without segmenting the blurred areas there is no single blurring model p_k that will describe the observed histogram. Instead, we try to describe the observed histogram with a mixture model. We define the log-likelihood of the derivatives in a window with respect to each of the blurring models as:

  ℓ_k(i) = Σ_{j∈W_i} log p_k(I_x(j))   (2)

where I_x(j) is the horizontal derivative at pixel j, and W_i is a window around pixel i. Thus, ℓ_k(i) measures how well the i-th window is explained by a k-tap blur. For an input image I and a given pair of kernels, we can measure the data log-likelihood by associating each window with the maximum-likelihood kernel:

  L(I | f_k1, f_k2) = Σ_{i∈I} max( ℓ_k1(i), ℓ_k2(i) )   (3)

We search for a blurring model p_k0 such that, when combined with the model p_1 (derivatives of the unblurred image), it maximizes the log-likelihood of the observed derivatives:

  k_0 = arg max_k L(I | f_1, f_k)   (4)

One problem we need to address in defining the likelihoods is the fact that uniform areas, or areas with pure horizontal edges (the aperture problem), do not contain any information about the blur. On the other hand, uniform areas receive the highest likelihoods from wide blur kernels (since the derivatives distribution for wide kernels is more concentrated around zero, as can be observed in Figure 1(c)).
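The selection rule of eqs. (2)–(4) amounts to scoring each window under every candidate histogram model and keeping the kernel whose mixture with the no-blur model explains the data best. A minimal sketch; the window list and the dict of precomputed histograms are our own representation, not the paper's:

```python
import numpy as np

def select_kernel(windows, models, eps=1e-12):
    """Pick k0 maximizing eqs. (3)-(4).
    windows: list of 1D arrays of horizontal derivatives (one per window)
    models:  dict k -> (histogram p_k, bin edges); k = 1 is the no-blur model
    """
    def loglik(d, k):
        p, edges = models[k]
        idx = np.clip(np.digitize(d, edges) - 1, 0, len(p) - 1)
        return np.log(p[idx] + eps).sum()              # eq. (2)

    best_k, best_L = None, -np.inf
    for k in models:
        if k == 1:
            continue
        # eq. (3): each window is explained by the better of the two kernels
        L = sum(max(loglik(d, 1), loglik(d, k)) for d in windows)
        if L > best_L:                                 # eq. (4)
            best_k, best_L = k, L
    return best_k
```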
When the image consists of large uniform areas, this concentration effect biases the likelihood toward wider blur kernels. To overcome this, we start by scanning the image with a simple edge detector and keep only windows with significant vertical edges. In order to make our model consistent, when building the blurred distribution models p_k (eq. 1), we also take into account only pixels within a window around a vertical edge. Note that since we deal here with one-dimensional kernels, we can estimate the expected blurred histogram p_k (eq. 1) from the perpendicular direction of the same image.

2.2 Segmenting blur layers

Once the blurring kernel f_k has been found, we can use it to deconvolve the image, as in Fig. 2(b). While this significantly improves the image in the blurred areas, serious artifacts are observed in the background. Therefore, in addition to recovering the blurring kernel, we need to segment the image into blurred and unblurred layers. We look for a smooth segmentation that maximizes the likelihood of the derivatives in each region. We define the energy of a segmentation as:

  E(x) = Σ_i −ℓ(x(i), i) + Σ_{<i,j>} e_ij |x(i) − x(j)|   (5)

where ℓ(x(i), i) = ℓ_1(i) for x(i) = 0 and ℓ(x(i), i) = ℓ_k(i) for x(i) = 1, <i, j> are neighboring image pixels, and e_ij is a smoothness term:

  e_ij = λ + ν( |I(i) − I_{−f_k}(i)| + |I(j) − I_{−f_k}(j)| )   (6)

Here I_{−f_k} denotes the deconvolved image. The smoothness term combines two parts. The first is simply a constant penalty for assigning different labels to neighboring pixels, thus preferring smooth segmentations. The second part encodes the fact that it is cheaper to cut the image in places where there is no visual seam between the original and the deconvolved images (e.g. [1]). Given the local likelihood scores and the energy definition, we would like to find the minimal-energy segmentation. This reduces to finding a min-cut in a graph. Given the segmentation mask x, we convolve it with a Gaussian filter to obtain a smoother seam.
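To make eqs. (5) and (6) concrete, the sketch below evaluates the energy of a candidate labeling on a 1D chain of pixels. The paper minimizes this energy over the 2D pixel grid with a min-cut; the 1D restriction and all names here are ours:

```python
import numpy as np

def segmentation_energy(labels, ll_unblurred, ll_blurred, img, deconv,
                        lam=1.0, nu=0.5):
    # eq. (5): data term (negative per-pixel log-likelihood under the
    # assigned model) plus the seam-aware smoothness term of eq. (6),
    # on a 1D chain of pixels for brevity
    data = np.where(labels == 0, -ll_unblurred, -ll_blurred).sum()
    diff = np.abs(img - deconv)               # visual seam cost per pixel
    cut = np.abs(np.diff(labels))             # |x(i) - x(j)| for neighbors
    e_ij = lam + nu * (diff[:-1] + diff[1:])  # eq. (6)
    return data + (e_ij * cut).sum()
```

Flipping a single label in a well-explained region pays both a worse data term and the smoothness penalty for the two new label transitions, which is what drives the preference for smooth segmentations.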
The final restored image is computed as:

  R(i) = x(i) I_{−f_k}(i) + (1 − x(i)) I(i)   (7)

3 Results

To compute a deconvolved image I_{−f_k} given the blurring kernel, we follow [12] in using the MATLAB implementation (deconvlucy) of the Richardson-Lucy deconvolution algorithm [23, 18]. Figure 2 presents results for several example images. For the doll example the image was segmented into 3 blurring layers. The examples of Figure 2 and additional results are available in high resolution in the supplementary material. The supplementary file also includes examples with non-horizontal blurs. To determine the blur direction in those images we select the direction with minimal derivatives variation, as in [28]. This approach was not always robust enough. For each image we show what happens if the segmentation is ignored and the entire image is deconvolved with the selected kernel (for the doll case the wider kernel is shown). While this improves the result in the blurred area, strong artifacts are observed in the rest of the image. In comparison, the third row presents the restored images computed from eq. 7 using the blurring-layer segmentation. We also show the local MAP labeling of the edges. White pixels are ones for which an unblurred model receives a higher likelihood, i.e. ℓ_1(i) > ℓ_k(i), and for gray pixels ℓ_1(i) < ℓ_k(i) (for the doll case there are 3 groups, defined in a similar way). The last row presents the segmentation contour. The output contour does not perfectly align with image edges. This is because our goal in selecting the segmentation is to produce visually plausible results. The smoothness term of our energy (eq. 6) does not aim to output an accurate segmentation, and it does not prefer to align segmentation edges with image edges. Instead it searches for a cut that makes the seam between the layers unobservable.

Figure 2: Deblurring results. (a) Input image. (b) Applying the recovered kernel to the entire image. (c) Our result.
(d) Local classification of windows. (e) Segmentation contour.

The recovered blur sizes for these examples were 12 pixels for the bicycles image and 4 pixels for the bus. For the doll image, a 9-pixel blur was identified in the skirt segment and a 2-pixel blur in the doll head. We note that while recovering large degrees of blur as in the bicycles example is visually more impressive, discriminating small degrees of blur as in the bus example is more challenging from the statistical aspect. This is because the derivatives distributions in the case of small blurs are much more similar to the distributions of unblurred images. For the bus image the size of the blur kernel found by our algorithm was 4 pixels. To demonstrate that this is actually the true kernel size, we show in Figure 3 the deconvolution results with a 3-tap filter and with a 5-tap filter. Stronger artifacts are observed in each of those cases.

Figure 3: Deconvolving the bus image using different filters (input, 3-tap, 4-tap, 5-tap, matlab). The 4-tap filter selected by our algorithm yields the best results.

Next, we consider several simple alternatives to some of the algorithm parts. We start by investigating the need for segmentation and then discuss the usage of the image statistics.

Segmentation: As demonstrated in Fig. 2(b), deconvolving the entire image with the same kernel damages the unblurred parts. One obvious solution is to divide the image into regions and match a separate blur kernel to each region. As demonstrated by Fig. 2(d), even if we limit the kernel choice in each local window to a small set of 2-3 kernels, the local decision can be wrong. For all the examples in this paper we used 15 × 35 windows. There is some tradeoff in selecting a good window size. While a likelihood measure based on a big window is more reliable, such a window might cover regions from different blurring layers.
Another alternative is to break the image into segments using an unsupervised segmentation algorithm, and to match a kernel to each segment. The fact that blur changes the derivatives distributions also suggests that it might be captured as a kind of texture cue. Therefore, it is particularly interesting to try segmenting the image using texture affinities (e.g. [13, 19]). However, as this is an unsupervised segmentation process which does not take the grouping goal into account, it is hard to expect it to yield exactly the blurred layers. Fig. 4(b) presents segmentation results using the Ncuts framework of [19]. The output over-segments blur layers, while merging parts of blurred and unblurred objects. Unsurprisingly, the recovered kernels are wrong.

Figure 4: Deblurring using unsupervised segmentation. (a) Input. (b) Unsupervised segmentation and the width of the kernel matched to each segment (13, 1, 1, 3, 18). (c) Result of deblurring each segment independently.

Image statistics: We move on to evaluating the contribution of the image statistics. To do that independently of the segmentation, we manually segmented the bus and applied the MATLAB blind deconvolution function (deconvblind), initialized with a 1 × 7 box kernel. Strong artifacts were introduced, as shown in the last column of Fig. 3. The algorithm's results also depend on the actual histograms used. Derivatives histograms of different natural images usually have common characteristics, such as the heavy-tailed structure. Yet the histogram structure of different images is not identical, and we found that trying to deblur one image using the statistics of a different image does not work that well. For example, Figure 5 shows the result of deblurring the bus image using the bicycles image statistics. The selected blur in this case was a 6-tap kernel, but deblurring the image with this kernel introduces artifacts. The classification of pixels into layers using this model is wrong as well.
Our solution was to process each image using the vertical derivatives histograms from the same image. This is not an optimal solution, as when the image is blurred horizontally some of the vertical derivatives are degraded as well. Yet it provided better results than using histograms obtained from different images.

Figure 5: Deblurring the bus image using the bicycles image statistics. (a) Applying the recovered kernel to the entire image. (b) Deblurring result. (c) Local classification of windows. (d) Segmentation contour.

Limitations: Our algorithm uses simple derivatives statistics, and the power of such statistics is somewhat surprising. Yet the algorithm might fail. One failure source is blurs which cannot be described as a box filter, or failures in identifying the blur direction. Even when this is not the case, the algorithm may fail to identify the correct blur size, or it may not infer the correct segmentation. Figure 6 demonstrates a failure. In this case the algorithm preferred a model explaining the bushes texture instead of a model explaining the car blur. The bushes area consists of many small derivatives which are explained better by a small-blur model than by a no-blur model. On the other hand, the car contains very few vertical edges. As a result the algorithm selected a 6-pixel blur model. This model might increase the likelihood of the bushes texture and of the noise on the road, but it does not remove the blur of the car.

Figure 6: Deblurring failure. (a) Input. (b) Applying the recovered kernel (6-tap) to the entire image. (c) Deblurring result. (d) Local classification. (e) Segmentation contour.

4 Discussion

This paper addresses the problem of blind motion deconvolution without assuming that the entire image has undergone the same blur. Thus, in addition to recovering an unknown blur kernel, we segment the image into layers with different blurs.
We treat this highly challenging task using a surprisingly simple approach, relying on the derivative distributions in blurred images. We model the expected derivative distributions under different degrees of blur, and those distributions are used for detecting different blurs in image windows. The box-filter model used in this work is definitely limiting, and as pointed out by [12, 6], many blurring patterns observed in real images are more complex. A possible future research direction is to develop stronger statistical models which include stronger features in addition to the simple first-order derivatives. Stronger models might enable us to identify a wider class of blurring kernels than just box filters. In particular, they could provide a better strategy for identifying the blur direction. A better model might also avoid the need to detect vertical edges and artificially limit the model to windows around edges. In future work, it will also be interesting to try to detect different blurs without assuming a small number of blur layers. This will require estimating the blurs in the image in a continuous way, and might also yield a depth-from-focus algorithm that works on a single image.

References

[1] A. Agarwala et al. Interactive digital photomontage. SIGGRAPH, 2004. [2] N. Apostoloff and A. Fitzgibbon. Bayesian image matting using learnt image priors. In CVPR, 2005. [3] L. Bar, N. Sochen, and N. Kiryati. Variational pairing of image segmentation and blind restoration. In ECCV, 2004. [4] J. Bardsley, S. Jefferies, J. Nagy, and R. Plemmons. Blind iterative restoration of images with spatially-varying blur. Optics Express, 2006. [5] B. Bascle, A. Blake, and A. Zisserman. Motion deblurring and superresolution from an image sequence. In ECCV, 1996. [6] M. Ben-Ezra and S. K. Nayar. Motion-based motion deblurring. PAMI, 2004. [7] Canon Inc. What is optical image stabilizer? http://www.canon.com/bctv/faq/optis.html, 2006. [8] J. Caron, N.
Namazi, and C. Rollins. Noniterative blind data restoration by use of an extracted filter function. Applied Optics, 2002. [9] T. Darrell and K. Wohn. Pyramid based depth from focus. In CVPR, 1988. [10] J. H. Elder and S. W. Zucker. Local scale control for edge detection and blur estimation. PAMI, 1998. [11] P. Favaro, S. Osher, S. Soatto, and L.A. Vese. 3d shape from anisotropic diffusion. In CVPR, 2003. [12] R. Fergus et al. Removing camera shake from a single photograph. SIGGRAPH, 2006. [13] T. Hofmann, J. Puzicha, and J. M. Buhmann. Unsupervised texture segmentation in a deterministic annealing framework. PAMI, 1998. [14] D. Kundur and D. Hatzinakos. Blind image deconvolution. IEEE Signal Processing Magazine, 1996. [15] A. Levin and Y. Weiss. User assisted separation of reflections from a single image using a sparsity prior. In ECCV, 2004. [16] A. Levin, A. Zomet, and Y. Weiss. Learning how to inpaint from global image statistics. In ICCV, 2003. [17] X. Liu and A. Gamal. Simultaneous image formation and motion blur restoration via multiple capture. In Int. Conf. Acoustics, Speech, Signal Processing, 2001. [18] L. Lucy. Bayesian-based iterative method of image restoration. Journal of Ast., 1974. [19] J. Malik, S. Belongie, T. Leung, and J. Shi. Contour and texture analysis for image segmentation. In Perceptual Organization for artificial vision systems. Kluwer Academic, 2000. [20] R. Neelamani, H. Choi, and R. Baraniuk. Forward: Fourier-wavelet regularized deconvolution for ill-conditioned systems. IEEE Trans. on Signal Processing, 2004. [21] A. Rav-Acha and S. Peleg. Two motion-blurred images are better than one. Pattern Recognition Letters, 2005. [22] S. Reeves and R. Mersereau. Blur identification by the method of generalized cross-validation. Transactions on Image Processing, 1992. [23] W. Richardson. Bayesian-based iterative method of image restoration. J. of the Optical Society of America, 1972. [24] S. Roth and M.J. Black.
Fields of experts: A framework for learning image priors. In CVPR, 2005. [25] E. P. Simoncelli. Statistical modeling of photographic images. In Handbook of Image and Video Processing, 2005. [26] T. Stockham, T. Cannon, and R. Ingebretsen. Blind deconvolution through digital signal processing. IEEE, 1975. [27] M. F. Tappen, B. C. Russell, and W. T. Freeman. Exploiting the sparse derivative prior for super-resolution and image demosaicing. SCTV, 2003. [28] Y. Yitzhaky, I. Mor, A. Lantzman, and N.S. Kopeika. Direct method for restoration of motion blurred images. JOSA-A, 1998. [29] M. Shi and J. Zheng. A slit scanning depth of route panorama from stationary blur. In Proc. IEEE Conf. Comput. Vision Pattern Recog., 2005.
2006
Accelerated Variational Dirichlet Process Mixtures Kenichi Kurihara Dept. of Computer Science Tokyo Institute of Technology Tokyo, Japan kurihara@mi.cs.titech.ac.jp Max Welling Bren School of Information and Computer Science UC Irvine Irvine, CA 92697-3425 welling@ics.uci.edu Nikos Vlassis Informatics Institute University of Amsterdam The Netherlands vlassis@science.uva.nl Abstract Dirichlet Process (DP) mixture models are promising candidates for clustering applications where the number of clusters is unknown a priori. Due to computational considerations these models are unfortunately unsuitable for large scale data-mining applications. We propose a class of deterministic accelerated DP mixture models that can routinely handle millions of data-cases. The speedup is achieved by incorporating kd-trees into a variational Bayesian algorithm for DP mixtures in the stick-breaking representation, similar to that of Blei and Jordan (2005). Our algorithm differs in the use of kd-trees and in the way we handle truncation: we only assume that the variational distributions are fixed at their priors after a certain level. Experiments show that speedups relative to the standard variational algorithm can be significant. 1 Introduction Evidenced by three recent workshops1, nonparametric Bayesian methods are gaining popularity in the machine learning community. In each of these workshops computational efficiency was mentioned as an important direction for future research. In this paper we propose computational speedups for Dirichlet Process (DP) mixture models [1, 2, 3, 4, 5, 6, 7], with the purpose of improving their applicability in modern day data-mining problems where millions of data-cases are no exception. Our approach is related to, and complements, the variational mean-field algorithm for DP mixture models of Blei and Jordan [7]. 
In this approach, the intractable posterior of the DP mixture is approximated with a factorized variational finite (truncated) mixture model with T components, which is optimized to minimize the KL distance to the posterior. However, a downside of their model is that the variational families are not nested over T, and locating an optimal truncation level T may be difficult (see Section 3). In this paper we propose an alternative variational mean-field algorithm, called VDP (Variational DP), in which the variational families are nested over T. In our model we allow for an unbounded number of components for the variational mixture, but we tie the variational distributions after level T to their priors. Our algorithm proceeds in a greedy manner by starting with T = 1 and releasing components when this improves (significantly) the KL bound. Releasing is most effectively done by splitting a component in two children and updating them to convergence. Our approach essentially resolves the issue in [7] of searching for an optimal truncation level of the variational mixture (see Section 4). Additionally, a significant contribution is that we incorporate kd-trees into the VDP algorithm as a way to speed up convergence [8, 9]. A kd-tree structure recursively partitions the data space into a number of nodes, where each node contains a subset of the data-cases. Following [9], for a given tree expansion we tie together the responsibility over mixture components of all data-cases contained in each outer node of the tree. By caching certain sufficient statistics in each node of the kd-tree we then achieve computational gains, while the variational approximation becomes a function of the depth of the tree at which one operates (see Section 6).

1 http://aluminum.cse.buffalo.edu:8079/npbayes/nipsws05/topics, http://www.cs.toronto.edu/~beal/npbayes/, http://www2.informatik.hu-berlin.de/~bickel/npb-workshop.html
The resulting Fast-VDP algorithm provides an elegant way to trade off computational resources against accuracy. We can always release new components from the pool and split kd-tree nodes as long as we have computational resources left. Our setup guarantees that this will always (at least in theory) improve the KL bound (in practice local optima may force us to reject certain splits, see Section 7). As we empirically demonstrate in Section 8, a kd-tree can offer significant speedups, allowing our algorithm to handle millions of data-cases. As a result, Fast-VDP is the first algorithm entertaining an unbounded number of clusters that is practical for modern day data-mining applications.

2 The Dirichlet Process Mixture in the Stick-Breaking Representation

A DP mixture model in the stick-breaking representation can be viewed as possessing an infinite number of components with random mixing weights [4]. In particular, the generative model of a DP mixture assumes:

• An infinite collection of components H = {η_i}_{i=1}^∞ that are independently drawn from a prior p_η(η_i|λ) with hyperparameters λ.
• An infinite collection of 'stick lengths' V = {v_i}_{i=1}^∞, with v_i ∈ [0, 1] for all i, that are independently drawn from a prior p_v(v_i|α) with hyperparameters α. They define the mixing weights {π_i}_{i=1}^∞ of the mixture as π_i(V) = v_i ∏_{j=1}^{i−1} (1 − v_j), for i = 1, …, ∞.
• An observation model p_x(x|η) that generates a datum x from component η.

Given a dataset X = {x_n}_{n=1}^N, each data-case x_n is assumed to be generated by first drawing a component label z_n = k ∈ {1, …, ∞} from the infinite mixture with probability p_z(z_n = k|V) ≡ π_k(V), and then drawing x_n from the corresponding observation model p_x(x_n|η_k). We will denote by Z = {z_n}_{n=1}^N the set of all labels, by W = {H, V, Z} the set of all latent variables of the DP mixture, and by θ = {λ, α} the hyperparameters.
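The stick-breaking construction above is easy to illustrate numerically: the following sketch samples the first T mixing weights π_i(V) = v_i ∏_{j<i}(1 − v_j). The choice v_i ~ Beta(1, α), the value of α, and the cutoff T are illustrative assumptions for the demonstration.

```python
import numpy as np

def stick_breaking_weights(alpha, T, seed=0):
    """Sample pi_i(V) = v_i * prod_{j<i}(1 - v_j) for i = 1..T, with
    stick lengths v_i drawn i.i.d. from Beta(1, alpha)."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=T)                    # stick proportions
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * leftover                                 # mixing weights

weights = stick_breaking_weights(alpha=2.0, T=50)
```

The weights are nonnegative and their partial sums approach 1, with the unassigned mass ∏_{j≤T}(1 − v_j) left for the infinitely many components beyond T.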
In clustering problems we are mainly interested in computing the posterior over data labels p(z_n|X, θ), as well as the predictive density

p(x|X, θ) = ∫_{H,V} p(x|H, V) Σ_Z p(W|X, θ),

both of which are intractable since p(W|X, θ) cannot be computed analytically.

3 Variational Inference in Dirichlet Process Mixtures

For variational inference, the intractable posterior p(W|X, θ) of the DP mixture can be approximated with a parametric family of factorized variational distributions q(W; φ) of the form

q(W; φ) = ∏_{i=1}^L [ q_{v_i}(v_i; φ^v_i) q_{η_i}(η_i; φ^η_i) ] ∏_{n=1}^N q_{z_n}(z_n)   (1)

where q_{v_i}(v_i; φ^v_i) and q_{η_i}(η_i; φ^η_i) are parametric models with parameters φ^v_i and φ^η_i (one parameter per i), and the q_{z_n}(z_n) are discrete distributions over the component labels (one distribution per n). Blei and Jordan [7] define an explicit truncation level L ≡ T for the variational mixture in (1) by setting q_{v_T}(v_T = 1) = 1 and assuming that data-cases assign zero responsibility to components with index higher than the truncation level T, i.e., q_{z_n}(z_n > T) = 0. Consequently, in their model only components of the mixture up to level T need to be considered. Variational inference then consists in estimating a set of T parameters {φ^v_i, φ^η_i}_{i=1}^T and a set of N distributions {q_{z_n}(z_n)}_{n=1}^N, collectively denoted by φ, that minimize the Kullback–Leibler divergence D[q(W; φ) || p(W|X, θ)] between the true posterior and the variational approximation, or equivalently that minimize the free energy F(φ) = E_q[log q(W; φ)] − E_q[log p(W, X|θ)]. Since each distribution q_{z_n}(z_n) has nonzero support only for z_n ≤ T, minimizing F(φ) results in a set of update equations for φ that involve only finite sums [7].
However, explicitly truncating the variational mixture as above has the undesirable property that the variational family with truncation level T is not contained within the variational family with truncation level T + 1, i.e., the families are not nested.2 The result is that there may be an optimal finite truncation level T for q, which contradicts the intuition that the more components we allow in q, the better the approximation should be (reaching its best when T → ∞). Moreover, locating a near-optimal truncation level may be difficult since F as a function of T may exhibit local minima (see Fig. 4 in [7]).

4 Variational Inference with an Infinite Variational Model

Here we propose a slightly different variational model for q that allows families over T to be nested. In our setup, q is given by (1) where we let L go to infinity but tie the parameters of all models after a specific level T (with T ≪ L). In particular, we impose the condition that for all components with index i > T the variational distributions for the stick lengths q_{v_i}(v_i) and the variational distributions for the components q_{η_i}(η_i) are equal to their corresponding priors, i.e., q_{v_i}(v_i; φ^v_i) = p_v(v_i|α) and q_{η_i}(η_i; φ^η_i) = p_η(η_i|λ). In our model we define the free energy F as the limit F = lim_{L→∞} F_L, where F_L is the free energy defined by q in (1) and a truncated DP mixture at level L (justified by the almost sure convergence of an L-truncated Dirichlet process to an infinite Dirichlet process when L → ∞ [6]). Using the parameter-tying assumption for i > T, the free energy reads

F = Σ_{i=1}^T { E_{q_{v_i}}[ log( q_{v_i}(v_i; φ^v_i) / p_v(v_i|α) ) ] + E_{q_{η_i}}[ log( q_{η_i}(η_i; φ^η_i) / p_η(η_i|λ) ) ] } + Σ_{n=1}^N E_q[ log( q_{z_n}(z_n) / ( p_z(z_n|V) p_x(x_n|η_{z_n}) ) ) ].   (2)

In our scheme T defines an implicit truncation level of the variational mixture, since there are no free parameters to optimize beyond level T. As in [7], the free energy F is a function of T parameters {φ^v_i, φ^η_i}_{i=1}^T and N distributions {q_{z_n}(z_n)}_{n=1}^N.
However, contrary to [7], data-cases may now assign nonzero responsibility to components beyond level T, and therefore each q_{z_n}(z_n) must now have infinite support (which requires computing infinite sums in the various quantities of interest). An important implication of our setup is that the variational families are now nested with respect to T (since for i > T, q_{v_i}(v_i) and q_{η_i}(η_i) can always revert to their priors), and as a result it is guaranteed that as we increase T there exist solutions that decrease F. This is an important result because it allows for optimization with adaptive T starting from T = 1 (see Section 7). From the last term of (2) we directly see that the q_{z_n}(z_n) that minimizes F is given by

q_{z_n}(z_n = i) = exp(S_{n,i}) / Σ_{j=1}^∞ exp(S_{n,j})   (3)

where

S_{n,i} = E_{q_V}[log p_z(z_n = i|V)] + E_{q_{η_i}}[log p_x(x_n|η_i)].   (4)

Minimization of F over φ^v_i and φ^η_i can be carried out by direct differentiation of (2) for particular choices of models for q_{v_i} and q_{η_i} (see Section 5). Using q_{z_n} from (3), the free energy (2) reads

F = Σ_{i=1}^T { E_{q_{v_i}}[ log( q_{v_i}(v_i; φ^v_i) / p_v(v_i|α) ) ] + E_{q_{η_i}}[ log( q_{η_i}(η_i; φ^η_i) / p_η(η_i|λ) ) ] } − Σ_{n=1}^N log Σ_{i=1}^∞ exp(S_{n,i}).   (5)

Evaluation of F requires computing the infinite sum Σ_{i=1}^∞ exp(S_{n,i}) in (5). The difficult part is Σ_{i=T+1}^∞ exp(S_{n,i}). Under the parameter-tying assumption for i > T, most terms of S_{n,i} in (4) factor out of the infinite sum as constants (since they do not depend on i), except for the term Σ_{j=T+1}^{i−1} E_{p_v}[log(1 − v)] = (i − 1 − T) E_{p_v}[log(1 − v)]. From the above, the infinite sum can be shown to be

Σ_{i=T+1}^∞ exp(S_{n,i}) = exp(S_{n,T+1}) / ( 1 − exp( E_{p_v}[log(1 − v)] ) ).   (6)

Using the variational q(W) as an approximation to the true posterior p(W|X, θ), the required posterior over data labels can be approximated by p(z_n|X, θ) ≈ q_{z_n}(z_n). Although q_{z_n}(z_n) has infinite support, in practice it suffices to use the individual q_{z_n}(z_n = i) for the finite part i ≤ T, and the cumulative q_{z_n}(z_n > T) for the infinite part.

2 We thank David Blei for pointing this out.
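Equation (6) is just a geometric series: for i > T the scores satisfy S_{n,i} = S_{n,T+1} + (i − 1 − T) E_{p_v}[log(1 − v)], so the tail of the normalizer is available in closed form. A small numerical sketch (the score values are arbitrary):

```python
import numpy as np

def responsibilities_with_tail(S_finite, S_tail_first, e_log_1mv):
    """Return q(z_n = i) for i <= T plus the cumulative tail mass q(z_n > T).

    S_finite:     scores S_{n,1..T}
    S_tail_first: S_{n,T+1}
    e_log_1mv:    E_{p_v}[log(1 - v)]  (negative under the tied prior)
    """
    r = np.exp(e_log_1mv)                       # common ratio, 0 < r < 1
    tail = np.exp(S_tail_first) / (1.0 - r)     # Eq. (6): geometric tail sum
    Z = np.exp(S_finite).sum() + tail
    return np.exp(S_finite) / Z, tail / Z
```

Summing an explicit truncation of the infinite series reproduces the closed-form tail, confirming that the normalizer in (3) can be evaluated exactly despite the infinite support.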
Finally, using the parameter-tying assumption for i > T and the identity Σ_{i=1}^∞ π_i(V) = 1, the predictive density p(x|X, θ) can be approximated by

p(x|X, θ) ≈ Σ_{i=1}^T E_{q_V}[π_i(V)] E_{q_{η_i}}[p_x(x|η_i)] + [ 1 − Σ_{i=1}^T E_{p_v}[π_i(V)] ] E_{p_η}[p_x(x|η)].   (7)

Note that all quantities of interest, such as the free energy (5) and the predictive distribution (7), can be computed analytically even though they involve infinite sums.

5 Solutions for the exponential family

The results in the previous section apply independently of the choice of models for the DP mixture. In this section we provide analytical solutions for models in the exponential family. In particular we assume that p_v(v_i|α) = Beta(α_1, α_2) and q_{v_i}(v_i; φ^v_i) = Beta(φ^v_{i,1}, φ^v_{i,2}), and that p_x(x|η), p_η(η|λ), and q_{η_i}(η_i; φ^η_i) are given by

p_x(x|η) = h(x) exp{η^⊤ x − a(η)}   (8)
p_η(η|λ) = h(η) exp{λ_1 η + λ_2(−a(η)) − a(λ)}   (9)
q_{η_i}(η_i; φ^η_i) = h(η_i) exp{φ^η_{i,1} η_i + φ^η_{i,2}(−a(η_i)) − a(φ^η_i)}.   (10)

In this case, the probabilities q_{z_n}(z_n = i) are given by (3) with S_{n,i} computed from (4) using

E_{q_{v_i}}[log v_i] = Ψ(φ^v_{i,1}) − Ψ(φ^v_{i,1} + φ^v_{i,2})   (11)
E_{q_{v_j}}[log(1 − v_j)] = Ψ(φ^v_{j,2}) − Ψ(φ^v_{j,1} + φ^v_{j,2})   (12)
E_{q_{η_i}}[log p_x(x_n|η_i)] = E_{q_{η_i}}[η_i]^⊤ x_n − E_{q_{η_i}}[a(η_i)]   (13)

where Ψ(·) is the digamma function. The optimal parameters φ^v, φ^η can be found to be

φ^v_{i,1} = α_1 + Σ_{n=1}^N q_{z_n}(z_n = i),   φ^v_{i,2} = α_2 + Σ_{n=1}^N Σ_{j=i+1}^∞ q_{z_n}(z_n = j)   (14)
φ^η_{i,1} = λ_1 + Σ_{n=1}^N q_{z_n}(z_n = i) x_n,   φ^η_{i,2} = λ_2 + Σ_{n=1}^N q_{z_n}(z_n = i).   (15)

The update equations are similar to those in [7] except that we have used Beta(α_1, α_2) instead of Beta(1, α), and φ^v_{i,2} involves an infinite sum Σ_{j=i+1}^∞ q_{z_n}(z_n = j), which can be computed using (3) and (6). In [7] the corresponding sum is finite since q_{z_n}(z_n) is truncated at T. Note that the VDP algorithm operates in a space where component labels are distinguishable, i.e., if we permute the labels the total probability of the data changes.
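The infinite sum in the update for φ^v_{i,2} becomes finite once the cumulative tail mass q(z_n > T) is known, so the stick-length updates (14) reduce to plain cumulative sums. A sketch assuming responsibilities are stored as an (N, T+1) array whose last column is the tail mass (this storage convention is an illustrative assumption):

```python
import numpy as np

def update_stick_params(alpha1, alpha2, q_z):
    """Variational Beta updates of Eq. (14):
       phi^v_{i,1} = alpha1 + sum_n q(z_n = i)
       phi^v_{i,2} = alpha2 + sum_n sum_{j>i} q(z_n = j)
    q_z has shape (N, T+1); column T holds the cumulative tail q(z_n > T)."""
    T = q_z.shape[1] - 1
    counts = q_z.sum(axis=0)                    # total mass per column
    # mass strictly above each component i (tail column included)
    above = np.cumsum(counts[::-1])[::-1][1:]
    return alpha1 + counts[:T], alpha2 + above
```

The reverse cumulative sum implements Σ_{j>i} in O(T) rather than O(T²), which matters once T grows during the greedy component-release schedule.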
Since the average a priori mixture weights of the components are ordered by their size, the optimal labelling of the a posteriori variational components is also ordered according to cluster size. Hence, we have incorporated into our final algorithm a reordering step of components according to approximate size after each optimization step (a feature that was not present in [7]).

6 Accelerating inference using a kd-tree

In this section we show that we can achieve accelerated inference for large datasets when we store the data in a kd-tree [10] and cache data sufficient statistics in each node of the kd-tree [8]. A kd-tree is a binary tree in which the root node contains all points and each child node contains a subset of the data points contained in its parent node, where points are separated by a (typically axis-aligned) hyperplane. Each point in the set is contained in exactly one node, and the set of outer nodes of a given expansion of the kd-tree forms a partition of the data set. Suppose the kd-tree containing our data X is expanded to some level. Following [9], to achieve accelerated update equations we constrain all x_n in outer node A to share the same q_{z_n}(z_n) ≡ q_{z_A}(z_A). We can then show that, under this constraint, the q_{z_A}(z_A) that minimizes F is given by

q_{z_A}(z_A = i) = exp(S_{A,i}) / Σ_{j=1}^∞ exp(S_{A,j})   (16)

where S_{A,i} is computed as in (4) using (11)–(13), with (13) replaced by E_{q_{η_i}}[η_i]^⊤ ⟨x⟩_A − E_{q_{η_i}}[a(η_i)], and ⟨x⟩_A denotes the average over all data x_n contained in node A. Similarly, if |n_A| is the number of data in node A, the optimal parameters can be shown to be

φ^v_{i,1} = α_1 + Σ_A |n_A| q_{z_A}(z_A = i),   φ^v_{i,2} = α_2 + Σ_A |n_A| Σ_{j=i+1}^∞ q_{z_A}(z_A = j)   (17)
φ^η_{i,1} = λ_1 + Σ_A |n_A| q_{z_A}(z_A = i) ⟨x⟩_A,   φ^η_{i,2} = λ_2 + Σ_A |n_A| q_{z_A}(z_A = i).   (18)

Finally, using q_{z_A}(z_A) from (16) the free energy (5) reads

F = Σ_{i=1}^T { E_{q_{v_i}}[ log( q_{v_i}(v_i; φ^v_i) / p_v(v_i|α) ) ] + E_{q_{η_i}}[ log( q_{η_i}(η_i; φ^η_i) / p_η(η_i|λ) ) ] } − Σ_A |n_A| log Σ_{i=1}^∞ exp(S_{A,i}).   (19)
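The cached statistics behind (16)–(19) are just the count |n_A| and mean ⟨x⟩_A of each outer node. The following sketch uses a toy median-split partition standing in for a real kd-tree (depth and split rule are illustrative; a real implementation caches these statistics once at build time):

```python
import numpy as np

def outer_node_statistics(X, depth):
    """Split X by coordinate medians for `depth` levels and return, for each
    outer node A, the pair (|n_A|, <x>_A) used by the tied updates (17)-(18)."""
    nodes = [X]
    for d in range(depth):
        axis = d % X.shape[1]                  # cycle through the axes
        split = []
        for pts in nodes:
            t = np.median(pts[:, axis])
            halves = (pts[pts[:, axis] <= t], pts[pts[:, axis] > t])
            split.extend(h for h in halves if len(h))
        nodes = split
    return [(len(a), a.mean(axis=0)) for a in nodes]
```

With these statistics in hand, each sum over the N data-cases collapses to a sum over outer nodes, e.g. φ^η_{i,1} = λ_1 + Σ_A |n_A| q_{z_A}(z_A = i) ⟨x⟩_A, which is the source of the O(T|A|) cost noted below.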
The infinite sums in (17) and (19) can be computed from (6) with S_{n,T+1} replaced by S_{A,T+1}. Note that the cost of each update cycle is O(T|A|), which can be a significant improvement over the O(TN) cost when not using a kd-tree. (The cost of building the kd-tree is O(N log N), but this is amortized over multiple optimization steps.) Note that by refining the tree (expanding outer nodes) the free energy F cannot increase. This allows us to control the trade-off between computational resources and approximation: we can always choose to descend the tree until our computational resources run out, and the level of approximation will be directly tied to F (deeper levels will mean lower F).

7 The algorithm

The proposed framework is quite general and allows flexibility in the design of an algorithm. Below we show in pseudocode the algorithm that we used in our experiments (for DP Gaussian mixtures). Input is a dataset X = {x_n}_{n=1}^N that is already stored in a kd-tree structure. Output is a set of parameters {φ^v_i, φ^η_i}_{i=1}^T and a value for T. From these we can compute the responsibilities q_{z_n} using (3).

1. Set T = 1. Expand the kd-tree to some initial level (e.g., four).
2. Sample a number of 'candidate' components c according to size Σ_A |n_A| q_{z_A}(z_A = c), and split the component that leads to the maximal reduction of F_T. For each candidate c do:
   (a) Expand one level deeper the outer nodes of the kd-tree that assign to c the highest responsibility q_{z_A}(z_A = c) among all components.
   (b) Split c in two components, i and j, through the bisector of its principal component. Initialize the responsibilities q_{z_A}(z_A = i) and q_{z_A}(z_A = j).
   (c) Update only S_{A,i}, φ^v_i, φ^η_i and S_{A,j}, φ^v_j, φ^η_j for the new components i and j, keeping all other parameters as well as the kd-tree expansion fixed.
3. Update S_{A,t}, φ^v_t, φ^η_t for all t ≤ T + 1, while expanding the kd-tree and reordering components.
4. If F_{T+1} > F_T − ε then halt; else set T := T + 1 and go to step 2.
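Step 2b of the pseudocode admits a compact sketch: project the component's data on its principal direction and give each half-space to one child. The hard 0/1 assignment here is an illustrative initialization, not the full VDP update:

```python
import numpy as np

def split_component(X_c):
    """Split one component's data through the bisector of its principal
    component; return initial responsibilities for the two children."""
    centered = X_c - X_c.mean(axis=0)
    # principal direction = top right singular vector of the centered data
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    side = centered @ Vt[0]
    q_i = (side <= 0).astype(float)       # child i takes one half-space
    return q_i, 1.0 - q_i                 # child j takes the other
```

On two well-separated blobs this bisector split recovers the blobs exactly, which is why it is an effective way to release a new component.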
In the above algorithm, the number of sampled candidate components in step 2 can be tuned according to the desired cost/accuracy tradeoff. In our experiments we used 10 candidate components. In step 2b we initialized the responsibilities by q_{z_A}(z_A = i) = 1 − q_{z_A}(z_A = j) = 1 if ⟨x⟩_A is closer to i than to j (according to distance to the expected first moment). In order to speed up the partial updates in step 2c, we additionally set q_{z_A}(z_A = k) = 0 for all k ≠ i, j (so all responsibility is shared between the two new components). In step 3 we reordered components every cycle and expanded the kd-tree every three update cycles, controlling the expansion by the relative change of q_{z_A}(z_A) between a node and its children (alternatively one can measure the change of F_{T+1}). Finally, in step 2c we monitored convergence of the partial updates through F_{T+1}, which can be efficiently computed by adding/subtracting terms involving the new/old components.

[Figure 1: Relative runtimes and free energies of Fast-VDP, VDP, and BJ.]
[Figure 2: Speedup factors and free energy ratios between Fast-VDP and VDP. Top and bottom figures show speedups and free energy ratios, respectively.]

8 Experimental results

In this section we demonstrate VDP, and its kd-tree extension Fast-VDP, on synthetic and real datasets. In all experiments we assumed a Gaussian observation model p_x(x|η) and a Gaussian-inverse-Wishart for p_η(η|λ) and q_{η_i}(η_i; φ^η_i).

Synthetic datasets.
As argued in Section 4, an important advantage of VDP over the 'BJ' algorithm of [7] is that in VDP the variational families are nested over T, which ensures that the free energy is a monotone decreasing function of T and therefore allows for an adaptive T (starting with the trivial initialization T = 1). On the contrary, BJ optimizes the parameters for fixed T (and potentially minimizes the resulting free energy over different values of T), which requires a nontrivial initialization step for each T. Clearly, both the total runtime and the quality of the final solution of BJ depend largely on its initialization step, which makes the direct comparison of VDP with BJ difficult. Still, to get a feeling for the relative performance of VDP, Fast-VDP, and BJ, we applied all three algorithms to a synthetic dataset containing 1000 to 5000 data-cases sampled from 10 Gaussians in 16 dimensions, in which the free parameters of BJ were set exactly as described in [7] (20 initialization trials and T = 20). VDP and Fast-VDP were also executed until T = 20. In Fig. 1 we show the speedup factors and free energy ratios3 among the three algorithms. Fast-VDP was approximately 23 times faster than BJ, and three times faster than VDP on 5000 data-cases. Moreover, Fast-VDP and VDP were always better than BJ in terms of free energy.

In a second synthetic set of experiments we compared the speedup of Fast-VDP over VDP. We sampled data from 10 Gaussians in dimension D with component separation4 c. Using a default number of data-cases N = 10,000, dimensionality D = 16, and separation c = 2, we varied each of them, one at a time. In Fig. 2 we show the speedup factor (top) and the free energy ratio (bottom) between the two algorithms. Note that the latter is always worse for Fast-VDP since it is an approximation to VDP (a ratio closer to one means a better approximation). Fig. 2-left illustrates that the speedup of Fast-VDP over VDP is at least linear in N, as expected from the update equations in Section 6. The speedup factor was approximately 154 for one million data-cases, while the free energy ratio was almost constant over N. Fig. 2-center shows an interesting dependence of speed on dimensionality, with D = 64 giving the largest speedup. The three plots in Fig. 2 are in agreement with similar plots in [8, 9].

3 Free energy ratio is defined as 1 + (F_A − F_B)/|F_B|, where A and B are either Fast-VDP, VDP or BJ.

Real datasets. In this experiment we applied VDP and Fast-VDP to clustering image data. We used the MNIST dataset (http://yann.lecun.com/exdb/mnist/), which consists of 60,000 images of the digits 0–9 in 784 dimensions (28 by 28 pixels). We first applied PCA to reduce the dimensionality of the data to 50. Fast-VDP found 96 clusters in 3,379 seconds with free energy F = 1.759 × 10^7, while VDP found 88 clusters in 72,037 seconds with free energy 1.684 × 10^7. The speedup was 21 and the free energy ratio was 1.044. The mean images of the discovered components are illustrated in Fig. 3. The results of the two algorithms seem qualitatively similar, while Fast-VDP computed its results much faster than VDP.

[Figure 3: Clustering results of Fast-VDP and VDP, with a speedup of 21. The clusters are ordered according to size (from top left to bottom right).]

In a second real data experiment we clustered documents from citeseer (http://citeseer.ist.psu.edu). The dataset has 30,696 documents, with a vocabulary size of 32,473 words. Each document is represented by the counts of words in its abstract. We preprocessed the dataset by Latent Dirichlet Allocation [12] with 200 topics5. We subsequently transformed these topic-counts y_{j,k} (count value of the k'th topic in the j'th document) into x_{j,k} = log(1 + y_{j,k}) to better fit a normal distribution.
In this problem the elapsed times of Fast-VDP and VDP were 335 seconds and 2,256 seconds, respectively, hence a speedup of 6.7. The free energy ratio was 1.040. Fast-VDP found five clusters, while VDP found six clusters. Table 1 shows the three most frequent topics in each cluster. Although the two algorithms found a different number of clusters, we can see that clusters B and F found by VDP are similar, whereas Fast-VDP did not distinguish between these two. Table 2 shows words included in these topics, showing that the documents are well-clustered.

                 Fast-VDP                       VDP
cluster      a    b    c    d    e  |   A    B    C    D    E    F
topic 1     81   73   35   49   76  |  81   73   35   76   49   73
topic 2    102  174   50   92    4  | 102   40   50    4   92  174
topic 3     59   40  110   94  129  |  59  174  110  129   94   40

Table 1: The three most frequent topics (in descending order) in each cluster. Fast-VDP found five clusters, a–e, while VDP found six clusters, A–F.

cluster   most frequent topic   words
a, A      81                    economic, policy, countries, bank, growth, firm, public, trade, market, ...
b, B, F   73                    traffic, packet, tcp, network, delay, rate, bandwidth, buffer, end, loss, ...
c, C      35                    algebra, algebras, ring, algebraic, ideal, field, lie, group, theory, ...
d, E      49                    motion, tracking, camera, image, images, scene, stereo, object, ...
e, D      76                    grammar, semantic, parsing, syntactic, discourse, parser, linguistic, ...

Table 2: Words in the most frequent topic of each cluster.

4 A Gaussian mixture is c-separated if for each pair (i, j) of components we have ||m_i − m_j||² ≥ c² D max(λ_i^max, λ_j^max), where λ^max denotes the maximum eigenvalue of their covariance [11].
5 We thank David Newman for this preprocessing.

9 Conclusions

We described VDP, a variational mean-field algorithm for Dirichlet Process mixtures, and its fast extension Fast-VDP that utilizes kd-trees to achieve speedups. Our contribution is twofold: First, we extended the framework of [7] to allow for nested variational families and an adaptive truncation level for the variational mixture.
Second, we showed how kd-trees can be employed in the framework, offering significant speedups and thus extending related results for finite mixture models [8, 9]. To our knowledge, the VDP algorithm is the first nonparametric Bayesian approach to large-scale data mining. Future work includes extending our approach to other models in the stick-breaking representation (e.g., priors of the form p_{v_i}(v_i|a_i, b_i) = Beta(a_i, b_i)), as well as alternative DP mixture representations such as the Chinese restaurant process [3].

Acknowledgments

We thank Dave Newman for sharing code and David Blei for helpful comments. This material is based upon work supported by ONR under Grant No. N00014-06-1-0734 and the National Science Foundation under Grant No. 0535278.

References

[1] T. Ferguson. A Bayesian analysis of some nonparametric problems. Ann. Statist., 1:209–230, 1973. [2] C. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Ann. Statist., 2(6):1152–1174, 1974. [3] D. Aldous. Exchangeability and related topics. In École d'été de Probabilités de Saint-Flour XIII, 1983. [4] J. Sethuraman. A constructive definition of Dirichlet priors. Statist. Sinica, 4:639–650, 1994. [5] C.E. Rasmussen. The infinite Gaussian mixture model. In NIPS 12. MIT Press, 2000. [6] H. Ishwaran and M. Zarepour. Exact and approximate sum-representations for the Dirichlet process. Can. J. Statist., 30:269–283, 2002. [7] D.M. Blei and M.I. Jordan. Variational inference for Dirichlet process mixtures. Journal of Bayesian Analysis, 1(1):121–144, 2005. [8] A.W. Moore. Very fast EM-based mixture model clustering using multiresolution kd-trees. In NIPS 11. MIT Press, 1999. [9] J.J. Verbeek, J.R.J. Nunnink, and N. Vlassis. Accelerated EM-based clustering of large data sets. Data Mining and Knowledge Discovery, 13(3):291–307, 2006. [10] J.L. Bentley. Multidimensional binary search trees used for associative searching. Commun. ACM, 18(9):509–517, 1975. [11] S. Dasgupta.
Learning mixtures of Gaussians. In IEEE Symp. on Foundations of Computer Science, 1999. [12] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
2006
Online Clustering of Moving Hyperplanes René Vidal Center for Imaging Science, Department of Biomedical Engineering, Johns Hopkins University 308B Clark Hall, 3400 N. Charles St., Baltimore, MD 21218, USA rvidal@cis.jhu.edu Abstract We propose a recursive algorithm for clustering trajectories lying in multiple moving hyperplanes. Starting from a given or random initial condition, we use normalized gradient descent to update the coefficients of a time varying polynomial whose degree is the number of hyperplanes and whose derivatives at a trajectory give an estimate of the vector normal to the hyperplane containing that trajectory. As time proceeds, the estimates of the hyperplane normals are shown to track their true values in a stable fashion. The segmentation of the trajectories is then obtained by clustering their associated normal vectors. The final result is a simple recursive algorithm for segmenting a variable number of moving hyperplanes. We test our algorithm on the segmentation of dynamic scenes containing rigid motions and dynamic textures, e.g., a bird floating on water. Our method not only segments the bird motion from the surrounding water motion, but also determines patterns of motion in the scene (e.g., periodic motion) directly from the temporal evolution of the estimated polynomial coefficients. Our experiments also show that our method can deal with appearing and disappearing motions in the scene. 1 Introduction Principal Component Analysis (PCA) [1] refers to the problem of fitting a linear subspace S ⊂ R^D of unknown dimension d < D to N sample points X = {x_i ∈ S}_{i=1}^N. A natural extension of PCA is subspace clustering, which refers to the problem of fitting a union of n ≥ 1 linear subspaces {S_j ⊂ R^D}_{j=1}^n of unknown dimensions d_j = dim(S_j), 0 < d_j < D, to N points X = {x_i ∈ R^D}_{i=1}^N drawn from ∪_{j=1}^n S_j, without knowing which points belong to which subspace.
This problem shows up in a variety of applications in computer vision (image compression, motion segmentation, dynamic texture segmentation) and also in control (hybrid system identification). Subspace clustering has been an active topic of research over the past few years. Existing methods randomly choose a basis for each subspace, and then iterate between data segmentation and standard PCA. This can be done using methods such as K-subspaces [2], an extension of K-means to the case of subspaces, or Expectation Maximization for Mixtures of Probabilistic PCAs [3]. An alternative algebraic approach, which does not require any initialization, is Generalized PCA (GPCA) [4]. In GPCA the data points are first projected onto a low-dimensional subspace. Then, a set of polynomials is fitted to the projected data points and a basis for each one of the projected subspaces is obtained from the derivatives of these polynomials at the data points. Unfortunately, all existing subspace clustering methods are batch, i.e. the subspace bases and the segmentation of the data are obtained only after all the data points have been collected. In addition, existing methods are designed for clustering data lying in a collection of static subspaces, i.e. subspaces whose bases do not change as a function of time. Therefore, when these methods are applied to time-series data, e.g., dynamic texture segmentation, one typically applies them to a moving time window, under the assumption that the subspaces are static within that window. A major disadvantage of this approach is that it does not incorporate temporal coherence, because the segmentation and the bases at time $t+1$ are obtained independently from those at time $t$. Also, this approach is computationally expensive, since a new subspace clustering problem is solved at each time instant.
In this paper, we propose a computationally simple and temporally coherent online algorithm for clustering point trajectories lying in a variable number of moving hyperplanes. We model a union of $n$ moving hyperplanes in $\mathbb{R}^D$, $S_j(t) = \{x \in \mathbb{R}^D : b_j^\top(t)x = 0\}$, $j = 1, \ldots, n$, where $b_j(t) \in \mathbb{R}^D$, as the zero set of a polynomial with time-varying coefficients. Starting from an initial polynomial at time $t$, we compute an update of the polynomial coefficients using normalized gradient descent. The hyperplane normals are then estimated from the derivatives of the new polynomial at each trajectory. The segmentation of the trajectories is obtained by clustering their associated normal vectors. As time proceeds, new data are added, and the estimates of the polynomial coefficients become more accurate, because they are based on more observations. This not only makes the segmentation of the data more accurate, but also allows us to handle a variable number of hyperplanes. We test our approach on the challenging problem of segmenting dynamic textures from rigid motions in video.

2 Recursive estimation of a single hyperplane

In this section, we review the normalized gradient algorithm for estimating a single hyperplane. We consider both static and moving hyperplanes, and analyze the stability of the algorithm in each case.

Recursive linear regression. For the sake of simplicity, let us first revisit a simple linear regression problem in which we are given measurements $\{x(t), y(t)\}$ related by the equation $y(t) = b^\top x(t)$. At time $t$, we seek an estimate $\hat b(t)$ of $b$ that minimizes $f(b) = \sum_{\tau=1}^{t} (y(\tau) - b^\top x(\tau))^2$. A simple strategy is to recursively update $\hat b(t)$ by following the negative of the gradient direction at time $t$,
$$v(t) = -(\hat b(t)^\top x(t) - y(t))\, x(t). \quad (1)$$
However, it is better to normalize this gradient in order to achieve better convergence properties.
As shown in Theorem 2.8, page 77 of [5], the following normalized gradient recursive identifier
$$\hat b(t+1) = \hat b(t) - \frac{\mu\,(\hat b(t)^\top x(t) - y(t))}{1 + \mu\|x(t)\|^2}\, x(t), \quad (2)$$
where $\mu > 0$ is a fixed parameter, is such that $\hat b(t) \to b$ exponentially if the regressors $\{x(t)\}$ are persistently exciting, i.e. if there is an $S \in \mathbb{N}$ and $\rho_1, \rho_2 > 0$ such that for all $m$
$$\rho_1 I_D \prec \sum_{t=m}^{m+S} x(t)\,x(t)^\top \prec \rho_2 I_D, \quad (3)$$
where $A \prec B$ means that $(B - A)$ is positive definite and $I_D$ is the $D \times D$ identity matrix. Intuitively, the condition on the left hand side of (3) means that the data has to be persistently "rich enough" in time in order to uniquely estimate the vector $b$, while the condition on the right hand side is needed for stability purposes, as it imposes a uniform upper bound on the covariance of the data.

Consider now a modification of the linear regression problem in which the parameter vector varies with time, i.e. $y(t) = b^\top(t)x(t)$. As shown in [6], if the regressors $\{x(t)\}$ are persistently exciting and the sequence $\{b(t+1) - b(t)\}$ is $L_2$-stable, i.e. $\sup_{t \ge 1} \|b(t+1) - b(t)\|_2 < \infty$, then the normalized gradient recursive identifier (2) produces an estimate $\hat b(t)$ of $b(t)$ such that $\{b(t) - \hat b(t)\}$ is $L_2$-stable.

Recursive hyperplane estimation. Let $\{x(t)\}$ be a set of measurements lying in the moving hyperplane $S(t) = \{x \in \mathbb{R}^D : b^\top(t)x = 0\}$. At time $t$, we seek an estimate $\hat b(t)$ of $b(t)$ that minimizes the error $f(b(t)) = \sum_{\tau=1}^{t} (b^\top(\tau)x(\tau))^2$ subject to the constraint $\|b(t)\| = 1$. Notice that the main difference between linear regression and hyperplane estimation is that in the latter case the parameter vector $b(t)$ is constrained to lie in the unit sphere $\mathbb{S}^{D-1}$. Therefore, instead of applying standard gradient descent as in (2), we must follow the negative gradient direction along the geodesic curve in $\mathbb{S}^{D-1}$ passing through $\hat b(t)$. As shown in [7], the geodesic curve passing through $b \in \mathbb{S}^{D-1}$ along the tangent vector $v \in T\mathbb{S}^{D-1}$ is $b\cos(\|v\|) + \frac{v}{\|v\|}\sin(\|v\|)$.
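The update (2) is a few lines of NumPy; the toy data sizes, step size, and number of iterations below are arbitrary choices for the sketch:

```python
import numpy as np

def ngd_step(b_hat, x, y, mu=1.0):
    """One normalized-gradient update, eq. (2):
    b <- b - mu*(b^T x - y)/(1 + mu*||x||^2) * x."""
    err = b_hat @ x - y
    return b_hat - mu * err / (1.0 + mu * np.dot(x, x)) * x

# Toy run: identify a fixed b from a stream of persistently exciting regressors.
rng = np.random.default_rng(0)
b_true = rng.standard_normal(3)
b_hat = np.zeros(3)
for _ in range(2000):
    x = rng.standard_normal(3)   # i.i.d. Gaussian regressors are persistently exciting
    b_hat = ngd_step(b_hat, x, b_true @ x)
# b_hat has converged to b_true (exponential convergence, as stated above).
```

With noiseless measurements the error contracts at every step, which is the exponential convergence the theorem guarantees.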
Therefore, the update equation for the normalized gradient recursive identifier on the sphere is
$$\hat b(t+1) = \hat b(t)\cos(\|v(t)\|) + \frac{v(t)}{\|v(t)\|}\sin(\|v(t)\|), \quad (4)$$
where the negative normalized gradient is computed as
$$v(t) = -\mu\,\big(I_D - \hat b(t)\hat b^\top(t)\big)\,\frac{(\hat b^\top(t)x(t))\,x(t)}{1 + \mu\|x(t)\|^2}. \quad (5)$$
Notice that the gradient on the sphere is essentially the same as the Euclidean gradient, except that it needs to be projected onto the subspace orthogonal to $\hat b(t)$ by the rank-$(D-1)$ matrix $I_D - \hat b(t)\hat b^\top(t)$. Another difference between recursive linear regression and recursive hyperplane estimation is that the persistence of excitation condition (3) needs to be modified to
$$\rho_1 I_{D-1} \prec \sum_{t=m}^{m+S} P_{b(t)}\,x(t)\,x(t)^\top P_{b(t)}^\top \prec \rho_2 I_{D-1}, \quad (6)$$
where the projection matrix $P_{b(t)} \in \mathbb{R}^{(D-1) \times D}$ onto the orthogonal complement of $b(t)$ accounts for the fact that $\|b(t)\| = 1$. Under the persistence of excitation condition (6), if $b(t) = b$ the identifier (4) is such that $\hat b(t) \to b$ exponentially, while if $\{b(t+1) - b(t)\}$ is $L_2$-stable, so is $\{b(t) - \hat b(t)\}$.

3 Recursive segmentation of a known number of moving hyperplanes

In this section, we generalize the recursive identifier (4) and its stability properties to the case of $N$ trajectories $\{x_i(t)\}_{i=1}^N$ lying in $n$ hyperplanes $\{S_j(t)\}_{j=1}^n$. In principle, we could apply the identifier (4) to each one of the hyperplanes. However, as we do not know the segmentation of the data, we do not know which data to use to update each one of the $n$ identifiers. In our approach, the $n$ hyperplanes are represented with a single polynomial whose coefficients do not depend on the segmentation of the data. By updating the coefficients of this polynomial, we can simultaneously estimate all the hyperplanes, without first clustering the point trajectories.

Representing moving hyperplanes with a time varying polynomial. Let $x(t)$ be an arbitrary point in one of the $n$ hyperplanes. Then there is a vector $b_j(t)$ normal to $S_j(t)$ such that $b_j^\top(t)x(t) = 0$.
Thus, the following homogeneous polynomial of degree $n$ in $D$ variables must vanish at $x(t)$:
$$p_n(x(t), t) = \big(b_1^\top(t)x(t)\big)\big(b_2^\top(t)x(t)\big)\cdots\big(b_n^\top(t)x(t)\big) = 0. \quad (7)$$
This homogeneous polynomial can be written as a linear combination of all the monomials of degree $n$ in $x$, $x^I = x_1^{n_1}x_2^{n_2}\cdots x_D^{n_D}$ with $0 \le n_k \le n$ for $k = 1, \ldots, D$ and $n_1 + n_2 + \cdots + n_D = n$, as
$$p_n(x, t) \doteq \sum c_{n_1,\ldots,n_D}(t)\, x_1^{n_1}\cdots x_D^{n_D} = c(t)^\top \nu_n(x) = 0, \quad (8)$$
where $c_I(t) \in \mathbb{R}$ represents the coefficient of the monomial $x^I$. The map $\nu_n : \mathbb{R}^D \to \mathbb{R}^{M_n(D)}$ is known as the Veronese map of degree $n$, which is defined as [8]:
$$\nu_n : [x_1, \ldots, x_D]^\top \mapsto [\ldots, x^I, \ldots]^\top, \quad (9)$$
where $I$ is chosen in the degree-lexicographic order and $M_n(D) = \binom{n+D-1}{n}$ is the total number of independent monomials. Notice that since the normal vectors $\{b_j(t)\}$ are time dependent, the vector of coefficients $c(t)$ is also time dependent. Since both the normal vectors and the coefficient vector are defined up to scale, we will assume that $\|b_j(t)\| = \|c(t)\| = 1$ without loss of generality.

Recursive identification of the polynomial coefficients. Thanks to the polynomial equation (8), we now propose a new online hyperplane clustering algorithm that operates on the polynomial coefficients $c(t)$, rather than on the normal vectors $\{b_j(t)\}_{j=1}^n$. The advantage of doing so is that $c(t)$ does not depend on which hyperplane the measurement $x(t)$ belongs to. Our method operates as follows. At each time $t$, we seek an estimate $\hat c(t)$ of $c(t)$ that minimizes
$$f(c(t)) = \frac{1}{N}\sum_{\tau=1}^{t}\sum_{i=1}^{N}\big(c(\tau)^\top \nu_n(x_i(\tau))\big)^2. \quad (10)$$
By using normalized gradient descent on $\mathbb{S}^{M_n(D)-1}$, we obtain the following recursive identifier
$$\hat c(t+1) = \hat c(t)\cos(\|v(t)\|) + \frac{v(t)}{\|v(t)\|}\sin(\|v(t)\|), \quad (11)$$
where the negative normalized gradient is computed as
$$v(t) = -\mu\,\big(I_{M_n(D)} - \hat c(t)\hat c^\top(t)\big)\,\frac{\frac{1}{N}\sum_{i=1}^{N}\big(\hat c^\top(t)\nu_n(x_i(t))\big)\,\nu_n(x_i(t))}{1 + \frac{\mu}{N}\sum_{i=1}^{N}\|\nu_n(x_i(t))\|^2}. \quad (12)$$
Notice that (11) reduces to (4) and (12) reduces to (5) if $n = 1$ and $N = 1$.

Recursive identification of the hyperplane normals.
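The Veronese embedding of (9) is straightforward to compute. The sketch below enumerates the degree-$n$ monomials via index multisets; the resulting ordering is one valid choice, not necessarily the paper's exact degree-lexicographic order:

```python
from itertools import combinations_with_replacement
from math import comb
import numpy as np

def veronese(x, n):
    """Degree-n Veronese embedding of x in R^D: all monomials of degree n.
    Each multiset of n indices from {0,...,D-1} defines one monomial."""
    D = len(x)
    idx_sets = combinations_with_replacement(range(D), n)
    return np.array([np.prod([x[i] for i in idx]) for idx in idx_sets])

x = np.array([2.0, 3.0])
v = veronese(x, 2)                     # [x1^2, x1*x2, x2^2] = [4, 6, 9]
assert len(v) == comb(2 + 2 - 1, 2)    # M_n(D) = C(n+D-1, n) = 3
```

The length check confirms the dimension count $M_n(D) = \binom{n+D-1}{n}$ stated above.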
Given an estimate of $c(t)$, we may obtain an estimate of the vector normal to the hyperplane containing a trajectory $x(t)$ from the derivative of the polynomial $\hat p_n(x, t) = \hat c^\top(t)\nu_n(x)$ at $x(t)$ as
$$\hat b(x(t)) = \frac{D\nu_n^\top(x(t))\,\hat c(t)}{\|D\nu_n^\top(x(t))\,\hat c(t)\|}, \quad (13)$$
where $D\nu_n(x)$ is the Jacobian of $\nu_n$ at $x$. We choose the derivative of $\hat p_n$ to estimate the normal vector $b_j(t)$ because, if $x(t)$ is a trajectory in the $j$th hyperplane, then $b_j^\top(t)x(t) = 0$, hence the derivative of the true polynomial $p_n$ at the trajectory gives
$$Dp_n(x(t), t) = \frac{\partial p_n(x(t), t)}{\partial x(t)} = \sum_{k=1}^{n}\prod_{\ell \ne k}\big(b_\ell^\top(t)x(t)\big)\,b_k(t) \sim b_j(t). \quad (14)$$

Stability of the recursive identifier. Since in practice we do not know the true polynomial coefficients $c(t)$, and we estimate $b(t)$ from $\hat c(t)$, we need to show that both $\hat c(t)$ and $\hat b(x(t))$ track their true values in a stable fashion. Theorem 1 shows that this is the case. Notice that the persistence of excitation condition for multiple hyperplanes (15) is essentially the same as the one for a single hyperplane (6), but properly modified to take into account that the regressors are a set of trajectories in the embedded space $\{\nu_n(x_i(t))\}_{i=1}^N$, rather than a single trajectory in the original space $\{x(t)\}$.

Theorem 1. Let $P_{c(t)} \in \mathbb{R}^{(M_n(D)-1) \times M_n(D)}$ be a projection matrix onto the orthogonal complement of $c(t)$. Consider the recursive identifier (11)–(13) and assume that the embedded regressors $\{\nu_n(x_i(t))\}_{i=1}^N$ are persistently exciting, i.e. there exist $\rho_1, \rho_2 > 0$ and $S \in \mathbb{N}$ such that for all $m$
$$\rho_1 I_{M_n(D)-1} \prec \sum_{t=m}^{m+S}\sum_{i=1}^{N} P_{c(t)}\,\nu_n(x_i(t))\,\nu_n^\top(x_i(t))\,P_{c(t)}^\top \prec \rho_2 I_{M_n(D)-1}. \quad (15)$$
Then the sequence $c(t) - \hat c(t)$ is $L_2$-stable. Furthermore, if a trajectory $x(t)$ belongs to the $j$th hyperplane, then the corresponding $\hat b(x(t))$ in (13) is such that $b_j(t) - \hat b(x(t))$ is $L_2$-stable. If in addition the hyperplanes are static, then $c(t) - \hat c(t) \to 0$ and $b_j(t) - \hat b(x(t)) \to 0$ exponentially.

Proof.
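As a concrete check of (13), consider $n = 2$ hyperplanes in $\mathbb{R}^2$ with normals $b_1 = (1,0)$ and $b_2 = (0,1)$, so $p_2(x) = x_1 x_2$. The finite-difference Jacobian below is only a convenience for the sketch (an analytic Jacobian of $\nu_n$ would be used in practice):

```python
import numpy as np

def poly_normal(c, x, vern, eps=1e-6):
    """Eq. (13): normalized gradient of p(x) = c^T nu_n(x) at x,
    with the gradient computed by central differences."""
    D = len(x)
    grad = np.zeros(D)
    for k in range(D):
        e = np.zeros(D); e[k] = eps
        grad[k] = (c @ vern(x + e) - c @ vern(x - e)) / (2 * eps)
    return grad / np.linalg.norm(grad)

# c = (0, 1, 0) in the monomial basis [x1^2, x1*x2, x2^2] encodes p(x) = x1*x2.
vern2 = lambda x: np.array([x[0]**2, x[0]*x[1], x[1]**2])
c = np.array([0.0, 1.0, 0.0])
x = np.array([0.0, 2.0])           # a point on the hyperplane {x1 = 0}
b = poly_normal(c, x, vern2)       # recovers the normal b1 = (1, 0)
```

Here $\nabla p(x) = (x_2, x_1) = (2, 0)$ at $x = (0, 2)$, which normalizes to $b_1$, as (14) predicts.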
[Sketch only] When the hyperplanes are static, the exponential convergence of $\hat c(t)$ to $c$ follows with minor modifications from Theorem 2.8, page 77 of [5]. This implies that there exist $\kappa, \lambda > 0$ such that $\|\hat c(t) - c\| < \kappa\lambda^{-t}$. Also, since the vectors $b_1, \ldots, b_n$ are different, the polynomial $c^\top\nu_n(x)$ has no repeated factor. Therefore, there is a $\delta > 0$ and a $T > 0$ such that for all $t > T$ we have $\|D\nu_n^\top(x(t))\,c\| \ge \delta$ and $\|D\nu_n^\top(x(t))\,\hat c(t)\| \ge \delta$ (see the proof of Theorem 3 in [9] for the latter claim). Combining this with $\|\hat c\| \le \|c\| + \|\hat c - c\|$ and $\|c\| = 1$, we obtain that when $x(t) \in S_j$,
$$\|b_j - \hat b(x(t))\| = \frac{\Big\|\,\|D\nu_n^\top(x(t))\hat c(t)\|\,D\nu_n^\top(x(t))c - \|D\nu_n^\top(x(t))c\|\,D\nu_n^\top(x(t))\hat c(t)\,\Big\|}{\|D\nu_n^\top(x(t))\hat c(t)\|\,\|D\nu_n^\top(x(t))c\|}$$
$$\le \frac{\Big\|\,\|D\nu_n^\top(x(t))(\hat c(t) - c)\|\,D\nu_n^\top(x(t))c - \|D\nu_n^\top(x(t))c\|\,D\nu_n^\top(x(t))(\hat c(t) - c)\,\Big\|}{\delta^2}$$
$$\le \frac{2\,\|D\nu_n^\top(x(t))(\hat c(t) - c)\|\,\|D\nu_n^\top(x(t))c\|}{\delta^2} \le \frac{2\,\|D\nu_n^\top(x(t))\|^2\,\|\hat c(t) - c\|}{\delta^2} = \frac{2\,\alpha_n^2 E_n^2\,\kappa\lambda^{-t}}{\delta^2},$$
showing that $\hat b(x(t)) \to b_j$ exponentially. In the last step we used the fact that for all $x \in \mathbb{R}^D$ there is a constant matrix of exponents $E_{kn} \in \mathbb{R}^{M_n(D) \times M_{n-1}(D)}$ such that $\partial\nu_n(x)/\partial x_k = E_{kn}\nu_{n-1}(x)$. Therefore, $\|D\nu_n(x)\| \le E_n\|\nu_{n-1}(x)\| = E_n\|\nu_n(x)\|^{\frac{n-1}{n}} \le \alpha_n E_n$, where $E_n = \max_k\|E_{kn}\|$ and $\alpha_n = \rho_2^{\frac{n-1}{2n}}$.

Consider now the case in which the hyperplanes are moving. Since $\mathbb{S}^{D-1}$ is compact, the sequences $\{b_j(t+1) - b_j(t)\}_{j=1}^n$ are trivially $L_2$-stable, hence so is the sequence $\{c(t+1) - c(t)\}$. The $L_2$-stability of $\{c(t) - \hat c(t)\}$ and $\{b_j(t) - \hat b(t)\}$ follows.

Segmentation of the point trajectories. Theorem 1 provides us with a method for computing an estimate $\hat b(x_i(t))$ of the normal to the hyperplane passing through each one of the $N$ trajectories $\{x_i(t) \in \mathbb{R}^D\}_{i=1}^N$ at each time instant. The next step is to cluster these normals into $n$ groups, thereby segmenting the $N$ trajectories. We do so by using a recursive version of the K-means algorithm, adapted to vectors on the unit sphere.
Essentially, at each $t$, we seek the normal vectors $\hat b_j(t) \in \mathbb{S}^{D-1}$ and the memberships $w_{ij}(t) \in \{0, 1\}$ of trajectory $i$ to hyperplane $j$ that maximize
$$f(\{w_{ij}(t)\}, \{\hat b_j(t)\}) = \sum_{i=1}^{N}\sum_{j=1}^{n} w_{ij}(t)\,\big(\hat b_j^\top(t)\,\hat b(x_i(t))\big)^2. \quad (16)$$
The main difference with K-means is that we maximize the dot product of each data point with the cluster center, rather than minimizing the distance. Therefore, the cluster center is given by the principal component of each group, rather than the mean. In order to obtain temporally coherent estimates of the normal vectors, we use the estimates at time $t$ to initialize the iterations at time $t+1$.

Algorithm 1 (Recursive hyperplane segmentation)

Initialization step
1: Randomly choose $\{\hat b_j(1)\}_{j=1}^n$ and $\hat c(1)$, or else apply the GPCA algorithm to $\{x_i(1)\}_{i=1}^N$.

For each $t \ge 1$
1: Update the coefficients of the polynomial $\hat p_n(x(t), t) = \hat c(t)^\top\nu_n(x(t))$ using the recursive procedure
$$\hat c(t+1) = \hat c(t)\cos(\|v(t)\|) + \frac{v(t)}{\|v(t)\|}\sin(\|v(t)\|),$$
$$v(t) = -\mu\,\big(I_{M_n(D)} - \hat c(t)\hat c^\top(t)\big)\,\frac{\frac{1}{N}\sum_{i=1}^{N}\big(\hat c^\top(t)\nu_n(x_i(t))\big)\,\nu_n(x_i(t))}{1 + \frac{\mu}{N}\sum_{i=1}^{N}\|\nu_n(x_i(t))\|^2}.$$
2: Solve for the normal vectors from the derivatives of $\hat p_n$ at the given trajectories
$$\hat b(x_i(t)) = \frac{D\nu_n^\top(x_i(t))\,\hat c(t)}{\|D\nu_n^\top(x_i(t))\,\hat c(t)\|}, \quad i = 1, \ldots, N.$$
3: Segment the normal vectors using the K-means algorithm on the sphere:
(a) Set
$$w_{ij}(t) = \begin{cases} 1 & \text{if } j = \arg\max_{k=1,\ldots,n}\big(\hat b_k^\top(t)\,\hat b(x_i(t))\big)^2 \\ 0 & \text{otherwise} \end{cases}, \quad i = 1, \ldots, N,\; j = 1, \ldots, n.$$
(b) Set $\hat b_j(t) = \mathrm{PCA}\big(\big[\,w_{1j}(t)\hat b(x_1(t)) \;\; w_{2j}(t)\hat b(x_2(t)) \;\cdots\; w_{Nj}(t)\hat b(x_N(t))\,\big]\big)$, $j = 1, \ldots, n$.
(c) Iterate (a) and (b) until convergence of $w_{ij}(t)$, and then set $\hat b_j(t+1) = \hat b_j(t)$.

4 Recursive segmentation of a variable number of moving hyperplanes

In the previous section, we proposed a recursive algorithm for segmenting $n$ moving hyperplanes under the assumption that $n$ is known and constant in time. However, in many practical situations the number of hyperplanes may be unknown and time varying.
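A compact NumPy sketch of Algorithm 1 for two static hyperplanes follows. The finite-difference Jacobian and the deterministic farthest-point initialization of the spherical K-means are our simplifications for the sketch, not part of the paper's algorithm:

```python
import numpy as np
from itertools import combinations_with_replacement

def veronese(x, n):
    """All degree-n monomials of x (one valid monomial ordering)."""
    return np.array([np.prod(x[list(idx)])
                     for idx in combinations_with_replacement(range(len(x)), n)])

def dveronese(x, n, eps=1e-6):
    """Finite-difference Jacobian of nu_n at x (analytic forms exist; FD keeps it short)."""
    cols = []
    for k in range(len(x)):
        e = np.zeros(len(x)); e[k] = eps
        cols.append((veronese(x + e, n) - veronese(x - e, n)) / (2 * eps))
    return np.column_stack(cols)                        # shape M_n(D) x D

def sphere_step(c, V, mu=1.0):
    """Step 1, eqs. (11)-(12): spherical normalized-gradient update; V stacks nu_n(x_i(t))."""
    g = (V * (V @ c)[:, None]).mean(axis=0)
    v = -mu * (g - c * (c @ g)) / (1.0 + mu * (V ** 2).sum(axis=1).mean())
    nv = np.linalg.norm(v)
    return c if nv < 1e-12 else c * np.cos(nv) + (v / nv) * np.sin(nv)

def sphere_kmeans(B, n, iters=20):
    """Step 3: K-means on the sphere; each center is the principal component of its group."""
    centers = [B[0]]
    for _ in range(n - 1):                              # farthest-point initialization
        align = np.max(np.abs(np.array(centers) @ B.T), axis=0)
        centers.append(B[np.argmin(align)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmax((B @ centers.T) ** 2, axis=1)
        for j in range(n):
            if np.any(labels == j):
                centers[j] = np.linalg.svd(B[labels == j], full_matrices=False)[2][0]
    return labels

# Toy run: two static planes in R^3 with normals e1 and e2, so p_2(x) = x1*x2.
rng = np.random.default_rng(0)
n = 2
c_hat = rng.standard_normal(len(veronese(np.zeros(3), n)))
c_hat /= np.linalg.norm(c_hat)
for t in range(400):
    A = rng.standard_normal((25, 2))
    X = np.vstack([np.c_[np.zeros(25), A],                    # points on {x1 = 0}
                   np.c_[A[:, 0], np.zeros(25), A[:, 1]]])    # points on {x2 = 0}
    c_hat = sphere_step(c_hat, np.stack([veronese(x, n) for x in X]))
B = np.stack([dveronese(x, n).T @ c_hat for x in X])          # step 2, eq. (13)
B /= np.linalg.norm(B, axis=1, keepdims=True)
labels = sphere_kmeans(B, n)
```

With static planes, `c_hat` converges to the coefficient vector of $x_1 x_2$ (up to sign), the estimated normals cluster around $\pm e_1$ and $\pm e_2$, and `labels` separates the two groups of trajectories.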
For example, the number of moving objects in a video sequence may change due to objects entering or leaving the camera field of view. In this section, we consider the problem of segmenting a variable number of moving hyperplanes. We denote by $n(t) \in \mathbb{N}$ the number of hyperplanes at time $t$ and assume we are given an upper bound $n \ge n(t)$. We show that if we apply Algorithm 1 with the number of hyperplanes set to $n$, then we can still recover the correct segmentation of the scene, even if $n(t) < n$. To see this, let us have a close look at the persistence of excitation condition in equation (15) of Theorem 1. Since the condition on the right hand side of (15) holds trivially when the regressors $x_i(t)$ are bounded, the only important condition is the one on the left hand side. Notice that the condition on the left hand side implies that the spatial-temporal covariance matrix of the embedded regressors must be of rank $M_n(D) - 1$ in any time window of size $S$, for some integer $S$. Loosely speaking, the embedded regressors must be "rich enough" either in space or in time. The case in which there is a $\rho_1 > 0$ such that for all $t$
$$n(t) = n \quad\text{and}\quad \sum_{i=1}^{N} P_{c(t)}\,\nu_n(x_i(t))\,\nu_n^\top(x_i(t))\,P_{c(t)}^\top \succ \rho_1 I_{M_n(D)-1} \quad (17)$$
corresponds to the case of data that is rich in space. In this case, at each time instant we draw data from all $n$ hyperplanes and the data is rich enough to estimate all $n$ hyperplanes at each time instant. In fact, condition (17) is the one required by GPCA [4], which in this case can be applied at each time $t$ independently. Notice also that (17) is equivalent to (15) with $S = 1$. The case in which $n(t) = 1$ and there are $\rho_1 > 0$, $S \in \mathbb{N}$ and $i \in \{1, \ldots, N\}$ such that for all $m$
$$\sum_{t=m}^{m+S} \nu_n(x_i(t))\,\nu_n^\top(x_i(t)) \succ \frac{\rho_1}{N}\, I_{M_n(D)-1} \quad (18)$$
corresponds to the case of data that is rich in time. In this case, at each time instant we draw data from a single hyperplane. As time proceeds, however, the data must be persistently drawn from at least $n$ hyperplanes in order for (18) to hold.
This can be achieved either by having $n$ different static hyperplanes and persistently drawing data from all of them, or by having fewer than $n$ moving hyperplanes whose motion is rich enough so that (18) holds. In summary, as long as the embedded regressors satisfy condition (15) for some upper bound $n$ on the number of hyperplanes, the recursive identifier (11)–(13) will still provide $L_2$-stable estimates of the parameters, even if the number of hyperplanes is unknown and variable, and $n(t) < n$ for all $t$.

5 Experiments

Experiments on synthetic data. We randomly draw $N = 200$ 3D points lying in $n = 2$ planes and apply a time-varying rotation to these points for $t = 1, \ldots, 1000$ to generate $N$ trajectories $\{x_i(t)\}_{i=1}^N$. Since the true segmentation is known, we compute the vectors $\{b_j(t)\}$ normal to each plane, and use them to generate the vector of coefficients $c(t)$. We run our algorithm on the so-generated data with $n = 2$, $\mu = 1$, and a random initial estimate for the parameters. We compare these estimates with the ground truth using the percentage of misclassified points. We also consider the error of the polynomial coefficients and the normal vectors by computing the angles between the estimated and true values. Figure 1 shows the true and estimated parameters, as well as the estimation errors. Observe that the algorithm takes about 100 seconds for the errors to stabilize within 1.62° for the coefficients, 1.62° for the normals, and 4% for the segmentation error.
[Figure 1 panels: true and estimated polynomial coefficients; true and estimated normal vector $b_1$; estimation errors of the polynomial and of $b_1$, $b_2$ (degrees); and segmentation error (%), each plotted against time (seconds).]

Figure 1: Segmenting 200 points lying on two moving planes in $\mathbb{R}^3$ using our recursive algorithm.

Segmentation of dynamic textures. We now apply our algorithm to the problem of segmenting video sequences of dynamic textures, i.e. sequences of nonrigid scenes that exhibit some temporal stationarity, e.g., water, smoke, or foliage. As proposed in [10], one can model the temporal evolution of the image intensities as the output of a linear dynamical system. Since the trajectories of the output of a linear dynamical system live in the so-called observability subspace, the intensity trajectories of pixels associated with a single dynamic texture lie in a subspace. Therefore, the set of all intensity trajectories lies in multiple subspaces, one per dynamic texture. Given $\gamma$ consecutive frames of a video sequence $\{I(f)\}_{f=t-\gamma+1}^{t}$, we interpret the data as a matrix $W(t) \in \mathbb{R}^{N \times 3\gamma}$, where $N$ is the number of pixels and 3 corresponds to the three RGB color channels. We obtain a data point $x_i(t) \in \mathbb{R}^D$ from image $I(t)$ by projecting the $i$th row of $W(t)$, $w_i^\top(t)$, onto a subspace of dimension $D$, i.e. $x_i(t) = \Pi w_i(t)$, with $\Pi \in \mathbb{R}^{D \times 3\gamma}$. The projection matrix $\Pi$ can be obtained in a variety of ways. We use the $D$ principal components of the first $\gamma$ frames to define $\Pi$.
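A sketch of this projection step in NumPy, using the rank-$D$ SVD construction $\Pi = \Sigma^{-1}V^\top$ detailed next (the pixel count below is an arbitrary illustration; the document takes $D = 5$, $\gamma = 5$):

```python
import numpy as np

def build_projection(W_gamma, D):
    """From the first-frames data matrix W(gamma) in R^{N x 3*gamma},
    compute a rank-D SVD and return Pi = Sigma^{-1} V^T in R^{D x 3*gamma}."""
    U, s, Vt = np.linalg.svd(W_gamma, full_matrices=False)
    return np.diag(1.0 / s[:D]) @ Vt[:D]

rng = np.random.default_rng(0)
N, gamma, D = 1000, 5, 5              # N pixels, gamma frames (x3 RGB channels)
W = rng.standard_normal((N, 3 * gamma))
Pi = build_projection(W, D)
X = W @ Pi.T                          # one D-dimensional point per pixel trajectory
assert Pi.shape == (D, 3 * gamma) and X.shape == (N, D)
```

With this choice of $\Pi$, the projected rows are exactly the first $D$ left singular vectors of $W(\gamma)$, i.e. the principal-component coordinates of the pixel trajectories.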
More specifically, if $W(\gamma) = U\Sigma V^\top$, with $U \in \mathbb{R}^{N \times D}$, $\Sigma \in \mathbb{R}^{D \times D}$ and $V \in \mathbb{R}^{3\gamma \times D}$, is a rank-$D$ approximation of $W(\gamma)$ computed using the SVD, then we choose $\Pi = \Sigma^{-1}V^\top$. We applied our method to a sequence ($110 \times 192$, 130 frames) containing a bird floating on water while rotating around a fixed point. The task is to segment the bird's rigid motion from the water's dynamic texture, while at the same time tracking the motion of the bird. We chose $D = 5$ principal components of the first $\gamma = 5$ frames of the RGB video sequence to project each frame onto a lower-dimensional space. Figure 2 shows the segmentation. Although convergence is not guaranteed with only 130 frames, it is clear that the polynomial coefficients already capture the periodicity of the motion. As shown in the last row of Figure 2, some coefficients of the polynomial oscillate in time. One can notice that the orientation of the bird is related to the value of the coefficient $c_8$. If the bird is facing to the right, showing its right side, the value of $c_8$ achieves a local maximum. On the contrary, if the bird is oriented to the left, the value of $c_8$ achieves a local minimum. Some irregularities seem to appear at the local minima of this coefficient: they actually correspond to a rapid motion of the bird. One can distinguish three behaviors for the polynomial coefficients: oscillations, pseudo-oscillations, or quasi-linearity. For both the oscillations and the pseudo-oscillations the period is identical to the bird's motion period (40 frames). This example shows that the coefficients of the estimated polynomial give useful information about the scene motion.

[Figure 2 bottom panels: five plots of $c_8$ versus time (seconds), one per displayed frame.]

Figure 2: Segmenting a bird floating on water. Top: frames 17, 36, 60, 81, and 98 of the sequence. Middle: segmentation obtained using our method.
Bottom: temporal evolution of $c_8$ during the video sequence, with the red dot indicating the location of the corresponding frame in this evolution.

To test the performance of our method on a video sequence with a variable number of motions, we extracted a sub-clip of the bird sequence ($55 \times 192$, 130 frames) in which the camera moves up at 1 pixel/frame until the bird disappears at $t = 51$. The camera stays stationary from $t = 56$ to $t = 66$, and then moves down at 1 pixel/frame; the bird reappears at $t = 76$. We applied both GPCA and our method initialized with GPCA to this video sequence. For GPCA we used a moving window of $\gamma = 5$ frames. For our method we chose $D = 5$ principal components of the first $\gamma = 5$ frames of the RGB video sequence to project each frame onto a fixed lower-dimensional space. We set the parameter of the recursive algorithm to $\mu = 1$. Figure 3 shows the segmentation results. Notice that both methods give excellent results during the first few frames, when both the bird and the water are present. This is expected, as our method is initialized with GPCA. Nevertheless, notice that the performance of GPCA deteriorates dramatically when the bird disappears, because GPCA overestimates the number of hyperplanes, whereas our method is robust to this change and keeps segmenting the scene correctly, i.e. assigning all the pixels to the background. When the bird reappears, our method detects the bird correctly from the first frame, whereas GPCA produces a wrong segmentation for the first frames after the bird reappears. Towards the end of the sequence, both algorithms give a good segmentation. This demonstrates that our method has the ability to deal with a variable number of motions, whereas GPCA does not. In addition, the fixed projection and the recursive estimation of the polynomial coefficients make our method much faster than GPCA.

[Figure 3 rows: Sequence, GPCA, Our method.]

Figure 3: Segmenting a video sequence with a variable number of dynamic textures.
Top: frames 1, 24, 65, 77, and 101. Middle: segmentation with GPCA. Bottom: segmentation with our method.

6 Conclusions

We have proposed a simple recursive algorithm for segmenting trajectories lying in a variable number of moving hyperplanes. The algorithm updates the coefficients of a polynomial whose derivatives give the normals to the moving hyperplanes as well as the segmentation of the trajectories. We applied our method successfully to the segmentation of videos containing multiple dynamic textures.

Acknowledgments

The author acknowledges the support of grants NSF CAREER IIS-04-47739, NSF EHS-05-09101 and ONR N00014-05-10836.

References

[1] I. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[2] J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, and D. Kriegman. Clustering appearances of objects under varying illumination conditions. In IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 11–18, 2003.
[3] M. Tipping and C. Bishop. Mixtures of probabilistic principal component analyzers. Neural Computation, 11(2):443–482, 1999.
[4] R. Vidal, Y. Ma, and S. Sastry. Generalized Principal Component Analysis (GPCA). IEEE Trans. on Pattern Analysis and Machine Intelligence, 27(12):1–15, 2005.
[5] B.D.O. Anderson, R.R. Bitmead, C.R. Johnson Jr., P.V. Kokotovic, R.L. Kosut, I.M.Y. Mareels, L. Praly, and B.D. Riedle. Stability of Adaptive Systems. MIT Press, 1986.
[6] L. Guo. Stability of recursive stochastic tracking algorithms. In IEEE Conf. on Decision & Control, pages 2062–2067, 1993.
[7] A. Edelman, T. Arias, and S.T. Smith. The geometry of algorithms with orthogonality constraints. SIAM Journal of Matrix Analysis Applications, 20(2):303–353, 1998.
[8] J. Harris. Algebraic Geometry: A First Course. Springer-Verlag, 1992.
[9] R. Vidal and B.D.O. Anderson. Recursive identification of switched ARX hybrid models: Exponential convergence and persistence of excitation. In IEEE Conf. on Decision & Control, pages 32–37, 2004.
[10] G.
Doretto, A. Chiuso, Y. Wu, and S. Soatto. Dynamic textures. International Journal of Computer Vision, 51(2):91–109, 2003.
Efficient sparse coding algorithms Honglak Lee Alexis Battle Rajat Raina Andrew Y. Ng Computer Science Department Stanford University Stanford, CA 94305 Abstract Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons. 1 Introduction Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it learns basis functions that capture higher-level features in the data. When a sparse coding algorithm is applied to natural images, the learned bases resemble the receptive fields of neurons in the visual cortex [1, 2]; moreover, sparse coding produces localized bases when applied to other natural stimuli such as speech and video [3, 4]. Unlike some other unsupervised learning techniques such as PCA, sparse coding can be applied to learning overcomplete basis sets, in which the number of bases is greater than the input dimension. Sparse coding can also model inhibition between the bases by sparsifying their activations. 
Similar properties have been observed in biological neurons, thus making sparse coding a plausible model of the visual cortex [2, 5]. Despite the rich promise of sparse coding models, we believe that their development has been hampered by their expensive computational cost. In particular, learning large, highly overcomplete representations has been extremely expensive. In this paper, we develop a class of efficient sparse coding algorithms that are based on alternating optimization over two subsets of the variables. The optimization problems over each of the subsets of variables are convex; in particular, the optimization over the first subset is an L1-regularized least squares problem; the optimization over the second subset of variables is an L2-constrained least squares problem. We describe each algorithm and empirically analyze their performance. Our method allows us to efficiently learn large overcomplete bases from natural images. We demonstrate that the resulting learned bases exhibit (i) end-stopping [6] and (ii) modulation by stimuli outside the classical receptive field (nCRF surround suppression) [7]. Thus, sparse coding may also provide a partial explanation for these phenomena in V1 neurons. Further, in related work [8], we show that the learned succinct representation captures higher-level features that can then be applied to supervised classification tasks.

2 Preliminaries

The goal of sparse coding is to represent input vectors approximately as a weighted linear combination of a small number of (unknown) "basis vectors." These basis vectors thus capture high-level patterns in the input data. Concretely, each input vector $\vec{\xi} \in \mathbb{R}^k$ is succinctly represented using basis vectors $\vec{b}_1, \ldots, \vec{b}_n \in \mathbb{R}^k$ and a sparse vector of weights or "coefficients" $\vec{s} \in \mathbb{R}^n$ such that $\vec{\xi} \approx \sum_j \vec{b}_j s_j$. The basis set can be overcomplete ($n > k$), and can thus capture a large number of patterns in the input data.
Sparse coding is a method for discovering good basis vectors automatically using only unlabeled data. The standard generative model assumes that the reconstruction error $\vec{\xi} - \sum_j \vec{b}_j s_j$ is distributed as a zero-mean Gaussian with covariance $\sigma^2 I$. To favor sparse coefficients, the prior distribution for each coefficient $s_j$ is defined as $P(s_j) \propto \exp(-\beta\phi(s_j))$, where $\phi(\cdot)$ is a sparsity function and $\beta$ is a constant. For example, we can use one of the following:
$$\phi(s_j) = \begin{cases} \|s_j\|_1 & \text{(L1 penalty function)} \\ (s_j^2 + \epsilon)^{1/2} & \text{(epsilonL1 penalty function)} \\ \log(1 + s_j^2) & \text{(log penalty function).} \end{cases} \quad (1)$$
In this paper, we will use the L1 penalty unless otherwise mentioned; L1 regularization is known to produce sparse coefficients and can be robust to irrelevant features [9]. Consider a training set of $m$ input vectors $\vec{\xi}^{(1)}, \ldots, \vec{\xi}^{(m)}$, and their (unknown) corresponding coefficients $\vec{s}^{(1)}, \ldots, \vec{s}^{(m)}$. The maximum a posteriori estimate of the bases and coefficients, assuming a uniform prior on the bases, is the solution to the following optimization problem:¹
$$\text{minimize}_{\{\vec{b}_j\},\{\vec{s}^{(i)}\}} \;\; \sum_{i=1}^{m}\frac{1}{2\sigma^2}\Big\|\vec{\xi}^{(i)} - \sum_{j=1}^{n}\vec{b}_j s_j^{(i)}\Big\|^2 + \beta\sum_{i=1}^{m}\sum_{j=1}^{n}\phi(s_j^{(i)}) \quad (2)$$
$$\text{subject to} \;\; \|\vec{b}_j\|^2 \le c, \;\; \forall j = 1, \ldots, n.$$
This problem can be written more concisely in matrix form: let $X \in \mathbb{R}^{k \times m}$ be the input matrix (each column is an input vector), let $B \in \mathbb{R}^{k \times n}$ be the basis matrix (each column is a basis vector), and let $S \in \mathbb{R}^{n \times m}$ be the coefficient matrix (each column is a coefficient vector). Then, the optimization problem above can be written as:
$$\text{minimize}_{B,S} \;\; \frac{1}{2\sigma^2}\|X - BS\|_F^2 + \beta\sum_{i,j}\phi(S_{i,j}) \quad (3)$$
$$\text{subject to} \;\; \textstyle\sum_i B_{i,j}^2 \le c, \;\; \forall j = 1, \ldots, n.$$
Assuming the use of either the L1 penalty or the epsilonL1 penalty as the sparsity function, the optimization problem is convex in $B$ (while holding $S$ fixed) and convex in $S$ (while holding $B$ fixed),² but not convex in both simultaneously.
In this paper, we iteratively optimize the above objective by alternatingly optimizing with respect to $B$ (bases) and $S$ (coefficients) while holding the other fixed. For learning the bases $B$, the optimization problem is a least squares problem with quadratic constraints. There are several approaches to solving this problem, such as generic convex optimization solvers (e.g., a QCQP solver) as well as gradient descent using iterative projections [10]. However, generic convex optimization solvers are too slow to be applicable to this problem, and gradient descent using iterative projections often shows slow convergence. In this paper, we derive and solve the Lagrange dual, and show that this approach is much more efficient than gradient-based methods. For learning the coefficients $S$, the optimization problem is equivalent to a regularized least squares problem. For many differentiable sparsity functions, we can use gradient-based methods (e.g., conjugate gradient). However, for the L1 sparsity function, the objective is not continuously differentiable and the most straightforward gradient-based methods are difficult to apply. In this case, the following approaches have been used: generic QP solvers (e.g., CVX), Chen et al.'s interior point method [11], a modification of least angle regression (LARS) [12], or grafting [13]. In this paper, we present a new algorithm for solving the L1-regularized least squares problem and show that it is more efficient for learning sparse coding bases.

3 L1-regularized least squares: The feature-sign search algorithm

Consider solving the optimization problem (2) with an L1 penalty over the coefficients $\{s_j^{(i)}\}$ while keeping the bases fixed. This problem can be solved by optimizing over each $s^{(i)}$ individually:

$$\min_{s^{(i)}} \ \Big\|\xi^{(i)} - \sum_j b_j s_j^{(i)}\Big\|^2 + (2\sigma^2\beta)\sum_j |s_j^{(i)}|.$$
(4)

Notice now that if we know the signs (positive, zero, or negative) of the $s_j^{(i)}$'s at the optimal value, we can replace each of the terms $|s_j^{(i)}|$ with either $s_j^{(i)}$ (if $s_j^{(i)} > 0$), $-s_j^{(i)}$ (if $s_j^{(i)} < 0$), or $0$ (if $s_j^{(i)} = 0$). Considering only nonzero coefficients, this reduces (4) to a standard, unconstrained quadratic optimization problem (QP), which can be solved analytically and efficiently. Our algorithm, therefore, tries to search for, or "guess," the signs of the coefficients $s_j^{(i)}$; given any such guess, we can efficiently solve the resulting unconstrained QP. Further, the algorithm systematically refines the guess if it turns out to be initially incorrect. To simplify notation, we present the algorithm for the following equivalent optimization problem:

$$\min_x \ f(x) \equiv \|y - Ax\|^2 + \gamma\|x\|_1, \quad (5)$$

where $\gamma$ is a constant. The feature-sign search algorithm is shown in Algorithm 1. It maintains an active set of potentially nonzero coefficients and their corresponding signs—all other coefficients must be zero—and systematically searches for the optimal active set and coefficient signs.

¹We impose a norm constraint for bases: $\|b_j\|^2 \le c, \ \forall j = 1, \ldots, n$, for some constant $c$. Norm constraints are necessary because, otherwise, there always exists a linear transformation of the $b_j$'s and $s^{(i)}$'s which keeps $\sum_{j=1}^{n} b_j s_j^{(i)}$ unchanged while making the $s_j^{(i)}$'s approach zero. Based on similar motivation, Olshausen and Field used a scheme which retains the variation of coefficients for every basis at the same level [1, 2].

²A log (non-convex) penalty was used in [1]; thus, gradient-based methods can get stuck in local optima.
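To see why a correct sign guess linearizes the problem, consider the one-dimensional case $f(x) = (y - x)^2 + \gamma|x|$ with the guess $\mathrm{sign}(x) = +1$ (valid whenever $y > \gamma/2$): the term $|x|$ becomes $x$, and setting the derivative to zero gives $x = y - \gamma/2$. The snippet below (our own illustration, not from the paper) checks this analytic solution against a brute-force grid search over the original non-smooth objective:

```python
def analytic_min(y, gamma):
    # With the sign guess theta = +1, |x| -> x, so
    # f(x) = (y - x)^2 + gamma * x has derivative -2(y - x) + gamma = 0,
    # giving x = y - gamma / 2 (valid when this value is indeed positive).
    return y - gamma / 2.0

def grid_min(y, gamma, lo=-10.0, hi=10.0, steps=40001):
    # Brute-force minimization of the original non-smooth objective.
    best_x, best_f = None, None
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        f = (y - x) ** 2 + gamma * abs(x)
        if best_f is None or f < best_f:
            best_x, best_f = x, f
    return best_x

x_star = analytic_min(3.0, 2.0)  # analytic solution for y = 3, gamma = 2
x_grid = grid_min(3.0, 2.0)      # agrees up to the grid spacing
```

The same reduction in $n$ dimensions replaces $\gamma\|x\|_1$ by the linear term $\gamma\theta^\top x$, which is exactly the QP solved in Step 3 of Algorithm 1.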
The algorithm proceeds in a series of "feature-sign steps": on each step, it is given a current guess for the active set and the signs, and it computes the analytical solution $\hat{x}_{new}$ to the resulting unconstrained QP; it then updates the solution, the active set, and the signs using an efficient discrete line search between the current solution and $\hat{x}_{new}$ (details in Algorithm 1).³ We will show that each such step reduces the objective $f(x)$, and that the overall algorithm always converges to the optimal solution.

Algorithm 1: Feature-sign search algorithm

1. Initialize $x := 0$, $\theta := 0$, and active set $:= \{\}$, where $\theta_i \in \{-1, 0, 1\}$ denotes $\mathrm{sign}(x_i)$.
2. From the zero coefficients of $x$, select $i = \arg\max_i \big|\frac{\partial \|y - Ax\|^2}{\partial x_i}\big|$. Activate $x_i$ (add $i$ to the active set) only if it locally improves the objective, namely:
   If $\frac{\partial \|y - Ax\|^2}{\partial x_i} > \gamma$, then set $\theta_i := -1$, active set $:= \{i\} \cup$ active set.
   If $\frac{\partial \|y - Ax\|^2}{\partial x_i} < -\gamma$, then set $\theta_i := 1$, active set $:= \{i\} \cup$ active set.
3. Feature-sign step: Let $\hat{A}$ be a submatrix of $A$ that contains only the columns corresponding to the active set. Let $\hat{x}$ and $\hat{\theta}$ be subvectors of $x$ and $\theta$ corresponding to the active set. Compute the analytical solution to the resulting unconstrained QP ($\min_{\hat{x}} \|y - \hat{A}\hat{x}\|^2 + \gamma\hat{\theta}^\top\hat{x}$):
   $$\hat{x}_{new} := (\hat{A}^\top\hat{A})^{-1}(\hat{A}^\top y - \gamma\hat{\theta}/2).$$
   Perform a discrete line search on the closed line segment from $\hat{x}$ to $\hat{x}_{new}$: check the objective value at $\hat{x}_{new}$ and at all points where any coefficient changes sign, and update $\hat{x}$ (and the corresponding entries in $x$) to the point with the lowest objective value. Remove zero coefficients of $\hat{x}$ from the active set and update $\theta := \mathrm{sign}(x)$.
4. Check the optimality conditions:
   (a) Optimality condition for nonzero coefficients: $\frac{\partial \|y - Ax\|^2}{\partial x_j} + \gamma\,\mathrm{sign}(x_j) = 0, \ \forall x_j \ne 0$.
       If condition (a) is not satisfied, go to Step 3 (without any new activation); else check condition (b).
   (b) Optimality condition for zero coefficients: $\big|\frac{\partial \|y - Ax\|^2}{\partial x_j}\big| \le \gamma, \ \forall x_j = 0$.
       If condition (b) is not satisfied, go to Step 2; otherwise return $x$ as the solution.
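Algorithm 1 can be transcribed almost line by line. The pure-Python sketch below is our own reconstruction (not the authors' MATLAB implementation); helper names, tolerances, and the iteration cap are ours. It solves $\min_x \|y - Ax\|^2 + \gamma\|x\|_1$ exactly on small problems:

```python
def solve_linear(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting (M square)."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def feature_sign_search(A, y, gamma, max_outer=1000):
    """Minimize ||y - A x||^2 + gamma ||x||_1; A is a list of k rows of n entries."""
    k, n = len(A), len(A[0])
    x, theta, active = [0.0] * n, [0] * n, []

    def gradient(x):
        # d/dx ||y - A x||^2 = 2 A^T (A x - y)
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(k)]
        return [2.0 * sum(A[i][j] * r[i] for i in range(k)) for j in range(n)]

    def objective(x):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(k)]
        return sum(ri * ri for ri in r) + gamma * sum(abs(xj) for xj in x)

    for _ in range(max_outer):
        # Step 2: activate the zero coefficient with the largest gradient magnitude.
        g = gradient(x)
        zeros = [j for j in range(n) if j not in active]
        if zeros:
            j = max(zeros, key=lambda j: abs(g[j]))
            if g[j] > gamma:
                theta[j], active = -1, active + [j]
            elif g[j] < -gamma:
                theta[j], active = 1, active + [j]
        # Steps 3 / 4(a): repeat feature-sign steps until condition (a) holds.
        while active:
            m = len(active)
            AtA = [[sum(A[i][active[a]] * A[i][active[b]] for i in range(k))
                    for b in range(m)] for a in range(m)]
            rhs = [sum(A[i][active[a]] * y[i] for i in range(k))
                   - gamma * theta[active[a]] / 2.0 for a in range(m)]
            x_new = solve_linear(AtA, rhs)
            x_hat = [x[j] for j in active]
            # Discrete line search: objective at x_new and at every sign change.
            ts = [1.0] + [x_hat[a] / (x_hat[a] - x_new[a]) for a in range(m)
                          if (x_hat[a] > 0 > x_new[a]) or (x_hat[a] < 0 < x_new[a])]
            best = None
            for t in ts:
                cand = x[:]
                for a, j in enumerate(active):
                    cand[j] = x_hat[a] + t * (x_new[a] - x_hat[a])
                if best is None or objective(cand) < objective(best):
                    best = cand
            x = best
            active = [j for j in active if abs(x[j]) > 1e-12]
            theta = [0 if abs(xj) <= 1e-12 else (1 if xj > 0 else -1) for xj in x]
            g = gradient(x)
            if all(abs(g[j] + gamma * theta[j]) < 1e-9 for j in active):
                break  # condition (a) satisfied
        # Step 4(b): if the zero coefficients are also optimal, we are done.
        g = gradient(x)
        if all(abs(g[j]) <= gamma + 1e-9 for j in range(n) if abs(x[j]) <= 1e-12):
            return x
    return x
```

On an orthonormal $A$ this reproduces coordinate-wise soft-thresholding; with correlated columns the line search and active-set refinement do the real work.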
To sketch the proof of convergence, let a coefficient vector $x$ be called consistent with a given active set and sign vector $\theta$ if the following two conditions hold for all $i$: (i) if $i$ is in the active set, then $\mathrm{sign}(x_i) = \theta_i$; and (ii) if $i$ is not in the active set, then $x_i = 0$.

³A technical detail has been omitted from the algorithm for simplicity, as we have never observed it in practice. In Step 3 of the algorithm, in case $\hat{A}^\top\hat{A}$ becomes singular, we can check if $q \equiv \hat{A}^\top y - \gamma\hat{\theta}/2 \in \mathcal{R}(\hat{A}^\top\hat{A})$. If yes, we can replace the inverse with the pseudoinverse to minimize the unconstrained QP; otherwise, we can update $\hat{x}$ to the first zero-crossing along any direction $z$ such that $z \in \mathcal{N}(\hat{A}^\top\hat{A})$, $z^\top q \ne 0$. Both these steps are still guaranteed to reduce the objective; thus, the proof of convergence is unchanged.

Lemma 3.1. Consider optimization problem (5) augmented with the additional constraint that $x$ is consistent with a given active set and sign vector. Then, if the current coefficients $x_c$ are consistent with the active set and sign vector, but are not optimal for the augmented problem at the start of Step 3, the feature-sign step is guaranteed to strictly reduce the objective.

Proof sketch. Let $\hat{x}_c$ be the subvector of $x_c$ corresponding to coefficients in the given active set. In Step 3, consider the smooth quadratic function $\tilde{f}(\hat{x}) \equiv \|y - \hat{A}\hat{x}\|^2 + \gamma\hat{\theta}^\top\hat{x}$. Since $\hat{x}_c$ is not an optimal point of $\tilde{f}$, we have $\tilde{f}(\hat{x}_{new}) < \tilde{f}(\hat{x}_c)$. Now consider the two possible cases: (i) if $\hat{x}_{new}$ is consistent with the given active set and sign vector, updating $\hat{x} := \hat{x}_{new}$ strictly decreases the objective; (ii) if $\hat{x}_{new}$ is not consistent with the given active set and sign vector, let $\hat{x}_d$ be the first zero-crossing point (where any coefficient changes its sign) on the line segment from $\hat{x}_c$ to $\hat{x}_{new}$; then clearly $\hat{x}_c \ne \hat{x}_d$,
and $\tilde{f}(\hat{x}_d) < \tilde{f}(\hat{x}_c)$ by convexity of $\tilde{f}$; thus we finally have $f(\hat{x}_d) = \tilde{f}(\hat{x}_d) < \tilde{f}(\hat{x}_c) = f(\hat{x}_c)$.⁴ Therefore, the discrete line search described in Step 3 ensures a decrease in the objective value.

Lemma 3.2. Consider optimization problem (5) augmented with the additional constraint that $x$ is consistent with a given active set and sign vector. If the coefficients $x_c$ at the start of Step 2 are optimal for the augmented problem, but are not optimal for problem (5), the feature-sign step is guaranteed to strictly reduce the objective.

Proof sketch. Since $x_c$ is optimal for the augmented problem, it satisfies optimality condition (a), but not (b); thus, in Step 2, there is some $i$ such that $\big|\frac{\partial \|y - Ax\|^2}{\partial x_i}\big| > \gamma$; this $i$-th coefficient is activated, and $i$ is added to the active set. In Step 3, consider the smooth quadratic function $\tilde{f}(\hat{x}) \equiv \|y - \hat{A}\hat{x}\|^2 + \gamma\hat{\theta}^\top\hat{x}$. Observe that (i) since a Taylor expansion of $\tilde{f}$ around $\hat{x} = \hat{x}_c$ has a first-order term in $x_i$ only (using condition 4(a) for the other coefficients), any direction that locally decreases $\tilde{f}(\hat{x})$ must be consistent with the sign of the activated $x_i$; and (ii) since $\hat{x}_c$ is not an optimal point of $\tilde{f}(\hat{x})$, $\tilde{f}(\hat{x})$ must decrease locally near $\hat{x} = \hat{x}_c$ along the direction from $\hat{x}_c$ to $\hat{x}_{new}$. From (i) and (ii), the line search direction from $\hat{x}_c$ to $\hat{x}_{new}$ must be consistent with the sign of the activated $x_i$. Finally, since $\tilde{f}(\hat{x}) = f(\hat{x})$ when $\hat{x}$ is consistent with the active set, either $\hat{x}_{new}$ is consistent, or the first zero-crossing from $\hat{x}_c$ to $\hat{x}_{new}$ has a lower objective value (by an argument similar to Lemma 3.1).

Theorem 3.3. The feature-sign search algorithm converges to a global optimum of the optimization problem (5) in a finite number of steps.

Proof sketch. From the above lemmas, it follows that the feature-sign steps always strictly reduce the objective $f(x)$.
At the start of Step 2, $x$ either satisfies optimality condition 4(a) or is $0$; in either case, $x$ is consistent with the current active set and sign vector, and must be optimal for the augmented problem described in the above lemmas. Since the number of all possible active sets and coefficient signs is finite, and since no pair can be repeated (because the objective value is strictly decreasing), the outer loop of Steps 2–4(b) cannot repeat indefinitely. Now, it suffices to show that a finite number of steps is needed to reach Step 4(b) from Step 2. This is true because the inner loop of Steps 3–4(a) always results in either an exit to Step 4(b) or a decrease in the size of the active set. Note that initialization with arbitrary starting points requires a small modification: after initializing $\theta$ and the active set with a given initial solution, we need to start with Step 3 instead of Step 1.⁵ When the initial solution is near the optimal solution, feature-sign search can often obtain the optimal solution more quickly than when starting from $0$.

4 Learning bases using the Lagrange dual

In this section, we present a method for solving optimization problem (3) over the bases $B$ given fixed coefficients $S$. This reduces to the following problem:

$$\min_B \ \|X - BS\|_F^2 \quad (6)$$
$$\text{subject to } \sum_{i=1}^{k} B_{i,j}^2 \le c, \ \forall j = 1, \ldots, n.$$

This is a least squares problem with quadratic constraints. In general, this constrained optimization problem can be solved using gradient descent with iterative projection [10]. However, it can be solved much more efficiently using a Lagrange dual. First, consider the Lagrangian:

$$\mathcal{L}(B, \lambda) = \mathrm{trace}\big((X - BS)^\top(X - BS)\big) + \sum_{j=1}^{n} \lambda_j\Big(\sum_{i=1}^{k} B_{i,j}^2 - c\Big), \quad (7)$$

where each $\lambda_j \ge 0$ is a dual variable. Minimizing over $B$ analytically, we obtain the Lagrange dual:

$$\mathcal{D}(\lambda) = \min_B \mathcal{L}(B, \lambda) = \mathrm{trace}\big(X^\top X - XS^\top(SS^\top + \Lambda)^{-1}(XS^\top)^\top - c\Lambda\big), \quad (8)$$

where $\Lambda = \mathrm{diag}(\lambda)$.
The gradient and Hessian of $\mathcal{D}(\lambda)$ are computed as follows:

$$\frac{\partial \mathcal{D}(\lambda)}{\partial \lambda_i} = \|XS^\top(SS^\top + \Lambda)^{-1}e_i\|^2 - c, \quad (9)$$

$$\frac{\partial^2 \mathcal{D}(\lambda)}{\partial \lambda_i \partial \lambda_j} = -2\big[(SS^\top + \Lambda)^{-1}(XS^\top)^\top XS^\top(SS^\top + \Lambda)^{-1}\big]_{i,j}\big[(SS^\top + \Lambda)^{-1}\big]_{i,j}, \quad (10)$$

where $e_i \in \mathbb{R}^n$ is the $i$-th unit vector. Now, we can optimize the Lagrange dual (8) using Newton's method or conjugate gradient. After maximizing $\mathcal{D}(\lambda)$, we obtain the optimal bases $B$ as follows:

$$B^\top = (SS^\top + \Lambda)^{-1}(XS^\top)^\top. \quad (11)$$

The advantage of solving the dual is that it uses significantly fewer optimization variables than the primal. For example, optimizing $B \in \mathbb{R}^{1{,}000 \times 1{,}000}$ requires only 1,000 dual variables.

⁴To simplify notation, we reuse $f(\cdot)$ even for subvectors such as $\hat{x}$; in the case of $f(\hat{x})$, we consider only the coefficients in $\hat{x}$ as variables, and all coefficients not in the subvector can be assumed constant at zero.

⁵If the algorithm terminates without reaching Step 2, we are done; otherwise, once the algorithm reaches Step 2, the same argument in the proof applies.

Table 1: The running time in seconds (and the relative error in parentheses) for coefficient learning algorithms applied to different natural stimulus datasets. For each dataset, the input dimension $k$ and the number of bases $n$ are specified as $k \times n$. The relative error for an algorithm was defined as $(f_{obj} - f^*)/f^*$, where $f_{obj}$ is the final objective value attained by that algorithm, and $f^*$ is the best objective value attained among all the algorithms.

                   natural image    speech           stereo          video
                   (196×512)        (500×200)        (288×400)       (512×200)
Feature-sign       2.16 (0)         0.58 (0)         1.72 (0)        0.83 (0)
LARS               3.62 (0)         1.28 (0)         4.02 (0)        1.98 (0)
Grafting           13.39 (7e-4)     4.69 (4e-6)      11.12 (5e-4)    5.88 (2e-4)
Chen et al.'s      88.61 (8e-5)     47.49 (8e-5)     66.62 (3e-4)    47.00 (2e-4)
QP solver (CVX)    387.90 (4e-9)    1,108.71 (1e-8)  538.72 (7e-9)   1,219.80 (1e-8)
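As a sanity check of the dual derivation, the sketch below (our own illustration with made-up 2×2 data, so the matrix inverse has a closed form) recovers $B$ from Equation (11) for a fixed $\lambda$ and verifies numerically that random perturbations of $B$ only increase the Lagrangian $\mathcal{L}(B, \lambda)$:

```python
import random

# Tiny made-up example: k = 2 input dims, n = 2 bases, m = 3 samples.
X = [[1.0, 0.5, -0.2], [0.3, -1.0, 0.8]]   # k x m
S = [[0.6, 0.0, 0.4], [0.0, -0.7, 0.2]]    # n x m
lam = [0.5, 0.3]                            # fixed dual variables (assumed, not optimized)
c = 1.0

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def lagrangian(B):
    # Eq. (7): trace((X - BS)^T (X - BS)) + sum_j lam_j (sum_i B_ij^2 - c)
    R = [[X[i][t] - sum(B[i][j] * S[j][t] for j in range(len(S)))
          for t in range(len(X[0]))] for i in range(len(X))]
    fit = sum(r * r for row in R for r in row)
    pen = sum(lam[j] * (sum(B[i][j] ** 2 for i in range(len(B))) - c)
              for j in range(len(lam)))
    return fit + pen

# M = S S^T + Lambda (2x2), inverted in closed form.
M = matmul(S, transpose(S))
M[0][0] += lam[0]
M[1][1] += lam[1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

# Eq. (11): B^T = (S S^T + Lambda)^{-1} (X S^T)^T, i.e. B = X S^T (S S^T + Lambda)^{-1}.
B_star = matmul(matmul(X, transpose(S)), Minv)

# L(B, lam) is a convex quadratic in B, so B_star should be its global minimum.
random.seed(0)
base = lagrangian(B_star)
for _ in range(100):
    P = [[B_star[i][j] + random.uniform(-0.1, 0.1) for j in range(2)] for i in range(2)]
    assert lagrangian(P) >= base - 1e-9
```

Because $SS^\top + \Lambda$ is positive definite for $\lambda > 0$, the Lagrangian is strictly convex in $B$ and Equation (11) is its unique minimizer; the perturbation loop checks exactly that.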
Note that the dual formulation is independent of the sparsity function (e.g., L1, epsilonL1, or another sparsity function), and can be extended to other similar models such as "topographic" cells [14].⁶

5 Experimental results

5.1 The feature-sign search algorithm

We evaluated the performance of our algorithms on four natural stimulus datasets: natural images, speech, stereo images, and natural image videos. All experiments were conducted on a Linux machine with an AMD Opteron 2GHz CPU and 2GB RAM. First, we evaluated the feature-sign search algorithm for learning coefficients with the L1 sparsity function. We compared the running time and accuracy to previous state-of-the-art algorithms: a generic QP solver,⁷ a modified version of LARS [12] with early stopping,⁸ grafting [13], and Chen et al.'s interior point method [11];⁹ all the algorithms were implemented in MATLAB. For each dataset, we used a test set of 100 input vectors and measured the running time¹⁰ and the objective function at convergence.

Table 1 shows both the running time and accuracy (measured by the relative error in the final objective value) of the different coefficient learning algorithms. Over all datasets, feature-sign search achieved the best objective values as well as the shortest running times. Feature-sign search and modified LARS produced more accurate solutions than the other methods.¹¹ Feature-sign search was an order of magnitude faster than both Chen et al.'s algorithm and the generic QP solver, and it was also significantly faster than modified LARS and grafting. Moreover, feature-sign search has the crucial advantage that it can be initialized with arbitrary starting coefficients (unlike LARS); we will demonstrate that feature-sign search leads to even further speedup over LARS when applied to iterative coefficient learning.
5.2 Total time for learning bases

The Lagrange dual method for one basis learning iteration was much faster than gradient descent with iterative projections, and we omit discussion of those results due to space constraints. Below, we directly present results for the overall time taken by sparse coding for learning bases from natural stimulus datasets.

⁶The sparsity penalty for topographic cells can be written as $\sum_l \phi\big(\big(\sum_{j \in \text{cell } l} s_j^2\big)^{1/2}\big)$, where $\phi(\cdot)$ is a sparsity function and cell $l$ is a topographic cell (e.g., a group of 'neighboring' bases in a 2-D torus representation).

⁷We used the CVX package available at http://www.stanford.edu/∼boyd/cvx/.

⁸LARS (with LASSO modification) provides the entire regularization path with discrete L1-norm constraints; we further modified the algorithm so that it stops upon finding the optimal solution of Equation (4).

⁹MATLAB code is available at http://www-stat.stanford.edu/∼atomizer/.

¹⁰For each dataset/algorithm combination, we report the average running time over 20 trials.

¹¹A general-purpose QP package (such as CVX) does not explicitly take the sparsity of the solutions into account. Thus, its solution tends to have many very small nonzero coefficients; as a result, the objective values obtained from CVX were always slightly worse than those obtained from feature-sign search or LARS.

Table 2: The running time (in seconds) for different algorithm combinations using different sparsity functions.

L1 sparsity function
Coeff. / Basis learning     natural image   speech      stereo      video
Feature-sign / LagDual      260.0           248.2       438.2       186.6
Feature-sign / GradDesc     1,093.9         1,280.3     950.6       933.2
LARS / LagDual              666.7           1,697.7     1,342.7     1,254.6
LARS / GradDesc             13,085.1        17,219.0    12,174.6    11,022.8
Grafting / LagDual          720.5           1,025.5     3,006.0     1,340.5
Grafting / GradDesc         2,767.9         8,670.8     6,203.3     3,681.9

epsilonL1 sparsity function
Coeff. / Basis learning     natural image   speech      stereo      video
ConjGrad / LagDual          1,286.6         544.4       1,942.4     1,461.9
ConjGrad / GradDesc         5,047.3         11,939.5    3,435.1     2,479.2

Figure 1: Demonstration of speedup. Left: Comparison of convergence between the Lagrange dual method and gradient descent for learning bases. Right: The running time per iteration for modified LARS and grafting as a multiple of the running time per iteration for feature-sign search.

We evaluated different combinations of coefficient learning and basis learning algorithms: the fastest coefficient learning methods from our experiments (feature-sign search, modified LARS, and grafting for the L1 sparsity function, and conjugate gradient for the epsilonL1 sparsity function) and the state-of-the-art basis learning methods (gradient descent with iterative projection and the Lagrange dual formulation). We used a training set of 1,000 input vectors for each of the four natural stimulus datasets. We initialized the bases randomly and ran each algorithm combination (by alternatingly optimizing the coefficients and the bases) until convergence.¹²

Table 2 shows the running times for different algorithm combinations. First, we observe that the Lagrange dual method significantly outperformed gradient descent with iterative projections for both L1 and epsilonL1 sparsity; a typical convergence pattern is shown in Figure 1 (left).
Second, we observe that, for L1 sparsity, feature-sign search significantly outperformed both modified LARS and grafting.¹³ Figure 1 (right) shows the running time per iteration for modified LARS and grafting as a multiple of that for feature-sign search (using the same gradient descent algorithm for basis learning), demonstrating significant efficiency gains at later iterations; note that feature-sign search (and grafting) can be initialized with the coefficients obtained in the previous iteration, whereas modified LARS cannot. This result demonstrates that feature-sign search is particularly efficient for iterative optimization, such as learning sparse coding bases.

5.3 Learning highly overcomplete natural image bases

Using our efficient algorithms, we were able to learn highly overcomplete bases of natural images, as shown in Figure 2. For example, we were able to learn a set of 1,024 bases (each 14×14 pixels) in about 2 hours and a set of 2,000 bases (each 20×20 pixels) in about 10 hours.¹⁴ In contrast, the gradient descent method for basis learning did not result in any reasonable bases even after running for 24 hours. Further, summary statistics of our learned bases, obtained by fitting Gabor function parameters to each basis, qualitatively agree with previously reported statistics [15].

¹²We ran each algorithm combination until the relative change of the objective per iteration became less than 10⁻⁶ (i.e., |(f_new − f_old)/f_old| < 10⁻⁶). To compute the running time to convergence, we first computed the "optimal" (minimum) objective value achieved by any algorithm combination. Then, for each combination, we defined the convergence point as the point at which the objective value reaches within 1% relative error of the observed "optimal" objective value. The running time measured is the time taken to reach this convergence point. We truncated the running time if the optimization did not converge within 60,000 seconds.

¹³We also evaluated a generic conjugate gradient implementation on the L1 sparsity function; however, it did not converge even after 60,000 seconds.

Figure 2: Learned overcomplete natural image bases. Left: 1,024 bases (each 14×14 pixels). Right: 2,000 bases (each 20×20 pixels).

Figure 3: Left: End-stopping test for 14×14 sized 1,024 bases. Each line in the graph shows the coefficients for a basis for different length bars. Right: Sample input image for nCRF effect.

5.4 Replicating complex neuroscience phenomena

Several complex phenomena of V1 neural responses are not well explained by simple linear models (in which the response is a linear function of the input). For instance, many visual neurons display "end-stopping," in which the neuron's response to a bar image of optimal orientation and placement is actually suppressed as the bar length exceeds an optimal length [6]. Sparse coding can model the interaction (inhibition) between the bases (neurons) by sparsifying their coefficients (activations), and our algorithms enable these phenomena to be tested with highly overcomplete bases.

First, we evaluated whether end-stopping behavior could be observed in the sparse coding framework. We generated random bars with different orientations and lengths in 14×14 image patches, and picked the stimulus bar which most strongly activates each basis, considering only the bases which are significantly activated by one of the test bars. For each such highly activated basis, and the corresponding optimal bar position and orientation, we varied the length of the bar from 1 pixel to the maximal size and ran sparse coding to measure the coefficients for the selected basis, relative to their maximum coefficient. As shown in Figure 3 (left), for highly overcomplete bases, we observe many cases in which the coefficient decreases significantly as the bar length is increased beyond the optimal point. This result is consistent with the end-stopping behavior of some V1 neurons.
Second, using the learned overcomplete bases, we tested for center-surround non-classical receptive field (nCRF) effects [7]. We found the optimal bar stimuli for 50 random bases and checked that these bases were among the most strongly activated ones for the optimal stimulus. For each of these bases, we measured the response with its optimal bar stimulus with and without the aligned bar stimulus in the surround region (Figure 3, right). We then compared the basis response in these two cases to measure the suppression or facilitation due to the surround stimulus. The aligned surround stimuli produced a suppression of basis activation: 42 out of 50 bases showed suppression with aligned surround input images, and 13 of these showed more than 10% suppression, in qualitative accordance with observed nCRF surround suppression effects.

¹⁴We used the Lagrange dual formulation for learning bases, and both conjugate gradient with epsilonL1 sparsity and feature-sign search with L1 sparsity for learning coefficients. The bases learned from both methods showed qualitatively similar receptive fields. The bases shown in Figure 2 were learned using the epsilonL1 sparsity function and 4,000 input image patches randomly sampled for every iteration.

6 Application to self-taught learning

Sparse coding is an unsupervised algorithm that learns to represent input data succinctly using only a small number of bases. For example, using the "image edge" bases in Figure 2, it represents a new image patch $\xi$ as a linear combination of just a small number of these bases $b_j$. Informally, we think of this as finding a representation of an image patch in terms of the "edges" in the image; this gives a slightly higher-level, more abstract representation of the image than the pixel intensity values, and is useful for a variety of tasks.
In related work [8], we apply this to self-taught learning, a new machine learning formalism in which we are given a supervised learning problem together with additional unlabeled instances that may not have the same class labels as the labeled instances. For example, one may wish to learn to distinguish between cars and motorcycles given images of each, and additional—and in practice readily available—unlabeled images of various natural scenes. (This is in contrast to the much more restrictive semi-supervised learning problem, which would require that the unlabeled examples also be of cars or motorcycles only.) We apply our sparse coding algorithms to the unlabeled data to learn bases, which gives us a higher-level representation for images, thus making the supervised learning task easier. On a variety of problems including object recognition, audio classification, and text categorization, this approach leads to 11–36% reductions in test error.

7 Conclusion

In this paper, we formulated sparse coding as a combination of two convex optimization problems and presented efficient algorithms for each: feature-sign search for solving the L1-regularized least squares problem to learn the coefficients, and a Lagrange dual method for the L2-constrained least squares problem to learn the bases under any sparsity penalty function. We tested these algorithms on a variety of datasets and showed that they give significantly better performance than previous methods. We used our algorithms to learn highly overcomplete sets of bases, and showed that sparse coding can partially explain the phenomena of end-stopping and nCRF surround suppression in V1 neurons.

Acknowledgments. We thank Bruno Olshausen, Pieter Abbeel, Sara Bolouki, Roger Grosse, Benjamin Packer, Austin Shoemaker and Joelle Skaf for helpful discussions. Support from the Office of Naval Research (ONR) under award number N00014-06-1-0828 is gratefully acknowledged.

References

[1] B. A. Olshausen and D. J. Field.
Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[2] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311–3325, 1997.
[3] M. S. Lewicki and T. J. Sejnowski. Learning overcomplete representations. Neural Computation, 12(2), 2000.
[4] B. A. Olshausen. Sparse coding of time-varying natural images. Journal of Vision, 2(7):130, 2002.
[5] B. A. Olshausen and D. J. Field. Sparse coding of sensory inputs. Current Opinion in Neurobiology, 14(4), 2004.
[6] M. P. Sceniak, M. J. Hawken, and R. Shapley. Visual spatial characterization of macaque V1 neurons. Journal of Neurophysiology, 85(5):1873–1887, 2001.
[7] J. R. Cavanaugh, W. Bair, and J. A. Movshon. Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. Journal of Neurophysiology, 88(5):2530–2546, 2002.
[8] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning. In NIPS Workshop on Learning when Test and Training Inputs Have Different Distributions, 2006.
[9] A. Y. Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML, 2004.
[10] Y. Censor and S. A. Zenios. Parallel Optimization: Theory, Algorithms and Applications. 1997.
[11] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
[12] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2), 2004.
[13] S. Perkins and J. Theiler. Online feature selection using grafting. In ICML, 2003.
[14] A. Hyvärinen, P. O. Hoyer, and M. O. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527–1558, 2001.
[15] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359–366, 1998.
2006
Approximate Correspondences in High Dimensions

Kristen Grauman
Department of Computer Sciences, University of Texas at Austin
grauman@cs.utexas.edu

Trevor Darrell
CS and AI Laboratory, Massachusetts Institute of Technology
trevor@csail.mit.edu

Abstract

Pyramid intersection is an efficient method for computing an approximate partial matching between two sets of feature vectors. We introduce a novel pyramid embedding based on a hierarchy of non-uniformly shaped bins that takes advantage of the underlying structure of the feature space and remains accurate even for sets with high-dimensional feature vectors. The matching similarity is computed in linear time and forms a Mercer kernel. Whereas previous matching approximation algorithms suffer from distortion factors that increase linearly with the feature dimension, we demonstrate that our approach can maintain constant accuracy even as the feature dimension increases. When used as a kernel in a discriminative classifier, our approach achieves improved object recognition results over a state-of-the-art set kernel.

1 Introduction

When a single data object is described by a set of feature vectors, it is often useful to consider the matching or "correspondence" between two sets' elements in order to measure their overall similarity or recover the alignment of their parts. For example, in computer vision, images are often represented as collections of local part descriptions extracted from regions or patches (e.g., [11, 12]), and many recognition algorithms rely on establishing the correspondence between the parts from two images to quantify similarity between objects or localize an object within the image [2, 3, 7]. Likewise, in text processing, a document may be represented as a bag of word-feature vectors; for example, Latent Semantic Analysis can be used to recover a "word meaning" subspace on which to project the co-occurrence count vectors for every word [9].
The relationship between documents may then be judged in terms of the matching between the sets of local meaning features. The critical challenge, however, is to compute the correspondences between the feature sets in an efficient way. The optimal correspondences—those that minimize the matching cost—require cubic time to compute, which quickly becomes prohibitive for sizeable sets and makes processing realistic large data sets impractical. Due to the optimal matching’s complexity, researchers have developed approximation algorithms to compute close solutions for a fraction of the computational cost [4, 8, 1, 7]. However, previous approximations suffer from distortion factors that increase linearly with the dimension of the features, and they fail to take advantage of structure in the feature space. In this paper we present a new algorithm for computing an approximate partial matching between point sets that can remain accurate even for sets with high-dimensional feature vectors, and benefits from taking advantage of the underlying structure in the feature space. The main idea is to derive a hierarchical, data-dependent decomposition of the feature space that can be used to encode feature sets as multi-resolution histograms with non-uniformly shaped bins. For two such histograms (pyramids), the matching cost is efficiently calculated by counting the number of features that intersect in each bin, and weighting these match counts according to geometric estimates of inter-feature distances. Our method allows for partial matchings, which means that the input sets can have varying numbers of features in them, and outlier features from the larger set can be ignored with no penalty to the matching cost. The matching score is computed in time linear in the number of features per set, and it forms a Mercer kernel suitable for use within existing kernel-based algorithms. 
In this paper we demonstrate how, unlike previous set matching approximations (including our original pyramid match algorithm [7]), the proposed approach can maintain consistent accuracy as the dimension of the features within the sets increases. We also show how the data-dependent hierarchical decomposition of the feature space produces more accurate correspondence fields than a previous approximation that uses a uniform decomposition. Finally, using our matching measure as a kernel in a discriminative classifier, we achieve improved object recognition results over a state-of-the-art set kernel on a benchmark data set.

2 Related Work

Several previous matching approximation methods have also considered a hierarchical decomposition of the feature space to reduce matching complexity, but all suffer from distortion factors that scale linearly with the feature dimension [4, 8, 1, 7]. In this work we show how to alleviate this decline in accuracy for high-dimensional data by tuning the hierarchical decomposition according to the particular structure of the data, when such structure exists. We build on our pyramid match algorithm [7], a partial matching approximation that also uses histogram intersection to efficiently count matches implicitly formed by the bin structures. However, in contrast to [7], our use of data-dependent, non-uniform bins and a more precise weighting scheme results in matchings that are consistently accurate for structured, high-dimensional data. The idea of partitioning a feature space with vector quantization (VQ) is fairly widely used in practice; in the vision literature in particular, VQ has been used to establish a vocabulary of prototypical image features, from "textons" to the "visual words" of [16]. A variant of the pyramid match applied to spatial features was shown to be effective for matching quantized features in [10].
More recently, the authors of [13] have shown that a tree-structured vector quantization (TSVQ [5]) of image features provides a scalable means of indexing into a very large feature vocabulary. The actual tree structure employed is similar to the one constructed in this work; however, whereas the authors of [13] are interested in matching individual features to one another to access an inverted file, our approach computes approximate correspondences between sets of features. Note the distinction between the problem we are addressing—approximate matchings between sets—and the problem of efficiently identifying approximate or exact nearest neighbor feature vectors (e.g., via k-d trees): in the former, the goal is a one-to-one correspondence between sets of vectors, whereas in the latter, a single vector is independently matched to a nearby vector.

3 Approach

The main contribution of this work is a new, very efficient approximate bipartite matching method that measures the correspondence-based similarity between unordered, variable-sized sets of vectors, and can optionally extract an explicit correspondence field. We call our algorithm the vocabulary-guided (VG) pyramid match, since the histogram pyramids are defined by the “vocabulary” or structure of the feature space, and the pyramids are used to count implicit matches. The basic idea is to first partition the given feature space into a pyramid of non-uniformly shaped regions based on the distribution of a provided corpus of feature vectors. Point sets are then encoded as multi-resolution histograms determined by that pyramid, and an efficient intersection-based computation between any two histogram pyramids yields an approximate matching score for the original sets. The implicit matching version of our method estimates the inter-feature distances based on their respective distances to the bin centers.
To produce an explicit correspondence field between the sets, we use the pyramid construct to divide-and-conquer the optimal matching computation. As our experiments will show, the proposed algorithm in practice provides a good approximation to the optimal partial matching, but is orders of magnitude faster to compute.

Preliminaries: We consider a feature space F of d-dimensional vectors, F ⊆ ℜ^d. The point sets our algorithm matches will come from the input space S, which contains sets of feature vectors drawn from F: S = {X | X = {x_1, . . . , x_m}}, where each x_i ∈ F, and the value m = |X| may vary across instances of sets in S. Throughout the text we will use the terms feature, vector, and point interchangeably to refer to the elements within a set.

Figure 1: (a) Uniform bins; (b) vocabulary-guided bins. Rather than carve the feature space into uniformly-shaped partitions (left), we let the vocabulary (structure) of the feature space determine the partitions (right). As a result, the bins are better concentrated on decomposing the space where features cluster, particularly for high-dimensional feature spaces. These figures depict the grid boundaries for two resolution levels for a 2-D feature space. In both (a) and (b), the left plot contains the coarser resolution level, and the right plot contains the finer one. Features are red points, bin centers are larger black points, and blue lines denote bin boundaries.

A partial matching between two point sets is an assignment that maps all points in the smaller set to some subset of the points in the larger (or equally-sized) set. Given point sets X and Y, where m = |X|, n = |Y|, and m ≤ n, a partial matching M(X, Y; π) = {(x_1, y_{π_1}), . . . , (x_m, y_{π_m})} pairs each point in X to some unique point in Y according to the permutation of indices specified by π = [π_1, . . . , π_m], 1 ≤ π_i ≤ n, where π_i specifies which point y_{π_i} ∈ Y is matched to x_i ∈ X, for 1 ≤ i ≤ m.
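The optimal assignment (formalized in the next paragraph as the permutation minimizing the summed distances) can be computed by brute force for tiny sets, which is useful as a ground-truth check when testing approximations. A minimal sketch with function names of our own; practical implementations use the Hungarian algorithm instead:

```python
from itertools import permutations

def optimal_partial_matching(X, Y):
    """Exhaustively search for the assignment pi mapping each point of the
    smaller set X to a unique point of Y so that the total Euclidean distance
    is minimized. Only feasible for tiny sets: there are O(n!) candidates."""
    if len(X) > len(Y):
        X, Y = Y, X
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    best_cost, best_pi = float("inf"), None
    # Try every injective assignment of X's points into Y.
    for pi in permutations(range(len(Y)), len(X)):
        cost = sum(dist(x, Y[j]) for x, j in zip(X, pi))
        if cost < best_cost:
            best_cost, best_pi = cost, pi
    return best_cost, best_pi
```

Note that outliers in the larger set are simply left unassigned, exactly as in the partial matching definition above.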
The cost of a partial matching is the sum of the distances between matched points: C(M(X, Y; π)) = Σ_{x_i ∈ X} ||x_i − y_{π_i}||_2. The optimal partial matching M(X, Y; π*) uses the assignment π* that minimizes this cost: π* = argmin_π C(M(X, Y; π)). It is this matching that we wish to efficiently approximate. In Section 3.2 we describe how our algorithm approximates the cost C(M(X, Y; π*)); for a small increase in computational cost we can also extract explicit correspondences to estimate π* itself.

3.1 Building Vocabulary-Guided Pyramids

The first step is to generate the structure of the vocabulary-guided (VG) pyramid to define the bin placement for the multi-resolution histograms used in the matching. This is a one-time process performed before any matching takes place. We would like the bins in the pyramid to follow the feature distribution and concentrate partitions where the features actually fall. To accomplish this, we perform hierarchical clustering on a sample of representative feature vectors from F. We randomly select some example feature vectors from the feature type of interest to form the representative feature corpus, and perform hierarchical k-means clustering with the Euclidean distance to build the pyramid tree. Other hierarchical clustering techniques, such as agglomerative clustering, are also possible and do not change the operation of the method. For this unsupervised clustering process there are two parameters: the number of levels in the tree L, and the branching factor k. The initial corpus of features is clustered into k top-level groups, where group membership is determined by the Voronoi partitioning of the feature corpus according to the k cluster centers. Then the clustering is repeated recursively L − 1 times on each of these groups, filling out a tree with L total levels containing k^i bins (nodes) at level i, where levels are counted from the root (i = 0) to the leaves (i = L − 1).
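The recursive clustering can be sketched as follows; this is a toy, pure-Python hierarchical k-means (names are our own, and a real implementation would use an optimized clustering library; the paper's experiments use k = 10 and L = 5):

```python
import random

def dist(a, b):
    """Euclidean distance between two points given as tuples."""
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def kmeans(points, k, iters=20):
    """Basic Lloyd's k-means with Euclidean distance; returns (centers, groups)."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist(p, centers[c]))
            groups[j].append(p)
        # Recompute each center as its group's mean; keep old center if empty.
        centers = [tuple(sum(xs) / len(g) for xs in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

def build_vg_pyramid(corpus, k, L):
    """Recursively cluster the corpus into k groups per level, L levels deep.
    Each node stores its cluster center, an empirically estimated diameter
    (maximal inter-feature distance within the cell), and its children."""
    centers, groups = kmeans(corpus, k)
    nodes = []
    for center, group in zip(centers, groups):
        diameter = max((dist(a, b) for a in group for b in group), default=0.0)
        children = build_vg_pyramid(group, k, L - 1) if L > 1 and len(group) >= k else []
        nodes.append({"center": center, "diameter": diameter, "children": children})
    return nodes
```

Pushing a point down this tree, choosing the nearest of the k children at each of the L levels, then yields its path-vector as described next.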
The bins are irregularly shaped and sized, and their boundaries are determined by the Voronoi cells surrounding the cluster centers. (See Figure 1.) For each bin in the VG pyramid we record its diameter, which we estimate empirically based on the maximal inter-feature distance between any points from the initial feature corpus that were assigned to it. Once we have constructed a VG pyramid, we can embed point sets from S as multi-resolution histograms. A point's placement in the pyramid is determined by comparing it to the appropriate k bin centers at each of the L pyramid levels. The histogram count is incremented for the bin (among the k choices) that the point is nearest to in terms of the same distance function used to cluster the initial corpus. We then push the point down the tree and continue to increment finer level counts only along the branch (bin center) that is chosen at each level. So a point is first assigned to one of the top-level clusters, then it is assigned to one of its children, and so on recursively. This amounts to a total of kL distances that must be computed between a point and the pyramid's bin centers. Given the bin structure of the VG pyramid, a point set X is mapped to its pyramid: Ψ(X) = [H_0(X), . . . , H_{L−1}(X)], with H_i(X) = [⟨p, n, d⟩_1, . . . , ⟨p, n, d⟩_{k^i}], where H_i(X) is a k^i-dimensional histogram associated with level i in the pyramid, p ∈ Z^i for entries in H_i(X), and 0 ≤ i < L. Each entry in this histogram is a triple ⟨p, n, d⟩ giving the bin index, the bin count, and the bin's points' maximal distance to the bin center, respectively. Storing the VG pyramid itself requires space for O(k^L) d-dimensional feature vectors, i.e., all of the cluster centers. However, each point set's histogram is stored sparsely, meaning only O(mL) nonzero bin counts are maintained to encode the entire pyramid for a set with m features.
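The sparse encoding, detailed in the next paragraph, amounts to sorting the per-point ⟨p, n, d⟩ triples by path-vector and collapsing duplicates: counts are summed and the maximal to-center distance is kept. A minimal sketch, with our own names:

```python
def collapse_entries(entries):
    """Merge per-point (path, count, dist) triples that share a path-vector.
    Sorting by path groups the duplicates; for each merged bin entry the
    counts are summed and the maximal to-center distance is retained."""
    merged = []
    for path, n, d in sorted(entries, key=lambda t: t[0]):
        if merged and merged[-1][0] == path:
            _, prev_n, prev_d = merged[-1]
            merged[-1] = (path, prev_n + n, max(prev_d, d))
        else:
            merged.append((path, n, d))
    return merged
```

For clarity this sketch uses a comparison sort; with bounded integer indices the same collapse can be done with a linear-time integer sort, as the text notes.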
This is an important point: we do not store O(k^L) counts for every point set; H_i(X) is represented by at most m triples having n > 0. We achieve a sparse implementation as follows: each vector in a set is pushed through the tree as described above. At every level i, we record a ⟨p, n, d⟩ triple describing the nonzero entry for the current bin. The vector p = [p_1, . . . , p_i], p_j ∈ [1, k], denotes the indices of the clusters traversed from the root so far, n ∈ Z^+ denotes the count for the bin (initially 1), and d ∈ ℜ denotes the distance computed between the inserted point and the current bin's center. Upon reaching the leaf level, p is an L-dimensional path-vector indicating which of the k bins were chosen at each level, and every path-vector uniquely identifies some bin in the pyramid. Initially, an input set with m features yields a total of mL such triples—there is one nonzero entry per level per point, and each has n = 1. Then each of the L lists of entries is sorted by the index vectors (p in the triple), and they are collapsed to a list of sorted nonzero entries with unique indices: when two or more entries with the same index are found, they are replaced with a single entry with the same index for p, the summed counts for n, and the maximum distance for d. The sorting is done in linear time using integer sorting algorithms. Maintaining the maximum distance of any point in a bin to the bin center will allow us to efficiently estimate inter-point distances at the time of matching, as described in Section 3.2.

3.2 Vocabulary-Guided Pyramid Match

Given two point sets' pyramid encodings, we efficiently compute the approximate matching score using a simple weighted intersection measure. The VG pyramid's multi-resolution partitioning of the feature space is used to direct the matching. The basic intuition is to start collecting groups of matched points from the bottom of the pyramid up, i.e., from within increasingly larger partitions.
In this way, we will first consider matching the closest points (at the leaves), and as we climb to the higher-level clusters in the pyramid we will allow increasingly distant points to be matched. We define the number of new matches within a bin to be a count of the minimum number of points either of the two input sets contributes to that bin, minus the number of matches already counted by any of its child bins. A weighted sum of these counts yields an approximate matching score. Let n_{ij}(X) denote the element n from ⟨p, n, d⟩_j, the jth bin entry of histogram H_i(X), and let c_h(n_{ij}(X)) denote the element n for the hth child bin of that entry, 1 ≤ h ≤ k. Similarly, let d_{ij}(X) refer to the element d from the same triple. Given point sets X and Y, we compute the matching score via their pyramids Ψ(X) and Ψ(Y) as follows:

C(Ψ(X), Ψ(Y)) = Σ_{i=0}^{L−1} Σ_{j=1}^{k^i} w_{ij} [ min(n_{ij}(X), n_{ij}(Y)) − Σ_{h=1}^{k} min(c_h(n_{ij}(X)), c_h(n_{ij}(Y))) ].   (1)

The outer sum loops over the levels in the pyramids; the second sum loops over the bins at a given level, and the innermost sum loops over the children of a given bin. The first min term reflects the number of matchable points in the current bin, and the second min term tallies the number of matches already counted at finer resolutions (in child bins). Note that as the leaf nodes have no children, when i = L − 1 the last sum is zero: all matches are new at the leaves. The matching scores are normalized according to the size of the input sets in order to not favor larger sets. The number of new matches calculated for a bin is weighted by w_{ij}, an estimate of the distance between points contained in the bin.¹ With a VG pyramid match there are two alternatives for the distance estimate: (a) weights based on the diameters of the pyramid's bins, or (b) input-dependent weights based on the maximal distances of the points in the bin to its center.
Option (a) is a conservative estimate of the actual inter-point distances in the bin if the corpus of features used to build the pyramid is representative of the feature space; its advantages are that it provides a guaranteed Mercer kernel (see below) and eliminates the need to store a distance d in the entry triples. Option (b)'s input-specific weights estimate the distance between any two points in the bin as the sum of the stored maximal to-center distances from either input set: w_{ij} = d_{ij}(X) + d_{ij}(Y). This weighting gives a true upper bound on the furthest any two points could be from one another, and it has the potential to provide tighter estimates of inter-feature distances (as we confirm experimentally below); however, we cannot guarantee this weighting will yield a Mercer kernel.

(¹ To use our matching as a cost function, weights are set as the distance estimates; to use it as a similarity measure or kernel, weights are set as (some function of) the inverse of the distance estimates.)

Just as we encode the pyramids sparsely, we derive a means to compute intersections in Eqn. 1 without ever traversing the entire pyramid tree. Given two sparse lists H_i(X) and H_i(Y) which have been sorted according to the bin indices, we obtain the minimum counts in linear time by moving pointers down the lists and processing only those nonzero entries that share an index, making the time required to compute a matching between two pyramids O(mL). A key aspect of our method is that we obtain a measure of matching quality between two point sets without computing pair-wise distances between their features—an O(m²) savings over sub-optimal greedy matchings. Instead, we exploit the fact that the points' placement in the pyramid reflects their distance from one another.
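Putting the pieces together, the counting logic of Eqn. 1 can be sketched over dictionaries mapping a bin's path-vector to its count (a representation of our own for clarity; the actual implementation walks the path-sorted sparse lists with two pointers, as just described):

```python
def vg_match_score(counts_x, counts_y, weights):
    """Evaluate Eqn. 1: for each bin both sets occupy, take the intersection
    count, subtract the matches already formed in its child bins, and weight
    the remainder. A child's path-vector extends its parent's by one index."""
    common = set(counts_x) & set(counts_y)        # bins with nonzero intersection
    inter = {p: min(counts_x[p], counts_y[p]) for p in common}
    score = 0.0
    for p, m in inter.items():
        in_children = sum(v for q, v in inter.items()
                          if len(q) == len(p) + 1 and q[:len(p)] == p)
        score += weights[p] * (m - in_children)   # weight only the *new* matches
    return score
```

With similarity weights (inverse distance estimates) larger scores mean closer sets; with distance weights the same quantity acts as an approximate matching cost.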
The only inter-feature distances computed are the kL distances needed to insert a point into the pyramid, and this small one-time cost is amortized every time we re-use a histogram to approximate another matching against a different point set. We first suggested the idea of using histogram intersection to count implicit matches in a multi-resolution grid in [7]. However, in [7], bins are constructed to uniformly partition the space, bin diameters exponentially increase over the levels, and intersections are weighted indistinguishably across an entire level. In contrast, here we have developed a pyramid embedding that partitions according to the distribution of features, and weighting schemes that allow more precise approximations of the inter-feature costs. As we will show in Section 4, our VG pyramid match remains accurate and efficient even for high-dimensional feature spaces, while the uniform-bin pyramid match is limited in practice to relatively low-dimensional features. For the increased accuracy our method provides, there are some complexity trade-offs versus [7], which does not require computing any distances to place the points into bins; their uniform shape and size allow points to be placed directly via division by bin size. On the other hand, sorting the bin indices with the VG method has a lower complexity, since the values only range to k, the branching factor, which is typically much smaller than the aspect ratio that bounds the range in [7]. In addition, as we show in Section 4, in practice the cost of extracting an explicit correspondence field using the uniform-bin pyramid in high dimensions approaches the cubic cost of the optimal measure, whereas it remains linear with the proposed approach, assuming features are not uniformly distributed. Our approximation can be used to compare sets of vectors in any case where the presence of low-cost correspondences indicates their similarity (e.g., nearest-neighbor retrieval).
We can also employ the measure as a kernel function for structured inputs. According to Mercer's theorem, a kernel is positive semi-definite (p.s.d.) if and only if it corresponds to an inner product in some feature space [15]. We can re-write Eqn. 1 as: C(Ψ(X), Ψ(Y)) = Σ_{i=0}^{L−1} Σ_{j=1}^{k^i} (w_{ij} − p_{ij}) min(n_{ij}(X), n_{ij}(Y)), where p_{ij} refers to the weight associated with the parent bin of the jth node at level i. Since the min operation is positive-definite (p.d.) [14], and since kernels are closed under summation and scaling by a positive constant [15], we have that the VG pyramid match is a Mercer kernel if w_{ij} ≥ p_{ij}. This inequality holds if every child bin receives a similarity weight that is greater than its parent bin's, or equivalently if every child bin has a distance estimate that is less than that of its parent. Indeed this is the case for weighting option (a), where w_{ij} is inversely proportional to the diameter of the bin: it holds by definition of the hierarchical clustering, since the diameter of a subset of points must be less than or equal to the diameter of all those points. We cannot make this guarantee for weighting option (b). In addition to scalar matching scores, we can optionally extract explicit correspondence fields through the pyramid. In this case, the VG pyramid decomposes the required matching computation into a hierarchy of smaller matchings. Upon encountering a bin with a nonzero intersection, the optimal matching is computed between only those features from the two sets that fall into that particular bin. All points that are used in that per-bin matching are then flagged as matched and may not take part in subsequent matchings at coarser resolutions of the pyramid.

4 Results

In this section, we provide results to empirically demonstrate our matching's accuracy and efficiency on real data, and we compare it to a pyramid match using a uniform partitioning of the feature space.
In addition to directly evaluating the matching scores and correspondence fields, we show that our method leads to improved object recognition performance when used as a kernel within a discriminative classifier.

Figure 2: Comparison of optimal and approximate matching rankings on image data. Left: The set rankings produced with the VG pyramid match are consistently accurate for increasing feature dimensions, while the accuracy with uniform bins degrades about linearly in the feature dimension. Right: Example rankings for both approximations at d = [8, 128] (uniform bins: R = 0.86 and 0.78; VG pyramid with input-specific weights: R = 0.92 and 0.95).

Approximate Matching Scores: In these experiments, we extracted local SIFT [11] features from images in the ETH-80 database, producing an unordered set of about m = 256 vectors for every example. In this case, F is the space of SIFT image features. We sampled some features from 300 of the images to build the VG pyramid, and 100 images were used to test the matching.
In order to test across varying feature dimensions, we also used some training features to establish a PCA subspace that was used to project features onto varying numbers of bases. For each feature dimension, we built a VG pyramid with k = 10 and L = 5, encoded the 100 point sets as pyramids, and computed the pair-wise matching scores with both our method and the optimal least-cost matching. If our measure is approximating the optimal matching well, we should find the ranking we induce to be highly correlated with the ranking produced by the optimal matching for the same data. In other words, the images should be sorted similarly by either method. Spearman's rank correlation coefficient R provides a good quantitative measure to evaluate this: R = 1 − 6 Σ_{i=1}^{N} D_i² / (N(N² − 1)), where D_i is the difference in rank for the ith of the N corresponding ordinal values assigned by the two measures. The left plot in Figure 2 shows the Spearman correlation scores against the optimal measure for both our method (with both weighting options) and the approximation in [7] for varying feature dimensions for the 10,000 pair-wise matching scores for the 100 test sets. Due to the randomized elements of the algorithms, for each method we have plotted the mean and standard deviation of the correlation for 10 runs on the same data. While the VG pyramid match remains consistently accurate for high feature dimensions (R = 0.95 with input-specific weights), the accuracy of the uniform bins degrades rapidly for dimensions over 10. The ranking quality of the input-specific weighting scheme (blue diamonds) is somewhat stronger than that of the “global” bin diameter weighting scheme (green squares). The four plots on the right of Figure 2 display the actual ranks computed for both approximations for two of the 26 dimensions summarized in the left plot.
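Spearman's coefficient is simple to compute from the two induced orderings; a minimal sketch (ordinal ranks, no tie handling):

```python
def ranks(values):
    """Ordinal ranks: 1 for the smallest value, N for the largest."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """R = 1 - 6 * sum(D_i^2) / (N (N^2 - 1)), D_i = per-item rank difference."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Identical orderings give R = 1, reversed orderings give R = -1.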
The black diagonals denote the optimal performance, where the approximate rankings would be identical to the optimal ones; higher Spearman correlations have points clustered more tightly along this diagonal. For the low-dimensional features, the methods perform fairly comparably; however, for the full 128-D features, the VG pyramid match is far superior (rightmost column). The optimal measure requires about 1.25 s per match, while our approximation is about 2500x faster at 5×10⁻⁴ s per match. Computing the pyramid structure from the feature corpus took about three minutes in Matlab; this is a one-time offline cost.

For a pyramid matching to work well, the gradation in bin sizes up the pyramid must be such that at most levels of the pyramid we can capture distinct groups of points to match within the bins. That is, unless all the points in two sets are equidistant, the bin placement must allow us to match very near points at the finest resolutions, and gradually add matches that are more distant at coarser resolutions. In low dimensions, either uniform or data-dependent bins can achieve this. In high dimensions, however, uniform bin placement and exponentially increasing bin diameters fail to capture such a gradation: once any features from different point sets are close enough to match (share bins), the bins are so large that almost all of them match. The matching score is then approximately the number of points weighted by a single bin size. In contrast, when we tailor the feature space partitions to the distribution of the data, even in high dimensions the match counts increase gradually across levels, thereby yielding more discriminating implicit matches. Figure 3 confirms this intuition, again using the ETH-80 image data from above.

Figure 3: Number of new matches formed at each pyramid level for either uniform (dashed red) or VG (solid blue) bins for increasing feature dimensions (d = 3, 8, 13, 128). Points represent mean counts per level for 10,000 matches. In low dimensions, both partition styles gradually collect matches up the pyramid. In high dimensions with uniform partitions, points begin sharing a bin “all at once”; in contrast, the VG bins still accrue new matches consistently across levels since the decomposition is tailored to where points cluster in the feature space.

Figure 4: Comparison of correspondence field errors (left) and associated computation times (right). This figure is best viewed in color. (Note that errors level out with d for all methods due to PCA.)

Approximate Correspondence Fields: For the same image data, we ran the explicit matching variant of our method and compared the induced correspondences to those produced by the globally optimal measure. For comparison, we also applied the same variant to pyramids with uniform bins. We measure the error of an approximate matching π̂ by the sum of the errors at every link in the field: E(M(X, Y; π̂), M(X, Y; π*)) = Σ_{x_i ∈ X} ||y_{π̂_i} − y_{π*_i}||_2.
Figure 4 compares the correspondence field error and computation times for the VG and uniform pyramids. For each approximation, there are two variations tested: in one, an optimal assignment is computed for all points in the same bin; in the other, a random assignment is made. The left plot shows the mean error per match for each method, and the right plot shows the corresponding mean time required to compute those matches. The computation times are as we would expect: the optimal matching is orders of magnitude more expensive than the approximations. Using the random assignment variation, both approximations have negligible costs, since they simply choose any combination of points within a bin. However, in high dimensions, the time required by the uniform-bin pyramid with the optimal per-bin matching approaches the time required by the optimal matching itself. This occurs for reasons similar to those behind the poorer matching score accuracy exhibited by the uniform bins, both in the left plot and above in Figure 2: since most or all of the points begin to match at a certain level, the pyramid does not help to divide-and-conquer the computation, and for high dimensions, the optimal matching in its entirety must be computed. In contrast, the expense of the VG pyramid matching remains steady and low, even for high dimensions, since data-dependent pyramids better divide the matching labor into the natural segments in the feature space. For similar reasons, the errors are comparable for the optimal per-bin variation with either the VG or uniform bins: the VG bins divide the computation so it can be done inexpensively, while the uniform bins divide the computation poorly and must compute it expensively, but about as accurately. Likewise, the error for the uniform bins when using a per-bin random assignment is very high for all but the lowest dimensions (red line on left plot), since such a large number of points are being randomly assigned to one another.
In contrast, the VG bins actually result in similar errors whether the points in a bin are matched optimally or randomly (blue and pink lines on left plot). This again indicates that tuning the pyramid bins to the data's distribution achieves a much more suitable breakdown of the computation, even in high dimensions.

Realizing Improvements in Recognition: Finally, we have experimented with the VG pyramid match within a discriminative classifier for an object recognition task. We trained an SVM with our matching as the kernel to recognize the four categories in the Caltech-4 benchmark data set. We trained with 200 images per class and tested with all the remaining images. We extracted features using both the Harris and MSER [12] detectors and the 128-D SIFT [11] descriptor. We also generated lower-dimensional (d = 10) features using PCA. To form a Mercer kernel, the weights were set according to each bin diameter A_ij: w_ij = e^{−A_ij/σ}, with σ set automatically as the mean distance between a sample of features from the training set. The table below shows our improvements over the uniform-bin pyramid match kernel.

Pyramid matching method    Mean recognition rate/class (d=128 / d=10)    Time/match (s) (d=128 / d=10)
Vocabulary-guided bins     99.0 / 97.7                                   6.1e-4 / 6.2e-4
Uniform bins               64.9 / 96.5                                   1.5e-3 / 5.7e-4

The VG pyramid match is more accurate and requires only minor additional computation. Our near-perfect performance on this data set is comparable to that reached by others in the literature; the real significance of the result is that it distinguishes what can be achieved with a VG pyramid embedding as opposed to the uniform histograms used in [7], particularly for high-dimensional features. In addition, here the optimal matching requires 0.31 s per match, over 500x the cost of our method.
Conclusion: We have introduced a linear-time method to compute a matching between point sets that takes advantage of the underlying structure in the feature space and remains consistently accurate and efficient for high-dimensional inputs on real image data. Our results demonstrate the strength of the approximation empirically, compare it directly against an alternative state-of-the-art approximation, and successfully use it as a Mercer kernel for an object recognition task. We have commented most on potential applications in vision and text, but in fact it is a generic matching measure that can be applied whenever it is meaningful to compare sets by their correspondence.

Acknowledgments: We thank Ben Kuipers for suggesting the use of Spearman's rank correlation.

References
[1] P. Agarwal and K. R. Varadarajan. A Near-Linear Algorithm for Euclidean Bipartite Matching. In Symposium on Computational Geometry, 2004.
[2] S. Belongie, J. Malik, and J. Puzicha. Shape Matching and Object Recognition Using Shape Contexts. IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(4):509–522, April 2002.
[3] A. Berg, T. Berg, and J. Malik. Shape Matching and Object Recognition using Low Distortion Correspondences. In Proc. IEEE Conf. on Comp. Vision and Pattern Recognition, San Diego, CA, June 2005.
[4] M. Charikar. Similarity Estimation Techniques from Rounding Algorithms. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, 2002.
[5] A. Gersho and R. Gray. Vector Quantization and Signal Compression. Springer, 1992.
[6] K. Grauman. Matching Sets of Features for Efficient Retrieval and Recognition. PhD thesis, MIT, 2006.
[7] K. Grauman and T. Darrell. The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features. In Proc. IEEE Int. Conf. on Computer Vision, Beijing, China, Oct 2005.
[8] P. Indyk and N. Thaper. Fast Image Retrieval via Embeddings. In 3rd International Workshop on Statistical and Computational Theories of Vision, Nice, France, Oct 2003.
[9] T. K. Landauer, P. W. Foltz, and D. Laham. Introduction to LSA. Discourse Processes, 25:259–284, 1998.
[10] S. Lazebnik, C. Schmid, and J. Ponce. Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Scene Categories. In Proc. IEEE Conf. on Comp. Vision and Pattern Recognition, June 2006.
[11] D. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2):91–110, Jan 2004.
[12] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust Wide Baseline Stereo from Maximally Stable Extremal Regions. In British Machine Vision Conference, Cardiff, UK, Sept 2002.
[13] D. Nister and H. Stewenius. Scalable Recognition with a Vocabulary Tree. In Proc. IEEE Conf. on Comp. Vision and Pattern Recognition, New York City, NY, June 2006.
[14] F. Odone, A. Barla, and A. Verri. Building Kernels from Binary Strings for Image Matching. IEEE Trans. on Image Processing, 14(2):169–180, Feb 2005.
[15] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge Univ. Press, 2004.
[16] J. Sivic and A. Zisserman. Video Google: A Text Retrieval Approach to Object Matching in Videos. In Proc. IEEE Int. Conf. on Computer Vision, Nice, Oct 2003.
Temporal Coding using the Response Properties of Spiking Neurons

Thomas Voegtlin
INRIA - Campus Scientifique, B.P. 239
F-54506 Vandoeuvre-Les-Nancy Cedex, FRANCE
voegtlin@loria.fr

Abstract

In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule.

1 Introduction

The temporal coding hypothesis states that information is encoded in the precise timing of action potentials sent by neurons. In order to achieve computations in the time domain, it is thus necessary to have neurons spike at desired times. However, at a more fundamental level, it is also necessary to describe how the timings of action potentials received by a neuron are combined together, in a way that is consistent with the neural code. So far, the main theory has posited that the shape of post-synaptic potentials (PSPs) is relevant for computations [1, 2, 3]. In these models, the membrane potential at the soma of a neuron is a weighted sum of PSPs arriving from dendrites at different times. The spike time of the neuron is defined as the time when its membrane potential first reaches a firing threshold, and it depends on the precise temporal arrangement of PSPs, thus enabling computations in the time domain. Hence, the nature of the temporal code is closely tied to the shape of PSPs. A consequence is that the length of the rising segment of post-synaptic potentials limits the available coding interval [1, 2].
Here we propose a new theory, based on the non-linear dynamics of integrate-and-fire neurons. This theory takes advantage of the fact that the effect of synaptic currents depends on the internal state of the postsynaptic neuron. For neurons spiking regularly, this dependency is classically described by the Phase Response Curve (PRC) [4]. We use theta neurons, which are mathematically equivalent to quadratic integrate-and-fire neurons [5, 6]. In these neuron models, once the potential has crossed the firing threshold, the neuron is still sensitive to incoming currents, which may change the timing of the next spike. In the proposed model, computations do not rely on the shape of PSPs, which alleviates the restriction imposed by the length of their rising segment. Therefore, we may use a simplified model of synaptic currents; we model synaptic currents as Diracs, which means that we do not take into account synaptic time constants. Another advantage of our model is that computations do not rely on the delays imposed by inter-neuron transmission; this means that it is not necessary to fine-tune delays in order to learn desired spike times.

2 Description of the model

2.1 The Theta Neuron

The theta neuron is described by the following differential equation:

$$\frac{d\theta}{dt} = (1 - \cos\theta) + \alpha I (1 + \cos\theta) \quad (1)$$

where $\theta$ is the "potential" of the neuron, and $I$ is a variable input current, measured in radians per unit of time. For convenience, we call units of time "milliseconds". The neuron is said to fire every time $\theta$ crosses $\pi$. The dynamics of the model can be represented on a phase circle (Figure 1). The effect of an input current is not uniform across the circle; currents that occur late (for $\theta$ close to $\pi$) have little effect on $\theta$, while currents that arrive when $\theta$ is close to zero have a much greater effect.

Figure 1: Phase circle of the theta model. The neuron fires every time $\theta$ crosses $\pi$.
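As a concrete illustration of Equation 1, it can be integrated with the Euler scheme the paper describes in Section 2.4. This is a minimal sketch, not the author's code; the function name and parameter values are illustrative choices.

```python
import numpy as np

def simulate_theta_neuron(I0, theta0, alpha=1.0, dt=0.05, T=20.0):
    """Euler integration of Eq. (1); returns the time at which theta
    crosses pi, or None if the neuron does not fire within T ms."""
    theta = theta0
    for step in range(int(T / dt)):
        dtheta = (1 - np.cos(theta)) + alpha * I0 * (1 + np.cos(theta))
        theta += dtheta * dt
        if theta >= np.pi:
            return (step + 1) * dt
    return None
```

For example, when $\alpha I_0 = 1$ the drift is constant ($d\theta/dt = 2$), so a neuron started at $\theta = -\pi$ fires after roughly $\pi$ ms, while a neuron with $I_0 < 0$ initialized at the attractor $\theta_0^-$ never fires, consistent with the phase-circle picture of Figure 1.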
For $I < 0$ there are two fixed points: an unstable point $\theta_0^+ = \arccos\frac{1+\alpha I}{1-\alpha I}$, and an attractor $\theta_0^- = -\theta_0^+$.

2.2 Synaptic interactions

The input current $I$ is the sum of a constant current $I_0$ and transient synaptic currents $I_i(t)$, where $i \in 1..N$ indexes the synapses:

$$I = I_0 + \sum_{i=1}^{N} I_i(t) \quad (2)$$

Synaptic currents are modeled as Diracs: $I_i(t) = w_i \delta(t - t_i)$, where $t_i$ is the firing time of presynaptic neuron $i$, and $w_i$ is the weight of the synapse. Transmission delays are not taken into account.

Figure 2: Response properties of the theta model. Curves show the change of firing time $t_f$ of a neuron receiving a Dirac current of weight $w$ at time $t$. Left: for $I_0 > 0$, the neuron spikes regularly ($I_0 = 0.005$, $\theta(0) = -\pi$). If $w$ is small, the curves corresponding to $w > 0$ and $w < 0$ are symmetric; the positive curve is called the Phase Response Curve (PRC). If $w$ is large, the curves are no longer symmetric; the portions corresponding to the ascending (resp. descending) phase of $\sin\theta$ have different slopes. Right: response for $I_0 < 0$. The initial condition is slightly above the unstable equilibrium point ($I_0 = -0.005$, $\theta(0) = \theta_0^+ + 0.0001$), so that the neuron fires if not perturbed. For $w > 0$, the response curve is approximately linear, until it reaches zero. For $w < 0$, the current might cancel the spike if it occurs early.

Figure 2 shows how the firing time of a theta neuron changes with the time of arrival of a synaptic current. In our time coding model, we view this curve as the transfer function of the neuron; it describes how the neuron converts input spike times into output spike times.

2.3 Learning rule

We derive a spike-timing dependent learning rule from the objective of learning a set of target firing times. Following [2], we consider the mean squared error, $E$, between desired spike times $\bar t_s$ and actual spike times $t_s$:

$$E = \langle (t_s - \bar t_s)^2 \rangle \quad (3)$$

where $\langle \cdot \rangle$ denotes the mean.
Gradient descent on $E$ yields the following stochastic learning rule:

$$\Delta w_i = -\eta \frac{\partial E}{\partial w_i} = -2\eta (t_s - \bar t_s) \frac{\partial t_s}{\partial w_i} \quad (4)$$

The partial derivative $\partial t_s / \partial w_i$ expresses the credit assignment problem for synapses.

Figure 3: Notations used in the text. An incoming spike triggers an instantaneous change of the potential $\theta$. $\theta_i^-$ (resp. $\theta_i^+$) denotes the postsynaptic potential before (resp. after) the presynaptic spike. A small modification $dw_i$ of the synaptic weight $w_i$ induces a change $d\theta_i^+$.

Let $F$ denote the "remaining time", that is, the time that remains before the neuron will fire:

$$F(t) = \int_{\theta(t)}^{\pi} \frac{d\theta}{(1-\cos\theta) + \alpha I (1+\cos\theta)} \quad (5)$$

In our model, $I$ is not continuous, because of Dirac synaptic currents. For the moment, we assume that $\theta$ is between the unstable point $\theta_0^+$ and $\pi$. In addition, we assume that the neuron receives one spike on each of its synapses, and that all synaptic weights are positive. Let $t_j$ denote the time of arrival of the action potential on synapse $j$. Let $\theta_j^-$ (resp. $\theta_j^+$) denote the potential before (resp. after) the synaptic current:

$$\theta_j^- = \theta(t_j^-), \qquad \theta_j^+ = \theta(t_j^+) = \theta_j^- + \alpha w_j (1 + \cos\theta_j^-) \quad (6)$$

We consider the effect of a small change of weight $w_i$. We shall rewrite integral (5) on the intervals where the integrand is continuous. To keep notations simple, we assume that action potentials are ordered, i.e., $t_j \le t_{j+1}$ for all $j$. For consistency, we use the notation $\theta_{N+1}^- = \pi$. We may write:

$$F(t_i) = \sum_{j \ge i} \int_{\theta_j^+}^{\theta_{j+1}^-} \frac{d\theta}{(1-\cos\theta) + \alpha I_0 (1+\cos\theta)} \quad (7)$$

The partial derivative of the spiking time $t_s$ can be expressed as:

$$\frac{\partial t_s}{\partial w_i} = \frac{\partial F}{\partial \theta_i^+} \frac{\partial \theta_i^+}{\partial w_i} + \sum_{j>i} \left( \frac{\partial F}{\partial \theta_j^+} \frac{\partial \theta_j^+}{\partial w_i} + \frac{\partial F}{\partial \theta_j^-} \frac{\partial \theta_j^-}{\partial w_i} \right) \quad (8)$$

In this expression, the sum expresses how a change of weight $w_i$ will modify the effect of other spikes, for $j > i$. The $j$th terms of this sum depend on the time elapsed between $t_j$ and $t_i$. Since we have no a priori information on the distribution of $t_j$ given $t_i$, we shall consider that this term is not correlated with $\partial E / \partial w_i$.
For that reason, we neglect this sum in our stochastic learning rule:

$$\frac{\partial t_s}{\partial w_i} \approx \frac{\partial F}{\partial \theta_i^+} \frac{\partial \theta_i^+}{\partial w_i} \quad (9)$$

which yields:

$$\frac{\partial t_s}{\partial w_i} \approx -\frac{\alpha (1 + \cos\theta_i^-)}{(1 - \cos\theta_i^+) + \alpha I_0 (1 + \cos\theta_i^+)} \quad (10)$$

Note that this expression is not bounded when $\theta_i^+$ is close to the unstable point $\theta_0^+$. In that case, $\theta$ is in a region where it changes very slowly, and the timing of other action potentials for $j > i$ will mostly determine the firing time $t_s$. This means that approximation (9) will not hold. In addition, it is necessary to extend the learning rule to the case $\theta_i^+ \in [\theta_0^-, \theta_0^+)$, where the above expression is negative. For these reasons, we introduce a credit bound, $C$, and we modify the learning rule as follows:

if $0 < -\partial t_s / \partial w_i < C$ then:
$$\Delta w_i = -2\eta (t_s - \bar t_s) \frac{\partial t_s}{\partial w_i} \quad (11)$$
else:
$$\Delta w_i = 2\eta (t_s - \bar t_s) C \quad (12)$$

2.4 Algorithm

The algorithm updates the weights in the direction of the gradient. The learning rule takes effect at the end of a trial of fixed duration. If a neuron does not fire at all during the trial, then its firing time is considered to be equal to the duration of the trial. For each synapse, it is necessary to compute the credit from Equation (10) every time a current is transmitted. We may relax the assumption that each synapse receives one single action potential; if a presynaptic neuron fires several times before the postsynaptic neuron fires, then the credit corresponding to all spikes is summed. Theta neurons were simulated using Euler integration of Equation (1). The time step must be carefully chosen; if the temporal resolution is too coarse, then the credit assignment problem becomes too difficult, which increases the number of trials necessary for learning. On the other hand, small values of the time step mean that simulations take more time.

3 Auto-encoder network

Predicting neural activities has been proposed as a possible role for spike-timing dependent learning rules [7]. Here we train a network to predict its own activities using the learning rule derived above.
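The bounded update of Equations 10–12 can be sketched as follows; this is a minimal illustration with hypothetical function names, not the paper's implementation.

```python
import numpy as np

def synaptic_credit(theta_minus, theta_plus, alpha, I0):
    """Approximate credit dts/dwi from Eq. (10)."""
    return -(alpha * (1 + np.cos(theta_minus))) / (
        (1 - np.cos(theta_plus)) + alpha * I0 * (1 + np.cos(theta_plus)))

def weight_update(credit, t_spike, t_target, eta, C):
    """Gradient step with the credit bound C (Eqs. 11-12)."""
    if 0 < -credit < C:
        return -2 * eta * (t_spike - t_target) * credit
    return 2 * eta * (t_spike - t_target) * C
```

The sign works out as expected: when the postsynaptic neuron fires too late ($t_s > \bar t_s$), a synapse with negative credit (its spike advances the output) receives a positive weight change, which advances the next spike.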
For this, a time-delayed version of the input (echo) is used as the desired output (see Figure 4). The network has to find a representation of the input that minimizes the mean squared reconstruction error. The network has three populations of neurons: (i) an input population $X$ of $n$ neurons, where an input vector is represented using spike times. We call Inter Stimulus Interval (ISI) the interval between the spikes encoding the input and the echo. After the ISI, population $X$ fires a second burst of spikes, which is a time-delayed version of the initial burst. (ii) An output population $Y$ of $m$ neurons, which is activated by neurons in $X$. (iii) A population $X'$ of $n$ neurons, where the input is reconstructed. Neurons in $X'$ are activated by $Y$. The learning rule updates the feedback connections $(w_{ij})_{i \le n, j \le m}$ from $Y$ to $X'$, comparing spike times in $X$ and in $X'$. We use $I_0 < 0$, so the response to positive transient currents is approximately linear (see Figure 2). We thus expect neurons to perform a linear summation of spike times. For the feed-forward connections from $X$ to $Y$, we use the transpose of the feedback weight matrix. This is inspired by Oja's Principal Subspace Network [8]. If spike times are within the linear part of the response curve, then we expect this network to perform Principal Component Analysis (PCA) in the time domain. However, one difference is that the PRC we use is always positive (type I neurons). This means that spike times can only code for positive values (even though synaptic weights can be of both signs).

Figure 4: Auto-encoder network. An input vector is translated into firing times of the input population. Output neurons are activated by input neurons through feed-forward connections. A reconstruction of the input burst is generated through feedback connections. Target firing times are provided by a delayed version of the input burst (echo).
In order to code for values of both signs, one would need a transfer function that changes its sign around a time that would code for zero, so that the effect of a current is reversed when its arrival time crosses zero. Here we may view the neural code as a positive code: early spikes code for high values, and late spikes code for values close to zero. In this architecture, it is necessary to ensure that each neuron in $Y$ fires a single spike on each trial. In order to do this, we impose that neurons in $Y$ have the same average firing time. For this, we add a centering term to the learning rule:

$$\Delta w_{ij} = -\eta \frac{\partial E}{\partial w_{ij}} - \lambda \phi_j \quad (13)$$

where $\lambda \in \mathbb{R}$ and $\phi_j$ is the average phase of neuron $j$. $\phi_j$ is a leaky average of the difference between the firing time $t_j$ and the average firing time of all neurons in population $Y$. It is updated after each trial:

$$\phi_j \leftarrow \tau \phi_j + (1-\tau) \left( t_j - \frac{1}{m} \sum_{k=1}^{m} t_k \right) \quad (14)$$

This modification of the learning rule results in neurons that have no preferred firing order.

4 Experiments

We used $I_0 = -0.01$ for all neurons. This ensures that neurons have no spontaneous activity. At the beginning of a trial, all neurons were initialized to their stable fixed point. In order to balance the effect of the different sizes of populations $X$ and $Y$, different values of $\alpha$ were used for $X$ and $Y$ neurons: we used $\alpha_X = 0.1$ and $\alpha_Y = \frac{m}{n}\alpha_X$. In the leaky average we used $\tau = 0.1$. In each experiment, the input vector was encoded in spike times. When doing so, one must make sure that the values taken by the input are within the coding interval of the neurons, i.e., the range of values where the PRC is not zero. In practice, spikes that arrive too late in the firing cycle are not taken into account by the learning rule. In that case, the weights corresponding to other synapses become overly increased, which eventually causes some postsynaptic neurons in $X'$ to fire before presynaptic neurons in $Y$ ("anticausal spikes"). If this occurs, one possibility is to reduce the variance of the input.
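The leaky average of Equation 14 can be sketched vectorized over the output population; the helper name is an illustrative choice, not from the paper.

```python
import numpy as np

def update_phase_average(phi, t_spikes, tau=0.1):
    """Eq. (14): leaky average of each Y neuron's deviation from the
    population's mean firing time, updated once per trial."""
    t_spikes = np.asarray(t_spikes, dtype=float)
    return tau * phi + (1 - tau) * (t_spikes - t_spikes.mean())
```

Subtracting $\lambda \phi_j$ in Equation 13 then pushes habitually early neurons later and habitually late neurons earlier, which is why no neuron develops a preferred firing order.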
4.1 Principal Component Analysis of a Gaussian distribution

A two-dimensional Gaussian random variable was encoded in the spike times of three input neurons. The ellipsoid had a long axis of standard deviation 1 ms and a short axis of standard deviation 0.5 ms, and it was rotated by $\pi/3$. Because the network does not have an absolute time reference, it is necessary to use three input neurons in order to encode two degrees of freedom in relative spiking times. The output layer had two neurons (one degree of freedom). Therefore the network has to find a 1D representation of a 2D variable that minimizes the mean-squared reconstruction error. The input was encoded as follows:

$$t_0 = 3, \qquad t_1 = 3 + \nu_1 \cos(\pi/3) + 0.5\,\nu_2 \sin(\pi/3), \qquad t_2 = 3 + 0.5\,\nu_2 \cos(\pi/3) + \nu_1 \sin(\pi/3) \quad (15)$$

where $\nu_1$ and $\nu_2$ are two independent random variables picked from a Gaussian distribution of variance 1. Input spike times were centered around $t = 3$ ms, where $t = 0$ denotes the beginning of a trial. We used a time step of 0.05 ms. Each trial lasted for 400 iterations, which corresponds to 20 ms of simulated time. The ISI was 5 ms. The credit bound was $C = 1000$. Other parameters were $\eta = 0.0001$ and $\lambda = 0.001$. Weights were initialized with random values between 0.5 and 1.5.

Figure 5: Principal Component Analysis of a 2D Gaussian distribution. The input vector was encoded in the relative spike times of three input neurons. Top: evolution of the weights over 20,000 learning iterations. Bottom: final synaptic weights represented as bars. Note the complementary shapes of the weight vectors. Right: the input (white dots) and its reconstruction (dark dots) from the network's activities. Each branch corresponds to a firing order of the two output neurons.

Figure 5 shows that the network has learned to extract the principal direction of the distribution. Two branches are visible in the distribution of dots corresponding to the reconstruction. They correspond to two firing orders of the output neurons.
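The input encoding of Equation 15 is straightforward to reproduce; this sketch uses NumPy's random generator, and the helper name is an illustrative choice.

```python
import numpy as np

def encode_gaussian_input(rng):
    """Eq. (15): encode one sample of the rotated 2D Gaussian
    (long axis 1 ms, short axis 0.5 ms, rotation pi/3)
    in three spike times centered around 3 ms."""
    nu1, nu2 = rng.standard_normal(2)
    c, s = np.cos(np.pi / 3), np.sin(np.pi / 3)
    return np.array([3.0,
                     3.0 + nu1 * c + 0.5 * nu2 * s,
                     3.0 + 0.5 * nu2 * c + nu1 * s])
```

Note that $t_0$ is constant: only the two relative spike times carry information, which is why three neurons are needed to encode two degrees of freedom.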
The direction of the branches results from the synaptic weights of the neurons. Note that the lower branch has a slight curvature. This suggests that the response function of the neurons is not perfectly linear in the interval where spike times are coded. The fact that the branches do not have exactly the same orientation might result from non-linearities, or from the approximation made in deriving the learning rule. There are six synaptic weights in the network. One degree of freedom per neuron in $X'$ is used to adapt its mean firing time to the value imposed by the ISI; the smaller the ISI, the larger the weights. This "normalization" removes three degrees of freedom. One additional constraint is imposed by the centering term that was added to the learning rule in (13). Thus the network had two degrees of freedom. It used them to find the directions of the two branches shown in Figure 5 (left). These two branches can be viewed as the base vectors used in the compressed representation in $Y$. The network uses two base vectors in order to represent one single principal direction; each codes for one half of the Gaussian. This is because the network uses a positive code, where negative values are not allowed.

4.2 Encoding natural images

An encoder network was trained on the set of raw natural images used in [9] (images were retrieved from http://redwood.berkeley.edu/bruno/sparsenet/). The encoder had 64 output neurons and 256 input neurons. On each trial, a random patch of size 16 × 16 was extracted from a random image of the dataset, and encoded in the network. Raw grey values from the dataset were encoded as milliseconds. The standard deviation per pixel was 1.00 ms. The time step of the simulation was 0.1 ms, and each trial lasted for 200 time steps (20 ms). The ISI was 9 ms, and the parameters of the learning rule were $\eta = 0.0001$, $C = 50$ and $\lambda = 0.001$. Weights were initialized with random values between 0 and 0.3.

Figure 6: Synaptic weights learned by the network.
64 neurons were trained to represent natural image patches of size 16 × 16. Different grey scales are used in order to display positive and negative weights (black is negative, white is positive). Left: grey scale between −1 and 1. Only positive weights are visible at this scale, because they are much larger than negative weights. Right: grey scale between −0.1 and 0.1. Negative weights are visible; positive weights are beyond scale.

Synaptic weights after 100,000 trials are shown in Figure 6. There is a strong difference of amplitude between positive and negative weights; positive weights typically have values between 0 and 1, while negative weights are one order of magnitude smaller. For that reason, weights are displayed twice, with two different grey scales. An image reconstructed from spike times is shown in Figure 7. After training, the mean reconstruction error on the entire dataset was 0.25 ms/pixel. For comparison, the mean error made by Oja's principal subspace network [8] trained on the same image patches was 0.11 ms/pixel. The difference of amplitude between positive and negative weights results from the higher sensitivity of the response curves to negative weights, as shown in Figure 2. Synaptic weights with negative values have the ability to strongly delay the output spike, and even to cancel it. Synaptic weights have the shape of local filters, with antagonistic center-surround structures. This contrasts with the base vectors typically obtained from PCA of natural images, which are not local. One possible explanation lies in the response properties of the theta neurons. The response function is not linear, especially in the case of negative weights (Figure 2). This will disfavor solutions involving linear combinations of both positive and negative weights, and favor sparse representations. Hence, the network could be performing something similar to Nonlinear PCA [10].
5 Conclusions

We have shown that the dynamic response properties of spiking neurons can be effectively used as transfer functions, in order to perform computations (in this paper, PCA and Nonlinear PCA). A similar proposal was made in [11], where the PRC of neurons has been adapted to a biologically realistic STDP rule. Here we took a complementary approach, adapting the learning rule to the neuronal dynamics. We used theta neurons, which are of type I, and equivalent to quadratic integrate-and-fire neurons. Type I neurons have a PRC that is always positive. This means that spike times can encode only positive values. In order to encode values of both signs, one would need the transfer function to change its sign around a time that codes for zero. This will be possible with more complex type II neurons, where the sign of the PRC is not constant.

Figure 7: Natural image and reconstruction from spike times. The 512 × 512 image from the training set (left) was divided into 16 × 16 patches, and encoded using 64 neurons. The reconstruction (right) is derived from spike times in $X'$. The standard deviation of the encoded images was 1.00 ms/pixel. The mean reconstruction error on the entire dataset was 0.25 ms/pixel, about 2.5 times the error made by PCA.

Acknowledgments

The author thanks Samuel McKennoch and Dominique Martinez for helpful comments.

References

[1] W. Maass. Lower bounds for the computational power of networks of spiking neurons. Neural Computation, 8(1):1–40, 1996.
[2] S. M. Bohte, J. N. Kok, and H. La Poutré. SpikeProp: error-backpropagation in multi-layer networks of spiking neurons. Neurocomputing, 48:17–37, 2002.
[3] A. J. Bell and L. C. Parra. Maximising sensitivity in a spiking network. In Advances in Neural Information Processing Systems, volume 17, pages 121–128, 2005.
[4] R. F. Galán, G. B. Ermentrout, and N. N. Urban. Efficient estimation of phase-resetting curves in real neurons and its significance for neural-network modeling.
Physical Review Letters, 94:158101, 2005.
[5] G. B. Ermentrout. Type I membranes, phase resetting curves, and synchrony. Neural Computation, 8:979–1001, 1996.
[6] W. Gerstner and W. M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
[7] R. P. N. Rao and T. J. Sejnowski. Predictive sequence learning in recurrent neocortical circuits. In Advances in Neural Information Processing Systems, volume 12, pages 164–170, 2000.
[8] E. Oja. Neural networks, principal components and subspaces. International Journal of Neural Systems, 1(1):61–68, 1989.
[9] B. Olshausen and D. Field. Sparse coding of natural images produces localized, oriented, bandpass receptive fields. Nature, 381:607–609, 1996.
[10] E. Oja. The nonlinear PCA learning rule in independent component analysis. Neurocomputing, 17(1):25–46, 1997.
[11] M. Lengyel, J. Kwag, O. Paulsen, and P. Dayan. Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves. Nature Neuroscience, 8:1677–1683, 2005.
A Nonparametric Bayesian Method for Inferring Features From Similarity Judgments

Daniel J. Navarro, School of Psychology, University of Adelaide, Adelaide, SA 5005, Australia, daniel.navarro@adelaide.edu.au
Thomas L. Griffiths, Department of Psychology, UC Berkeley, Berkeley, CA 94720, USA, tom_griffiths@berkeley.edu

Abstract

The additive clustering model is widely used to infer the features of a set of stimuli from their similarities, on the assumption that similarity is a weighted linear function of common features. This paper develops a fully Bayesian formulation of the additive clustering model, using methods from nonparametric Bayesian statistics to allow the number of features to vary. We use this to explore several approaches to parameter estimation, showing that the nonparametric Bayesian approach provides a straightforward way to obtain estimates of both the number of features used in producing similarity judgments and their importance.

1 Introduction

One of the central problems in cognitive science is determining the mental representations that underlie human inferences. A variety of solutions to this problem are based on the analysis of similarity judgments. By defining a probabilistic model that accounts for the similarity between stimuli based on their representation, statistical methods can be used to infer underlying representations from human similarity judgments. The particular methods used to infer representations from similarity judgments depend on the nature of the underlying representations. For stimuli that are assumed to be represented as points in some psychological space, multidimensional scaling algorithms [1] can be used to translate similarity judgments into stimulus locations. For stimuli that are assumed to be represented in terms of a set of latent features, additive clustering is the method of choice. The original formulation of the additive clustering (ADCLUS) problem [2] is as follows.
Assume that we have data in the form of an $n \times n$ similarity matrix $S = [s_{ij}]$, where $s_{ij}$ is the judged similarity between the $i$th and $j$th of $n$ objects. Similarities are assumed to be symmetric (with $s_{ij} = s_{ji}$) and non-negative, often constrained to lie on the interval $[0, 1]$. These empirical similarities are assumed to be well-approximated by a weighted linear function of common features. Under these assumptions, a representation that uses $m$ features to describe $n$ objects is given by an $n \times m$ matrix $F = [f_{ik}]$, where $f_{ik} = 1$ if the $i$th object possesses the $k$th feature, and $f_{ik} = 0$ if it does not. Each feature has an associated non-negative saliency weight $w = (w_1, \ldots, w_m)$. When written in matrix form, the ADCLUS model seeks to uncover a feature matrix $F$ and a weight vector $w$ such that $S \approx FWF'$, where $W = \mathrm{diag}(w)$ is a diagonal matrix with nonzero elements corresponding to the saliency weights. In most applications it is assumed that there is a fixed "additive constant", a required feature possessed by all objects.

2 A Nonparametric Bayesian ADCLUS Model

To formalize additive clustering as a statistical model, it is standard practice to assume that error terms are i.i.d. Gaussian [3], yielding the model:

$$S = FWF' + E, \quad (1)$$

Figure 1: Graphical model representation of the IBP-ADCLUS model. Panel (a) shows the hierarchical structure of the ADCLUS model, and panel (b) illustrates the method by which a feature matrix is generated using the Indian Buffet Process.

where $E = [\epsilon_{ij}]$ is an $n \times n$ matrix with entries drawn from a Gaussian$(0, \sigma^2)$ distribution. Equation 1 reveals that the additive clustering model is structurally similar to the better-known factor analysis model [4], although there are several differences: most notably the constraints that $F$ is binary valued, $W$ is necessarily diagonal and $S$ is non-negative.
In any case, if we define $\mu_{ij} = \sum_k w_k f_{ik} f_{jk}$ to be the similarity predicted by a particular choice of $F$ and $w$, then:

$$s_{ij} \mid F, w, \sigma \sim \mathrm{Normal}(\mu_{ij}, \sigma^2), \quad (2)$$

where $\sigma^2$ is the variance of the Gaussian error distribution. However, self-similarities $s_{ii}$ are not modeled in additive clustering, and are generally fixed to (the same) arbitrary values for both the model and data. It is typical to treat $\sigma^2$ as a fixed parameter [5], and while this could perhaps be improved upon, we leave this open for future research. In our approach, additive clustering is framed as a form of nonparametric Bayesian inference, in which Equation 2 provides the likelihood function, and the model is completed by placing priors over the weights $w$ and the feature matrix $F$. We assume a fixed Gamma prior over feature saliencies, though it is straightforward to extend this to other, more flexible, priors. Setting a prior over binary feature matrices $F$ is more difficult, since there is generally no good reason to assume an upper bound on the number of features that might be relevant to a particular similarity matrix. For this reason we use the "nonparametric" Indian Buffet Process (IBP) [6], which provides a proper prior distribution over binary matrices with a fixed number of rows and an unbounded number of columns. The IBP can be understood by imagining an Indian buffet containing an infinite number of dishes. Each customer entering the restaurant samples a number of dishes from the buffet, with a preference for those dishes that other diners have tried. For the $k$th dish sampled by at least one of the first $n-1$ customers, the probability that the $n$th customer will also try that dish is

$$p(f_{nk} = 1 \mid F_{n-1}) = \frac{n_k}{n}, \quad (3)$$

where $F_{n-1}$ records the choices of the previous customers, and $n_k$ denotes the number of previous customers that have sampled that dish. Being adventurous, the new customer may try some hitherto untasted meals from the infinite buffet on offer.
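As an illustration, the predicted similarities $\mu_{ij} = \sum_k w_k f_{ik} f_{jk}$ can be computed for all pairs at once in the matrix form $FWF'$; this helper is a sketch, not the authors' code.

```python
import numpy as np

def predicted_similarity(F, w):
    """Return mu = F diag(w) F' for a binary n x m feature matrix F
    and a vector of non-negative saliency weights w."""
    F = np.asarray(F)
    return F @ np.diag(w) @ F.T
```

For example, two objects sharing a single feature of weight 3 get predicted similarity 3; note that only the off-diagonal entries matter, since self-similarities $s_{ii}$ are not modeled.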
The number of new dishes taken by customer $n$ follows a Poisson$(\alpha/n)$ distribution. The complete IBP-ADCLUS model becomes:

$$s_{ij} \mid F, w, \sigma \sim \mathrm{Normal}(\mu_{ij}, \sigma^2)$$
$$w_k \mid \lambda_1, \lambda_2 \sim \mathrm{Gamma}(\lambda_1, \lambda_2)$$
$$F \mid \alpha \sim \mathrm{IBP}(\alpha). \quad (4)$$

The structure of this model is illustrated graphically in Figure 1(a), and an illustration of the IBP prior is shown in Figure 1(b).

3 A Gibbs-Metropolis Sampling Scheme

As a Bayesian formulation of additive clustering, statistical inference in Equation 4 is based on the posterior distribution over feature matrices and saliency vectors, $p(F, w \mid S)$. Naturally, the ideal approach is to calculate posterior quantities using exact methods. Unfortunately, this is generally quite difficult, so a natural alternative is to use Markov chain Monte Carlo (MCMC) methods to repeatedly sample from the posterior distribution: estimates of posterior quantities can be made using these samples as proxies for the full distribution. We construct a simple MCMC scheme for the Bayesian ADCLUS model using a combination of Gibbs sampling [7] and more general Metropolis proposals [8].

Saliency Weights. We use a Metropolis scheme to resample the saliency weights. If the current saliency is $w_k$, a candidate $w_k^*$ is first generated from a Gaussian$(w_k, 0.05)$ distribution. The value of $w_k$ is then reassigned using the Metropolis update rule. If $w_{-k}$ denotes the set of all saliencies except $w_k$, this rule is

$$w_k \leftarrow \begin{cases} w_k^* & \text{with probability } a \\ w_k & \text{with probability } 1-a \end{cases}, \quad \text{where } a = \frac{p(S \mid F, w_{-k}, w_k^*)\, p(w_k^* \mid \lambda)}{p(S \mid F, w_{-k}, w_k)\, p(w_k \mid \lambda)}. \quad (5)$$

With a Gamma prior, the Metropolis sampler automatically rejects all negative valued $w_k^*$.

"Pre-Existing" Features. For features currently possessed by at least one object, assignments are updated using a standard Gibbs sampler: the value of $f_{ik}$ is drawn from the conditional posterior distribution over $f_{ik} \mid S, F_{-ik}, w$.
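The generative story behind the IBP prior (Equation 3 plus the Poisson$(\alpha/n)$ draw of new dishes) can be sketched directly; this sampler is an illustration under those two rules, not part of the paper's code.

```python
import numpy as np

def sample_ibp(n, alpha, rng):
    """Draw a binary feature matrix from the IBP prior: customer i takes
    existing dish k with probability n_k / i, then samples
    Poisson(alpha / i) brand-new dishes."""
    columns = []  # one list of 0/1 choices per dish
    counts = []   # number of customers that took each dish
    for i in range(1, n + 1):
        for k in range(len(columns)):
            take = rng.random() < counts[k] / i
            columns[k].append(int(take))
            counts[k] += int(take)
        for _ in range(rng.poisson(alpha / i)):
            columns.append([0] * (i - 1) + [1])  # only customer i has it
            counts.append(1)
    return np.array(columns).T if columns else np.zeros((n, 0), dtype=int)
```

The returned matrix has a fixed number of rows ($n$ customers) but a random number of columns, which is exactly the property that lets the number of ADCLUS features vary.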
Since feature assignments are discrete, it is easy to find this conditional probability by noting that

$$p(f_{ik} \mid S, F_{-ik}, w) \propto p(S \mid F, w)\, p(f_{ik} \mid F_{-ik}), \quad (6)$$

where $F_{-ik}$ denotes the set of all feature assignments except $f_{ik}$. The first term in this expression is just the likelihood function for the ADCLUS model, and is simple to calculate. Moreover, since feature assignments in the IBP are exchangeable, we can treat the $k$th assignment as if it were the last. Given this, Equation 3 indicates that $p(f_{ik} \mid F_{-ik}) = n_{-ik}/n$, where $n_{-ik}$ counts the number of stimuli (besides the $i$th) that currently possess the $k$th feature. The Gibbs sampler deletes all single-stimulus features with probability 1, since $n_{-ik}$ will be zero for one of the stimuli.

"New" Features. Since the IBP describes a prior over infinite feature matrices, the resampling procedure needs to accommodate the remaining (infinite) set of features that are not currently represented among the manifest features $F$. When resampling feature assignments, some finite number of those currently-latent features will become manifest. When sampling from the conditional prior over feature assignments for the $i$th stimulus, we hold the feature assignments fixed for all other stimuli, so this is equivalent to sampling some number of "singleton" features (i.e., features possessed only by stimulus $i$) from the conditional prior, which is Poisson$(\alpha/n)$ as noted previously.

When working with this algorithm, we typically run several chains. For each chain, we initialize the Gibbs-Metropolis sampler more or less arbitrarily. After a "burn-in" period is allowed for the sampler to converge to a sensible location (i.e., for the state to represent a sample from the posterior), we make a "draw" by recording the state of the sampler, leaving a "lag" of several iterations between successive draws to reduce the autocorrelation between samples. When doing so, it is important to ensure that the Markov chains converge on the target distribution $p(F, w \mid S)$.
We did so by inspecting the time series plot formed by graphing the log posterior probability of successive samples. To illustrate this, one of the chains used in our simulations (see Section 5) is displayed in Figure 2, with nine parallel chains used for comparison: the time series plot shows no long-term trends, and the different chains are visually indistinguishable from one another. Although elaborations and refinements are possible for both the sampler [9] and the convergence check [10], we have found this approach to be reasonably effective for the moderate-sized problems considered in our applications.

4 Four Estimators for the ADCLUS Model

Since the introduction of the additive clustering model, a range of algorithms have been used to infer features, including "subset selection" [2], expectation maximization [3], continuous approximations [11] and stochastic hillclimbing [5], among others. A review, as well as an effective combinatorial search algorithm, is given in [12]. Curiously, while the plethora of algorithms available for extracting estimates of $F$ and $w$ has been discussed in the literature, the variety in the choice of estimator has been largely overlooked, to our knowledge. One advantage of the IBP-ADCLUS approach is that it allows us to discuss a range of different estimators within a single framework. We will explore estimators based on computing the posterior distribution over $F$ and $w$ given $S$. This includes estimators based on maximum a posteriori (MAP) estimation, corresponding to the value of a variable with highest posterior probability, and taking expectations over the posterior distribution.

Figure 2: Smoothed time series showing log-posterior probabilities for successive draws from the Gibbs-Metropolis sampler, for simulated similarity data with n = 16.
The bold line shows a single chain, while the dotted lines show the remaining nine chains.

Conditional MAP Estimation. Much of the literature defines an estimator conditional on the assumption that the number of features in the model, m, is fixed [3][11][12]. These approaches seek to estimate the values of F and w that jointly maximize some utility function conditional on this known m. If we take the posterior probability as our measure of utility, the estimators become

  \hat{F}_1, \hat{w}_1 = \arg\max_{F,w} p(F, w | S, m).   (7)

Estimating the dimension is harder. The natural (MAP) estimate for m is easy to state:

  \hat{m}_1 = \arg\max_m p(m | S) = \arg\max_m \left[ \sum_{F \in \mathcal{F}_m} \int p(F, w | S) \, dw \right],   (8)

where \mathcal{F}_m denotes the set of feature matrices containing m unique features. In practice, given the difficulty of working with Equation 8, it is typical to fix m on the basis of intuition, or via some heuristic method.

MAP Feature Estimation. In the previous approach, m is given primacy, since F and w cannot be estimated until it is known; no distinction is made between F and w. In many practical situations [13], this does not reflect the priorities of the researcher. Often the feature matrix F is the psychologically relevant variable, with w and m being nuisance parameters. In such cases, it is natural to marginalize over w when estimating F, and to let the estimated feature matrix itself determine m. That is, we first select

  \hat{F}_2 = \arg\max_F p(F | S) = \arg\max_F \left[ \int p(F, w | S) \, dw \right].   (9)

Notice that \hat{F}_2 provides an implicit estimate \hat{m}_2, which may differ from \hat{m}_1. The saliencies are estimated after \hat{F}_2 is chosen, via conditional MAP estimation:

  \hat{w}_2 = \arg\max_w p(w | \hat{F}_2, S).   (10)

This approach is typical of existing (parametric) Bayesian approaches to additive clustering [5][14], where analytic approximations to p(F | S) are used for expediency.

Joint MAP Estimation. Both approaches discussed so far require some aspects of the model to be estimated before others.
While the rationales for this constraint differ, both approaches seem sensible. Another approach, not as common in the literature, is to jointly estimate F and w without conditioning on m, yielding the MAP estimators

  \hat{F}_3, \hat{w}_3 = \arg\max_{F,w} p(F, w | S).   (11)

Early papers [2] recognized that this approach can be prone to overfitting, and thus requires that the prior place some emphasis on parsimony. However, many theoretically-motivated priors (including the IBP) allow the researcher to emphasize parsimony, and some frequentist methods used in ADCLUS-like models apply penalty functions for this reason [15].

Figure 3: (a) Posterior distributions over the number of features p(m | S_o) in simulations containing m_t = 6, 8 and 10 latent features (8, 16 and 32 objects, respectively). (b) Variance accounted for by the four similarity estimators \hat{S}, where the target is either the observed training data S_o, a new test data set S_n, or the true similarity matrix S_t:

  n    target   S^1   S^2   S^3   S^4
  8    S_o       79    81    79    84
       S_n       78    81    78    84
       S_t       87    88    87    92
  16   S_o       89    88    89    90
       S_n       90    88    90    90
       S_t       96    95    96    97
  32   S_o       91    91    91    91
       S_n       91    91    91    91
       S_t      100   100   100   100

Approximate Expectations. A fourth approach aims to summarize the posterior distribution by looking at the marginal posterior probabilities associated with particular features. The probability that a particular feature f_k belongs in the representation is given by

  p(f_k | S) = \sum_{F : f_k \in F} p(F | S).   (12)

Although this approach has never been applied in the ADCLUS literature, the concept is implicit in more general discussions of mental representation [16] that ask whether or not a specific predicate is likely to be represented. Letting \hat{r}_k = p(f_k | S) denote the posterior probability that feature f_k is manifest, we can construct a vector \hat{r} = [\hat{r}_k] that contains these probabilities for all 2^n possible features.
Although this vector discards the covariation between features across the posterior distribution, it is useful both theoretically (for testing hypotheses about specific features) and pragmatically, since the expected posterior similarities can be written as

  E[s^*_{ij} | S] = \sum_{f_k} f_{ik} f_{jk} \, \hat{r}_k \hat{w}_k,   (13)

where \hat{w}_k = E[w_k | f_k, S] denotes the expected saliency for feature f_k on those occasions when it is represented (Equation 13 relies on the fact that features combine linearly in the ADCLUS model, and is straightforward to derive). In practice, it is impossible to look at all 2^n features, so one would typically report only those features for which \hat{r}_k is large. Since these tend to be the features that make the largest contributions to E[s^*_{ij} | S], there is a sense in which this approach approximates the expected posterior similarities.

5 Recovering Noisy Feature Matrices

By using the IBP-ADCLUS framework, we can compare the performance of the four estimators in a reasonable fashion. Loosely following [12], we generated noisy similarity matrices with n = 8, 16 and 32 stimuli, based on "true" feature matrices F_t in which m_t = 2 log_2(n), where each object possessed each feature with probability 0.5. Saliency weights w_t were generated uniformly from the interval [1, 3], but were subsequently rescaled to ensure that the "true" similarities S_t had variance 1. Two sets of Gaussian noise were injected into the similarities with fixed σ = 0.3, ensuring that the noise accounted for approximately 10% of the variance in the "observed" data matrix S_o and the "new" matrix S_n. We fixed α = 2 for all simulations: since the number of manifest features in an IBP model follows a Poisson(α H_n) distribution (where H_n is the nth harmonic number) [6], the prior has a strong bias toward parsimony. The prior expected number of features is approximately 5.4, 6.8 and 8.1 (as compared to the true values of 6, 8 and 10).
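The "approximate expectation" quantities of Equations 12 and 13 can be estimated from posterior draws by identifying each feature with the set of stimuli that own it: \hat{r}_k is the fraction of draws containing that feature, and \hat{w}_k is its average saliency on those draws. This helper is an illustrative sketch, not the authors' code:

```python
import numpy as np

def feature_marginals(sampled_Fs, sampled_ws):
    """Approximate p(f_k | S) (Equation 12) and the conditional saliency
    E[w_k | f_k, S] from posterior draws.  A feature is identified by the
    tuple of stimulus indices that own it; its marginal probability is the
    fraction of draws in which that feature appears."""
    counts, weight_sums = {}, {}
    n_draws = len(sampled_Fs)
    for F, w in zip(sampled_Fs, sampled_ws):
        for col, wk in zip(np.asarray(F).T, w):   # iterate feature columns
            key = tuple(int(i) for i in np.flatnonzero(col))
            if key:
                counts[key] = counts.get(key, 0) + 1
                weight_sums[key] = weight_sums.get(key, 0.0) + float(wk)
    return {key: (counts[key] / n_draws, weight_sums[key] / counts[key])
            for key in counts}
```

Sorting the resulting dictionary by marginal probability and keeping the top entries yields tables in the style of Table 1(b).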
We approximated the posterior distribution p(F, w | S) by drawing samples in the following manner. For a given similarity matrix, 10 Gibbs-Metropolis chains were run from different start points, and 1000 samples were drawn from each. The chains were burnt in for 1000 iterations, and a lag of 10 iterations was used between successive samples. Visual inspection suggested that five chains in the n = 32 condition did not converge: log-posteriors were low, differed substantially from one another, and had noticeable positive slope. In this case, the estimators were constructed from the five remaining chains.

Figure 4: Posterior distributions over the number of features when the Bayesian ADCLUS model is applied to (a) the numbers data, (b) the countries data and (c) the letters data.

Table 1: Two representations of the numbers data. (a) The representation reported in [3], extracted using an EM algorithm with the number of features fixed at eight. (b) The 10 most probable features extracted using the Bayesian ADCLUS model; the first column gives the posterior probability that a particular feature belongs in the representation, and the second column displays the average saliency of a feature in the event that it is included.

  (a) FEATURE            WEIGHT
      2 4 8              0.444
      0 1 2              0.345
      3 6 9              0.331
      6 7 8 9            0.291
      2 3 4 5 6          0.255
      1 3 5 7 9          0.216
      1 2 3 4            0.214
      4 5 6 7 8          0.172
      additive constant  0.148

  (b) FEATURE            PROB.  WEIGHT
      3 6 9              0.79   0.326
      2 4 8              0.70   0.385
      0 1 2              0.69   0.266
      2 3 4 5 6          0.59   0.240
      6 7 8 9            0.57   0.262
      0 1 2 3 4          0.42   0.173
      2 4 6 8            0.41   0.387
      1 3 5 7 9          0.40   0.223
      4 5 6 7 8          0.34   0.181
      7 8 9              0.26   0.293
      additive constant  1.00   0.075

Figure 3(a) shows the posterior distributions over the number of features m for each of the three simulation conditions.
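The data-generation recipe of Section 5 can be sketched directly. One detail the text leaves open is how the noise is kept symmetric; drawing it for the upper triangle and mirroring it is our assumption:

```python
import numpy as np

def simulate_similarity(n, sigma=0.3, seed=0):
    """Generate a noisy similarity matrix following the Section 5 recipe:
    m_t = 2 log2(n) true features, each owned with probability 0.5,
    saliencies drawn from U[1, 3] and rescaled so the true similarities
    have unit variance, plus N(0, sigma^2) noise on off-diagonal entries."""
    rng = np.random.default_rng(seed)
    m = int(2 * np.log2(n))
    F = (rng.random((n, m)) < 0.5).astype(float)
    w = rng.uniform(1.0, 3.0, m)
    iu = np.triu_indices(n, k=1)
    St = (F * w) @ F.T
    w /= St[iu].std()                  # rescale saliencies -> unit variance
    St = (F * w) @ F.T
    noise = np.zeros((n, n))
    noise[iu] = rng.normal(0.0, sigma, len(iu[0]))
    noise += noise.T                   # symmetric noise, zero diagonal
    return F, w, St, St + noise
```

With σ = 0.3 and unit-variance true similarities, the noise accounts for roughly 0.09/1.09 ≈ 8-10% of the variance in the observed matrix, as stated in the text.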
There is a tendency to underestimate the number of features when provided with small similarity matrices, with the modal number being 3, 7 and 10. However, since the posterior estimate of m is below the prior estimate when n = 8, this effect appears to be data-driven, as 79% of the variance in the data matrix S_o can be accounted for using only three features. Since each approach allows the construction of an estimated similarity matrix \hat{S}, a natural comparison is to look at the proportion of variance this estimate accounts for in the observed data S_o, the novel data set S_n, and the true matrix S_t. In view of the noise model used to construct these matrices, the "ideal" answers for these three should be around 90%, 90% and 100% respectively. When n = 32, this profile is observed for all four estimators, suggesting that in this case all four estimators have converged appropriately. For the smaller matrices, the conditional MAP and joint MAP estimators (\hat{S}_1 and \hat{S}_3) agree closely. The MAP feature approach \hat{S}_2 appears to perform slightly better, though the difference is very small. The expectation method \hat{S}_4 provides the best estimate.

6 Modeling Empirical Similarities

We now turn to the analysis of empirical data. Since space constraints preclude detailed reporting of all four estimators with respect to all data sets, we limit the discussion to the most novel IBP-ADCLUS estimators, namely the direct estimates of dimensionality provided through Equation 8, and the features extracted via "approximate expectation".

Featural representations of numbers. A standard data set used in evaluating additive clustering models measures the conceptual similarity of the numbers 0 through 9 [17]. This data set is often used as a benchmark due to the complex interrelationships between the numbers. Table 1(a) shows an eight-feature representation of these data, taken from [3], who applied a maximum likelihood approach.
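The variance-accounted-for statistic reported in Figure 3(b) is presumably the usual R², computed over the off-diagonal similarities; a sketch under that assumption:

```python
import numpy as np

def variance_accounted_for(S_target, S_hat):
    """Percentage of variance in the target similarities explained by a
    model estimate, computed over off-diagonal entries only (self-similarity
    is not modeled).  This is an assumed formula: a standard R^2, not code
    from the paper."""
    iu = np.triu_indices_from(S_target, k=1)
    resid = S_target[iu] - S_hat[iu]
    total = S_target[iu] - S_target[iu].mean()
    return 100.0 * (1.0 - np.sum(resid ** 2) / np.sum(total ** 2))
```

Evaluating each estimator's \hat{S} against S_o, S_n and S_t with this function yields numbers in the format of the table in Figure 3(b).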
This representation explains 90.9% of the variance, with features corresponding to arithmetic concepts and to numerical magnitude. Fixing σ = 0.05 and α = 0.5, we drew 10,000 lagged samples to construct estimates. Although the posterior probability is spread over a large number of feature matrices, 92.6% of sampled matrices had between 9 and 13 features. The modal number of represented features was \hat{m}_1 = 11, with 27.2% of the posterior mass. The posterior distribution over the number of features is shown in Figure 4(a).

Table 2: Featural representation of the similarity between 16 countries. The table shows the eight highest-probability features extracted by the Bayesian ADCLUS model. Each column corresponds to a single feature, with its members listed row by row and the associated probabilities and saliencies shown below. The average weight associated with the additive constant is 0.035.

  FEATURE  Italy Vietnam Germany Zimbabwe Zimbabwe Iraq Zimbabwe Philippines
           Germany China Russia Nigeria Nigeria Libya Nigeria Indonesia
           Spain Japan USA Cuba Iraq Philippines China Jamaica
           Libya Indonesia Japan Iraq Libya
  PROB.    1.00 1.00 0.99 0.62 0.52 0.36 0.33 0.25
  WEIGHT   0.593 0.421 0.267 0.467 0.209 0.373 0.299 0.311

Table 3: Featural representation of the perceptual similarity between 26 capital letters. The table shows the ten highest-probability features extracted by the Bayesian ADCLUS model. Each column corresponds to a single feature, with its members listed row by row and the associated probabilities and saliencies shown below. The average weight associated with the additive constant is 0.003.

  FEATURE  M I C D P E E K B C
           N L G O R F H X G J
           W T Q R U
  PROB.    1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.98 0.92
  WEIGHT   0.686 0.341 0.623 0.321 0.465 0.653 0.322 0.427 0.226 0.225
Since none of the existing literature has used the “approximate expectation” approach to find highly probable features, it is useful to note the strong similarities between Table 1(a) and Table 1(b), which reports the ten highest-probability features across the entire posterior distribution. Applying this approach to obtain an estimate of the posterior predictive similarities ˆS4 revealed that this matrix accounts for 97.4% of the variance in the data. Featural representations of countries. A second application is to human forced-choice judgments of the similarities between 16 countries [18]. In this task, participants were shown lists of four countries and asked to pick out the two countries most similar to each other. Applying the Bayesian model to these data with σ = 0.1 reveals that only eight features appear in the representation more than 25% of the time. Given this, it is not surprising that the posterior distribution over the number of features, shown in Figure 4 (b), indicates that the modal number of features is eight. The eight most probable features are listed in Table 2. The “approximate expectation” method explains 85.4% of the variance, as compared to the 78.1% found by a MAP feature approach [18]. The features are interpretable, corresponding to a range of geographical, historical, and economic regularities. Featural representations of letters. As a third example, we analyzed a somewhat larger data set, consisting of kindergarten children’s assessment of the perceptual similarity of the 26 capital letters [19]. In this case, we used σ = 0.05, and the Bayesian model accounted for 89.2% of the variance in the children’s similarity judgments. The posterior distribution over the number of represented features is shown in Figure 4(c). Table 3 shows the ten features that appeared in more than 90% of samples from the posterior. The model recovers an extremely intuitive set of overlapping features. 
For example, it picks out the long strokes in I, L, and T, and the elliptical forms of D, O, and Q.

7 Discussion

Learning how similarity relations are represented is a difficult modeling problem. Additive clustering provides a framework for learning featural representations of stimulus similarity, but remains underused due to the difficulties associated with inference. By adopting a Bayesian approach to additive clustering, we are able to obtain a richer characterization of the structure behind human similarity judgments. Moreover, by using nonparametric Bayesian techniques to place a prior distribution over infinite binary feature matrices via the Indian buffet process, we can allow the data to determine the number of features that the algorithm recovers. This is theoretically important as well as pragmatically useful. As noted by [16], people are capable of recognizing that individual stimuli possess an arbitrarily large number of characteristics, but in any particular context will make judgments using only a finite, usually small number of properties that form part of the current mental representation. In other words, by moving to a Bayesian nonparametric form, we are able to bring the ADCLUS model closer to the kinds of assumptions that are made by psychological theories.

Acknowledgements. TLG was supported by NSF grant number 0631518, and DJN by ARC grants DP-0451793 and DP-0773794. We thank Nancy Briggs, Simon Dennis and Michael Lee for helpful comments on this work.

References

[1] W. S. Torgerson. Theory and Methods of Scaling. Wiley, New York, 1958. [2] R. N. Shepard and P. Arabie. Additive clustering: Representation of similarities as combinations of discrete overlapping properties. Psychological Review, 86:87–123, 1979. [3] J. B. Tenenbaum. Learning the structure of similarity. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems, volume 8, pages 3–9. MIT Press, Cambridge, MA, 1996. [4] L. L.
Thurstone. Multiple-Factor Analysis. University of Chicago Press, Chicago, 1947. [5] M. D. Lee. Generating additive clustering models with limited stochastic complexity. Journal of Classification, 19:69–85, 2002. [6] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. Technical Report 2005-001, Gatsby Computational Neuroscience Unit, 2005. [7] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, 1984. [8] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21:1087–1092, 1953. [9] M.-H. Chen, Q.-M. Shao, and J. G. Ibrahim. Monte Carlo Methods in Bayesian Computation. Springer, New York, 2000. [10] M. K. Cowles and B. P. Carlin. Markov chain Monte Carlo convergence diagnostics: A comparative review. Journal of the American Statistical Association, 91:883–904, 1996. [11] P. Arabie and J. D. Carroll. MAPCLUS: A mathematical programming approach to fitting the ADCLUS model. Psychometrika, 45:211–235, 1980. [12] W. Ruml. Constructing distributed representations using additive clustering. In Advances in Neural Information Processing Systems 14. MIT Press, Cambridge, MA, 2001. [13] M. D. Lee and D. J. Navarro. Extending the ALCOVE model of category learning to featural stimulus domains. Psychonomic Bulletin and Review, 9:43–58, 2002. [14] D. J. Navarro. Representing Stimulus Similarity. Ph.D. thesis, University of Adelaide, 2003. [15] L. E. Frank and W. J. Heiser. Feature selection in Feature Network Models: Finding predictive subsets of features with the Positive Lasso. British Journal of Mathematical and Statistical Psychology, in press. [16] D. L. Medin and A. Ortony. Psychological essentialism. In Similarity and Analogical Reasoning. Cambridge University Press, New York, 1989.
[17] R. N. Shepard, D. W. Kilpatric, and J. P. Cunningham. The internal representation of numbers. Cognitive Psychology, 7:82–138, 1975. [18] D. J. Navarro and M. D. Lee. Commonalities and distinctions in featural stimulus representations. In Proceedings of the 24th Annual Conference of the Cognitive Science Society, pages 685–690, Mahwah, NJ, 2002. Lawrence Erlbaum. [19] E. Z. Rothkopf. A measure of stimulus similarity and errors in some paired-associate learning tasks. Journal of Experimental Psychology, 53:94–101, 1957.
Hierarchical Dirichlet Processes with Random Effects

Seyoung Kim, Department of Computer Science, University of California, Irvine, Irvine, CA 92697-3435, sykim@ics.uci.edu
Padhraic Smyth, Department of Computer Science, University of California, Irvine, Irvine, CA 92697-3435, smyth@ics.uci.edu

Abstract

Data sets involving multiple groups with shared characteristics frequently arise in practice. In this paper we extend hierarchical Dirichlet processes to model such data. Each group is assumed to be generated from a template mixture model with group-level variability in both the mixing proportions and the component parameters. Variability in mixing proportions across groups is handled using hierarchical Dirichlet processes, which also allow for automatic determination of the number of components. In addition, each group is allowed to have its own component parameters, coming from a prior described by a template mixture model. This group-level variability in the component parameters is handled using a random effects model. We present a Markov chain Monte Carlo (MCMC) sampling algorithm to estimate the model parameters and demonstrate the method by applying it to the problem of modeling spatial brain activation patterns across multiple images collected via functional magnetic resonance imaging (fMRI).

1 Introduction

Hierarchical Dirichlet processes (DPs) (Teh et al., 2006) provide a flexible framework for probabilistic modeling when data are observed in a grouped fashion and each group can be thought of as being generated from a mixture model. In hierarchical DPs, all of, or a subset of, the mixture components are shared by different groups, and the number of such components is inferred from the data using a DP prior. Variability across groups is modeled by allowing different mixing proportions for different groups. In this paper we focus on the problem of modeling systematic variation in the shared mixture component parameters, and not just in the mixing proportions.
We will use the problem of modeling spatial fMRI activation across multiple brain images as a motivating application, where the images are obtained from one or more subjects performing the same cognitive tasks. Figure 1 illustrates the basic idea of our proposed model. We assume that there is an unknown true template for the mixture component parameters, and that the mixture components for each group are noisy realizations of the template components. For our application, groups and data points correspond to images and pixels. Given grouped data (e.g., a set of images) we are interested in learning both the overall template model and the random variation relative to the template for each group. For the fMRI application, we model the images as mixtures of activation patterns, assigning a mixture component to each spatial activation cluster in an image. As shown in Figure 1, our goal is to extract activation patterns that are common across multiple images, while allowing for variation in fMRI signal intensity and activation location in individual images. In our proposed approach, the amount of variation (called random effects) from the overall true component parameters is modeled as coming from a prior distribution on group-level component parameters (Gelman et al., 2004). By combining hierarchical DPs with a random effects model we let both mixing proportions and mixture component parameters adapt to the data in each group. Although we focus on image data in this paper, the proposed approach is applicable to more general problems of modeling group-level random variation with mixture models.

Figure 1: Illustration of group-level variations from the template model (template mixture model → group-level mixture model → fMRI brain activation).

Table 1: Group-level mixture component parameters for hierarchical DPs, transformed DPs, and hierarchical DPs with random effects as proposed in this paper.

  Model                                  Group-level mixture components
  Hierarchical DPs                       θ_a × m_a, θ_b × m_b
  Transformed DPs                        θ_a + ∆_{a1}, …, θ_a + ∆_{a m_a}, θ_b + ∆_{b1}, …, θ_b + ∆_{b m_b}
  Hierarchical DPs with random effects   (θ_a + ∆_a) × m_a, (θ_b + ∆_b) × m_b

Hierarchical DPs and transformed DPs (Sudderth et al., 2005) both address a similar problem of modeling groups of data using mixture models with mixture components shared across groups. Table 1 compares the basic ideas underlying these two models with the model we propose in this paper. Given a template mixture of two components with parameters θ_a and θ_b, in hierarchical DPs a mixture model for each group can have m_a and m_b exact copies (commonly known as tables in the Chinese restaurant process representation) of each of the two components in the template; thus, there is no notion of random variation in component parameters across groups. In transformed DPs, each of the copies of θ_a and θ_b receives a transformation parameter ∆_{a1}, …, ∆_{a m_a} and ∆_{b1}, …, ∆_{b m_b}. This is not suitable for modeling the type of group variation illustrated in Figure 1, because there is no direct way to enforce ∆_{a1} = … = ∆_{a m_a} and ∆_{b1} = … = ∆_{b m_b} so as to obtain the single offsets ∆_a and ∆_b used in our proposed model. In this general context the model we propose here can be viewed as closely related to both hierarchical DPs and transformed DPs, but having application to quite different types of problems in practice, e.g., as an intermediate between the highly constrained variation allowed by the hierarchical DP and the relatively unconstrained variation present in the computer vision scenes to which the transformed DP has been applied (Sudderth et al., 2005). From an applications viewpoint, the use of DPs for modeling multiple fMRI brain images is novel and shows considerable promise as a new tool for analyzing such data.
The majority of existing statistical work on fMRI analysis is based on voxel-by-voxel hypothesis testing, with relatively little work on modeling the spatial aspect of the problem. One exception is the approach of Penny and Friston (2003), who proposed a probabilistic mixture model for spatial activation modeling and demonstrated its advantages over voxel-wise analysis. The application of our proposed model to fMRI data can be viewed as a generalization of Penny and Friston's work in three different aspects: (a) allowing for analysis of multiple images rather than a single image, (b) learning common activation clusters and systematic variation in activation across these images, and (c) automatically learning the number of components in the model in a data-driven fashion.

2 Models

2.1 Dirichlet process mixture models

A Dirichlet process DP(α_0, G) with a concentration parameter α_0 > 0 and a base measure G can be used as a nonparametric prior distribution on the mixing proportion parameters of a mixture model when the number of components is unknown a priori (Rasmussen, 2000). The generative process for a mixture of Gaussian distributions with component means µ_k and DP prior DP(α_0, G) can be written, using a stick-breaking construction (Sethuraman, 1994), as:

  π | α_0 ∼ Stick(α_0),   µ_k | G ∼ N_G(µ_0, ψ_0^2),
  z_i | π ∼ π,   y_i | z_i, (µ_k)_{k=1}^∞, σ^2 ∼ N(µ_{z_i}, σ^2),

where y_i, i = 1, …, N are the observed data and z_i is a component label for y_i. It can be shown that the labels z_i have the following clustering property:

  z_i | z_1, …, z_{i−1}, α_0 ∼ \sum_{k=1}^{K} \frac{n_k^{−i}}{i − 1 + α_0} δ_k + \frac{α_0}{i − 1 + α_0} δ_{k^{new}},

where n_k^{−i} represents the number of labels z_{i′}, i′ ≠ i, assigned to component k. The probability that z_i is assigned to a new component is proportional to α_0. Note that a component with more observations already assigned to it has a higher probability of attracting the next observation.

Figure 2: Plate diagrams for (a) DP mixtures, (b) hierarchical DPs and (c) hierarchical DPs with random effects.

2.2 Hierarchical Dirichlet processes

When multiple groups of data are present and each group can be modeled as a mixture, it is often useful to let different groups share mixture components. In hierarchical DPs (Teh et al., 2006), components are shared by different groups with varying mixing proportions for each group, and the number of components in the model can be inferred from the data. Let y_{ji} be the ith data point (i = 1, …, N) in group j (j = 1, …, J), β the global mixing proportions, π_j the mixing proportions for group j, and α_0, γ, H the hyperparameters of the DP. Then the hierarchical DP can be written as follows, using a stick-breaking construction:

  β | γ ∼ Stick(γ),   π_j | α_0, β ∼ DP(α_0, β),   z_{ji} | π_j ∼ π_j,
  µ_k | H ∼ N_H(µ_0, ψ_0^2),   y_{ji} | z_{ji}, (µ_k)_{k=1}^∞, σ^2 ∼ N(µ_{z_{ji}}, σ^2).   (1)

The plate diagram in Figure 2(b) illustrates the generative process of this model. Mixture components described by the µ_k's can be shared across the J groups. The hierarchical DP has clustering properties similar to those of DP mixtures, i.e.,

  p(h_{ji} | h_{−ji}, α_0) ∼ \sum_{t=1}^{T_j} \frac{n_{jt}^{−i}}{n_j − 1 + α_0} δ_t + \frac{α_0}{n_j − 1 + α_0} δ_{t^{new}},   (2)

  p(l_{jt} | l_{−jt}, γ) ∼ \sum_{k=1}^{K} \frac{m_k^{−t}}{\sum_u m_u − 1 + γ} δ_k + \frac{γ}{\sum_u m_u − 1 + γ} δ_{k^{new}},   (3)

where h_{ji} represents the mapping of each data item y_{ji} to one of T_j local clusters within group j, and l_{jt} maps the tth local cluster in group j to one of K global clusters shared by all of the J groups. The probability that a new local cluster is generated within group j is proportional to α_0. This new cluster is then assigned to a global cluster according to Equation (3). Notice that more than one local cluster in group j can be linked to the same global cluster.
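The clustering property in Equation (2) amounts to a Chinese-restaurant-style draw: an existing local cluster attracts the data item in proportion to its occupancy, and a new cluster is created with probability proportional to α_0. A minimal sketch (the function name is ours):

```python
import numpy as np

def sample_local_cluster(counts, alpha0, rng):
    """One draw from the clustering property of Equation (2): an existing
    local cluster t is chosen with probability proportional to its count
    n_jt, and a brand-new cluster with probability proportional to alpha0.
    The returned index equals len(counts) when a new cluster is created."""
    probs = np.append(np.asarray(counts, dtype=float), alpha0)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

In the full hierarchical sampler, a newly created local cluster would then be linked to a global cluster by an analogous draw over the m_k counts, following Equation (3).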
It is the assignment of data items to K global clusters via local cluster labels that is typically of interest.

3 Hierarchical Dirichlet processes with random effects

We now propose an extension of the standard hierarchical DP to a version that includes random effects. We first develop our model for the case of Gaussian density components, and later in the paper apply this model to the specific problem of modeling activation patterns in fMRI brain images. We take µ_k | H ∼ N_H(µ_0, ψ_0^2) and y_{ji} | z_{ji}, (µ_k)_{k=1}^∞, σ^2 ∼ N(µ_{z_{ji}}, σ^2) in Equation (1) and add random effects as follows:

  µ_k | H ∼ N_H(µ_0, ψ_0^2),   τ_k^2 | R ∼ Inv-χ^2_R(v_0, s_0^2),
  u_{jk} | µ_k, τ_k^2 ∼ N(µ_k, τ_k^2),   y_{ji} | z_{ji}, (u_{jk})_{k=1}^∞ ∼ N(u_{j z_{ji}}, σ^2).   (4)

Each group j has its own component mean u_{jk} for the kth component, and these group-level parameters come from a common prior distribution N(µ_k, τ_k^2). Thus, µ_k can be viewed as a template, and u_{jk} as a noisy observation of the template for group j with variance τ_k^2. The random effects parameters u_{jk} are generated once per group and shared by the local clusters in group j that are assigned to the same global cluster k.

For inference we use an MCMC sampling scheme that is based on the clustering property given in Equations (2) and (3). In each iteration we alternately sample the labels h = {h_{ji} for all j, i}, l = {l_{jt} for all j, t} and the component parameters µ = {µ_k for all k}, τ^2 = {τ_k^2 for all k}, u = {u_{jk} for all k, j}. We sample the h_{ji}'s using the following conditional distribution:

  p(h_{ji} = t | h_{−ji}, u, µ, τ^2, y) ∝
    n_{jt}^{−i} \, p(y_{ji} | u_{jk}, σ^2)                if t was used,
    α_0 \, p(y_{ji} | h_{−ji}, u, µ, τ^2, γ)              if t = t^{new},

where

  p(y_{ji} | h_{−ji}, u, µ, τ^2, γ)
    = \sum_{k ∈ A} \frac{m_k}{\sum_k m_k + γ} \, p(y_{ji} | u_{jk})   (5a)
    + \sum_{k ∈ B} \frac{m_k}{\sum_k m_k + γ} \int p(y_{ji} | u_{jk}) \, p(u_{jk} | µ_k, τ_k^2) \, du_{jk}   (5b)
    + \frac{γ}{\sum_k m_k + γ} \iiint p(y_{ji} | u_{jk}) \, p(u_{jk} | µ_k, τ_k^2) \, N_H(µ_0, ψ_0^2) \, Inv-χ^2_R(v_0, s_0^2) \, du_{jk} \, dµ_k \, dτ_k^2.   (5c)

In Equation (5a) the summation is over components in A = {k | some h_{ji′} for i′ ≠ i is assigned to k}, representing global clusters that already have some local clusters in group j assigned to them. In this case, since u_{jk} is already known, we can simply compute the likelihood p(y_{ji} | u_{jk}). In Equation (5b) the summation is over B = {k | no h_{ji′} for i′ ≠ i is assigned to k}, representing global clusters that have not yet been assigned in group j. For conjugate priors we can integrate over the unknown random effects parameter u_{jk} to compute the likelihood using N(y_{ji} | µ_k, τ_k^2 + σ^2), and sample u_{jk} from the posterior distribution p(u_{jk} | µ_k, τ_k^2, y_{ji}). Equation (5c) models the case where a new global component is generated. The integral cannot be evaluated analytically, so we approximate it by sampling new values for µ_k, τ_k^2 and u_{jk} from the prior distributions and evaluating p(y_{ji} | u_{jk}) given these new values (Neal, 1998). Samples for the l_{jt}'s can be obtained from the conditional distribution

  p(l_{jt} = k | l_{−jt}, u, µ, τ^2, y) ∝
    m_k^{−jt} \prod_{i: h_{ji} = t} p(y_{ji} | u_{jk}, σ^2)                                          if k was used in group j,
    m_k^{−jt} \int \prod_{i: h_{ji} = t} p(y_{ji} | u_{jk}, σ^2) \, p(u_{jk} | µ_k, τ_k^2) \, du_{jk}   if k is new in group j,
    γ \iiint \prod_{i: h_{ji} = t} p(y_{ji} | u_{jk}) \, p(u_{jk} | µ_k, τ_k^2) \, N_H(µ_0, ψ_0^2) \, Inv-χ^2_R(v_0, s_0^2) \, du_{jk} \, dµ_k \, dτ_k^2   if k is a new component.   (6)

Figure 3: Histograms for simulated data with mixture density estimates overlaid.
As in the sampling of hji, if k is new in group j we can evaluate the integral analytically and sample ujk from the posterior distribution. If k is a new component we approximate the integral by sampling new values for µk, τ 2 k, and ujk from the prior and evaluating the likelihood. Given h and l we can update the component parameters µ, τ and u using standard Gibbs sampling for a normal hierarchical model (Gelman et al., 2006). In practice, this Markov chain can mix poorly and get stuck in local maxima where the labels for two group-level components are swapped relative to the same two components in the template. To address this problem and restore the correct correspondence between template components and group-level components we propose a move that swaps the labels for two group-level components at the end of each sampling iteration and accepts the move based on a Metropolis-Hastings acceptance rule. To illustrate the proposed model we simulated data from a mixture of one-dimensional Gaussian densities with known parameters and tested if the sampling algorithm can recover the parameters from the data. From a template mixture model with three mixture components we generated 10 group-level mixture models by adding random effects in the form of mean-shifts to the template means, sampled from N(0, 1). Using varying mixing proportions for each group we generated 200 samples from each of the 10 mixture models. Histograms for the samples in eight groups are shown in Figure 3(a). The estimated models after 1000 iterations of the MCMC algorithm are overlaid. We can see that the sampling algorithm was able to learn the original model successfully despite the variability in both component means and mixing proportions of the mixture model. 4 A model for fMRI activation surfaces We now apply the general framework of the hierarchical DP with random effects to the problem of detecting and characterizing spatial activation patterns in fMRI brain images. 
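The one-dimensional simulation used above to validate the sampler (template means plus N(0, 1) random effects, varying mixing proportions, 10 groups of 200 samples) can be sketched as follows; the within-component noise sigma is an assumed value, since the text does not state it:

```python
import numpy as np

def simulate_groups(template_means, n_groups=10, n_per_group=200,
                    tau=1.0, sigma=0.5, seed=0):
    """Simulate grouped data from a template Gaussian mixture: each group's
    component means are the template means plus N(0, tau^2) random effects,
    and each group uses its own mixing proportions."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(template_means, float)
    K = len(mu)
    data, group_means = [], []
    for _ in range(n_groups):
        u = mu + rng.normal(0.0, tau, K)        # group-level random effects
        pi = rng.dirichlet(np.ones(K))          # varying mixing proportions
        z = rng.choice(K, size=n_per_group, p=pi)
        data.append(rng.normal(u[z], sigma))
        group_means.append(u)
    return np.array(data), np.array(group_means)
```

Recovering both the template means and the per-group offsets from such data is exactly the task the MCMC scheme of Section 3 is shown to solve.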
Underlying our approach is the assumption that there is an unobserved true spatial activation pattern in a subject's brain given a particular stimulus, and that multiple activation images for this individual collected over different fMRI sessions are realizations of the true activation image, with variability in the activation pattern due to various sources. Our goal is to infer the unknown true activation from multiple such activation images. We model each activation image using a mixture of experts model, with a component expert assigned to each local activation cluster (Rasmussen and Ghahramani, 2002). By introducing a hierarchical DP into this model we allow activation clusters to be shared across images, inferring the number of such clusters from the data. In addition, the random effects component can be incorporated to allow activation centers to be slightly shifted in terms of pixel locations or peak intensity. These types of variation are common in multi-image fMRI experiments, due to factors such as head motion and variation in the physiological and cognitive states of the subject. In what follows we focus on 2-dimensional "slices" rather than 3-dimensional voxel images; in principle the same type of model could be developed for the 3-dimensional case. We briefly discuss the mixture of experts model below (Kim et al., 2006). Assuming the $\beta$ values $y_i$, $i = 1, \ldots, N$, are conditionally independent of each other given the voxel position $x_i = (x_{i1}, x_{i2})$ and the model parameters, we model the activation $y_i$ at voxel $x_i$ as a mixture of experts:

$$p(y_i \mid x_i, \theta) = \sum_{c \in C} p(y_i \mid c, x_i)\, P(c \mid x_i) \quad (7)$$

where $C = \{c_{bg}, c_m,\ m = 1, \ldots, M-1\}$ is a set of $M$ expert component labels for the background $c_{bg}$ and the $M-1$ activation components $c_m$. The first term on the right-hand side of Equation (7) defines the expert for a given component.
We model the expert for an activation component as a Gaussian-shaped surface centered at $b_m$ with width $\Sigma_m$ and height $h_m$:

$$y_i = h_m \exp\left(-(x_i - b_m)'(\Sigma_m)^{-1}(x_i - b_m)\right) + \varepsilon \quad (8)$$

where $\varepsilon$ is an additive noise term distributed as $N(0, \sigma^2_{act})$. The background component is modeled as $y_i = \mu + \varepsilon$, having a constant activation level $\mu$ with additive noise distributed as $N(0, \sigma^2_{bg})$. The second term in Equation (7) is known as a gate function in the mixture-of-experts framework: it decides which expert should be used to make a prediction for the activation level at position $x_i$. Using Bayes' rule we write this term as $P(c \mid x_i) = p(x_i \mid c)\pi_c / \sum_{c' \in C} p(x_i \mid c')\pi_{c'}$, where $\pi_c$ is a class prior probability $P(c)$. $p(x_i \mid c)$ is defined as follows. For activation components, $p(x_i \mid c_m)$ is a normal density with mean $b_m$ and covariance $\Sigma_m$; $b_m$ and $\Sigma_m$ are shared with the Gaussian surface model for the experts in Equation (8). This implies that the probability of activating the $m$th expert is highest at the center of the activation and gradually decays as $x_i$ moves away from the center. $p(x_i \mid c_{bg})$ for the background component is modeled as a uniform distribution of $1/N$ over all positions in the brain. If $x_i$ is not close to the center of any activation, the gate function selects the background expert for the voxel. We place a hierarchical DP prior on $\pi_c$, and let the location parameters $b_m$ and the height parameters $h_m$ vary in individual images according to normal prior distributions with variances $\Psi_{b_m}$ and $\psi^2_{h_m}$, using a random effects model. We define the prior distributions for $\Psi_{b_m}$ and $\psi^2_{h_m}$ as half-normal distributions with mean 0 and a variance as suggested by Gelman (2006).

Figure 4: Results from eight runs for subject 2 at Stanford. (a) Raw images for a cross section of right precentral gyrus and surrounding area. Activation components estimated from the images using (b) DP mixtures, (c) hierarchical DPs, and (d) hierarchical DP with random effects.
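As a purely illustrative sketch of Equations (7)-(8), the following implements one Gaussian-surface expert and the gate function on a toy 2-D grid. The function names, grid size, and parameter values are assumptions for the illustration, not values from the paper's fits.

```python
import numpy as np

def surface_expert(x, b, Sigma, h):
    """Noise-free Gaussian-shaped activation surface (Eq. 8) at voxel positions x."""
    d = x - b
    Si = np.linalg.inv(Sigma)
    return h * np.exp(-np.einsum('ni,ij,nj->n', d, Si, d))

def gate(x, centers, Sigmas, pi, n_voxels):
    """P(c | x_i): normal densities for activation components, uniform 1/N background."""
    n, M = x.shape[0], len(centers)
    unnorm = np.empty((n, M + 1))
    for m in range(M):
        Si = np.linalg.inv(Sigmas[m])
        d = x - centers[m]
        dens = np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, Si, d))
        dens /= 2 * np.pi * np.sqrt(np.linalg.det(Sigmas[m]))
        unnorm[:, m] = pi[m] * dens
    unnorm[:, M] = pi[M] / n_voxels          # background component
    return unnorm / unnorm.sum(axis=1, keepdims=True)

# One activation bump on a toy 20x20 grid of voxel positions.
xs = np.stack(np.meshgrid(np.arange(20.0), np.arange(20.0)), -1).reshape(-1, 2)
centers = [np.array([10.0, 10.0])]
Sigmas = [4.0 * np.eye(2)]
g = gate(xs, centers, Sigmas, pi=[0.5, 0.5], n_voxels=len(xs))
```

Near the bump center the activation expert dominates the gate; far from it the flat background expert takes over, exactly the behavior described above.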
Since the surface model for the activation component is highly non-linear, without conjugate prior distributions it is not possible to evaluate the integrals in Equations (5b)-(5c) and (6) analytically in the sampling algorithm. We rely on an approximation of the integrals by sampling new values for $b_m$ and $h_m$ from their priors and new values for the image-specific random-effects parameters from $N(b_m, \Psi_{b_m})$ and $N(h_m, \psi^2_{h_m})$, and evaluating the likelihood of the data given these new values for the unknown parameters.

5 Experimental results on fMRI data

We demonstrate the performance of the model and inference algorithm described above using fMRI data collected from three subjects (referred to as Subjects 1, 2 and 3) performing the same sensorimotor task at two different fMRI scanners (Stanford and Duke). Each subject was scanned during eight separate fMRI experiments ("runs"), and for each run a $\beta$-map (a voxel image that summarizes the brain activation) was produced using standard fMRI preprocessing. In this experiment we analyze a 2D cross-section of the right precentral gyrus brain region, a region that is known to be activated by this sensorimotor task. We fit our model to each set of eight $\beta$-maps for each of the subjects at each scanner, and compare the results with those obtained from the hierarchical DP without random effects. We also fit standard DP mixtures to individual images as a baseline, using Algorithm 7 from Neal (1998) to sample from the model. The concentration parameters for the DP priors in all three models were given a Gamma(1.5, 1) prior distribution and sampled from the posterior as described in Teh et al. (2006). For all models the MCMC sampling algorithm was run for 3000 iterations.

Figure 5: Histogram of the number of components over the last 1000 iterations (Subject 2 at Stanford).
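The prior-sampling approximation of intractable integrals used in the sampler can be illustrated on a conjugate 1-D case, where the same integral also has the closed form $N(y \mid \mu, \tau^2 + \sigma^2)$ quoted earlier, so the Monte Carlo estimate can be checked. All numbers below are made up for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def normal_pdf(y, mean, var):
    return np.exp(-0.5 * (y - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Invented hyperparameters and observation for the illustration.
mu, tau2, sigma2, y = 0.5, 2.0, 1.0, 1.3

# Non-conjugate recipe: draw the random-effects parameter u from its prior
# N(mu, tau2) and average the likelihood p(y | u, sigma2).
u = rng.normal(mu, np.sqrt(tau2), size=200_000)
mc = normal_pdf(y, u, sigma2).mean()

# Conjugate case: the same integral in closed form, N(y | mu, tau2 + sigma2).
exact = normal_pdf(y, mu, tau2 + sigma2)
```

In the fMRI model the same recipe is applied with $b_m$ and $h_m$ drawn from their (non-conjugate) priors, where no closed form exists.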
Figure 5 panels: (a) DP mixture, (b) hierarchical DP, and (c) hierarchical DP with random effects.

Table 2: Predictive logP scores of test images averaged over eight cross-validation runs. The simulation errors are shown as standard deviations.

                            Hierarchical DP          Hierarchical DP with random effects
Scanner    Subject        Avg. logP   Std. dev.      Avg. logP   Std. dev.
Stanford   Subject 1      -1142.6     21.8           -1085.3     12.6
           Subject 2      -1260.9     32.1           -1082.8     28.7
           Subject 3      -1084.1     11.3           -1040.9     13.5
Duke       Subject 1      -1154.9     12.5           -1166.9     13.1
           Subject 2       -677.9     12.2            -559.9     15.8
           Subject 3      -1175.6     13.6           -1086.8     13.2

Figure 4(a) shows $\beta$-maps from eight fMRI runs of Subject 2 at Stanford. From the eight images one can see three primary activation bumps, subsets of which appear in different images with variability in location and intensity. Figures 4(b)-(d) each show a sample from the model learned on the data in Figure 4(a): Figure 4(b) for DP mixtures, Figure 4(c) for hierarchical DPs, and Figure 4(d) for hierarchical DPs with random effects. The sampled activation components are overlaid as ellipses using one standard deviation of the width parameters $\Sigma_m$. The thickness of the ellipses indicates the estimated height $h_m$ of the bump. In Figures 4(b) and (c), ellipses for activation components shared across images are drawn with the same color. The DP mixtures shown in Figure 4(b) seem to overfit with many bumps and show relatively poor generalization capability, because the model cannot borrow strength from other similar images. The hierarchical DP in Figure 4(c) is not flexible enough to account for bumps that are shared across images but vary in their parameters. By using one fixed set of component parameters shared across images, the hierarchical DP is too constrained and unable to detect the more subtle features of individual images. The random effects model finds the three main bumps plus a few lower-intensity bumps for the background.
Thus, in terms of generalization, the model with random effects provides a good trade-off between the relatively unconstrained DP mixtures and the overly constrained hierarchical DPs. Histograms of the number of components (every 10 samples over the last 1000 iterations) for the three different models are shown in Figure 5. We also perform a leave-one-image-out cross-validation to compare the predictive performance of hierarchical DPs and our proposed model. For each subject at each scanner we fit a model from seven images and compute the predictive likelihood of the remaining image. The predictive scores and simulation errors (standard deviations) averaged over eight cross-validation runs for both models are shown in Table 2. In all subjects except Subject 1 at Duke, the proposed model shows a significant improvement over hierarchical DPs. For Subject 1 at Duke, the hierarchical DP gives a slightly better result, but the difference in scores is not significant relative to the simulation error. Figure 6 shows the difference in the way the hierarchical DP and our proposed model fit the data in one cross-validation run for Subject 1 at Duke, shown in Figure 6(a). The hierarchical DP in Figure 6(b) models the common bump with varying intensity in the middle of each image as a mixture of two components: one for the bump in the first two images with relatively high intensity, and another for the same bump in the remaining images with lower intensity. Our proposed model recovers the correspondence between the bumps with different intensity across images, as shown in Figure 6(c).

Figure 6: Results from one cross-validation run for subject 1 at Duke. (a) Raw images for a cross section of right precentral gyrus and surrounding area. Activation components estimated from the images are shown in (b) for hierarchical DPs, and in (c) for hierarchical DP with random effects.
6 Conclusions

In this paper we proposed a hierarchical DP model with random effects that allows each group (or image) to have group-level mixture component parameters as well as group-level mixing proportions. Using fMRI brain activation images we demonstrated that our model can capture components shared across multiple groups with individual-level variation. In addition, we showed that our model is able to estimate the number of components more reliably, due to the additional flexibility of the model compared to DP mixtures and hierarchical DPs. Possible future directions for this work include extensions to modeling differences between labeled groups of individuals, e.g., in studies of controls and patients for a particular disorder.

Acknowledgments

We would like to thank Hal Stern for useful discussions. We acknowledge the support of the following grants: the Functional Imaging Research in Schizophrenia Testbed, Biomedical Informatics Research Network (FIRST BIRN; 1 U24 RR021992, www.nbirn.net); the Transdisciplinary Imaging Genetics Center (P20RR020837-01); and the National Alliance for Medical Image Computing (NAMIC; Grant U54 EB005149), funded by the National Institutes of Health through the NIH Roadmap for Medical Research. Author PS was also supported in part by the National Science Foundation under awards number IIS-0431085 and number SCI-0225642.

References

Gelman, A., Carlin, J., Stern, H. & Rubin, D. (2004) Bayesian Data Analysis. New York: Chapman & Hall/CRC.
Gelman, A. (2006) Prior distributions for variance parameters in hierarchical models. Bayesian Analysis, 1(3):515-533.
Kim, S., Smyth, P. & Stern, H. (2006) A nonparametric Bayesian approach to detecting spatial activation patterns in fMRI data. Proceedings of the 9th International Conference on Medical Image Computing and Computer Assisted Intervention, vol. 2, pp. 217-224.
Neal, R.M. (1998) Markov chain sampling methods for Dirichlet process mixture models.
Technical Report 4915, Department of Statistics, University of Toronto.
Penny, W. & Friston, K. (2003) Mixtures of general linear models for functional neuroimaging. IEEE Transactions on Medical Imaging, 22(4):504-514.
Rasmussen, C.E. (2000) The infinite Gaussian mixture model. Advances in Neural Information Processing Systems 12, pp. 554-560. MIT Press.
Rasmussen, C.E. & Ghahramani, Z. (2002) Infinite mixtures of Gaussian process experts. Advances in Neural Information Processing Systems 14, pp. 881-888. MIT Press.
Sethuraman, J. (1994) A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650.
Sudderth, E., Torralba, A., Freeman, W. & Willsky, A. (2005) Describing visual scenes using transformed Dirichlet processes. Advances in Neural Information Processing Systems 18, pp. 1297-1304. MIT Press.
Teh, Y.W., Jordan, M.I., Beal, M.J. & Blei, D.M. (2006) Hierarchical Dirichlet processes. Journal of the American Statistical Association, to appear.
Bayesian Model Scoring in Markov Random Fields

Sridevi Parise, Bren School of Information and Computer Science, UC Irvine, Irvine, CA 92697-3425, sparise@ics.uci.edu
Max Welling, Bren School of Information and Computer Science, UC Irvine, Irvine, CA 92697-3425, welling@ics.uci.edu

Abstract

Scoring structures of undirected graphical models by means of evaluating the marginal likelihood is very hard. The main reason is the presence of the partition function, which is intractable to evaluate, let alone integrate over. We propose to approximate the marginal likelihood by employing two levels of approximation: we assume normality of the posterior (the Laplace approximation) and approximate all remaining intractable quantities using belief propagation and the linear response approximation. This results in a fast procedure for model scoring. Empirically, we find that our procedure has about two orders of magnitude better accuracy than standard BIC methods for small datasets, but deteriorates when the size of the dataset grows.

1 Introduction

Bayesian approaches have become an important modeling paradigm in machine learning. They offer a very natural setting in which to address issues such as overfitting, which plague standard maximum likelihood approaches. A full Bayesian approach has its computational challenges, as it often involves intractable integrals. While for Bayesian networks many of these challenges have been met successfully [3], the situation is quite the reverse for Markov random field models. In fact, it is very hard to find any literature at all on model order selection in general MRF models. The main reason for this discrepancy is the fact that MRF models have a normalization constant that depends on the parameters but is in itself intractable to compute, let alone integrate over. In fact, the presence of this term even prevents one from drawing samples from the posterior distribution in most situations, except for some special cases¹.
In terms of approximating the posterior, some new methods have become available recently. In [7] a number of approximate MCMC samplers are proposed. Two of them were reported to be most successful: one based on Langevin sampling with approximate gradients given by contrastive divergence, and one where the acceptance probability is approximated by replacing the log partition function with the Bethe free energy. Both of these methods are very general, but inefficient. In [2] MCMC methods are explored for the Potts model based on the reversible jump formalism. To compute acceptance ratios for dimension-changing moves they need to estimate the partition function using a separate estimation procedure, making it rather inefficient as well. In [6] and [8] MCMC methods are proposed that use perfect samples to circumvent the calculation of the partition function altogether. This approach is elegant but limited in its application, due to the need to draw perfect samples. Moreover, two approaches that approximate the posterior by a Gaussian distribution are proposed in [11] (based on expectation propagation) and [13] (based on the Bethe-Laplace approximation).

¹If one can compute the normalization term exactly (e.g. graphs with small treewidth) or if one can draw perfect samples from the MRF [8] (e.g. positive interactions only), then one can construct a Markov chain for the posterior.

In this paper we focus on a different problem, namely that of approximating the marginal likelihood. This quantity is at the heart of Bayesian analysis because it allows one to compare models of different structure. One can use it to either optimize or average over model structures. Even if one has an approximation to the posterior distribution, it is not at all obvious how to use it to compute a good estimate for the marginal likelihood.
The most direct approach is to use samples from the posterior and compute importance weights,

$$p(D) \approx \frac{1}{N} \sum_{n=1}^{N} \frac{p(D \mid \theta_n)\, p(\theta_n)}{Q(\theta_n \mid D)}, \qquad \theta_n \sim Q(\theta_n \mid D) \quad (1)$$

where $Q(\theta_n \mid D)$ denotes the approximate posterior. Unfortunately, this importance sampler suffers from very high variance when the number of parameters becomes large. It is not untypical that the estimate is effectively based on a single example. We propose to use the Laplace approximation, including all $O(1)$ terms, where the intractable quantities of interest are approximated by either belief propagation (BP) or the linear response theorem based on the solution of BP. We show empirically that the $O(1)$ terms are indispensable for small $N$. Their inclusion can improve accuracy by up to two orders of magnitude. At the same time we observe that, as a function of $N$, the $O(1)$ term based on the covariance between features deteriorates and should be omitted for large $N$. We conjecture that this phenomenon is explained by the fact that the calculation of the covariance between features, which is equal to the second derivative of the log-normalization constant, becomes unstable if the bias in the MAP estimate of the parameters is of the order of the variance in the posterior. For any biased estimate of the parameters this phenomenon is therefore bound to happen as we increase $N$, because the variance of the posterior distribution is expected to decrease with $N$. In summary, we present a very accurate estimate for the marginal likelihood where it is most needed, i.e. for small $N$. This work appears to be the first practical method for estimating the marginal evidence in undirected graphical models.

2 The Bethe-Laplace Approximation for log p(D)

Without loss of generality we represent an MRF as a log-linear model,

$$p(x \mid \lambda) = \frac{1}{Z(\lambda)} \exp\left[\lambda^T f(x)\right] \quad (2)$$

where $f(x)$ represents the features. In the following we will assume that the random variables $x$ are observed.
Generalizations to models with hidden variables exist in theory, but we defer the empirical evaluation of this case to future research. To score a structure we follow the Bayesian paradigm and aim to compute the log-marginal likelihood $\log p(D)$, where $D$ represents a dataset of size $N$,

$$\log p(D) = \log \int d\lambda\; p(D \mid \lambda)\, p(\lambda) \quad (3)$$

where $p(\lambda)$ is some arbitrary prior on the parameters $\lambda$. In order to approximate this quantity we employ two approximations. Firstly, we expand both the log-likelihood and the log-prior around the MAP value $\lambda^{MP}$. For the log-likelihood this boils down to expanding the log-partition function,

$$\log Z(\lambda) \approx \log Z(\lambda^{MP}) + \kappa^T \delta\lambda + \tfrac{1}{2}\delta\lambda^T C\, \delta\lambda \quad (4)$$

with $\delta\lambda = (\lambda - \lambda^{MP})$ and

$$C = E[f(x)f(x)^T]_{p(x)} - E[f(x)]_{p(x)}\, E[f(x)]^T_{p(x)}, \qquad \kappa = E[f(x)]_{p(x)} \quad (5)$$

where all averages are taken over $p(x \mid \lambda^{MP})$. Similarly, for the prior we find

$$\log p(\lambda) = \log p(\lambda^{MP}) + g^T \delta\lambda + \tfrac{1}{2}\delta\lambda^T H\, \delta\lambda \quad (6)$$

where $g$ is the first derivative of $\log p$ evaluated at $\lambda^{MP}$ and $H$ is the second derivative (or Hessian). The variables $\delta\lambda$ represent fluctuations of the parameters around the MAP value $\lambda^{MP}$. The marginal likelihood can now be approximated by integrating out the fluctuations $\delta\lambda$, considering $\lambda^{MP}$ as a hyper-parameter,

$$\log p(D) = \log \int d\delta\lambda\; p(D \mid \delta\lambda, \lambda^{MP})\, p(\delta\lambda \mid \lambda^{MP}) \quad (7)$$

Inserting the expansions of Eqns. 4 and 6 into Eqn. 7, we arrive at the standard expression for the Laplace approximation applied to MRFs,

$$\log p(D) \approx \sum_n \lambda^{MP\,T} f(x_n) - N \log Z(\lambda^{MP}) + \log p(\lambda^{MP}) + \tfrac{F}{2}\log(2\pi) - \tfrac{F}{2}\log(N) - \tfrac{1}{2}\log\det\left(C - \tfrac{H}{N}\right) \quad (8)$$

with $F$ the number of features. The difference with Laplace approximations for Bayesian networks is that many terms in the expression above cannot be evaluated. First of all, determining $\lambda^{MP}$ requires running gradient ascent or iterative scaling to maximize the penalized log-likelihood, which requires the computation of the average sufficient statistics $E[f(x)]_{p(x)}$. Secondly, the expression contains the log-partition function $Z(\lambda^{MP})$ and the covariance matrix $C$, which are both intractable quantities.
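A minimal end-to-end sketch of Eq. (8), for a model small enough that every term is exact: a single-feature log-linear model over one binary variable with a standard normal prior. Here $Z$, $C$ and $\lambda^{MP}$ are computed exactly (no BP needed), and the Laplace score can be checked against 1-D quadrature. The data and constants are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model p(x|lam) = exp(lam*x)/Z(lam), x in {0,1}, Z(lam) = 1 + e^lam,
# prior lam ~ N(0,1). There is F = 1 feature, so Eq. (8) is scalar.
N = 20
x = rng.random(N) < 0.7        # invented Bernoulli(0.7) data
s = float(x.sum())

def log_joint(lam):
    """log p(D|lam) + log p(lam), vectorized over lam."""
    return lam * s - N * np.log1p(np.exp(lam)) - 0.5 * lam ** 2 - 0.5 * np.log(2 * np.pi)

# Exact MAP by Newton's method on the log-joint.
lam = 0.0
for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-lam))
    lam -= (s - N * p - lam) / (-N * p * (1 - p) - 1.0)
p = 1.0 / (1.0 + np.exp(-lam))

C = p * (1 - p)    # exact feature covariance at lam_MP (Eq. 5)
H = -1.0           # Hessian of the log-prior
F = 1              # number of features

# Eq. (8), with every term exact for this toy model.
laplace = (log_joint(lam)
           + 0.5 * F * np.log(2 * np.pi)
           - 0.5 * F * np.log(N)
           - 0.5 * np.log(C - H / N))

# Ground truth log p(D) by 1-D quadrature (trapezoid rule).
grid = np.linspace(-6.0, 6.0, 20_001)
vals = np.exp(log_joint(grid))
truth = np.log(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)))
```

In a real MRF the exact $Z$, $C$ and $\lambda^{MP}$ above are the quantities that must be replaced by the BP and linear-response approximations of the next section.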
2.1 The BP-Linear Response Approximation

To make further progress, we introduce a second layer of approximations based on belief propagation. In particular, we approximate the required marginals in the gradient for $\lambda^{MP}$ with the ones obtained with BP. For fully observed MRFs the value for $\lambda^{MP}$ will be very close to the solution obtained by pseudo-moment matching (PMM) [5], the influence of the prior being the only difference between the two. Hence, we use $\lambda^{PMM}$ to initialize gradient descent. The approximation incurred by PMM is not always small [10], in which case other approximations such as contrastive divergence may be substituted instead. The term $-\log Z(\lambda^{MP})$ will be approximated with the Bethe free energy. This involves running belief propagation on a model with parameters $\lambda^{MP}$ and inserting the beliefs at their fixed points into the expression for the Bethe free energy [16]. To compute the covariance matrix between the features, $C$ (Eqn. 5), we use the linear response algorithm of [15]. This approximation is based on the observation that $C$ is the Hessian of the log-partition function w.r.t. the parameters. It is approximated by the Hessian of the Bethe free energy w.r.t. the parameters, which in turn depends on the partial derivatives of the beliefs from BP w.r.t. the parameters:

$$C_{\alpha\beta} = \frac{\partial^2 \log Z(\lambda)}{\partial\lambda_\alpha \partial\lambda_\beta} \approx -\frac{\partial^2 F_{Bethe}(\lambda)}{\partial\lambda_\alpha \partial\lambda_\beta} = \sum_{x_\alpha} f_\alpha(x_\alpha)\, \frac{\partial p^{BP}_\alpha(x_\alpha \mid \lambda)}{\partial\lambda_\beta} \quad (9)$$

where $\lambda = \lambda^{MP}$, $p^{BP}_\alpha$ is the marginal computed using belief propagation, and $x_\alpha$ is the collection of variables in the argument of feature $f_\alpha$ (e.g. nodes or edges). This approximate $C$ is also guaranteed to be symmetric and positive semi-definite. In [15] two algorithms were discussed to compute $C$ in the linear response approximation, one based on a matrix inverse, the other a local propagation algorithm. The main idea is to perform a Taylor expansion of the beliefs and messages in the parameters $\delta\lambda = \lambda - \lambda^{MP}$ and keep track of first-order terms in the belief propagation equations.
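The exact identity that Eq. (9) rests on, $C_{\alpha\beta} = \partial^2 \log Z / \partial\lambda_\alpha \partial\lambda_\beta$, can be verified numerically on a Boltzmann machine tiny enough to enumerate; here exact marginals stand in for the BP beliefs, and the graph and parameter values are arbitrary.

```python
import itertools
import numpy as np

# Tiny 3-node Boltzmann machine: features f(x) = (x1, x2, x3, x1*x2, x2*x3).
def features(x):
    x1, x2, x3 = x
    return np.array([x1, x2, x3, x1 * x2, x2 * x3], dtype=float)

states = [np.array(s) for s in itertools.product([0, 1], repeat=3)]
F = np.stack([features(s) for s in states])          # (8, 5) feature matrix

def log_Z(lam):
    return np.log(np.exp(F @ lam).sum())

lam = np.array([0.3, -0.2, 0.1, 0.5, -0.4])
p = np.exp(F @ lam - log_Z(lam))                     # exact state probabilities

# C as the covariance of the features under p(x | lam) (Eq. 5).
mean = p @ F
C_cov = (F * p[:, None]).T @ F - np.outer(mean, mean)

# C as the Hessian of log Z(lam) (the identity behind Eq. 9), by central
# finite differences of log Z.
eps = 1e-4
C_hess = np.empty((5, 5))
for a in range(5):
    for b in range(5):
        e_a, e_b = np.eye(5)[a] * eps, np.eye(5)[b] * eps
        C_hess[a, b] = (log_Z(lam + e_a + e_b) - log_Z(lam + e_a - e_b)
                        - log_Z(lam - e_a + e_b) + log_Z(lam - e_a - e_b)) / (4 * eps ** 2)
```

The two matrices agree to finite-difference accuracy; the linear response method of [15] computes the same Hessian, but with the Bethe free energy in place of $-\log Z$ and BP beliefs in place of the exact marginals.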
One can show that the first-order terms carry the information needed to compute the covariance matrix. We refer to [15] for more information. In Appendix A we provide explicit equations for the case of Boltzmann machines, which is what is needed to reproduce the experiments in Section 4.

Figure 1: Comparison of various scores on synthetic data. (a) N = 50; (b) N = 10000.

3 Conditional Random Fields

Perhaps the most practical class of undirected graphical models are the conditional random field (CRF) models. Here we jointly model labels $t$ and input variables $x$. The most significant modification relative to MRFs is that the normalization term now depends on the input variable. The probability of a label given an input is

$$p(t \mid x, \lambda) = \frac{1}{Z(\lambda, x)} \exp\left[\lambda^T f(t, x)\right] \quad (10)$$

To approximate the log marginal evidence we obtain an expression very similar to Eqn. 8, with the following replacements:

$$C \rightarrow \frac{1}{N} \sum_{n=1}^{N} C_{x_n} \quad (11)$$

$$\sum_n \lambda^{MP\,T} f(x_n) - N \log Z(\lambda^{MP}) \rightarrow \sum_n \left(\lambda^{MP\,T} f(t_n, x_n) - \log Z(\lambda^{MP}, x_n)\right) \quad (12)$$

where

$$C_{x_n} = E[f(t, x_n)f(t, x_n)^T]_{p(t \mid x_n)} - E[f(t, x_n)]_{p(t \mid x_n)}\, E[f(t, x_n)]^T_{p(t \mid x_n)} \quad (13)$$

and where all averages are taken over the distributions $p(t \mid x_n, \lambda^{MP})$ at the MAP value $\lambda^{MP}$ of the conditional log-likelihood $\sum_n \log p(t_n \mid x_n, \lambda)$.

4 Experiments

In the following experiments we probe the accuracy of the Bethe-Laplace (BP-LR) approximation. In these experiments we have focussed on comparing the value of the estimated log marginal likelihood with "annealed importance sampling" (AIS), which we treat as ground truth [9, 1].
We have focussed on this performance measure because the marginal likelihood is the relevant quantity for both Bayesian model averaging and model selection. We perform experiments on synthetic data as well as a real-world dataset. For the synthetic data, we use Boltzmann machine models (binary undirected graphical models with pairwise interactions) because we believe that the results will be representative of multi-state models, and because the implementation of the linear response approximation is straightforward in this case (see Appendix A).

Figure 2: Mean difference in scores with AIS (synthetic data). Error-bars are too small to see. (a) N = 50; (b) N = 10000.

Scores computed using the proposed method (BP-LR) were compared against MAP scores (or penalized log-likelihood), where we retain only the first three terms in Equation (8), and the commonly used BIC-ML scores, where we ignore all $O(1)$ terms (i.e. retain only terms 1, 2 and 5). BIC-ML uses the maximum likelihood value $\lambda^{ML}$ instead of $\lambda^{MP}$. We also evaluate two other scores: BP-LR-ExactGrad, where we use exact gradients to compute $\lambda^{MP}$, and Laplace-Exact, which is the same as BP-LR-ExactGrad but with $C$ computed exactly as well. Note that these last two methods are practical only for models with small tree-width. Nevertheless, they are useful here to illustrate the effect of the bias from BP.

4.1 Synthetic Data

We generated 50 different random structures on 5 nodes.
For each we sample 6 different sets of parameters with weights $w \sim U\{[-d, -d+\epsilon] \cup [d, d+\epsilon]\}$, $d > 0$, $\epsilon = 0.1/4$, and biases $b \sim U[-1, 1]$, varying the edge strength $d$ in $\{0.1/4, 0.2/4, 0.5/4, 1.0/4, 1.5/4, 2.0/4\}$. We then generated $N = 10000$ samples from each of these $(50 \times 6)$ models, using exact sampling by exhaustive enumeration. In the first experiment we picked a random dataset/model with $d = 0.5/4$ (the true structure had 6 edges) and studied the variation of the different scores with model complexity. We define an ordering on models based on complexity by using nested model sequences. These are such that a model appearing later in the sequence contains all edges from models appearing earlier. Figure 1 shows the results for two such random nested sequences around the true model, for the number of datacases $N = 50$ and $N = 10000$ respectively. The error-bars for AIS are over 10 parallel annealing runs, which we see are very small. We repeated the plots for multiple such model sequences and the results were similar. Figure 2 shows the average absolute difference of each score with the AIS score over 50 sequences. From these one can see that BP-LR is very accurate at low $N$. As known in the literature, BIC-ML tends to over-penalize model complexity. At large $N$, the performance of all methods improves, but BP-LR does slightly worse than BIC-ML. In order to better understand the performance of the various scores with $N$, we took the datasets at $d = 0.5/4$ and computed scores at various values of $N$. At each value, we find the absolute difference between the score assigned to the true structure and the corresponding AIS score. These are then averaged over the 50 datasets. The results are shown in Figure 3. We note that all BP-LR methods are about two orders of magnitude more accurate than methods that ignore the $O(1)$ term based on $C$. However, as we increase $N$, BP-LR based on $\lambda^{MP}$ computed using BP significantly deteriorates.
This does not happen with the BP-LR methods based on $\lambda^{MP}$ computed using exact gradients (i.e. BP-LR-ExactGrad and Laplace-Exact). Since the latter two methods perform identically, we conclude that it is not the approximation of $C$ by linear response that breaks down, but rather that the bias in $\lambda^{MP}$ is the reason that the estimate of $C$ becomes unreliable. We conjecture that this happens when the bias becomes of the order of the standard deviation of the posterior distribution. Since the bias is constant but the variance in the posterior decreases as $O(1/N)$, this phenomenon is bound to happen for some value of $N$.

Figure 3: Variation of score accuracy with N.

Figure 4: Variation of score accuracy with d.

Finally, since our BP-LR method relies on the BP approximation, which is known to break down at strong interactions, we investigated the performance of the various scores with $d$. Again, at each value of $d$ we compute the average absolute difference between the scores assigned to the true structure by a method and by AIS. We use $N = 10000$ to keep the effect of $N$ minimal. Results are shown in Figure 4. As expected, all BP-based methods deteriorate with increasing $d$. The exact methods show that one can improve performance by having a more accurate estimate of $\lambda^{MP}$.

4.2 Real-world Data

To see the performance of BP-LR on real-world data, we implemented a linear-chain CRF on the "newsgroup FAQ dataset"² [4]. This dataset contains 48 files where each line can be either a header, a question or an answer. The problem is binarized by retaining only the question/answer lines. For each line we use 24 binary features $g_a(x) \in \{0, 1\}$, $a = 1, \ldots, 24$, as provided by [4].
These are used to define state and transition features via $f^a_i(t_i, x_i) = t_i\, g_a(x_i)$ and $f^a_i(t_i, t_{i+1}, x_i) = t_i\, t_{i+1}\, g_a(x_i)$, where $i$ denotes the line in a document and $a$ indexes the 24 features. We generated a random sequence of models by incrementally adding some state features and then some transition features. We then score each model using MAP, BIC-MAP (which is the same as BIC-ML but with $\lambda^{MP}$), AIS and Laplace-Exact. Note that since the graph is a chain, BP-LR is equivalent to BP-LR-ExactGrad and Laplace-Exact. We use $N = 2$ files, each truncated to 100 lines. The results are shown in Figure 5. Here again, Laplace-Exact agrees very closely with AIS compared to the other two methods. (Another, less relevant, observation is that the scores flatten out around the point where we stop adding the state features, showing their importance compared to transition features.)

5 Discussion

The main conclusion from this study is that the Bethe-Laplace approximation can give an excellent approximation to the marginal likelihood for small datasets. We discovered an interesting phenomenon, namely that as $N$ grows, the error in the $O(1)$ term based on the covariance between features increases. We found that this term can give an enormous boost in accuracy for small $N$ (up to two orders of magnitude), but its effect can be detrimental for large $N$. We conjecture that this switch-over point takes place when the bias in $\lambda^{MP}$ becomes of the order of the standard deviation in the posterior (which decreases as $1/N$). At that point the second derivative of the log-likelihood in the Taylor expansion becomes unreliable. There are a number of ways to improve the accuracy of the approximation. One approach is to use higher-order Kikuchi approximations to replace the Bethe approximation.
Linear response results are also available for this case [12]. A second improvement could come from improving the estimate of $\lambda^{MP}$ using alternative learning techniques such as contrastive divergence or alternative sample-based approaches. As discussed above, less bias in $\lambda^{MP}$ will make the covariance term useful for larger $N$. Finally, the case of hidden variables needs to be addressed. It is not hard to imagine how to extend the techniques proposed in this paper to hidden variables in theory, but we haven't run the experiments necessary to make claims about its performance. This we leave for future study.

²Downloaded from: http://www.cs.umass.edu/~mccallum/data/faqdata/

Figure 5: Comparison of various scores on the real-world dataset (CRF, N = 2, sequence length = 100).

A Computation of C for Boltzmann Machines

For binary variables and pairwise interactions we define the variables as $\lambda = \{\theta_i, w_{ij}\}$, where $\theta_i$ is a parameter multiplying the node feature $x_i$ and $w_{ij}$ the parameter multiplying the edge feature $x_i x_j$. Moreover, we define the following independent quantities: $q_i = p(x_i = 1)$ and $\xi_{ij} = p(x_i = 1, x_j = 1)$. Note that all other quantities, e.g. $p(x_i = 1, x_j = 0)$, are functions of $\{q_i, \xi_{ij}\}$. In the following we will assume that $\{q_i, \xi_{ij}\}$ are computed using belief propagation (BP). At the fixed points of BP the following relations hold [14]:

$$w_{ij} = \log\left(\frac{\xi_{ij}(\xi_{ij} + 1 - q_i - q_j)}{(q_i - \xi_{ij})(q_j - \xi_{ij})}\right), \qquad
\theta_i = \log\left(\frac{(1 - q_i)^{z_i - 1} \prod_{j \in N(i)} (q_i - \xi_{ij})}{q_i^{z_i - 1} \prod_{j \in N(i)} (\xi_{ij} + 1 - q_i - q_j)}\right) \quad (14)$$

where $N(i)$ are the neighboring nodes of node $i$ in the graph and $z_i = |N(i)|$ is the number of neighbors of node $i$. To compute the covariance matrix we first compute its inverse from Eqns. 14 as

$$C^{-1} = \begin{bmatrix} \dfrac{\partial\theta}{\partial q} & \dfrac{\partial\theta}{\partial\xi} \\[8pt] \dfrac{\partial w}{\partial q} & \dfrac{\partial w}{\partial\xi} \end{bmatrix}$$

and subsequently take its inverse.
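As a sanity check on the fixed-point relations (14): for a two-node Boltzmann machine the graph is a tree, so BP is exact, and plugging the exact marginals $(q_i, \xi_{ij})$ into Eq. (14) must return the original parameters (with $z_i = 1$ the $(1-q_i)^{z_i-1}/q_i^{z_i-1}$ factor vanishes). The parameter values below are arbitrary.

```python
import numpy as np

# Two-node Boltzmann machine p(x1, x2) ∝ exp(th1*x1 + th2*x2 + w*x1*x2).
th1, th2, w = 0.3, -0.2, 0.5
logits = np.array([0.0, th2, th1, th1 + th2 + w])    # states (0,0),(0,1),(1,0),(1,1)
p = np.exp(logits) / np.exp(logits).sum()
p00, p01, p10, p11 = p

# Exact marginals, standing in for the BP beliefs (BP is exact on a tree).
q1, q2, xi = p10 + p11, p01 + p11, p11

# Eq. (14): recover the parameters from (q1, q2, xi).
w_rec = np.log(xi * (xi + 1 - q1 - q2) / ((q1 - xi) * (q2 - xi)))
th1_rec = np.log((q1 - xi) / (xi + 1 - q1 - q2))     # z_1 = 1 neighbor
th2_rec = np.log((q2 - xi) / (xi + 1 - q1 - q2))     # z_2 = 1 neighbor
```

Here $\xi + 1 - q_1 - q_2 = p(0,0)$, $q_1 - \xi = p(1,0)$ and $q_2 - \xi = p(0,1)$, so $w_{12}$ is just the log odds ratio of the pairwise table.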
The four terms in this matrix are given by

$$\frac{\partial\theta_i}{\partial q_k} = \left[\frac{1-z_i}{q_i(1-q_i)} + \sum_{j\in N(i)}\left(\frac{1}{q_i-\xi_{ij}} + \frac{1}{\xi_{ij}+1-q_i-q_j}\right)\right]\delta_{ik} \qquad (15)$$

$$\frac{\partial\theta_i}{\partial\xi_{jk}} = \left[\frac{-1}{q_i-\xi_{ik}} + \frac{1}{\xi_{ik}+1-q_i-q_k}\right]\delta_{ij} + \left[\frac{-1}{q_i-\xi_{ij}} + \frac{1}{\xi_{ij}+1-q_i-q_j}\right]\delta_{ik} \qquad (16)$$

$$\frac{\partial w_{ij}}{\partial q_k} = \left[\frac{1}{q_i-\xi_{ij}} - \frac{1}{\xi_{ij}+1-q_i-q_j}\right]\delta_{ik} + \left[\frac{1}{q_j-\xi_{ij}} - \frac{1}{\xi_{ij}+1-q_i-q_j}\right]\delta_{jk} \qquad (17)$$

$$\frac{\partial w_{ij}}{\partial\xi_{kl}} = \left[\frac{1}{\xi_{ij}} + \frac{1}{\xi_{ij}+1-q_i-q_j} + \frac{1}{q_i-\xi_{ij}} + \frac{1}{q_j-\xi_{ij}}\right]\delta_{ik}\,\delta_{jl} \qquad (18)$$

Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant No. 0447903.

References
[1] M.J. Beal and Z. Ghahramani. The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. In Bayesian Statistics, pages 453–464. Oxford University Press, 2003.
[2] P. Green and S. Richardson. Hidden Markov models and disease mapping. Journal of the American Statistical Association, 97(460):1055–1070, 2002.
[3] D. Heckerman. A tutorial on learning with Bayesian networks. Pages 301–354, 1999.
[4] A. McCallum, D. Freitag, and F. Pereira. Maximum entropy Markov models for information extraction and segmentation. In Int'l Conf. on Machine Learning, pages 591–598, San Francisco, 2000.
[5] M.J. Wainwright, T.S. Jaakkola, and A.S. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation via pseudo-moment matching. In AISTATS, 2003.
[6] J. Møller, A. Pettitt, K. Berthelsen, and R. Reeves. An efficient Markov chain Monte Carlo method for distributions with intractable normalisation constants. Biometrika, 93, 2006. To appear.
[7] I. Murray and Z. Ghahramani. Bayesian learning in undirected graphical models: approximate MCMC algorithms. In Proceedings of the 20th Annual Conference on Uncertainty in Artificial Intelligence (UAI-04), San Francisco, CA, 2004.
[8] I. Murray, Z. Ghahramani, and D.J.C. MacKay. MCMC for doubly-intractable distributions. In Proceedings of the 22nd Annual Conference on Uncertainty in Artificial Intelligence (UAI-06), Pittsburgh, PA, 2006.
[9] R.M.
Neal. Annealed importance sampling. Statistics and Computing, 11:125–139, 2001.
[10] S. Parise and M. Welling. Learning in Markov random fields: An empirical study. In Proc. of the Joint Statistical Meeting, JSM2005, 2005.
[11] Y. Qi, M. Szummer, and T.P. Minka. Bayesian conditional random fields. In Artificial Intelligence and Statistics, 2005.
[12] K. Tanaka. Probabilistic inference by means of cluster variation method and linear response theory. IEICE Transactions in Information and Systems, E86-D(7):1228–1242, 2003.
[13] M. Welling and S. Parise. Bayesian random fields: The Bethe-Laplace approximation. In UAI, 2006.
[14] M. Welling and Y.W. Teh. Approximate inference in Boltzmann machines. Artificial Intelligence, 143:19–50, 2003.
[15] M. Welling and Y.W. Teh. Linear response algorithms for approximate inference in graphical models. Neural Computation, 16(1):197–221, 2004.
[16] J.S. Yedidia, W. Freeman, and Y. Weiss. Constructing free energy approximations and generalized belief propagation algorithms. Technical Report TR-2002-35, MERL, 2002.
2006
Kernel Maximum Entropy Data Transformation and an Enhanced Spectral Clustering Algorithm

Robert Jenssen1∗, Torbjørn Eltoft1, Mark Girolami2 and Deniz Erdogmus3
1 Department of Physics and Technology, University of Tromsø, Norway
2 Department of Computing Science, University of Glasgow, Scotland
3 Department of Computer Science and Engineering, Oregon Health and Science University, USA

Abstract

We propose a new kernel-based data transformation technique. It is founded on the principle of maximum entropy (MaxEnt) preservation, hence named kernel MaxEnt. The key measure is Renyi's entropy estimated via Parzen windowing. We show that kernel MaxEnt is based on eigenvectors, and is in that sense similar to kernel PCA, but may produce strikingly different transformed data sets. An enhanced spectral clustering algorithm is proposed, by replacing kernel PCA by kernel MaxEnt as an intermediate step. This has a major impact on performance.

1 Introduction

Data transformation is of fundamental importance in machine learning, and may greatly improve and simplify tasks such as clustering. Some of the most well-known approaches to data transformation are based on eigenvectors of certain matrices. Traditional techniques include principal component analysis (PCA) and classical multidimensional scaling. These are linear methods. Recent advanced non-linear techniques include locally linear embedding [1] and isometric mapping [2]. Of special interest to this paper is kernel PCA [3], a member of the kernel-based methods [4]. Recently, it has been shown that there is a close connection between the kernel methods and information theoretic learning [5, 6, 7, 8]. We propose a new kernel-based data transformation technique based on the idea of maximum entropy preservation. The new method, named kernel MaxEnt, is based on Renyi's quadratic entropy estimated via Parzen windowing. The data transformation is obtained using eigenvectors of the data affinity matrix.
These eigenvectors are in general not the same as those used in kernel PCA. We show that kernel MaxEnt may produce strikingly different transformed data sets than kernel PCA. We propose an enhanced spectral clustering algorithm, by replacing kernel PCA by kernel MaxEnt as an intermediate step. This seemingly minor adjustment has a huge impact on the performance of the algorithm. This paper is organized as follows. In section 2, we briefly review kernel PCA. Section 3 is devoted to the kernel MaxEnt method. Some illustrations are given in section 4. The enhanced spectral clustering is discussed in section 5. Finally, we conclude the paper in section 6.

(∗ Corresponding author. Phone: (+47) 776 46493. Email: robertj@phys.uit.no.)

2 Kernel PCA

PCA is a linear data transformation technique based on the eigenvalues and eigenvectors of the $(d \times d)$ data correlation matrix, where $d$ is the data dimensionality. A dimensionality reduction from $d$ to $l < d$ is obtained by projecting a data point onto a subspace spanned by the eigenvectors (principal axes) corresponding to the $l$ largest eigenvalues. It is well known that this data transformation preserves the maximum amount of variance in the $l$-dimensional data compared to the original $d$-dimensional data. Schölkopf et al. [3] proposed a non-linear extension, by performing PCA implicitly in a kernel feature space which is non-linearly related to the input space via the mapping $x_i \to \Phi(x_i)$, $i = 1, \dots, N$. Using the kernel trick to compute inner products, $k(x_i, x_j) = \langle \Phi(x_i), \Phi(x_j)\rangle$, it was shown that the eigenvalue problem in terms of the feature space correlation matrix reduces to an eigenvalue problem in terms of the kernel matrix $K_x$, where element $(i, j)$ of $K_x$ equals $k(x_i, x_j)$, $i, j = 1, \dots, N$. This matrix can be eigendecomposed as $K_x = E D E^T$, where $D$ is a diagonal matrix storing the eigenvalues in descending order, and $E$ is a matrix with the eigenvectors as columns.
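As a minimal sketch of this decomposition (the toy data and bandwidth are our assumptions, and the centering step of [3] is omitted, as in this paper):

```python
# Eigendecompose a Gaussian kernel matrix as K_x = E D E^T with the
# eigenvalues in descending order, then form the kernel PCA projection
# onto the l largest principal axes.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))                      # N = 20 points in R^2
sigma = 1.0

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma ** 2))                # element (i, j) = k(x_i, x_j)

w, V = np.linalg.eigh(K)                          # ascending eigenvalues
order = np.argsort(w)[::-1]
D, E = w[order], V[:, order]                      # K = E diag(D) E^T

l = 2                                             # keep the l largest eigenvalues
Phi_pca = np.sqrt(D[:l])[:, None] * E[:, :l].T    # an (l x N) transformed data set
```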
Let $\Phi_{pca}$ be a matrix where each column corresponds to the PCA projection of the data points $\Phi(x_i)$, $i = 1, \dots, N$, onto the subspace spanned by the $l$ largest kernel space principal axes. Then $\Phi_{pca} = D_l^{1/2} E_l^T$, where the $(l \times l)$ matrix $D_l$ stores the $l$ largest eigenvalues, and the $(N \times l)$ matrix $E_l$ stores the corresponding eigenvectors. This is the kernel PCA transformed data set.¹ Kernel PCA thus preserves variance in terms of the kernel induced feature space. However, kernel PCA is not easily interpreted in terms of the input space data set. How does variance preservation in the kernel feature space correspond to an operation on the input space data set? To the best of our knowledge, there are no such intuitive interpretations of kernel PCA. In the next section, we introduce kernel MaxEnt, which we show is related to kernel PCA. However, kernel MaxEnt may be interpreted in terms of the input space, and will in general perform a different projection in the kernel space.

3 Kernel MaxEnt

The Renyi quadratic entropy is given by $H_2(x) = -\log \int f^2(x)\,dx$ [9], where $f(x)$ is the density associated with the random variable $X$. A $d$-dimensional data set $x_i$, $i = 1, \dots, N$, generated from $f(x)$, is assumed available. A non-parametric estimator for $H_2(x)$ is obtained by replacing the actual pdf by its Parzen window estimator, given by [10]

$$\hat f(x) = \frac{1}{N}\sum_{i=1}^N W_\sigma(x, x_i), \qquad W_\sigma(x, x_i) = \frac{1}{(2\pi\sigma^2)^{d/2}}\exp\!\left(-\frac{\|x - x_i\|^2}{2\sigma^2}\right). \qquad (1)$$

The Parzen window need not be Gaussian, but it must be a density itself. The following derivation assumes a Gaussian window; non-Gaussian windows are easily incorporated. Hence, we obtain

$$\hat H_2(x) = -\log \int \hat f^2(x)\,dx = -\log \frac{1}{N^2}\sum_{i=1}^N\sum_{j=1}^N \int W_\sigma(x, x_i)\,W_\sigma(x, x_j)\,dx = -\log \frac{1}{N^2}\sum_{i=1}^N\sum_{j=1}^N W_{\sqrt{2}\sigma}(x_i, x_j), \qquad (2)$$

where in the last step the convolution theorem for Gaussian functions has been employed. For notational simplicity, we denote $W_{\sqrt{2}\sigma}(x_i, x_j)$ as $k(x_i, x_j)$.
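A quick numerical sketch of Eqs. (1)–(2) follows; the 1-d toy sample and the window size are our assumptions. The double sum over the width-$\sqrt{2}\sigma$ Gaussian is exactly the quantity inside the logarithm.

```python
# Parzen-based estimate of the Renyi quadratic entropy, eq. (2):
# H2_hat = -log( (1/N^2) sum_ij W_{sqrt(2) sigma}(x_i, x_j) ).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)            # N = 50 scalar samples (d = 1)
sigma = 0.5

s2 = 2 * sigma ** 2                # variance of the width-sqrt(2)*sigma window
diff2 = (x[:, None] - x[None, :]) ** 2
K = np.exp(-diff2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)   # k(x_i, x_j)

N = len(x)
V = K.sum() / N ** 2               # the information potential V(x)
H2 = -np.log(V)                    # the entropy estimate
```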
Note that since $W_{\sqrt{2}\sigma}(\cdot,\cdot)$ is a Gaussian function, it is also a Mercer kernel, and so is $k(\cdot,\cdot)$. In the following, we construct the kernel matrix $K_x$ such that element $(i, j)$ of $K_x$ equals $k(x_i, x_j)$, $i, j = 1, \dots, N$. It is easily shown that the Renyi quadratic entropy may be expressed compactly in terms of the kernel matrix as $\hat H_2(x) = -\log \frac{1}{N^2}\mathbf{1}^T K_x \mathbf{1}$, where $\mathbf{1}$ is an $(N \times 1)$ ones vector. Since the logarithm is a monotonic function, we will in the remainder of this paper focus on the quantity $V(x) = \frac{1}{N^2}\mathbf{1}^T K_x \mathbf{1}$. It is thus clear that all the information regarding the Renyi entropy resides in the kernel matrix $K_x$. Hence, the kernel matrix is the input space quantity of interest in this paper. A well-known input space data transformation principle is founded on the idea of maximum entropy (MaxEnt), usually defined in terms of minimum model complexity. In this paper, we define MaxEnt differently, as a mapping $X \to Y$ such that the entropy associated with $Y$ is maximally similar to the entropy of $X$. Since we are concerned with Renyi's entropy, it is therefore clear that such a data mapping results in a $V(y) = \frac{1}{N^2}\mathbf{1}^T K_y \mathbf{1}$, in terms of the $Y$ data set, which should be as close as possible to $V(x) = \frac{1}{N^2}\mathbf{1}^T K_x \mathbf{1}$. This means that the kernel matrix $K_y$ must be maximally similar to $K_x$ in some sense. Since our input space quantity of concern is the kernel matrix, we are only implicitly concerned with the $Y$ data set (we do not actually want to obtain $Y$, nor is its dimensionality of interest). The kernel matrix can be decomposed as $K_x = E D E^T$. The kernel matrix is at the same time an inner-product matrix in the Mercer kernel induced feature space. Let $\Phi_x$ be a matrix such that each column represents an approximation to the corresponding kernel feature space data point in the set $\Phi(x_1), \dots, \Phi(x_N)$.

(¹ In [3] the kernel feature space data was assumed centered, obtained by a centering operation on the kernel matrix. We do not assume centered data here.)
An approximation which preserves inner products is given by $\Phi_x = D^{1/2} E^T$, since then $K_x = \Phi_x^T \Phi_x = E D E^T$. Note that $\Phi_x = D^{1/2} E^T$ is the projection onto all the principal axes in the Mercer kernel feature space, hence defining an $N$-dimensional data set. We now describe a dimensionality reduction in the Mercer kernel space, obtaining the $k$-dimensional $\Phi_y$ from $\Phi_x$, yielding $K_y = \Phi_y^T \Phi_y$ such that $V(y) \approx V(x)$. Notice that we may rewrite $V(x)$ as follows [8]:

$$V(x) = \frac{1}{N^2}\sum_{i=1}^N \lambda_i\,(\mathbf{1}^T e_i)^2 = \frac{1}{N^2}\sum_{i=1}^N \lambda_i\,\gamma_i^2, \qquad (3)$$

where $e_i$ is the eigenvector corresponding to the $i$'th column of $K_x$, and $\mathbf{1}^T e_i = \gamma_i$. We also assume that the products $\lambda_i\gamma_i^2$ have been sorted in decreasing order, such that $\lambda_1\gamma_1^2 \ge \dots \ge \lambda_N\gamma_N^2$. If we are to approximate $V(x)$ using only $k$ terms (eigenvalues/eigenvectors) of the sum in Eq. (3), we must use the first $k$ terms in order to achieve minimum approximation error. This corresponds to using the $k$ largest $\lambda_i\gamma_i^2$. Let us define the data set $\Phi_y = D_k^{1/2} E_k^T$, using the $k$ eigenvalues and eigenvectors of $K_x$ corresponding to the $k$ largest products $\lambda_i\gamma_i^2$. Hence, $K_y = \Phi_y^T\Phi_y = E_k D_k^{1/2} D_k^{1/2} E_k^T = E_k D_k E_k^T$, and

$$V(y) = \frac{1}{N^2}\sum_{i=1}^k \lambda_i\,\gamma_i^2 = \frac{1}{N^2}\mathbf{1}^T K_y \mathbf{1}, \qquad (4)$$

the best approximation to the entropy estimate $V(x)$ using $k$ eigenvalues and eigenvectors. We thus refer to the mapping $\Phi_y = D_k^{1/2} E_k^T$ as a maximum entropy data transformation in a Mercer kernel feature space. Note that this is not the same as the PCA dimensionality reduction in the Mercer kernel feature space, which is defined as $\Phi_{pca} = D_l^{1/2} E_l^T$, using the eigenvalues and eigenvectors corresponding to the $l$ largest eigenvalues of $K_x$. In terms of the eigenvectors of the kernel feature space correlation matrix, we project $\Phi(x_i)$ onto a subspace spanned by different eigenvectors, which is possibly not the most variance preserving (remember that the variance in the kernel feature space data set is given by the sum of the largest eigenvalues).
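The selection rule can be sketched in a few lines (the toy data are an assumption): rank the eigenpairs by the entropy terms $\lambda_i\gamma_i^2$ rather than by $\lambda_i$ alone, and keep the $k$ largest.

```python
# Kernel MaxEnt selection: keep the k eigenpairs of K_x with the largest
# lambda_i * gamma_i^2, gamma_i = 1^T e_i, so that V(y) best approximates
# V(x) = (1/N^2) 1^T K_x 1.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)                       # Gaussian kernel matrix

w, E = np.linalg.eigh(K)
gamma = E.sum(axis=0)                       # gamma_i = 1^T e_i
terms = w * gamma ** 2                      # entropy terms lambda_i gamma_i^2

k = 3
idx = np.argsort(terms)[::-1][:k]           # k largest entropy terms
Phi_y = np.sqrt(np.abs(w[idx]))[:, None] * E[:, idx].T   # D_k^{1/2} E_k^T
K_y = E[:, idx] @ np.diag(w[idx]) @ E[:, idx].T

N = len(X)
Vx = K.sum() / N ** 2                       # V(x)
Vy = K_y.sum() / N ** 2                     # best k-term approximation of V(x)
```

By construction $V(y)$ is at least as close to $V(x)$ as the value obtained by keeping the $k$ largest eigenvalues, which is the kernel PCA choice.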
The kernel MaxEnt procedure, as described above, is summarized in Table 1. It is important to realize that kernel MaxEnt outputs two quantities, which may be used for further data analysis. The kernel space output quantity is the transformed data set $\Phi_y = D_k^{1/2} E_k^T$. The input space output quantity is the kernel matrix $K_y = E_k D_k E_k^T$, which is an approximation to the original kernel matrix $K_x$.

    Input Space                        Kernel Space
    K_x = E D E^T           →          Φ_x = D^{1/2} E^T
                                              ↓
    K_y = E_k D_k E_k^T     ←          Φ_y = D_k^{1/2} E_k^T

Table 1. Flow of the kernel MaxEnt procedure. There are two possible outputs: the input space kernel matrix $K_y$, and the kernel space data set $\Phi_y$.

3.1 Interpretation in Terms of Cost Function Minimization

Kernel MaxEnt produces a new kernel matrix $K_y = E_k D_k E_k^T$ such that the sum of the elements of $K_y$ is maximally equal to the sum of the elements of $K_x$. Hence, kernel MaxEnt picks eigenvectors and eigenvalues in order to minimize the cost function $\mathbf{1}^T (K_x - K_y)\,\mathbf{1}$. On the other hand, it is well known that the kernel PCA matrix $K_{pca} = E_l D_l E_l^T$, based on the $l$ largest eigenvalues, minimizes the Frobenius norm of $(K_x - K_{pca})$, that is, $\mathbf{1}^T (K_x - K_{pca})^{.2}\,\mathbf{1}$ (where $A^{.2}$ denotes elementwise squaring of the matrix $A$).

3.2 Kernel MaxEnt Eigenvectors Reveal Cluster Structure

Under "ideal" circumstances, kernel MaxEnt and kernel PCA yield the same result, as shown in the following. Assume that the data consists of $C = 2$ different maximally compact subsets, such that $k(x_i, x_j) = 1$ for $x_i$ and $x_j$ in the same subset, and $k(x_i, x_j) = 0$ for $x_i$ and $x_j$ in different subsets (point clusters). Assume that subset one consists of $N_1$ data points, and subset two consists of $N_2$ data points. Hence, $N = N_1 + N_2$, and we assume $N_1 \ge N_2$. Then

$$K = \begin{bmatrix} \mathbf{1}_{N_1\times N_1} & \mathbf{0}_{N_1\times N_2} \\ \mathbf{0}_{N_2\times N_1} & \mathbf{1}_{N_2\times N_2} \end{bmatrix}, \qquad E = \begin{bmatrix} \frac{1}{\sqrt{N_1}}\mathbf{1}_{N_1} & \mathbf{0}_{N_1} \\ \mathbf{0}_{N_2} & \frac{1}{\sqrt{N_2}}\mathbf{1}_{N_2} \end{bmatrix}, \qquad (5)$$

where $\mathbf{1}_{M\times M}$ ($\mathbf{0}_{M\times M}$) is the $(M\times M)$ all-ones (zero) matrix and $D = \mathrm{diag}(N_1, N_2)$.
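This "ideal" block case is easy to verify numerically (the cluster sizes are toy assumptions): the only non-zero eigenvalues are $N_1$ and $N_2$, and the corresponding eigenvectors are the normalized cluster indicators.

```python
# Block kernel matrix for two "point clusters", as in eq. (5):
# eigenvalues N1 and N2, eigenvectors equal to the normalized
# cluster membership indicators.
import numpy as np

N1, N2 = 5, 3
K = np.zeros((N1 + N2, N1 + N2))
K[:N1, :N1] = 1.0                 # k(x_i, x_j) = 1 within cluster one
K[N1:, N1:] = 1.0                 # k(x_i, x_j) = 1 within cluster two

w, E = np.linalg.eigh(K)
order = np.argsort(w)[::-1]
w, E = w[order], E[:, order]      # descending eigenvalues
```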
Hence, a data point $x_i$ in subgroup one will be represented by $x_i \to [1\ 0]^T$, and a data point $x_j$ in subgroup two will be represented by $x_j \to [0\ 1]^T$, using both $\Phi_y$ and $\Phi_{pca}$ (see also [11] for a related analysis). Thus, kernel MaxEnt and kernel PCA yield the same data mapping, where each subgroup is mapped into mutually orthogonal points in the kernel space (the clusters were points also in the input space, but not necessarily orthogonal). Hence, in the "ideal" case, the clusters in the transformed data set are spread by 90 degree angles. Also, the eigenvectors carry all necessary information about the cluster structure (cluster memberships can be assigned by a proper thresholding). This kind of "ideal" analysis has been used as a justification for the kernel PCA mapping, where the mapping is based on the $C$ largest eigenvalues/eigenvectors. Such a situation will correspond to maximally concentrated eigenvalues of the kernel matrix. In practice, however, there will be more than $C$ non-zero eigenvalues, not necessarily concentrated, and corresponding eigenvectors, because there will be no such "ideal" situation. Shawe-Taylor and Cristianini [4] note that kernel PCA can only detect stable patterns if the eigenvalues are concentrated. In practice, the first $C$ eigenvectors may not necessarily be those which carry the most information about the clustering structure of the data set. However, kernel MaxEnt will seek to pick those eigenvectors with the blockwise structure corresponding to cluster groupings, because this will make the sum of the elements in $K_y$ as close as possible to the sum of the elements of $K_x$. Some illustrations of this property follow in the next section.

3.3 Parzen Window Size Selection

The Renyi entropy estimate is directly connected to Parzen windowing. In theory, therefore, an appropriate window (or kernel) size corresponds to an appropriate density estimate. Parzen window size selection has been thoroughly studied in statistics [12].
Many reliable data-driven methods exist, especially for data sets of low to moderate dimensionality. Silverman's rule [12] is one of the simplest, given by

$$\hat\sigma = \sigma_X \left[\frac{4}{(2d+1)N}\right]^{\frac{1}{d+4}},$$

where $\sigma_X^2 = d^{-1}\sum_i \Sigma_{X_{ii}}$, and the $\Sigma_{X_{ii}}$ are the diagonal elements of the sample covariance matrix. Unless otherwise stated, the window size is determined using this rule.

4 Illustrations

Fig. 1 (a) shows a ring-shaped data set consisting of C = 3 clusters (marked with different symbols for clarity). The vertical lines in (b) show the 10 largest eigenvalues (normalized). The largest eigenvalue is more than twice as large as the second largest. However, the values of the remaining eigenvalues are not significantly different. The bars in (b) show the entropy terms $\lambda_i\gamma_i^2$ (normalized) corresponding to these largest eigenvalues. Note that the entropy terms corresponding to the first, fourth and seventh eigenvalues are significantly larger than the rest. This means that kernel MaxEnt is based on the first, fourth and seventh eigenvalue/eigenvector pairs (yielding a 3-dimensional transformed data set). In contrast, kernel PCA is based on the eigenvalue/eigenvector pairs corresponding to the three largest eigenvalues. In (c) the kernel MaxEnt data transformation is shown. Note that the clusters are located along different lines radially from the origin (illustrated by the lines in the figure). These lines are almost orthogonal to each other, hence approximating what would be expected in the "ideal" case. The kernel PCA data transformation is shown in (d). This data set is significantly different. In fact, the mean vectors of the clusters in the kernel PCA representation are not spread angularly. In (e), the first eight eigenvectors are shown. The original data set is ordered, such that the first 63 elements correspond to the innermost ring, the next 126 elements correspond to the ring in the middle, and the final 126 elements correspond to the outermost ring.
Observe how eigenvectors one, four and seven are those which carry information about the cluster structure, with their blockwise appearance. The kernel matrix $K_x$ is shown in (f). Ideally, this should be a blockwise matrix; it is not. In (g), the kernel MaxEnt approximation $K_y$ to the original kernel matrix is shown, obtained from eigenvectors one, four and seven. Note the blockwise appearance. In contrast, (h) shows the corresponding $K_{pca}$. The same blockwise structure cannot be observed. Fig. 2 (a) shows a ring-shaped data set consisting of two clusters. In (b) and (c) the kernel MaxEnt (eigenvalues/eigenvectors one and five) and kernel PCA transformations are shown, respectively. Again, kernel MaxEnt produces a data set where the clusters are located along almost orthogonal lines, in contrast to kernel PCA. The same phenomenon is observed for the data set shown in (d), with the kernel MaxEnt (eigenvalues/eigenvectors one and four) and kernel PCA transformations shown in (e) and (f), respectively. In addition, (g) and (h) show the kernel MaxEnt (eigenvalues/eigenvectors one, two and five) and kernel PCA transformations of the 16-dimensional pen-based handwritten digit recognition data set (three clusters, digits 0, 1 and 2), extracted from the UCI repository. Again, similar comments can be made. These illustrations show that kernel MaxEnt produces a different transformed data set than kernel PCA. Also, it produces a kernel matrix $K_y$ having a blockwise appearance. Both the transformed data $\Phi_y$ and the new kernel matrix can be utilized for further data analysis. In the following, we focus on $\Phi_y$.

5 An Enhanced Spectral Clustering Algorithm

A recent spectral clustering algorithm [7] is based on the Cauchy-Schwarz (CS) pdf divergence measure, which is closely connected to the Renyi entropy. Let $\hat f_1(x)$ and $\hat f_2(x)$ be Parzen window estimators of the densities corresponding to two clusters.
Then, an estimator for the CS measure can be expressed as [6]

$$\hat D(f_1, f_2) = \frac{\int \hat f_1(x)\,\hat f_2(x)\,dx}{\sqrt{\int \hat f_1^2(x)\,dx \int \hat f_2^2(x)\,dx}} = \cos\angle(m_1, m_2), \qquad (6)$$

where $m_1$ and $m_2$ are the kernel feature space mean vectors of the data points corresponding to the two clusters. Note that $\hat D(f_1, f_2) \in [0, 1]$, reaching its maximum value if $m_1 = m_2$ ($\hat f_1(x) = \hat f_2(x)$), and its minimum value if the two vectors (densities) are orthogonal. The measure can easily be extended to more than two pdfs. The clustering is based on computing the cosine of the angle between a data point and the mean vector $m_i$ of each cluster $\omega_i$, $i = 1, \dots, C$, and then assigning the data point to the cluster corresponding to the maximum value. This procedure minimizes the CS measure as defined above. Kernel PCA was used for representing the data in the kernel feature space. As an illustration of the utility of kernel MaxEnt, we here replace kernel PCA by kernel MaxEnt. This adjustment has a major impact on the performance. The algorithm thus has the following steps:

1) Use some data-driven method from statistics to determine the Parzen window size.
2) Compute the kernel matrix $K_x$.
3) Obtain a $C$-dimensional kernel feature space representation using kernel MaxEnt.
4) Initialize the mean vectors.
5) For all data points: $x_t \to \omega_i : \max_i \cos\angle(\Phi(x_t), m_i)$.
6) Update the mean vectors.
7) Repeat steps 5–6 until convergence.

For further details (such as mean vector initialization) we refer to [7]. Fig. 3 (a) shows the clustering performance in terms of the percentage of correct labeling for the data set shown in Fig. 2 (d).
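The full pipeline can be sketched as follows. The toy data, the mean-vector initialization (seeding from two chosen points), and the fixed iteration count in place of a convergence test are our simplifying assumptions; the paper defers those details to [7].

```python
# Steps 1-7 above: Silverman's rule for the window size, a Gaussian kernel
# of width sqrt(2)*sigma (normalization constant dropped; it rescales all
# eigenvalues equally and leaves the ranking unchanged), kernel MaxEnt for
# a C-dimensional representation, then iterated cosine-angle assignment.
import numpy as np

def silverman_width(X):
    N, d = X.shape
    sigma_X = np.sqrt(np.mean(np.diag(np.atleast_2d(np.cov(X, rowvar=False)))))
    return sigma_X * (4.0 / ((2 * d + 1) * N)) ** (1.0 / (d + 4))

def maxent_spectral_cluster(X, C, init_idx, iters=10):
    sigma = silverman_width(X)                            # step 1
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (4 * sigma ** 2))                    # step 2
    w, E = np.linalg.eigh(K)
    gamma = E.sum(axis=0)
    idx = np.argsort(w * gamma ** 2)[::-1][:C]            # step 3: kernel MaxEnt
    Phi = np.sqrt(np.abs(w[idx]))[:, None] * E[:, idx].T
    Phi = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
    M = Phi[:, init_idx]                                  # step 4: seed the means
    labels = np.zeros(Phi.shape[1], dtype=int)
    for _ in range(iters):
        labels = (Phi.T @ M).argmax(axis=1)               # step 5: max cosine angle
        for c in range(C):                                # step 6: update means
            members = Phi[:, labels == c]
            if members.size:
                M[:, c] = members.mean(axis=1)
    return labels                                         # step 7: fixed iterations
```

On two well-separated blobs this recovers the grouping, since each blob's points collapse onto nearly orthogonal directions in the MaxEnt representation.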
There are three curves: our spectral clustering algorithm using kernel MaxEnt (marked by the circle symbol), kernel PCA (star symbol), and, in addition, the state-of-the-art Ng et al. method (NG) [11] using the Laplacian matrix, which we compare against.

Figure 1: Examples of data transformations using kernel MaxEnt and kernel PCA.

The clustering is performed over a range of kernel sizes. The vertical line indicates the "optimal" kernel size obtained using Silverman's rule. Over the whole range, kernel MaxEnt performs as well as NG, and better than kernel PCA for small kernel sizes. Fig. 3 (b) shows a similar result for the data set shown in Fig. 1 (a). Kernel MaxEnt has the best performance over most of the kernel range. Fig. 3 (c) shows the mean result for the benchmark thyroid data set [13]. On this data set, kernel MaxEnt performs considerably better than the two other methods, over a wide range of kernel sizes. These preliminary experiments show the potential benefits of kernel MaxEnt in data analysis, especially when the kernel space cost function is based on an angular measure. Using kernel MaxEnt makes the algorithm competitive with spectral clustering using the Laplacian matrix. We note that kernel MaxEnt in theory requires the full eigendecomposition, thus making it more computationally complex than clustering based on only the C largest eigenvectors.
Figure 2: Examples of data transformations using kernel MaxEnt and kernel PCA.

6 Conclusions

In this paper, we have introduced a new data transformation technique, named kernel MaxEnt, which has a clear theoretical foundation based on the concept of maximum entropy preservation. The new method is similar in structure to kernel PCA, but may produce totally different transformed data sets. We have shown that kernel MaxEnt significantly enhances a recent spectral clustering algorithm. Kernel MaxEnt also produces a new kernel matrix, which may be useful for further data analysis. Kernel MaxEnt requires the kernel to be a valid Parzen window (i.e. a density). Kernel PCA requires the kernel to be a Mercer kernel (positive semidefinite), hence not necessarily a density. In that sense, kernel PCA may use a broader class of kernels. On the other hand, kernel MaxEnt may use Parzen windows which are not Mercer kernels (indefinite), such as the Epanechnikov kernel. Kernel MaxEnt based on indefinite kernels will be studied in future work.

Acknowledgements

RJ is supported by NFR grant 171125/V30 and MG is supported by EPSRC grant EP/C010620/1.

Figure 3: Clustering results (performance % versus kernel size σ for KPCA, ME, and NG).

References
[1] S. Roweis and L. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding," Science, vol. 290, pp. 2323–2326, 2000.
[2] J. Tenenbaum, V. de Silva, and J. C.
Langford, "A Global Geometric Framework for Nonlinear Dimensionality Reduction," Science, vol. 290, pp. 2319–2323, 2000.
[3] B. Schölkopf, A. J. Smola, and K. R. Müller, "Nonlinear Component Analysis as a Kernel Eigenvalue Problem," Neural Computation, vol. 10, pp. 1299–1319, 1998.
[4] J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis, Cambridge University Press, 2004.
[5] R. Jenssen, D. Erdogmus, J. C. Principe, and T. Eltoft, "The Laplacian PDF Distance: A Cost Function for Clustering in a Kernel Feature Space," in Advances in Neural Information Processing Systems 17, MIT Press, Cambridge, 2005, pp. 625–632.
[6] R. Jenssen, D. Erdogmus, J. C. Principe, and T. Eltoft, "Some Equivalences between Kernel Methods and Information Theoretic Methods," Journal of VLSI Signal Processing, to appear, 2006.
[7] R. Jenssen, D. Erdogmus, J. C. Principe, and T. Eltoft, "Information Theoretic Angle-Based Spectral Clustering: A Theoretical Analysis and an Algorithm," in Proceedings of the International Joint Conference on Neural Networks, Vancouver, Canada, July 16–21, 2006, pp. 4904–4911.
[8] M. Girolami, "Orthogonal Series Density Estimation and the Kernel Eigenvalue Problem," Neural Computation, vol. 14, no. 3, pp. 669–688, 2002.
[9] A. Renyi, "On Measures of Entropy and Information," Selected Papers of Alfred Renyi, Akademiai Kiado, Budapest, vol. 2, pp. 565–580, 1976.
[10] E. Parzen, "On the Estimation of a Probability Density Function and the Mode," The Annals of Mathematical Statistics, vol. 32, pp. 1065–1076, 1962.
[11] A. Y. Ng, M. Jordan, and Y. Weiss, "On Spectral Clustering: Analysis and an Algorithm," in Advances in Neural Information Processing Systems 14, MIT Press, Cambridge, 2002, pp. 849–856.
[12] B. W. Silverman, Density Estimation for Statistics and Data Analysis, Chapman and Hall, London, 1986.
[13] G. Rätsch, T. Onoda, and K. R. Müller, "Soft Margins for AdaBoost," Machine Learning, vol. 42, pp. 287–320, 2001.
Isotonic Conditional Random Fields and Local Sentiment Flow

Yi Mao, School of Elec. and Computer Engineering, Purdue University, West Lafayette, IN, ymao@ecn.purdue.edu
Guy Lebanon, Department of Statistics, and School of Elec. and Computer Engineering, Purdue University, West Lafayette, IN, lebanon@stat.purdue.edu

Abstract

We examine the problem of predicting local sentiment flow in documents, and its application to several areas of text analysis. Formally, the problem is stated as predicting an ordinal sequence based on a sequence of word sets. In the spirit of isotonic regression, we develop a variant of conditional random fields that is well-suited to handle this problem. Using the Möbius transform, we express the model as a simple convex optimization problem. Experiments demonstrate the model and its applications to sentiment prediction, style analysis, and text summarization.

1 Introduction

The World Wide Web and other textual databases provide a convenient platform for exchanging opinions. Many documents, such as reviews and blogs, are written with the purpose of conveying a particular opinion or sentiment. Other documents may not be written with the purpose of conveying an opinion, but nevertheless they contain one. Opinions, or sentiments, may be considered in several ways, the simplest of which is varying from positive opinion, through neutral, to negative opinion. Most of the research in information retrieval has focused on predicting the topic of a document, or its relevance with respect to a query. Predicting the document's sentiment would allow matching the sentiment, as well as the topic, with the user's interests. It would also assist in document summarization and visualization. Sentiment prediction was first formulated as a binary classification problem to answer questions such as: "What is the review's polarity, positive or negative?" Pang et al.
[1] demonstrated the difficulties in sentiment prediction using solely empirical rules (a subset of adjectives), which motivates the use of statistical learning techniques. The task was then refined to allow multiple sentiment levels, facilitating the use of standard text categorization techniques [2]. However, sentiment prediction is different from traditional text categorization: (1) in contrast to the categorical nature of topics, sentiments are ordinal variables; (2) several contradicting opinions might co-exist, which interact with each other to produce the global document sentiment; (3) context plays a vital role in determining the sentiment. Indeed, sentiment prediction is a much harder task than topic classification tasks such as Reuters or WebKB, and current models achieve lower accuracy. Rather than using a bag-of-words multiclass classifier, we model the sequential flow of sentiment throughout the document using a sequential conditional model. Furthermore, we treat the sentiment labels as ordinal variables by enforcing monotonicity constraints on the model's parameters.

2 Local and Global Sentiments

Previous research on sentiment prediction has generally focused on predicting the sentiment of the entire document. A commonly used application is the task of predicting the number of stars assigned to a movie, based on a review text. Typically, the problem is considered as standard multiclass classification or regression using the bag-of-words representation. In addition to the sentiment of the entire document, which we call global sentiment, we define the concept of local sentiment as the sentiment associated with a particular part of the text. It is reasonable to assume that the global sentiment of a document is a function of the local sentiment, and that estimating the local sentiment is a key step in predicting the global sentiment.
Moreover, the concept of local sentiment is useful in a wide range of text analysis applications, including document summarization and visualization. Formally, we view local sentiment as a function on the words in a document taking values in a finite partially ordered set, or poset, $(O, \le)$. To determine the local sentiment at a particular word, it is necessary to take context into account. For example, due to context, the local sentiment at each of the words in "this is a horrible product" is low (in the sense of $(O, \le)$). Since sentences are natural components for segmenting document semantics, we view local sentiment as a piecewise constant function on sentences. Occasionally we encounter a sentence that violates this rule and conveys opposing sentiments in two different parts. In this situation we break the sentence into two parts and consider them as two sentences. We therefore formalize the problem as predicting a sequence of sentiments $y = (y_1, \dots, y_n)$, $y_i \in O$, based on a sequence of sentences $x = (x_1, \dots, x_n)$. Modeling the local sentiment is challenging from several aspects. The sentence sequence $x$ is discrete-time and high-dimensional categorical valued, and the sentiment sequence $y$ is discrete-time and ordinal valued. Regression models can be applied locally, but they ignore the statistical dependencies across the time domain. Popular sequence models such as HMMs or CRFs, on the other hand, typically assume that $y$ is categorical valued. In this paper we demonstrate the prediction of local sentiment flow using an ordinal version of conditional random fields, and explore the relation between the local and global sentiment.
3 Isotonic Conditional Random Fields

Conditional random fields (CRF) [3] are parametric families of conditional distributions p_θ(y|x) that correspond to undirected graphical models or Markov random fields:

$$p_\theta(y|x) = \frac{p_\theta(y,x)}{p_\theta(x)} = \frac{\prod_{c \in C} \phi_c(x|_c, y|_c)}{Z(\theta, x)} = \frac{\exp\left( \sum_{c \in C} \sum_k \theta_{c,k} f_{c,k}(x|_c, y|_c) \right)}{Z(\theta, x)}, \qquad \theta_{c,k} \in \mathbb{R}, \quad (1)$$

where C is the set of cliques in the graph and x|_c and y|_c are the restrictions of x and y to the variables representing nodes in c ∈ C. It is assumed above that the potentials φ_c are exponential functions of features modulated by decay parameters, φ_c(x|_c, y|_c) = exp(Σ_k θ_{c,k} f_{c,k}(x|_c, y|_c)). CRF have mostly been applied to sequence annotation, where x is a sequence of words and y is a sequence of labels annotating the words, for example part-of-speech tags. The standard graphical structure in this case is a chain structure on y with noisy observations x; in other words, the cliques are C = {{y_{i−1}, y_i}, {y_i, x_i} : i = 1, ..., n} (see Figure 1, left), leading to the model

$$p_\theta(y|x) = \frac{1}{Z(x, \theta)} \exp\left( \sum_i \sum_k \lambda_k f_k(y_{i-1}, y_i) + \sum_i \sum_k \mu_k g_k(y_i, x_i) \right), \qquad \theta = (\lambda, \mu). \quad (2)$$

In sequence annotation a standard choice for the feature functions is f⟨σ,τ⟩(y_{i−1}, y_i) = δ_{y_{i−1},σ} δ_{y_i,τ} and g⟨σ,w⟩(y_i, x_i) = δ_{y_i,σ} δ_{x_i,w} (note that we index the feature functions using pairs rather than k as in (2)). In our case, since the x_i are sentences, we use instead the slightly modified feature functions g⟨σ,w⟩(y_i, x_i) = 1 if y_i = σ and w ∈ x_i, and 0 otherwise. Given a set of iid training samples, the parameters are typically estimated by maximum likelihood or MAP using standard numerical techniques such as conjugate gradient or quasi-Newton methods. Despite the great popularity of CRF in sequence labeling, they are not appropriate for ordinal data such as sentiments. The ordinal relation is ignored in (2), and in the case of limited training data the parameter estimates will possess high variance, resulting in poor predictive power.
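To make the chain model (2) concrete, the unnormalized log-potential of a label sequence can be computed as below (the parameter dictionaries and toy values are invented for the example; exponentiating and dividing by Z(x) would give p(y|x)):

```python
def crf_log_potential(y, x, lam, mu):
    """Unnormalized log-potential of label sequence y for sentence
    sequence x in the chain CRF of Eq. (2).

    lam[(s, t)] : transition weight lambda_<s,t>, fires when y_{i-1}=s, y_i=t
    mu[(s, w)]  : emission weight mu_<s,w>, fires when y_i=s and word w
                  appears in sentence x_i (the modified feature g above)."""
    total = 0.0
    for i in range(1, len(y)):                 # transition features f
        total += lam.get((y[i - 1], y[i]), 0.0)
    for yi, sentence in zip(y, x):             # emission features g
        for w in set(sentence):                # w in x_i fires at most once
            total += mu.get((yi, w), 0.0)
    return total

# Invented toy parameters: "great" pulls toward label 1.
lam = {(0, 0): 0.5, (0, 1): 0.2}
mu = {(1, "great"): 1.0}
score = crf_log_potential([0, 1], [["a", "film"], ["great", "great", "fun"]],
                          lam, mu)            # 0.2 (transition) + 1.0 (emission)
```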
We therefore enforce a set of monotonicity constraints on the parameters that are consistent with the ordinal structure and domain knowledge. The resulting model is a restricted subset of the CRF (2) and, in accordance with isotonic regression [4], is named isotonic CRF. Since ordinal variables express a progression of some sort, it is natural to expect some of the binary features in (2) to correlate more strongly with some ordinal values than with others. In such cases, we should expect the presence of such binary features to increase (or decrease) the conditional probability in a manner consistent with the ordinal relation. Since the parameters µ⟨σ,w⟩ represent the effectiveness of the appearance of w in increasing the probability of σ ∈ O, they are natural candidates for monotonicity constraints. More specifically, for words w ∈ M1 that are identified as strongly associated with positive sentiment, we enforce

$$\sigma \le \sigma' \;\Longrightarrow\; \mu_{\langle\sigma,w\rangle} \le \mu_{\langle\sigma',w\rangle} \qquad \forall w \in M_1. \quad (3)$$

Similarly, for words w ∈ M2 identified as strongly associated with negative sentiment, we enforce

$$\sigma \le \sigma' \;\Longrightarrow\; \mu_{\langle\sigma,w\rangle} \ge \mu_{\langle\sigma',w\rangle} \qquad \forall w \in M_2. \quad (4)$$

The motivation behind the above restriction is immediate for non-conditional Markov random fields $p_\theta(x) = Z^{-1} \exp(\sum_i \theta_i f_i(x))$: the parameters θ_i are intimately tied to the model probabilities through activation of the feature functions f_i. In the case of conditional random fields, things are more complicated due to the dependence of the normalization term on x. The following propositions motivate the above parameter restriction for the case of linear-structure CRF with binary features.

Proposition 1. Let p(y|x) be a linear state-emission chain CRF with binary features f⟨σ,τ⟩, g⟨σ,w⟩ as above, and let x be a sentence sequence for which v ∉ x_j. Then, denoting x′ = (x_1, ..., x_{j−1}, x_j ∪ {v}, x_{j+1}, ..., x_n), we have for all y

$$\frac{p(y|x)}{p(y|x')} = E_{p(y'|x)}\left[ e^{\mu_{\langle y'_j, v\rangle} - \mu_{\langle y_j, v\rangle}} \right].$$

Proof.
$$\frac{p(y|x)}{p(y|x')} = \frac{Z(x')}{Z(x)} \cdot \frac{\exp\left( \sum_i \sum_{\sigma,\tau} \lambda_{\langle\sigma,\tau\rangle} f_{\langle\sigma,\tau\rangle}(y_{i-1}, y_i) + \sum_i \sum_{\sigma,w} \mu_{\langle\sigma,w\rangle} g_{\langle\sigma,w\rangle}(y_i, x_i) \right)}{\exp\left( \sum_i \sum_{\sigma,\tau} \lambda_{\langle\sigma,\tau\rangle} f_{\langle\sigma,\tau\rangle}(y_{i-1}, y_i) + \sum_i \sum_{\sigma,w} \mu_{\langle\sigma,w\rangle} g_{\langle\sigma,w\rangle}(y_i, x'_i) \right)} = \frac{Z(x')}{Z(x)}\, e^{-\mu_{\langle y_j, v\rangle}}$$

$$= \frac{\sum_{y'} \exp\left( \sum_i \sum_{\sigma,\tau} \lambda_{\langle\sigma,\tau\rangle} f_{\langle\sigma,\tau\rangle}(y'_{i-1}, y'_i) + \sum_i \sum_{\sigma,w} \mu_{\langle\sigma,w\rangle} g_{\langle\sigma,w\rangle}(y'_i, x'_i) \right)}{\sum_{y'} \exp\left( \sum_i \sum_{\sigma,\tau} \lambda_{\langle\sigma,\tau\rangle} f_{\langle\sigma,\tau\rangle}(y'_{i-1}, y'_i) + \sum_i \sum_{\sigma,w} \mu_{\langle\sigma,w\rangle} g_{\langle\sigma,w\rangle}(y'_i, x_i) \right)}\, e^{-\mu_{\langle y_j, v\rangle}}$$

$$= \frac{\sum_{r \in O} \alpha_r e^{\mu_{\langle r, v\rangle}}}{\sum_{r \in O} \alpha_r}\, e^{-\mu_{\langle y_j, v\rangle}} = \sum_{r \in O} \frac{\alpha_r}{\sum_{r' \in O} \alpha_{r'}}\, e^{\mu_{\langle r, v\rangle} - \mu_{\langle y_j, v\rangle}} = \sum_{y'} p(y'|x)\, e^{\mu_{\langle y'_j, v\rangle} - \mu_{\langle y_j, v\rangle}},$$

where

$$\alpha_r = \sum_{y' : y'_j = r} \exp\left( \sum_i \sum_{\sigma,\tau} \lambda_{\langle\sigma,\tau\rangle} f_{\langle\sigma,\tau\rangle}(y'_{i-1}, y'_i) + \sum_i \sum_{\sigma,w} \mu_{\langle\sigma,w\rangle} g_{\langle\sigma,w\rangle}(y'_i, x_i) \right).$$

Note that the specific linear CRF structure (Figure 1, left) and the binary features are essential for the above result. Proposition 1 connects the probability ratio p(y|x)/p(y|x′) to the model parameters in a relatively simple manner. Together with Proposition 2 below, it motivates the ordering of {µ⟨r,v⟩ : r ∈ O} determined by the restrictions (3)-(4) in terms of the ordering of probability ratios of transformed sequences.

Proposition 2. Let p(y|x), x, x′ be as in Proposition 1. For all label sequences s, t, we have

$$\mu_{\langle t_j, v\rangle} \ge \mu_{\langle s_j, v\rangle} \;\Longrightarrow\; \frac{p(s|x)}{p(s|x')} \ge \frac{p(t|x)}{p(t|x')}. \quad (5)$$

Proof. Since µ⟨t_j,v⟩ ≥ µ⟨s_j,v⟩, we have $e^{z - \mu_{\langle s_j, v\rangle}} - e^{z - \mu_{\langle t_j, v\rangle}} \ge 0$ for all z, and hence

$$E_{p(y'|x)}\left[ e^{\mu_{\langle y'_j, v\rangle} - \mu_{\langle s_j, v\rangle}} - e^{\mu_{\langle y'_j, v\rangle} - \mu_{\langle t_j, v\rangle}} \right] \ge 0.$$

By Proposition 1 the above expectation equals $\frac{p(s|x)}{p(s|x')} - \frac{p(t|x)}{p(t|x')}$, and Equation (5) follows.

The restriction (3) may thus be interpreted as ensuring that adding a word w ∈ M1 to transform x ↦ x′ will increase the labeling probabilities associated with σ no less than those associated with σ′ whenever σ′ ≤ σ. Similarly, the restriction (4) may be interpreted in the opposite way. If these assumptions are correct, it is clear that they will lead to more accurate parameter estimates and better prediction accuracy. However, even if assumptions (3)-(4) are incorrect, enforcing them may improve prediction by trading off increased bias with lower variance.
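In implementation terms, for a fully ordered O the constraints (3)-(4) are chain inequalities over each word's weights, and they can be enforced through the cumulative-sum re-parameterization developed next in the text. A sketch with our own helper names and invented weight values:

```python
def is_isotonic(weights, increasing=True):
    """Check constraint (3) (increasing=True) or (4) (increasing=False) on
    the list of mu_<sigma,w> values, ordered by sigma (smallest first)."""
    pairs = list(zip(weights, weights[1:]))
    if increasing:
        return all(a <= b for a, b in pairs)
    return all(a >= b for a, b in pairs)

def to_increments(mu):
    """mu*_sigma = mu_sigma - mu_sigma' (predecessor); mu*_min = mu_min."""
    return [mu[0]] + [mu[i] - mu[i - 1] for i in range(1, len(mu))]

def from_increments(mu_star):
    """Inversion for a total order: mu_sigma = sum over tau <= sigma of mu*_tau."""
    out, running = [], 0.0
    for v in mu_star:
        running += v
        out.append(running)
    return out

mu_great = [-0.1, 0.0, 0.2, 0.7, 1.1]   # invented weights for some w in M1
star = to_increments(mu_great)
# Monotone weights correspond exactly to nonnegative increments above min(O),
# which is the simple sign constraint used after re-parameterization.
assert is_isotonic(mu_great) and all(v >= 0 for v in star[1:])
```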
Conceptually, the parameter estimates for isotonic CRF may be found by maximizing the likelihood or posterior subject to the monotonicity constraints (3)-(4). Since such a maximization is relatively difficult in high dimensions, we propose a re-parameterization that leads to a much simpler optimization problem. The re-parameterization is relatively straightforward in the case of a fully ordered set; in the more general case of a partially ordered set we need the mechanism of Möbius inversions on finite partially ordered sets. We introduce a new set of features {g*⟨σ,w⟩ : σ ∈ O} for w ∈ M1 ∪ M2, defined as

$$g^*_{\langle\sigma,w\rangle}(y_i, x_i) = \sum_{\tau : \tau \ge \sigma} g_{\langle\tau,w\rangle}(y_i, x_i), \qquad w \in M_1 \cup M_2,$$

and a new set of corresponding parameters {µ*⟨σ,w⟩ : σ ∈ O}. If (O, ≤) is fully ordered, µ*⟨σ,w⟩ = µ⟨σ,w⟩ − µ⟨σ′,w⟩, where σ′ is the largest element smaller than σ (or µ*⟨σ,w⟩ = µ⟨σ,w⟩ if σ = min(O)). In the more general case, µ*⟨σ,w⟩ is the convolution of µ⟨σ,w⟩ with the Möbius function of the poset (O, ≤) (see [5] for more details). By the Möbius inversion theorem [5], the µ*⟨σ,w⟩ satisfy

$$\mu_{\langle\sigma,w\rangle} = \sum_{\tau : \tau \le \sigma} \mu^*_{\langle\tau,w\rangle}, \qquad w \in M_1 \cup M_2, \quad (6)$$

and $\sum_\tau \mu_{\langle\tau,w\rangle} g_{\langle\tau,w\rangle} = \sum_\tau \mu^*_{\langle\tau,w\rangle} g^*_{\langle\tau,w\rangle}$, leading to the re-parameterization of isotonic CRF

$$p(y|x) = \frac{1}{Z(x)} \exp\Bigg( \sum_i \sum_{\sigma,\tau} \lambda_{\langle\sigma,\tau\rangle} f_{\langle\sigma,\tau\rangle}(y_{i-1}, y_i) + \sum_i \sum_{w \notin M_1 \cup M_2} \sum_\sigma \mu_{\langle\sigma,w\rangle} g_{\langle\sigma,w\rangle}(y_i, x_i) + \sum_i \sum_{w \in M_1 \cup M_2} \sum_\sigma \mu^*_{\langle\sigma,w\rangle} g^*_{\langle\sigma,w\rangle}(y_i, x_i) \Bigg)$$

with µ*⟨σ,w⟩ ≥ 0 for w ∈ M1 and µ*⟨σ,w⟩ ≤ 0 for w ∈ M2, for all σ > min(O). The re-parameterized model has the benefit of simple constraints, and its maximum likelihood estimates can be obtained by a trivial adaptation of conjugate gradient or quasi-Newton methods.

3.1 Author Dependent Models

Thus far, we have ignored the dependency of the labeling model p(y|x) on the author, denoted here by the variable a. We now turn to account for different sentiment-authoring styles by incorporating this variable into the model. The word emissions y_i → x_i in the CRF structure are not expected to vary much across different authors.
The sentiment transitions y_{i−1} → y_i, on the other hand, typically vary across different authors as a consequence of their individual styles. For example, the review of an author who sticks to a list of self-ranked evaluation criteria is prone to strong sentiment variations. In contrast, the review of an author who likes to enumerate pros before getting to cons (or vice versa) is likely to exhibit more local homogeneity in sentiment. Accounting for author-specific sentiment transition style leads to the graphical model in Figure 1, right. The corresponding author-dependent CRF model

$$p(y|x, a) = \frac{1}{Z(x, a)} \exp\Bigg( \sum_{i, a'} \sum_{\sigma,\tau} \big( \lambda_{\langle\sigma,\tau\rangle} + \lambda_{\langle\sigma,\tau,a'\rangle} \big) f_{\langle\sigma,\tau,a'\rangle}(y_{i-1}, y_i, a) + \sum_i \sum_{\sigma,w} \mu_{\langle\sigma,w\rangle} g_{\langle\sigma,w\rangle}(y_i, x_i) \Bigg)$$

uses features f⟨σ,τ,a′⟩(y_{i−1}, y_i, a) = f⟨σ,τ⟩(y_{i−1}, y_i) δ_{a,a′}, together with transition parameters that are author-dependent, λ⟨σ,τ,a⟩, as well as author-independent, λ⟨σ,τ⟩. Setting λ⟨σ,τ,a⟩ = 0 reduces the model to the standard CRF model. The author-independent parameters λ⟨σ,τ⟩ allow parameter sharing across multiple authors in case the training data is too scarce for proper estimation of λ⟨σ,τ,a⟩. For simplicity, the above ideas are described in the context of non-isotonic CRF; however, it is straightforward to combine author-specific models with isotonic restrictions. Experiments demonstrating author-specific isotonic models are described in Section 4.3.

Figure 1: Graphical models corresponding to CRF (left) and author-dependent CRF (right).

3.2 Sentiment Flows as Smooth Curves

The sentence-based definition of sentiment flow is problematic when we want to fit a model (for example, to predict global sentiment) that uses sentiment flows from multiple documents. Different documents have different numbers of sentences, and it is not clear how to compare them or how to build a model from a collection of discrete flows of different lengths.
We therefore convert the sentence-based flow to a smooth length-normalized flow that can meaningfully be related to other flows. We assume from now on that the ordinal set O is realized as a subset of R and that its ordering coincides with the standard ordering on R. To account for different lengths, we consider the sentiment flow as a function h : [0, 1] → O ⊂ R that is piecewise constant on the intervals [0, l), [l, 2l), ..., [(k − 1)l, 1], where k is the number of sentences in the document and l = 1/k. Each interval represents a sentence, and the function value on it is that sentence's sentiment. To create a more robust representation, we smooth out the discontinuous function by convolving it with a smoothing kernel. The resulting sentiment flow is a smooth curve f : [0, 1] → R that can easily be related or compared to the sentiment flows of other documents (see Figure 3 for an example). We can then define natural distances between two flows, for example the L_p distance

$$d_p(f_1, f_2) = \left( \int_0^1 |f_1(r) - f_2(r)|^p \, dr \right)^{1/p} \quad (7)$$

for use in a k-nearest neighbor model relating the local sentiment flow to the global sentiment.

4 Experiments

To examine the ideas proposed in this paper, we implemented isotonic CRF and the normalization and smoothing procedures, and experimented with a small dataset of 249 movie reviews, randomly selected from the Cornell sentence polarity dataset v1.0¹, all written by the same author. The code for isotonic CRF is a modified version of the quasi-Newton implementation in the Mallet toolkit. In order to assess the accuracy and benefit of the local sentiment predictor, we hand-labeled the local sentiments of each of these reviews, assigning each sentence one of the following values in O ⊂ R: 2 (highly praised), 1 (something good), 0 (objective description), −1 (something that needs improvement) and −2 (strong aversion).
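The length normalization and smoothing just described can be sketched as follows. The paper convolves the step function with a truncated, renormalized Gaussian; the sketch below realizes this as a row-normalized Gaussian weighting on a fixed grid, which is our implementation choice, not necessarily the authors':

```python
import numpy as np

def smooth_flow(labels, sigma2=0.2, grid=100):
    """Map per-sentence sentiments to a smooth flow f : [0,1] -> R.
    labels : per-sentence sentiments (the piecewise constant h)
    sigma2 : variance of the Gaussian smoothing kernel"""
    k = len(labels)
    r = (np.arange(grid) + 0.5) / grid                   # grid on [0, 1]
    step = np.array([labels[min(int(t * k), k - 1)] for t in r])
    w = np.exp(-(r[:, None] - r[None, :]) ** 2 / (2 * sigma2))
    w /= w.sum(axis=1, keepdims=True)                    # renormalize the kernel
    return w @ step

def flow_distance(f1, f2, p=1):
    """Discretized L_p distance of Eq. (7) between flows on a shared grid."""
    return float(np.mean(np.abs(f1 - f2) ** p) ** (1.0 / p))

f_pos = smooth_flow([1, 1, 2, 1])    # invented short documents; note that
f_neg = smooth_flow([-1, -2, -1])    # documents of different lengths are fine
```

Because both flows live on the same [0, 1] grid, documents of different lengths become directly comparable, which is the point of the normalization.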
4.1 Sentence Level Prediction

To evaluate the prediction quality of the local sentiment, we compared the performance of naive Bayes, SVM (using the default parameters of SVMlight), CRF and isotonic CRF. Figure 2 displays the testing accuracy and distance when predicting the sentiment of sentences, as a function of the training data size, averaged over 20 cross-validation train-test splits. The dataset presents one particular difficulty: more than 75% of the sentences are labeled objective (0). As a result, the prediction accuracy for objective sentences is over-emphasized. To correct for this, we report test-set performance over a balanced (equal number of sentences per label) sample of labeled sentences. Note that since there are 5 labels, random guessing yields a baseline accuracy of 0.2, and always guessing 0 yields a baseline distance of 1.2.

¹ Available at http://www.cs.cornell.edu/People/pabo/movie-review-data

Figure 2: Local sentiment prediction: balanced test results for naive Bayes, SVM, CRF and isotonic CRF (left: balanced testing accuracy; right: balanced testing distance).

As described in Section 3, for isotonic CRF we obtained 300 words on which to enforce monotonicity constraints: the 150 words with the highest correlation with the sentiment were chosen for positivity constraints, and the 150 words with the lowest correlation were chosen for negativity constraints. Table 1 displays the top 15 words of each list.

Table 1: Lists of 15 words with the largest positive (top) and negative (bottom) correlations.
Positive: great, superb, memorable, enjoyable, mood, perfection, outstanding, performance, enjoyed, certain, considerable, wonderfully, worth, beautifully, delightfully
Negative: too, didnt, just, failed, unnecessary, couldnt, i, no, satire, contrived, wasnt, uninspired, lacked, boring, tended
The results in Figure 2 indicate that, by incorporating the sequential information, the two versions of CRF perform consistently better than SVM and naive Bayes. The advantage of setting the monotonicity constraints in CRF is elucidated by the average absolute distance criterion (Figure 2, right). This criterion is based on the observation that in sentiment prediction, the cost of misprediction is governed by the ordinal relation on the labels rather than by the 0-1 error rate.

4.2 Global Sentiment Prediction

We also evaluated the contribution of local sentiment analysis to predicting the global sentiment of documents. We compared nearest neighbor classifiers for the global sentiment, where the representation varied from bag of words to the smoothed length-normalized local sentiment representation (with and without objective sentences). The smoothing kernel was a bounded Gaussian density (truncated and renormalized) with σ² = 0.2. Figure 3 displays discrete and smoothed local sentiment labels, together with the smoothed sentiment flow predicted by isotonic CRF. Figure 4 and Table 2 display the test-set accuracy of global sentiment prediction as a function of the training set size. The distance in the nearest neighbor classifier was either L1 or L2 for the bag of words representation, or its continuous version (7) for the smoothed sentiment curve representation. The results indicate that the classification performance of the local sentiment representation is better than that of the bag of words representation. In accordance with the conclusion of [6], removing objective sentences (those with sentiment 0) increased the local sentiment analysis performance by 20.7%. We can thus conclude that, for the purpose of global sentiment prediction, the local sentiment flow of the non-objective sentences holds most of the relevant information. Performing local sentiment analysis on non-objective sentences also improves performance because the model estimates possess lower variance.
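The nearest neighbor classification over flows reduces to a few lines. In this sketch, the distance function, the value of k and the majority vote are our illustrative choices; the paper does not prescribe a particular implementation:

```python
def knn_global_sentiment(test_flow, train_flows, train_labels, dist, k=1):
    """Predict the global sentiment of a document from its smoothed local
    sentiment flow by majority vote among the k nearest training flows."""
    order = sorted(range(len(train_flows)),
                   key=lambda i: dist(test_flow, train_flows[i]))
    votes = [train_labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# Toy check with scalar "flows" and absolute difference as the distance;
# in practice the flows are curves and dist is the L_p distance of Eq. (7).
pred = knn_global_sentiment(1.9, [0.0, 1.0, 2.0], ["neg", "mid", "pos"],
                            dist=lambda a, b: abs(a - b))
```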
4.3 Measuring the rate of sentiment change

We examine the rate of sentiment change as a characterization of the author's writing style, using the isotonic author-dependent model of Section 3.1. We assume that the CRF process is a discrete sampling of a corresponding continuous-time Markov jump process. A consequence of this assumption is that the time T the author stays in sentiment σ before leaving is modeled by the exponential distribution

$$p_\sigma(T > t) = e^{-q_\sigma (t - 1)}, \qquad t > 1.$$

Here we assume T > 1, and q_σ is interpreted as the rate of change of the sentiment σ ∈ O: the larger the value, the more likely the author is to switch to other sentiments in the near future.

Figure 3: Sentiment flow and its smoothed curve representation. The blue circles indicate the labeled sentiment of each sentence. The blue solid curve and red dashed curve are smoothed representations of the labeled and predicted sentiment flows. Only non-objective labels are kept in generating the two curves. The numberings correspond to the sentences displayed in Section 4.4.

Figure 4: Accuracy of global sentiment prediction (4-class labeling) as a function of train set size (left: nearest neighbor classifier with L1; right: with L2; the curves compare sentiment flow without objective sentences, sentiment flow with objective sentences, and the vocabulary representation).

To estimate the rate of change q_σ of an author, we need to compute p_σ(T > t) based on the marginal probabilities p(s|a) of sentiment sequences s of length l. The probability p(s|a) may be approximated by

$$p(s|a) = \sum_x p(x|a)\, p(s|x, a) \quad (8)$$
$$\approx \sum_x \tilde p'(x|a) \left( \frac{1}{n - l + 1} \sum_i \frac{\alpha_i(s_1|x, a)\, \prod_{j=i+1}^{i+l-1} M_j(s_{j-i}, s_{j-i+1}|x, a)\, \beta_{i+l-1}(s_l|x, a)}{Z(x, a)} \right)$$
where $\tilde p'$ is the empirical probability function $\tilde p'(x|a) = \frac{1}{|C|} \sum_{x' \in C} \delta_{x, x'}$ for the set C of documents written by author a of length no less than l, and α, M, β are the forward, transition and backward probabilities, analogous to the dynamic programming method in [3]. Using the model p(s|a), we can compute p_σ(T > t) for different authors at integer values of t, which leads to the quantity q_σ associated with each author. However, since (8) is based on an approximation, the calculated values of p_σ(T > t) are noisy, resulting in slightly different values of q_σ for different time points t and cross-validation iterations. A linear regression fit for q_σ, based on the approximated values of p_σ(T > t) for two authors using 10-fold cross-validation, is displayed in Figure 5. The data consisted of the 249 movie reviews from the previous experiments, written by one author, and an additional 201 movie reviews from a second author. Interestingly, the author associated with the red dashed line has a consistently lower q_σ value in all four panels, and is thus considered more "static" and less prone to quick sentiment variations.

Table 2: Accuracy results and relative improvement when training size equals 175.

                                             L1                L2
vocabulary                                   0.3095            0.3068
sentiment flow with objective sentences      0.3189 (+3.0%)    0.3128 (+1.95%)
sentiment flow without objective sentences   0.3736 (+20.7%)   0.3655 (+19.1%)

Figure 5: Linear regression fit for q_σ, σ = 2, 1, −1, −2 (left to right), based on approximated values of p_σ(T > t) for two different authors. X-axis: time t; Y-axis: negative log-probability of T > t. The fitted values annotated in the four panels are approximately (1.8388, 1.3504), (1.6808, 1.1430), (1.2181, 0.76685) and (1.8959, 1.2231).

4.4 Text Summarization

We demonstrate the potential use of sentiment flow for text summarization with a very simple example.
The text below shows the result of summarizing the movie review of Figure 3 by keeping only the sentences associated with the start, the end, the top, and the bottom of the predicted sentiment curve. The number before each sentence corresponds to the circled number in Figure 3.

1. What makes this film mesmerizing, is not the plot, but the virtuoso performance of Lucy Berliner (Ally Sheedy), as a wily photographer, retired from her professional duties for the last ten years and living with a has-been German actress, Greta (Clarkson).
2. The less interesting story line involves the ambitions of an attractive, baby-faced assistant editor at the magazine, Syd (Radha Mitchell), who lives with a boyfriend (Mann) in an emotionally chilling relationship.
3. We just lost interest in the characters, the film began to look like a commercial for a magazine that wouldn't stop and get to the main article.
4. Which left the film only somewhat satisfying; it did create a proper atmosphere for us to view these lost characters, and it did have something to say about how their lives are being emotionally torn apart.
5. It would have been wiser to develop more depth for the main characters and show them to be more than the superficial beings they seemed to be on screen.

Alternative schemes for extracting specific sentences may be used to achieve different effects, depending on the needs of the user. We plan to experiment further in this area by combining local sentiment flow with standard summarization techniques.

5 Discussion

In this paper, we address the prediction and application of the local sentiment flow concept. As existing models are inadequate for a variety of reasons, we introduce the isotonic CRF model, which is well suited to predicting the local sentiment flow. This model achieves better performance than the standard CRF as well as non-sequential models such as SVM.
We also demonstrate the usefulness of the local sentiment representation for global sentiment prediction, style analysis and text summarization.

References
[1] B. Pang, L. Lee, and S. Vaithyanathan. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP-02.
[2] B. Pang and L. Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL-05.
[3] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In International Conference on Machine Learning, 2001.
[4] R. E. Barlow, D. J. Bartholomew, J. M. Bremner, and H. D. Brunk. Statistical Inference under Order Restrictions: The Theory and Application of Isotonic Regression. Wiley, 1972.
[5] R. P. Stanley. Enumerative Combinatorics. Wadsworth & Brooks/Cole Mathematics Series, 1986.
[6] B. Pang and L. Lee. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of ACL-04.
2006
Logistic Regression for Single Trial EEG Classification

Ryota Tomioka∗, Kazuyuki Aihara† — Dept. of Mathematical Informatics, IST, The University of Tokyo, 113-8656 Tokyo, Japan. ryotat@first.fhg.de, aihara@sat.t.u-tokyo.ac.jp
Klaus-Robert Müller∗ — Dept. of Computer Science, Technical University of Berlin, Franklinstr. 28/29, 10587 Berlin, Germany. klaus@first.fhg.de

Abstract

We propose a novel framework for the classification of single trial ElectroEncephaloGraphy (EEG), based on regularized logistic regression. Framed in this robust statistical framework, no prior feature extraction or outlier removal is required. We present two variations of parameterizing the regression function: (a) with a full-rank symmetric matrix coefficient, and (b) as a difference of two rank=1 matrices. In the first case, the problem is convex and the logistic regression is optimal under a generative model. The latter case is shown to be related to the Common Spatial Pattern (CSP) algorithm, a popular technique in Brain-Computer Interfacing. The regression coefficients can also be topographically mapped onto the scalp, similarly to CSP projections, which allows neuro-physiological interpretation. Simulations on 162 BCI datasets demonstrate that classification accuracy and robustness compare favorably against conventional CSP based classifiers.

1 Introduction

The goal of Brain-Computer Interface (BCI) research [1, 2, 3, 4, 5, 6, 7] is to provide a direct control pathway from human intentions, reflected in brain signals, to computers. Such a system will not only provide disabled people with more direct and natural control over a neuroprosthesis or over a computer application (e.g. [2]), but also opens up a further channel of man-machine interaction for healthy people to communicate solely by their intentions. Machine learning approaches to BCI have proven effective by requiring less subject training and by compensating for the high inter-subject variability.
In this field, a number of studies have focused on constructing better low-dimensional representations that combine various features of brain activities [3, 4], because the problem of classifying EEG signals is intrinsically high dimensional. In particular, efforts have been made to reduce the number of electrodes by eliminating electrodes recursively [8], or by decomposition techniques such as ICA, which uses only the marginal distribution, or Common Spatial Patterns (CSP) [9], which additionally takes the labels into account. In practice, a BCI system has often been constructed by combining a feature extraction step and a classification step. Our contribution is a logistic regression classifier that integrates both steps under the roof of a single minimization problem and uses well-controlled regularization. Moreover, the classifier output has a probabilistic interpretation. We study a BCI based on the motor imagination paradigm. Motor imagination can be captured through spatially localized band-power modulation in the µ- (10-15 Hz) or β- (20-30 Hz) band, characterized by the second-order statistics of the signal; the underlying neuro-physiology is well known as Event Related Desynchronization (ERD) [10].

∗ Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany.
† ERATO Aihara Complexity Modeling Project, JST, 153-8505 Tokyo, Japan.

1.1 Problem setting

Let us denote by X ∈ R^{d×T} the EEG signal of a single trial of an imaginary motor movement¹, where d is the number of electrodes and T is the number of sampled time points in a trial. We consider a binary classification problem where each class, e.g. right or left hand imaginary movement, is called the positive (+) or negative (−) class. Let y ∈ {+1, −1} be the class label. Given a set of trials and labels {X_i, y_i}_{i=1}^n, the task is to predict the class label y for an unobserved trial X.
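The per-trial centering and scaling of footnote 1 can be written out directly (a NumPy sketch with illustrative variable names):

```python
import numpy as np

def preprocess_trial(X_original):
    """Center each trial in time and scale it, as in footnote 1:
    X = (1/sqrt(T)) X_original (I_T - (1/T) 1 1^T),
    so that X X^T acts as the (scaled) spatial cross-power matrix of the trial."""
    T = X_original.shape[1]
    centering = np.eye(T) - np.ones((T, T)) / T   # removes the temporal mean
    return (X_original @ centering) / np.sqrt(T)

X = preprocess_trial(np.random.randn(4, 50))      # d=4 electrodes, T=50 samples
```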
1.2 Conventional method: classifying with CSP features

In motor-imagery EEG signal classification, Common Spatial Pattern (CSP) based classifiers have proven to be powerful [11, 3, 6]. CSP is a decomposition method proposed by Koles [9] that finds a set of projections which simultaneously diagonalize the covariance matrices corresponding to two brain states. Formally, the covariance matrices² are defined as

$$\Sigma_c = \frac{1}{|I_c|} \sum_{i \in I_c} X_i X_i^\top \qquad (c \in \{+, -\}), \quad (1)$$

where I_c is the set of indices belonging to class c ∈ {+, −}; thus I_+ ∪ I_− = {1, ..., n}. The simultaneous diagonalization is then achieved by solving the generalized eigenvalue problem

$$\Sigma_+ w = \lambda \Sigma_- w. \quad (2)$$

Note that for each pair of eigenvector and eigenvalue (w_j, λ_j), the equality $\lambda_j = \frac{w_j^\top \Sigma_+ w_j}{w_j^\top \Sigma_- w_j}$ holds. Therefore, the eigenvector with the largest eigenvalue corresponds to the projection with the maximum ratio of power between the "+" class and the "−" class, and the other way around for the eigenvector with the smallest eigenvalue. In this paper, we call these eigenvectors filters³; we call the eigenvector of an eigenvalue smaller (or larger) than one a filter for the "+" class (or the "−" class), respectively, because the signal projected with it optimally (in the spirit of eigenvalues) captures the task-related de-synchronization in each class. It is common practice that only the n_of largest and the n_of smallest eigenvectors are used to construct a low-dimensional feature representation. The feature vector consists of logarithms of the projected signal powers, and a Linear Discriminant Analysis (LDA) classifier is trained on the resulting feature vector. To summarize, the conventional CSP based classifier can be constructed as follows:

How to build a CSP based classifier:
1. Solve the generalized eigenvalue problem Eq. (2).
2. Take the n_of largest and the n_of smallest eigenvectors {w_j}_{j=1}^J (J = 2 n_of).
3. x_i := { log w_j^⊤ X_i X_i^⊤ w_j }_{j=1}^J (i = 1, ..., n).
4.
Train an LDA classifier on {x_i, y_i}_{i=1}^n.

¹ For simplicity, we assume that the signal is already band-pass filtered and each trial is centered and scaled as $X = \frac{1}{\sqrt{T}}\, X_{\text{original}} \left( I_T - \frac{1}{T} \mathbf{1}\mathbf{1}^\top \right)$.
² Although it is convenient to call Eq. (1) a covariance matrix, calling it an averaged cross-power matrix gives better insight into the nature of the problem, because we are focusing on the task-related modulation of rhythmic activities.
³ According to the convention of [12].

2 Theory

2.1 The model

We consider the following discriminative model: we model the symmetric logit transform of the posterior class probability as a linear function with respect to the second-order statistics of the EEG signal,

$$\log \frac{P(y=+1|X)}{P(y=-1|X)} = f(X; \theta) := \operatorname{tr}\!\left[ W X X^\top \right] + b, \quad (3)$$

where θ := (W, b) ∈ Sym(d) × R, W is a symmetric d × d matrix and b is the bias term. The model (3) can be derived by assuming, for each class, a zero-mean Gaussian distribution with no temporal correlation and covariance matrix Σ±:

$$\log \frac{P(y=+1|X)}{P(y=-1|X)} = \frac{1}{2} \operatorname{tr}\!\left[ \left( -\Sigma_+^{-1} + \Sigma_-^{-1} \right) X X^\top \right] + \text{const.} \quad (4)$$

However, training of a discriminative model is robust to misspecification of the marginal distribution P(X) [13]. In other words, the marginal distribution P(X) is a nuisance parameter: we maximize the joint log-likelihood, which decomposes as log P(y, X|θ) = log P(y|X, θ) + log P(X), only with respect to θ [14]. Therefore, no assumption about the generative model is necessary. Note that, from Eq. (4), the optimal W normally has both positive and negative eigenvalues.

2.2 Logistic regression

2.2.1 Linear logistic regression

We minimize the negative log-likelihood of Eq. (3) with an additional regularization term, which is written as follows:

$$\min_{W \in \mathrm{Sym}(d),\, b \in \mathbb{R}} \;\; \frac{1}{n} \sum_{i=1}^n \log\left( 1 + e^{-y_i f(X_i; \theta)} \right) + \frac{C}{2n} \left( \operatorname{tr} \Sigma_P W \Sigma_P W + b^2 \right).$$
(5)

Here, the pooled covariance matrix $\Sigma_P := \frac{1}{n} \sum_{i=1}^n X_i X_i^\top$ is introduced in the regularization term in order to make the regularization invariant to linear transformations of the data: if we rewrite W as $W := \Sigma_P^{-1/2} \tilde W \Sigma_P^{-1/2}$, one can easily see that the regularization term is simply the Frobenius norm of the symmetric matrix $\tilde W$; the transformation corresponds to whitening of the signal, $\tilde X = \Sigma_P^{-1/2} X$. By simple calculation, one can see that the loss term is the negative logarithm of the conditional likelihood $\prod_{i=1}^n 1 / (1 + e^{-y_i f(X_i; \theta)})$; in other words, the probability of observing head (y_i = +1) or tail (y_i = −1) by tossing n coins with head probability P(y = +1|X = X_i, θ) (i = 1, ..., n). From a general point of view, the loss term of Eq. (5) converges asymptotically to the true loss, where the empirical average is replaced by the expectation over X and y, whose minimum over functions in L2(P_X) is achieved by the symmetric logit transform of P(y = +1|X) [15]. Note that the problem Eq. (5) is convex. The problem of classifying motor imagery EEG signals is now addressed under a single loss function: based on the criterion (Eq. (5)) we can say how good a solution is, and we know how to properly regularize it.

2.2.2 Rank=2 approximation of the linear logistic regression

Here we present a rank=2 approximation of the regression function (3). Using this approximation, we can greatly reduce the number of parameters to be estimated, from a symmetric matrix coefficient to a pair of projection coefficients, and additionally gain insight into the relevant feature the classifier has found. The rank=2 approximation of the regression function (3) is written as follows:

$$\bar f(X; \bar\theta) := \frac{1}{2} \operatorname{tr}\!\left[ \left( -w_1 w_1^\top + w_2 w_2^\top \right) X X^\top \right] + b, \quad (6)$$

where $\bar\theta := (w_1, w_2, b) \in \mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}$. The rationale for choosing this special form of function is that the Bayes-optimal regression coefficient in Eq.
(4) is the difference of two positive definite matrices; therefore, at least two bases with opposite signs are necessary to capture the nature of Eq. (4) (incorporating more bases goes beyond the scope of this contribution). The rank=2 parameterized logistic regression can be obtained by minimizing the sum of the logistic regression loss and regularization terms, similarly to Eq. (5):

$$\min_{w_1, w_2 \in \mathbb{R}^d,\, b \in \mathbb{R}} \;\; \frac{1}{n} \sum_{i=1}^n \log\left( 1 + e^{-y_i \bar f(X_i; \bar\theta)} \right) + \frac{C}{2n} \left( w_1^\top \Sigma_P w_1 + w_2^\top \Sigma_P w_2 + b^2 \right). \quad (7)$$

Here, again, the pooled covariance matrix Σ_P is used as a metric in order to ensure invariance to linear transformations. Note that the bases {w_1, w_2} give projections of the signal into a two-dimensional feature space in a similar manner as CSP (see Sec. 1.2). We call w_1 and w_2 filters corresponding to the "+" and "−" classes, respectively, similarly to CSP. The filters can be topographically mapped onto the scalp, from which insight into the classifier can be obtained. However, the major difference between CSP and the rank=2 parameterized logistic regression (Eq. (7)) is that in our new approach there is no distinction between the feature extraction step and the classifier training step: the coefficient that linearly combines the features (i.e., the norms of w_1 and w_2) is optimized in the same optimization problem (Eq. (7)).

3 Results

3.1 Experimental settings

We compare the logistic regression classifiers (Eqs. (3) and (6)) against CSP based classifiers with n_of = 1 (2 filters in total) and n_of = 3 (6 filters in total). The comparison is a chronological validation: all methods are trained on the first half of the samples and applied to the second half. We use 60 BCI experiments [6] from 29 subjects in which the subjects performed three imaginary movements, namely "right hand" (R), "left hand" (L) and "foot" (F), according to the visual cue presented on the screen, except for 9 experiments where only two classes were performed.
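As a reference point for the comparison, the CSP baseline of Sec. 1.2 can be sketched in a few lines of NumPy. Solving Eq. (2) by whitening with respect to Σ− is one standard route; the code below is our illustrative sketch, not the implementation used in the experiments:

```python
import numpy as np

def csp_filters(sigma_pos, sigma_neg, n_of=1):
    """Solve Sigma+ w = lambda Sigma- w (Eq. (2)) and return the n_of
    eigenvectors with the largest and the smallest eigenvalues (the filters)."""
    d, U = np.linalg.eigh(sigma_neg)
    whiten = U @ np.diag(d ** -0.5) @ U.T                 # Sigma-^{-1/2}
    lam, V = np.linalg.eigh(whiten @ sigma_pos @ whiten)  # ascending eigenvalues
    W = whiten @ V                                        # generalized eigenvectors
    return np.hstack([W[:, -n_of:], W[:, :n_of]])         # largest, then smallest

def log_power_features(X, W):
    """Step 3 of the recipe: log of the projected signal power per filter."""
    return [float(np.log(w @ X @ X.T @ w)) for w in W.T]

Sig_pos = np.diag([2.0, 1.0])   # toy class "covariances" (cross-power matrices)
Sig_neg = np.diag([1.0, 2.0])
W = csp_filters(Sig_pos, Sig_neg)
```

An LDA classifier trained on the resulting log-power features completes the baseline pipeline.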
Since we focus on binary classification, all pairwise combinations of the performed classes produced 162 (= 51 · 3 + 9) datasets. Each dataset contains 70 to 600 trials (median 280) of imaginary movements. All recordings come from calibration measurements, i.e., no feedback was presented to the subjects. The signal was recorded from the scalp with multi-channel EEG amplifiers using 32, 64 or 128 channels, sampled at 1000 Hz and down-sampled to 100 Hz before processing. The signal is band-pass filtered at 7–30 Hz, and the interval 500–3500 ms after the appearance of the visual cue is cut out from the continuous EEG signal as a trial X. The training data is whitened before minimizing Eqs. (5) and (7), because both problems become considerably simpler when \Sigma_P is an identity matrix. For the prediction of test data, coefficients including the whitening operation, W = \Sigma_P^{-1/2}\tilde{W}\Sigma_P^{-1/2} for Eq. (3) and w_j = \Sigma_P^{-1/2}\tilde{w}_j (j = 1, 2) for Eq. (6), are used, where \tilde{W} and \tilde{w}_j denote the minimizers of Eqs. (5) and (7) for the whitened data. Note that we did not whiten the training and test data jointly, which could have improved the performance. The regularization constant C for the proposed method is chosen by 5×10 cross-validation on the training set.

3.2 Classification performance

In Fig. 1, logistic regression (LR) classifiers with the full-rank parameterization (Eq. (3); left column) and the rank-2 parameterization (Eq. (6); right column) are compared against CSP-based classifiers with 6 filters (top row) and 2 filters (bottom row). Each plot shows the bit-rates achieved by CSP (horizontal) and LR (vertical) for each dataset as a circle.
Here the bit-rate (per decision) is defined, based on the classification test error p_err, as the capacity of a binary symmetric channel with the same error probability:

1 - \left( p_{err}\log_2\frac{1}{p_{err}} + (1-p_{err})\log_2\frac{1}{1-p_{err}} \right).

Figure 1: Comparison of bit-rates achieved by the CSP-based classifiers and the logistic regression (LR) classifiers. The bit-rates achieved by the conventional CSP-based classifier and the proposed LR classifier are shown as a circle for each dataset. The proportion of datasets lying above/below the diagonal is shown at the top-left/bottom-right corner of each plot, respectively (CSP 6 filters vs. full-rank LR: 43%/48%; CSP 6 filters vs. rank-2 LR: 52%/38%; CSP 2 filters vs. full-rank LR: 52%/43%; CSP 2 filters vs. rank-2 LR: 64%/28%). Only the difference between CSP with 2 filters and rank-2 approximated LR (lower right) is significant according to the Fisher sign test at the 5% level.

The proposed method improves upon the conventional method for datasets lying above the diagonal. Note that our proposed logistic regression ansatz is significantly better only in the lower-right plot. Figure 2 shows examples of spatial filter coefficients obtained by CSP (6 filters) and the rank-2 parameterized logistic regression. The CSP filters for subject A (see Fig. 2(a)) include typical cases (the first filter for the "left hand" class and the first two filters for the "right hand" class) of filters corrupted by artifacts, e.g., muscle movements. The CSP filters for the "foot" class in subject B (see Fig. 2(b)) are corrupted by strong occipital α-activity, which might have been weakly correlated with the labels by chance. Note that CSP with 2 filters uses only the first filter for each class, which corresponds to the first row in Figs. 2(a) and 2(b).
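The bit-rate-per-decision formula above is straightforward to compute; the following is a small sketch (not tied to any particular BCI toolbox):

```python
import numpy as np

def bitrate(p_err):
    # capacity of a binary symmetric channel with error probability p_err:
    # 1 - H(p_err), where H is the binary entropy in bits
    if p_err in (0.0, 1.0):
        return 1.0
    return 1.0 - (p_err * np.log2(1.0 / p_err)
                  + (1.0 - p_err) * np.log2(1.0 / (1.0 - p_err)))
```

A classifier at chance level (p_err = 0.5) yields a bit-rate of 0, and an error-free classifier yields 1 bit per decision, matching the axes of Fig. 1.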
On the other hand, the filter coefficients obtained by the logistic regression are clearly focused on the area physiologically corresponding to ERD in the motor cortex (see Figs. 2(c) and (d)).

4 Discussion

4.1 Relation to CSP

Here we show that at the optimum of Eq. (7) the regression coefficients w_1 and w_2 are generalized eigenvectors of two uncertainty-weighted covariance matrices corresponding to the two motor-imagery classes, where each sample is weighted by the uncertainty of the decision, 1 - P(y = y_i | X = X_i). Samples that are easily explained by the regression function are weighted low, whereas those lying close to the decision boundary, or on the wrong side of it, are weighted high. Although both CSP and the rank-2 approximated logistic regression can be understood as generalized eigenvalue decompositions, the classification-optimized weighting in the logistic regression yields filters that focus on the task-related modulation of rhythmic activities more clearly than CSP, as shown in Fig. 2. Differentiating Eq. (7) with respect to either w_1 or w_2, we obtain the following equality, which holds at the optimum:

\pm \sum_{i=1}^n \frac{e^{-z_i}}{1+e^{-z_i}}\, y_i X_i X_i^\top w_j^* + C\,\Sigma_P\, w_j^* = 0 \quad (j = 1, 2),   (8)

where we define the shorthand z_i := y_i \bar{f}(X_i;\bar{\theta}^*), and \pm denotes + and - for j = 1, 2, respectively. Moreover, Eq. (8) can be rewritten as follows:

\Sigma_-(\bar{\theta}^*, 0)\, w_1^* = \Sigma_+(\bar{\theta}^*, C)\, w_1^*,   (9)
\Sigma_+(\bar{\theta}^*, 0)\, w_2^* = \Sigma_-(\bar{\theta}^*, C)\, w_2^*,   (10)

where we define the uncertainty-weighted covariance matrix as:

\Sigma_\pm(\bar{\theta}^*, C) = \sum_{i \in I_\pm} \frac{e^{-z_i}}{1+e^{-z_i}} X_i X_i^\top + \frac{C}{n}\sum_{i=1}^n X_i X_i^\top.

Note that increasing the regularization constant C biases the uncertainty-weighted covariance matrix toward the pooled covariance matrix \Sigma_P; the regularization only affects the right-hand side of Eqs. (9) and (10). If C > 0, the optimal filter coefficients w_j^* (j = 1, 2) are the generalized eigenvectors of Eqs. (9) and (10), respectively.
4.2 CSP is not optimal

When first proposed, CSP was a decomposition technique rather than a classification technique (see [9]). After being introduced to the BCI community by [11], it has also proved powerful in classifying imaginary motor movements [3, 6]. However, since it is not optimized for the classification problem, it has two major drawbacks. Firstly, the selection of "good" CSP components is usually done somewhat arbitrarily. A widely used heuristic is to choose several generalized eigenvectors from both ends of the eigenvalue spectrum. However, as for subject B in Fig. 2, it is often observed that filters corresponding to overwhelmingly strong power rise to the top of the spectrum even though they are not strongly correlated with the label. In practice, an experienced investigator can choose good filters by inspecting them; however, the validity of the selection cannot be assessed, because the manual selection cannot be done inside the cross-validation. Secondly, the simultaneous diagonalization of covariance matrices can suffer greatly from a few outlier trials, as seen for subject A in Fig. 2. Again, in practice one can inspect the EEG signals to detect outliers, but manual outlier detection is also a somewhat arbitrary, non-reproducible process that cannot be validated.

5 Conclusion

In this paper, we have proposed a unified framework for single-trial classification of motor-imagery EEG signals. The problem is addressed as a single minimization problem, without any prior feature extraction or outlier removal steps. The task is to minimize a logistic regression loss with a regularization term; the regression function is linear with respect to the second-order statistics of the EEG signal. We have tested the proposed method on 162 BCI datasets. By parameterizing the whole regression coefficient directly, we obtained classification accuracy comparable to CSP-based classifiers.
By parameterizing the regression coefficients as the difference of two rank-one matrices, an improvement over CSP-based classifiers was obtained. We have shown that in the rank-2 parameterization of the logistic regression function, the optimal filter coefficients have an interpretation as the solution of a generalized eigenvalue problem, similarly to CSP. The difference, however, is that in the case of logistic regression every sample is weighted according to its importance to the overall classification problem, whereas in CSP all samples have uniform importance. The proposed framework provides a basis for various future directions. For example, incorporating more than two filters would connect the two parameterizations of the regression function shown in this paper, and it may allow us to investigate how many filters are sufficient for good classification. Since the classifier output is the logit transform of the class probability, it is straightforward to generalize the method to multi-class problems. Also non-stationarities, e.g., those caused by a covariate shift (see [16, 17]) in the density P(X) from one session to another, could be corrected by adapting the likelihood model.

Acknowledgments: This research was partially supported by MEXT, Grant-in-Aid for JSPS fellows, 17-11866 and Grant-in-Aid for Scientific Research on Priority Areas, 17022012, by BMBF grant FKZ 01IBE01A, and by the IST Programme of the European Community under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.

References

[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control", Clin. Neurophysiol., 113: 767–791, 2002.
[2] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, "A spelling device for the paralysed", Nature, 398: 297–298, 1999.
[3] G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, R.
Ramoser, A. Schlögl, B. Obermaier, and M. Pregenzer, "Current Trends in Graz Brain-Computer Interface (BCI)", IEEE Trans. Rehab. Eng., 8(2): 216–219, 2000.
[4] B. Blankertz, G. Curio, and K.-R. Müller, "Classifying Single Trial EEG: Towards Brain Computer Interfacing", in: T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Inf. Proc. Systems (NIPS 01), vol. 14, 157–164, 2002.
[5] B. Blankertz, G. Dornhege, C. Schäfer, R. Krepki, J. Kohlmorgen, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio, "Boosting Bit Rates and Error Detection for the Classification of Fast-Paced Motor Commands Based on Single-Trial EEG Analysis", IEEE Trans. Neural Sys. Rehab. Eng., 11(2): 127–131, 2003.
[6] B. Blankertz, G. Dornhege, M. Krauledat, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio, "The Berlin Brain-Computer Interface: EEG-based communication without subject training", IEEE Trans. Neural Sys. Rehab. Eng., 14(2): 147–152, 2006.
[7] G. Dornhege, J. del R. Millán, T. Hinterberger, D. McFarland, and K.-R. Müller, eds., Towards Brain-Computer Interfacing, MIT Press, 2006, in press.
[8] T. N. Lal, M. Schröder, T. Hinterberger, J. Weston, M. Bogdan, N. Birbaumer, and B. Schölkopf, "Support Vector Channel Selection in BCI", IEEE Transactions Biomedical Engineering, 51(6): 1003–1010, 2004.
[9] Z. J. Koles, "The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG", Electroencephalogr. Clin. Neurophysiol., 79: 440–447, 1991.
[10] G. Pfurtscheller and F. H. L. da Silva, "Event-related EEG/MEG synchronization and desynchronization: basic principles", Clin. Neurophysiol., 110(11): 1842–1857, 1999.
[11] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement", IEEE Trans. Rehab. Eng., 8(4): 441–446, 2000.
[12] N. J. Hill, J. Farquhar, T. N. Lal, and B.
Schölkopf, "Time-dependent demixing of task-relevant EEG sources", in: Proceedings of the 3rd International Brain-Computer Interface Workshop and Training Course 2006, Verlag der Technischen Universität Graz, 2006.
[13] B. Efron, "The Efficiency of Logistic Regression Compared to Normal Discriminant Analysis", J. Am. Stat. Assoc., 70(352): 892–898, 1975.
[14] T. Minka, "Discriminative models, not discriminative training", Tech. Rep. TR-2005-144, Microsoft Research Cambridge, 2005.
[15] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, Springer-Verlag, 2001.
[16] H. Shimodaira, "Improving predictive inference under covariate shift by weighting the log-likelihood function", Journal of Statistical Planning and Inference, 90: 227–244, 2000.
[17] S. Sugiyama and K.-R. Müller, "Input-Dependent Estimation of Generalization Error under Covariate Shift", Statistics and Decisions, 23(4): 249–279, 2005.

Figure 2: Examples of spatial filter coefficients obtained by CSP and the rank-2 parameterized logistic regression. (a) Subject A: some CSP filters are corrupted by artifacts. (b) Subject B: some CSP filters are corrupted by strong occipital α-activity. (c) Subject A: logistic regression coefficients focus on the physiologically expected "left hand" and "right hand" areas. (d) Subject B: logistic regression coefficients focus on the "left hand" and "foot" areas. Electrode positions are marked with crosses in every plot. For the CSP filters, the generalized eigenvalues (Eq. (2)) are shown inside brackets.
2006
A Local Learning Approach for Clustering

Mingrui Wu, Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics
72076 Tübingen, Germany
{mingrui.wu, bernhard.schoelkopf}@tuebingen.mpg.de

Abstract

We present a local learning approach for clustering. The basic idea is that a good clustering result should have the property that the cluster label of each data point can be well predicted based on its neighboring data and their cluster labels, using current supervised learning methods. An optimization problem is formulated such that its solution has the above property. Relaxation and eigen-decomposition are applied to solve this optimization problem. We also briefly investigate the parameter selection issue and provide a simple parameter selection method for the proposed algorithm. Experimental results are provided to validate the effectiveness of the proposed approach.

1 Introduction

In the multi-class clustering problem, we are given n data points, x_1, ..., x_n, and a positive integer c. The goal is to partition the given data x_i (1 ≤ i ≤ n) into c clusters, such that different clusters are in some sense "distinct" from each other. Here x_i ∈ X ⊆ R^d is the input data, and X is the input space. Clustering has been widely applied for data analysis tasks: it identifies groups of data such that data in the same group are similar to each other, while data in different groups are dissimilar. Many clustering algorithms have been proposed, including the traditional k-means algorithm and the currently very popular spectral clustering approach [3, 10]. Recently the spectral clustering approach has attracted increasing attention due to its promising performance and easy implementation. In spectral clustering, the eigenvectors of a matrix are used to reveal the cluster structure in the data. In this paper, we propose a clustering method that also has this characteristic, but it is based on the local learning idea.
Namely, the cluster label of each data point should be well estimated based on its neighboring data and their cluster labels, using current supervised learning methods. An optimization problem is formulated whose solution can satisfy this property. Relaxation and eigen-decomposition are applied to solve this problem. As will be seen later, the proposed algorithm is also easy to implement, while it shows better performance than the spectral clustering approach in the experiments. The local learning idea has already been successfully applied in supervised learning problems [1]. This motivates us to incorporate it into clustering, an important unsupervised learning problem. Adapting valuable supervised learning ideas for unsupervised learning problems can be fruitful. For example, in [9] the idea of large margin, which has proved effective in supervised learning, is applied to the clustering problem and good results are obtained. The remainder of this paper is organized as follows. In section 2, we specify some notation that will be used in later sections. The details of our local learning based clustering algorithm are presented in section 3. Experimental results are then provided in section 4, where we also briefly investigate the parameter selection issue for the proposed algorithm. Finally we conclude the paper in the last section.

2 Notation

In the following, "neighboring points" or "neighbors" of x_i simply refers to the nearest neighbors of x_i according to some distance metric.

n — the total number of data points.
c — the number of clusters to be obtained.
C_l — the set of points contained in the l-th cluster, 1 ≤ l ≤ c.
N_i — the set of neighboring points of x_i, 1 ≤ i ≤ n, not including x_i itself.
n_i — |N_i|, i.e., the number of neighboring points of x_i.
Diag(M) — the diagonal matrix with the same size and the same diagonal elements as M, where M is an arbitrary square matrix.
3 Clustering via Local Learning

3.1 Local Learning in Supervised Learning

In supervised learning algorithms, a model is trained with all the labeled training data and is then used to predict the labels of unseen test data. These algorithms can be called global learning algorithms, as the whole training dataset is used for training. In contrast, in local learning algorithms [1], for a given test data point a model is built only with its neighboring training data, and the label of the given test point is then predicted by this locally learned model. It has been reported that local learning algorithms often outperform global ones [1], as the local models are trained only with the points that are related to the particular test data. And in [8], it is proposed that locality is a crucial parameter that can be used for capacity control, in addition to other capacity measures such as the VC dimension.

3.2 Representation of Clustering Results

The procedure of our clustering approach largely follows that of the clustering algorithms proposed in [2, 10]. We also use a Partition Matrix (PM) P = [p_{il}] ∈ {0,1}^{n×c} to represent a clustering scheme: p_{il} = 1 if x_i (1 ≤ i ≤ n) is assigned to cluster C_l (1 ≤ l ≤ c), otherwise p_{il} = 0. So in each row of P there is one and only one element equal to 1; all the others equal 0. As in [2, 10], instead of computing the PM directly to cluster the given data, we compute a Scaled Partition Matrix (SPM) F defined by F = P(P^\top P)^{-1/2}. (The reason for this will be given later.) Since P^\top P is diagonal, the l-th (1 ≤ l ≤ c) column of F is just the l-th column of P multiplied by 1/\sqrt{|C_l|}. Clearly we have

F^\top F = (P^\top P)^{-1/2} P^\top P (P^\top P)^{-1/2} = I,   (1)

where I is the identity matrix. Given an SPM F, we can easily restore the corresponding PM P with a mapping P(·) defined as

P = P(F) = \mathrm{Diag}(FF^\top)^{-1/2} F.   (2)

In the following, we will also express F as F = [f^1, ..., f^c] ∈ R^{n×c}, where f^l = [f^l_1, ..., f^l_n]^\top ∈ R^n, 1 ≤ l ≤ c, is the l-th column of F.
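The PM/SPM correspondence above can be sketched in a few lines of NumPy (a minimal illustration, not the authors' code); `spm_from_labels` builds P and F from a label vector, and `pm_from_spm` implements the mapping (2):

```python
import numpy as np

def spm_from_labels(labels, c):
    # partition matrix P and scaled partition matrix F = P (P^T P)^{-1/2}
    n = len(labels)
    P = np.zeros((n, c))
    P[np.arange(n), labels] = 1.0
    sizes = P.sum(axis=0)        # cluster sizes |C_l|, assumed all nonzero
    F = P / np.sqrt(sizes)       # column l scaled by 1 / sqrt(|C_l|)
    return P, F

def pm_from_spm(F):
    # mapping (2): P = Diag(F F^T)^{-1/2} F, which rescales each row back to {0, 1}
    d = np.sqrt(np.diag(F @ F.T))
    return F / d[:, None]
```

One can check that F^\top F = I holds for any label vector, which is exactly property (1) exploited by the relaxation later on.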
3.3 Basic Idea

The good performance of local learning methods indicates that the label of a data point can be well estimated based on its neighbors. Based on this, in order to find a good SPM F (or equivalently a good clustering result), we propose to solve the following optimization problem:

\min_{F \in R^{n×c}} \; \sum_{l=1}^{c}\sum_{i=1}^{n} \left(f^l_i - o^l_i(x_i)\right)^2 = \sum_{l=1}^{c} \left\| f^l - o^l \right\|^2   (3)

subject to F is a scaled partition matrix,   (4)

where o^l_i(·) denotes the output function of a Kernel Machine (KM), trained with some supervised kernel learning algorithm [5] using the training data {(x_j, f^l_j)}_{x_j ∈ N_i}, where f^l_j is used as the label of x_j for training this KM. In (3), o^l = [o^l_1(x_1), ..., o^l_n(x_n)]^\top ∈ R^n. Details on how to compute o^l_i(x_i) will be given later. For the function o^l_i(·), the superscript l indicates that it is for the l-th cluster, and the subscript i means the KM is trained with the neighbors of x_i. Hence, apart from x_i, the training data {(x_j, f^l_j)}_{x_j ∈ N_i} also influence the value of o^l_i(x_i). Note that the f^l_j (x_j ∈ N_i) are also variables of the problem (3)–(4). To explain the idea behind problem (3)–(4), let us consider the following problem:

Problem 1. For a data point x_i and a cluster C_l, given the values of f^l_j at x_j ∈ N_i, what should be the proper value of f^l_i at x_i?

This problem can be solved by supervised learning. In particular, we can build a KM with the training data {(x_j, f^l_j)}_{x_j ∈ N_i}. As mentioned before, let o^l_i(·) denote the output function of this locally learned KM; then the good performance of local learning methods mentioned above implies that o^l_i(x_i) is probably a good guess of f^l_i, i.e., the proper f^l_i should be similar to o^l_i(x_i). Therefore, a good SPM F should have the following property: for any x_i (1 ≤ i ≤ n) and any cluster C_l (1 ≤ l ≤ c), the value of f^l_i can be well estimated based on the neighbors of x_i; that is, f^l_i should be similar to the output of the KM trained locally with the data {(x_j, f^l_j)}_{x_j ∈ N_i}.
This suggests that in order to find a good SPM F, we can solve the optimization problem (3)–(4). We can also explain our approach intuitively as follows. A good clustering method will put the data into well separated clusters, which implies that it is easy to predict the cluster membership of a point based on its neighbors. If, on the other hand, a cluster is split in the middle, then there will be points at the boundary for which it is hard to predict which cluster they belong to. So minimizing the objective function (3) favors clustering schemes that do not split the same group of data into different clusters. Moreover, it is very difficult to construct local clustering algorithms in the same way as for supervised learning. In [1], a local learning algorithm is obtained by running a standard supervised algorithm on a local training set. This does not transfer to clustering. Rather than simply applying a given clustering algorithm locally and facing the difficulty of combining the local solutions into a global one, problem (3)–(4) seeks a global solution with the property that, locally for each point, its cluster assignment looks like the solution that we would obtain by local learning if we knew the cluster assignments of its neighbors.

3.4 Computing o^l_i(x_i)

Having explained the basic idea, we now make problem (3)–(4) more specific in order to build a concrete clustering algorithm. So we consider how to compute o^l_i(x_i) with kernel learning algorithms, based on x_i and {(x_j, f^l_j)}_{x_j ∈ N_i}. It is well known that applying many kernel learning algorithms to {(x_j, f^l_j)}_{x_j ∈ N_i} will result in a KM, according to which o^l_i(x_i) can be calculated as

o^l_i(x_i) = \sum_{x_j ∈ N_i} \beta^l_{ij} K(x_i, x_j),   (5)

where K : X × X → R is a positive definite kernel function [5] and the \beta^l_{ij} are the expansion coefficients. In general, any kernel learning algorithm can be applied to compute the coefficients \beta^l_{ij}. Here we choose one that makes problem (3)–(4) easy to solve.
To this end, we adopt the Kernel Ridge Regression (KRR) algorithm [6], with which we can obtain an analytic expression for o^l_i(x_i) based on {(x_j, f^l_j)}_{x_j ∈ N_i}. Thus for each x_i we need to solve the following KRR training problem:

\min_{\beta^l_i ∈ R^{n_i}} \; \lambda (\beta^l_i)^\top K_i \beta^l_i + \left\| K_i \beta^l_i - f^l_i \right\|^2,   (6)

where \beta^l_i ∈ R^{n_i} is the vector of expansion coefficients, i.e., \beta^l_i = [\beta^l_{ij}]^\top for x_j ∈ N_i; \lambda > 0 is the regularization parameter; f^l_i ∈ R^{n_i} denotes the vector [f^l_j]^\top for x_j ∈ N_i; and K_i ∈ R^{n_i×n_i} is the kernel matrix over x_j ∈ N_i, namely K_i = [K(x_u, x_v)] for x_u, x_v ∈ N_i. Solving problem (6) leads to \beta^l_i = (K_i + \lambda I)^{-1} f^l_i. Substituting this into (5), we have

o^l_i(x_i) = k_i^\top (K_i + \lambda I)^{-1} f^l_i,   (7)

where k_i ∈ R^{n_i} denotes the vector [K(x_i, x_j)]^\top for x_j ∈ N_i. Equation (7) can be written as a linear equation:

o^l_i(x_i) = \alpha_i^\top f^l_i,   (8)

where \alpha_i ∈ R^{n_i} is computed as

\alpha_i^\top = k_i^\top (K_i + \lambda I)^{-1}.   (9)

It can be seen that \alpha_i is independent of f^l_i and of the cluster index l, but differs for different x_i. Note that f^l_i is a sub-vector of f^l, so equation (8) can be written in a compact form as

o^l = A f^l,   (10)

where o^l and f^l are the same as in (3), and the matrix A = [a_{ij}] ∈ R^{n×n} is constructed as follows: for all x_i and x_j, 1 ≤ i, j ≤ n, if x_j ∈ N_i then a_{ij} equals the corresponding element of \alpha_i in (9); otherwise a_{ij} = 0. Like \alpha_i, the matrix A is independent of f^l and of the cluster index l. Substituting (10) into (3) results in a more specific optimization problem:

\min_{F ∈ R^{n×c}} \; \sum_{l=1}^{c} \left\| f^l - A f^l \right\|^2 = \sum_{l=1}^{c} (f^l)^\top T f^l = \mathrm{trace}(F^\top T F)   (11)

subject to F is a scaled partition matrix,   (12)

where

T = (I - A)^\top (I - A).   (13)

Thus, based on the KRR algorithm, we have transformed the objective function (3) into the quadratic function (11).
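The construction of the matrix A from Eqs. (9)–(10) can be sketched as follows. This is a minimal illustration under simplifying assumptions (a plain linear kernel by default, k nearest neighbors found by brute-force Euclidean distance), not the authors' implementation:

```python
import numpy as np

def build_A(X, k=3, lam=0.1, kernel=lambda a, b: a @ b):
    # A[i, j] holds the entries of alpha_i (Eq. (9)) for the k nearest
    # neighbors of x_i; all other entries are zero.
    n = len(X)
    A = np.zeros((n, n))
    # brute-force pairwise squared distances for the neighbor search
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        Ni = np.argsort(D[i])[1:k + 1]   # k nearest neighbors, excluding x_i
        Ki = np.array([[kernel(X[u], X[v]) for v in Ni] for u in Ni])
        ki = np.array([kernel(X[i], X[j]) for j in Ni])
        # alpha_i^T = k_i^T (K_i + lam I)^{-1}; solve instead of inverting
        A[i, Ni] = np.linalg.solve(Ki + lam * np.eye(k), ki)
    return A
```

Given A, the matrix of Eq. (13) is simply `T = (np.eye(n) - A).T @ (np.eye(n) - A)`, whose smallest eigenvectors are used in the relaxation below; note that each row of A has exactly k nonzero entries, which is what makes I - A sparse in practice.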
3.5 Relaxation

Following the method in [2, 10], we relax F into the continuous domain and combine the property (1) into the problem (11)–(12), so as to turn it into a tractable continuous optimization problem:

\min_{F ∈ R^{n×c}} \; \mathrm{trace}(F^\top T F)   (14)

subject to F^\top F = I.   (15)

Let F^\star ∈ R^{n×c} denote the matrix whose columns consist of the c eigenvectors corresponding to the c smallest eigenvalues of the symmetric matrix T. It is known that the global optimum of the above problem is not unique, but rather a subspace spanned by the columns of F^\star through orthonormal matrices [10]:

\{F^\star R : R ∈ R^{c×c}, R^\top R = I\}.   (16)

Now we can see that working on the SPM F allows us to make use of the property (1) to construct a tractable continuous optimization problem (14)–(15), whereas working directly on the PM P does not have this advantage.

3.6 Discretization: Obtaining the Final Clustering Result

According to [10], to get the final clustering result we need to find a true SPM F which is close to the subspace (16). To this end, we apply the mapping (2) to F^\star to obtain a matrix P^\star = P(F^\star). It can easily be proved that for any orthogonal matrix R ∈ R^{c×c}, we have P(F^\star R) = P^\star R. This equation implies that if there exists an orthogonal matrix R such that F^\star R is close to a true SPM F, then P^\star R should also be near the corresponding discrete PM P. To find such an orthogonal matrix R and the discrete PM P, we can solve the following optimization problem [10]:

\min_{P ∈ R^{n×c},\, R ∈ R^{c×c}} \; \|P - P^\star R\|^2   (17)

subject to P ∈ \{0,1\}^{n×c}, \; P 1_c = 1_n,   (18)
R^\top R = I,   (19)

where 1_c and 1_n denote the c-dimensional and n-dimensional vectors of all ones, respectively. Details on how to find a local minimum of the above problem can be found in [10]. In [3], a method using the k-means algorithm is proposed to find a discrete PM P based on P^\star. In this paper, we adopt the approach in [10] to get the final clustering result.
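The relaxed problem (14)–(15) is solved by an eigen-decomposition; the following small sketch (an illustration, not the authors' code) extracts F^\star:

```python
import numpy as np

def relaxed_spm(T, c):
    # columns of F* are the eigenvectors belonging to the c smallest
    # eigenvalues of the symmetric matrix T (Eqs. (14)-(15));
    # np.linalg.eigh returns eigenvalues in ascending order
    vals, vecs = np.linalg.eigh(T)
    return vecs[:, :c]
```

The returned columns are orthonormal by construction, so the constraint F^\top F = I of (15) holds automatically, and trace(F^\top T F) equals the sum of the c smallest eigenvalues.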
3.7 Comparison with Spectral Clustering

Our Local Learning based Clustering Algorithm (LLCA) also uses the eigenvectors of a matrix (T in (13)) to reveal the cluster structure in the data; therefore it can be regarded as belonging to the category of spectral clustering approaches. The matrix whose eigenvectors are used for clustering plays the key role in spectral clustering. In LLCA, this matrix is computed based on the local learning idea: a clustering result is evaluated based on whether the label of each point can be well estimated based on its neighbors with a well established supervised learning algorithm. This is different from the graph partitioning based spectral clustering method. As will be seen later, LLCA and spectral clustering show quite different performance in the experiments. LLCA needs one additional step: computing the matrix T in the objective function (14). The remaining steps, i.e., computing the eigenvectors of T and discretization (cf. section 3.6), are the same as in the spectral clustering approach. According to equation (13), to compute T we need to compute the matrix A in (10), which in turn requires calculating \alpha_i in (9) for each x_i. This is very easy to implement, and A can be computed with time complexity O(\sum_{i=1}^n n_i^3). In practice, just as in the spectral clustering method, the number of neighbors n_i is usually set to a fixed small value k for all x_i in LLCA. In this case, A can be computed efficiently with complexity O(nk^3), which scales linearly with the number of data points n. So in this case the main calculation is to obtain the eigenvectors of T. Furthermore, according to (13), the eigenvectors of T are identical to the right singular vectors of I - A, which can be calculated efficiently because I - A is now sparse, each row containing just k + 1 nonzero elements. Hence in this case we do not need to compute T explicitly.
We conclude that LLCA is easy to implement and that, in practice, the main computational load is computing the eigenvectors of T; therefore LLCA and the spectral clustering approach have the same order of time complexity in most practical cases.[1]

4 Experimental Results

In this section, we empirically compare LLCA with the spectral clustering approach of [10] as well as with k-means clustering. For the last discretization step of LLCA (cf. section 3.6), we use the same code contained in the implementation of the spectral clustering algorithm, available at http://www.cis.upenn.edu/∼jshi/software/.

4.1 Datasets

The following datasets are used in the experiments.

• USPS-3568: The examples of handwritten digits 3, 5, 6 and 8 from the USPS dataset.
• USPS-49: The examples of handwritten digits 4 and 9 from the USPS dataset.
• UMist: This dataset consists of face images of 20 different persons.
• UMist5: The data from the UMist dataset belonging to classes 4, 8, 12, 16 and 20.
• News4a: The text documents from the 20-newsgroup dataset covering the topics in rec.∗, which contains autos, motorcycles, baseball and hockey.
• News4b: The text documents from the 20-newsgroup dataset covering the topics in sci.∗, which contains crypt, electronics, med and space.

Further details of these datasets are provided in Table 1.

[1] Sometimes we are also interested in a special case: n_i = n - 1 for all x_i, i.e., all the data points are neighbors of each other. In this case, it can be proved that T = Q^\top Q, where Q = (\mathrm{Diag}(B))^{-1} B with B = I - K(K + \lambda I)^{-1}, where K is the kernel matrix over all the data points. So in this case T can be computed with time complexity O(n^3), which is the same as computing the eigenvectors of the non-sparse matrix T. Hence the order of the overall time complexity is not increased by the step of computing T, and the above statements still hold.

Table 1: Descriptions of the datasets used in the experiments.
For each dataset, the number of data points n, the data dimensionality d and the number of classes c are provided.

Dataset | USPS-3568 | USPS-49 | UMist | UMist5 | News4a | News4b
n       | 3082      | 1673    | 575   | 140    | 3840   | 3874
d       | 256       | 256     | 10304 | 10304  | 4989   | 5652
c       | 4         | 2       | 20    | 5      | 4      | 4

In News4a and News4b, each document is represented by a feature vector whose elements are related to the frequency of occurrence of different words. For these two datasets, we extract a subset of each of them in the experiments by ignoring the words that occur in 10 or fewer documents and then removing the documents that have 10 or fewer words. This is why the data dimensionalities are different for these two datasets, although both of them are from the 20-newsgroup dataset.

4.2 Performance Measure

In the experiments, we set the number of clusters equal to the number of classes c for all the clustering algorithms. To evaluate their performance, we compare the clusters generated by these algorithms with the true classes by computing the following two performance measures.

4.2.1 Normalized Mutual Information

The Normalized Mutual Information (NMI) [7] is widely used for determining the quality of clusters. For two random variables X and Y, the NMI is defined as [7]:

\mathrm{NMI}(X, Y) = \frac{I(X, Y)}{\sqrt{H(X) H(Y)}},   (20)

where I(X, Y) is the mutual information between X and Y, while H(X) and H(Y) are the entropies of X and Y, respectively. One can see that NMI(X, X) = 1, which is the maximal possible value of NMI. Given a clustering result, the NMI in (20) is estimated as [7]:

\mathrm{NMI} = \frac{\sum_{l=1}^{c}\sum_{h=1}^{c} n_{l,h} \log\left(\frac{n \cdot n_{l,h}}{n_l \hat{n}_h}\right)}{\sqrt{\left(\sum_{l=1}^{c} n_l \log\frac{n_l}{n}\right)\left(\sum_{h=1}^{c} \hat{n}_h \log\frac{\hat{n}_h}{n}\right)}},   (21)

where n_l denotes the number of data points contained in the cluster C_l (1 ≤ l ≤ c), \hat{n}_h is the number of data points belonging to the h-th class (1 ≤ h ≤ c), and n_{l,h} denotes the number of data points in the intersection between the cluster C_l and the h-th class. The value calculated in (21) is used as a performance measure for the given clustering result; the larger this value, the better the performance.
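The NMI estimate (21) is easy to compute from a contingency table; the following is a small sketch (an illustration of Eq. (21), not the authors' evaluation code; cluster indices and class labels are assumed to be in {0, ..., c-1}):

```python
import numpy as np

def nmi(cluster, label, c):
    # contingency counts n_{l,h}: points in cluster l and class h
    n = len(cluster)
    cont = np.zeros((c, c))
    for l, h in zip(cluster, label):
        cont[l, h] += 1
    nl = cont.sum(axis=1)   # cluster sizes n_l
    nh = cont.sum(axis=0)   # class sizes  n_h
    # numerator and denominator of Eq. (21); 0 log 0 terms are skipped
    num = sum(cont[l, h] * np.log(n * cont[l, h] / (nl[l] * nh[h]))
              for l in range(c) for h in range(c) if cont[l, h] > 0)
    den = np.sqrt(np.sum(nl * np.log(nl / n)) * np.sum(nh * np.log(nh / n)))
    return num / den
```

Note that the base of the logarithm cancels between numerator and denominator, so natural logs suffice, and a clustering that matches the classes up to a relabeling scores exactly 1.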
4.2.2 Clustering Error

Another performance measure is the Clustering Error. To compute it for a clustering result, we need to build a permutation mapping function map(·) that maps each cluster index to a true class label. The classification error based on map(·) can then be computed as:

err = 1 - \frac{1}{n} \sum_{i=1}^{n} \delta(y_i, map(c_i))

where y_i and c_i are the true class label and the obtained cluster index of x_i respectively, and δ(x, y) is the delta function that equals 1 if x = y and 0 otherwise. The clustering error is defined as the minimal classification error among all possible permutation mappings. This optimal matching can be found with the Hungarian algorithm [4], which is devised for obtaining the maximal weighted matching of a bipartite graph.

4.3 Parameter Selection

In the spectral clustering algorithm, a graph of n nodes is first constructed, each node of which corresponds to a data point; the clustering problem is then converted into a graph partition problem. In the experiments, a weighted k-nearest neighbor graph is employed for spectral clustering, where k is a parameter searched over the grid k ∈ {5, 10, 20, 40, 80}. On this graph, the edge weight between two connected data points is computed with a kernel function, for which the following two kernel functions are tried in the experiments. The cosine kernel:

K_1(x_i, x_j) = \frac{x_i^\top x_j}{\|x_i\| \|x_j\|}    (22)

and the Gaussian kernel:

K_2(x_i, x_j) = \exp\left(-\frac{1}{\gamma} \|x_i - x_j\|^2\right)    (23)

The parameter γ in (23) is searched over the grid γ ∈ {σ_0^2/16, σ_0^2/8, σ_0^2/4, σ_0^2/2, σ_0^2, 2σ_0^2, 4σ_0^2, 8σ_0^2, 16σ_0^2}, where σ_0 is the mean norm of the given data x_i, 1 ≤ i ≤ n. For LLCA, the cosine function (22) and the Gaussian function (23) are also adopted as the kernel function in (5). The number of neighbors n_i for all x_i is set to a single value k. The parameters k and γ are searched over the same grids as mentioned above. In LLCA, there is another parameter λ (cf. (6)), which is selected from the grid λ ∈ {0.1, 1, 1.5}.
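The two kernels of Eqs. (22) and (23) and the σ_0-based grid for γ can be sketched as follows (function names are ours; the grids match those listed above):

```python
import numpy as np

def cosine_kernel(X):
    """Kernel (22): pairwise cosine similarities between the rows of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def gaussian_kernel(X, gamma):
    """Kernel (23): exp(-||x_i - x_j||^2 / gamma) for all pairs of rows."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / gamma)

def gamma_grid(X):
    """Grid for gamma built from sigma_0, the mean norm of the data."""
    s2 = np.linalg.norm(X, axis=1).mean() ** 2
    return [s2 * f for f in (1/16, 1/8, 1/4, 1/2, 1, 2, 4, 8, 16)]
```

The pairwise-distance computation above materializes an n × n × d array, which is fine for illustration but would be replaced by a more memory-frugal formulation for large n.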
Automatic parameter selection for unsupervised learning is still a difficult problem. We propose a simple parameter selection method for LLCA as follows. For a clustering result obtained with a set of parameters (k and λ when the cosine kernel (22) is used, or k, γ and λ when the Gaussian kernel (23) is used), we compute its corresponding SPM F and then use the objective value (11) as the evaluation criterion. Namely, the clustering result corresponding to the smallest objective value is finally selected for LLCA. For simplicity, on each dataset, we will just report the best result of spectral clustering. For LLCA, both the best result (LLCA1) and the one obtained with the above parameter selection method (LLCA2) will be provided. No parameter selection is needed for the k-means algorithm, since the number of clusters is given.

4.4 Numerical Results

Numerical results are summarized in Table 2. The results on the News4a and News4b datasets show that different kernels may lead to dramatically different performance for both spectral clustering and LLCA. For spectral clustering, the results on USPS-3568 also differ significantly between kernels. It can also be observed that different performance measures may yield different performance rankings of the clustering algorithms being investigated. This is reflected by the results on USPS-3568 when the cosine kernel is used and the results on News4b when the Gaussian kernel is used. Despite these phenomena, we can still see from Table 2 that both LLCA1 and LLCA2 outperform spectral clustering and the k-means algorithm in most cases. We can also see that LLCA2 fails to find good parameters on News4a and News4b when the Gaussian kernel is used, while in the remaining cases LLCA2 is either slightly worse than or identical to LLCA1. Analogously to LLCA1, LLCA2 also improves on the results of spectral clustering and the k-means algorithm on most datasets.
This illustrates that our parameter selection method for LLCA can work well in many cases, although there is clearly room for improvement. Finally, it can be seen that the k-means algorithm performs worse than spectral clustering, except on USPS-3568 with respect to the clustering error criterion when the cosine kernel is used for spectral clustering. This corroborates the advantage of the popular spectral clustering approach over the traditional k-means algorithm.

Table 2: Clustering results. Both the normalized mutual information and the clustering error are provided. The two kernel functions (22) and (23) are tried for both spectral clustering and LLCA. On each dataset, the best result of the spectral clustering algorithm is reported (Spec-Clst). For LLCA, both the best result (LLCA1) and the one obtained with the parameter selection method described before (LLCA2) are provided. In each group, the best result is shown in boldface and the second best in italics. Note that the results of the k-means algorithm are independent of the kernel function.

                       USPS-3568  USPS-49  UMist   UMist5  News4a  News4b
NMI,       Spec-Clst   0.6575     0.3608   0.7483  0.8810  0.6468  0.5765
cosine     LLCA1       0.8720     0.6241   0.8003  1       0.7587  0.7125
           LLCA2       0.8720     0.6241   0.7889  1       0.7587  0.7125
           k-means     0.5202     0.2352   0.6479  0.7193  0.0800  0.0380
NMI,       Spec-Clst   0.8245     0.4319   0.8099  0.8773  0.4039  0.1861
Gaussian   LLCA1       0.8493     0.5980   0.8377  1       0.2642  0.1776
           LLCA2       0.8467     0.5493   0.8377  1       0.0296  0.0322
           k-means     0.5202     0.2352   0.6479  0.7193  0.0800  0.0380
Error (%), Spec-Clst   32.93      16.56    46.26   9.29    28.26   21.73
cosine     LLCA1       3.57       8.01     36.00   0       7.99    9.65
           LLCA2       3.57       8.01     38.43   0       7.99    9.65
           k-means     22.16      22.30    56.35   36.43   70.62   74.08
Error (%), Spec-Clst   5.68       13.51    41.74   10.00   42.34   64.71
Gaussian   LLCA1       4.61       8.43     33.91   0       47.24   53.25
           LLCA2       4.70       9.80     37.22   0       74.38   72.97
           k-means     22.16      22.30    56.35   36.43   70.62   74.08

5 Conclusion

We have proposed a local learning approach for clustering, in which an optimization problem is formulated that leads to a solution with the property that the label of each data point can be well estimated based on its neighbors. We have also provided a parameter selection method for the proposed clustering algorithm. Experiments show encouraging results. Future work may include improving the proposed parameter selection method and extending this work to other applications such as image segmentation.

References

[1] L. Bottou and V. Vapnik. Local learning algorithms. Neural Computation, 4:888–900, 1992.
[2] P. K. Chan, M. D. F. Schlag, and J. Y. Zien. Spectral k-way ratio-cut partitioning and clustering. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 13:1088–1096, 1994.
[3] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[4] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Dover, New York, 1998.
[5] B. Schölkopf and A. J. Smola. Learning with Kernels. The MIT Press, Cambridge, MA, 2002.
[6] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, UK, 2004.
[7] A. Strehl and J. Ghosh. Cluster ensembles – a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3:583–617, 2002.
[8] V. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.
[9] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17. MIT Press, Cambridge, MA, 2005.
[10] S. X. Yu and J. Shi. Multiclass spectral clustering. In L. D. Raedt and S. Wrobel, editors, International Conference on Computer Vision. ACM, 2003.
2006
42
3,062
Temporal and Cross-Subject Probabilistic Models for fMRI Prediction Tasks

Alexis Battle, Gal Chechik, Daphne Koller
Department of Computer Science, Stanford University, Stanford, CA 94305-9010
{ajbattle,gal,koller}@cs.stanford.edu

Abstract

We present a probabilistic model applied to the fMRI video rating prediction task of the Pittsburgh Brain Activity Interpretation Competition (PBAIC) [2]. Our goal is to predict a time series of subjective, semantic ratings of a movie given functional MRI data acquired during viewing by three subjects. Our method uses conditionally trained Gaussian Markov random fields, which model both the relationships between the subjects' fMRI voxel measurements and the ratings, as well as the dependencies of the ratings across time steps and between subjects. We also employed non-traditional methods for feature selection and regularization that exploit the spatial structure of voxel activity in the brain. The model displayed good performance in predicting the scored ratings for the three subjects in test data sets, and a variant of this model was the third place entrant to the 2006 PBAIC.

1 Introduction

In functional Magnetic Resonance Imaging, or fMRI, an MR scanner measures a physiological signal known to be correlated with neural activity, the blood-oxygenation-level dependent (BOLD) signal [12]. Functional scans can be taken during a task of interest, such as the subject viewing images or reading text, thus providing a glimpse of how brain activity changes in response to certain stimuli and tasks. An fMRI session produces scans of the brain volume across time, obtaining BOLD measurements from thousands of small sub-volumes, or voxels, at each time step. Much of the current fMRI research focuses on the goal of identifying brain regions activated in response to some task or stimulus (e.g., [7]).
The fMRI signal is typically averaged over many repeated stimulus presentations, multiple time points and even different subjects, in order to find brain regions with statistically significant response. However, in recent years there has been growing interest in an alternative task, whose goal is to develop models which predict stimuli from functional data, in effect demonstrating the ability to 'read' information from the scans. For instance, Tong et al. [9] demonstrated the ability to predict the orientation of edges in a subject's visual field from functional scans of visual cortex, and Mitchell et al. [13] successfully applied machine learning techniques to predict a variety of stimuli, such as the semantic category of words presented to a subject. Such prediction work has demonstrated that, despite the relatively low spatial resolution of fMRI, functional data contains surprisingly reliable and detailed signal [9, 6, 13], even on time scales as short as a few seconds. Going beyond identifying the location of responsive regions, these models begin to demonstrate how the brain encodes states and stimuli [3], often capturing distributed patterns of activation across multiple brain regions simultaneously. This line of research could also eventually provide a mechanism for accurately tracking cognitive processes in a non-invasive way. Another recent innovation is the use of long and rich stimuli in fMRI experiments, such as a commercial movie [8], rather than the traditional controlled, repeating simple stimuli. These experiments present more difficulty in analysis, but more closely mirror natural stimulation of the brain, which may evoke different brain activity patterns from traditional experiments. The recent Pittsburgh Brain Activity Interpretation Competition [2] (PBAIC) featured both the use of complex stimuli and a prediction task, presenting a unique data set for predicting subjective experiences given functional MRI sessions.
Functional scans from three subjects were taken while the subjects watched three video segments. Thus, during the scan, subjects were exposed to rich stimuli including rapidly changing images of people, meaningful sounds such as dialog and music, and even emotional stimuli, all overlapping in time. Each subject also re-viewed each movie multiple times, to rate over a dozen characteristics of the videos over time, such as Amusement, presence of Faces or Body Parts, Language, and Music. Given this data set, the goal was to predict these real-valued subjective ratings for each subject based only on the fMRI scans. In this paper, we present an approach to the PBAIC problem, based on the application of machine learning methods within the framework of probabilistic graphical models. The structured probabilistic framework allowed us to represent many relevant relationships in the data, including evolution of subjective ratings over time, the likelihood of different subjects rating experiences similarly, and of course the relationship between voxels and ratings. We also explored novel feature selection methods, which exploit the spatial characteristics of brain activity. In particular, we incorporate a bias in favor of jointly selecting nearby voxels. We demonstrate the performance of our model by training from a subset of the movie sessions and predicting ratings for held out movies. An earlier variant of our model was the third place entrant to the 2006 PBAIC out of forty entries. We demonstrated very good performance in predicting many of the ratings, suggesting that probabilistic modeling for the fMRI domain is a promising approach. An analysis of our learned models, in particular our feature selection results, also provides some insight into the regions of the brain activated by different stimuli and states. 
2 Probabilistic Model

Our system for prediction from fMRI data is based on a dynamic, undirected graphical probabilistic model, which defines a large structured conditional Gaussian over time and subjects. The backbone of the model is a conditional linear Gaussian model, capturing the dependence of ratings on voxel measurements. We then extend the basic model to incorporate dependencies between labels across time and between subjects. The variables in our model are voxel activations and ratings. For each subject s and each time point t, we have a collection of ratings R_s(·, t), with R_s(j, t) representing the j-th rating type (for instance Language) for s at time t. Note that the rating sequences given by the subjects are actually convolved with a standard hemodynamic response function before use, to account for the delay inherent in the BOLD signal response [4]. For each s and t, we also have the voxel activities V_s(·, t). Both voxels and ratings are continuous variables. For mathematical convenience, we recenter the data such that all variables (ratings and voxels) have mean 0. Each rating R_s(j, t) is modeled as a linear Gaussian random variable, dependent only on voxels from that subject's brain as features. We can express

R_s(j, t) ~ N(w_s(j)^T V_s(·, t), \sigma_s^2).

We assume that the dependence of the rating on the voxels is time-invariant, so that the same parameters w_s(j) and \sigma_s are used for every time point. Importantly, however, each rating should not depend on all of the subject's voxels, as this is neither biologically likely nor statistically plausible given the large number of voxels. In Sec. 3.1 we explore a variety of feature selection and regularization methods relevant to this problem. The linear regression model forms a component in a larger model that accounts for dependencies among labels across time and across subjects. This model takes the form of a (dynamic) Gaussian Markov Random Field (GMRF) [15, 11].
A GMRF is an undirected graphical probabilistic model that expresses a multi-dimensional joint Gaussian distribution in a reduced parameter space by making use of conditional independences. Specifically, we employ a standard representation of a GMRF derived from the inverse covariance matrix, or precision matrix Q = Σ^{-1}, of the underlying Gaussian distribution: for X = (X_1, ..., X_n), a zero-mean joint Gaussian distribution over X can be written as

P(X) \propto \exp\left(-\frac{1}{2} X^T Q X\right).

The precision matrix maps directly to a Markov network representation, as Q(i, j) = 0 exactly when X_i is independent of X_j given the remaining variables, corresponding to the absence of an edge between X_i and X_j in the Markov network.

Figure 1: GMRF model for one rating, R_·(j, t), over three subjects and three time steps.

In our setting, we want to express a conditional linear Gaussian of the ratings given the voxels. A distribution P(X | Y) can also be parametrized using the joint precision matrix:

P(X | Y) = \frac{1}{Z(Y)} \prod_i \exp\left(-\frac{1}{2} Q_{XX}(i, i) X_i^2\right) \prod_{i,j \in E_X} \exp\left(-Q_{XX}(i, j) X_i X_j\right) \prod_{i,k \in E_Y} \exp\left(-Q_{XY}(i, k) X_i Y_k\right),

where E_X is the set of edges between nodes in X, and E_Y represents edges from Y to X. Our particular GMRF is a joint probabilistic model that encompasses, for a particular rating type j, the value of the rating R_s(j, t) for all of the subjects s across all time points t. Our temporal model assumes a stationary distribution, so that both node and edge potentials are invariant across time. This means that several entries in the full precision matrix Q are tied to a single free parameter. We will treat each rating type separately.
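The correspondence between zeros of the precision matrix and conditional independence can be checked numerically. The toy chain below (our own illustration, not from the paper) verifies via a Schur complement that a zero entry Q(1, 3) implies X_1 is independent of X_3 given X_2, even though X_1 and X_3 are marginally correlated:

```python
import numpy as np

# A tridiagonal precision matrix Q encodes the chain X1 - X2 - X3:
# Q[0, 2] = 0 means X1 and X3 share no edge, i.e. they are
# conditionally independent given X2.
Q = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
Sigma = np.linalg.inv(Q)  # covariance of the zero-mean joint Gaussian

# Conditional covariance of (X1, X3) given X2, via the Schur complement.
idx, obs = [0, 2], [1]
S = Sigma[np.ix_(idx, idx)] - Sigma[np.ix_(idx, obs)] @ \
    np.linalg.inv(Sigma[np.ix_(obs, obs)]) @ Sigma[np.ix_(obs, idx)]
# S[0, 1] is numerically zero even though Sigma[0, 2] is not:
# marginally correlated, conditionally independent.
```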
Thus, the variables in the model are: all of the voxel measurements V_s(l, t), for all s, t and voxels l selected to be relevant to rating j; and all of the ratings R_s(j, t) (for all s, t). As we discuss below, the model is trained conditionally, and therefore encodes a joint distribution over the rating variables, conditional on all of the voxel measurements. Thus, there will be no free parameters corresponding to the voxel nodes due to the use of a conditional model, while rating nodes R_s(j, ·) have an associated node potential parameter Q_node(s, j). Each rating node R_s(j, t) has edges connecting it to a subset of relevant voxels from V_s(·, t) at the same time slice. The set of voxels can vary for different ratings or subjects, but is consistent across time. The precision matrix entry Q_voxel(s, j, v) parametrizes the edge from voxel v to rating j. To encode the dependencies between the rating at different time points, our dynamic model includes edges between each rating R_s(j, t) and the previous and following ratings, R_s(j, t-1) and R_s(j, t+1). The corresponding edge parameters are Q_time(s, j). We also use the GMRF to encode the dependencies between the ratings of different subjects, in a way that does not assume that the subjects gave identical ratings, by introducing appropriate edges in the model. Thus, we also have an edge between R_s(j, t) and R_{s'}(j, t) for all subject pairs s, s', parametrized by Q_subj(s, s', j). Overall, our model encodes the following conditional distribution:

P(R_·(j, ·) | V_·(·, ·)) = \frac{1}{Z(V_·(·, ·))} \prod_{s,t} \exp\left(-\frac{1}{2} Q_{node}(s, j) R_s(j, t)^2\right) \prod_{t,s,s'} \exp\left(-Q_{subj}(s, s', j) R_s(j, t) R_{s'}(j, t)\right) \prod_{s,t} \exp\left(-Q_{time}(s, j) R_s(j, t) R_s(j, t+1)\right) \prod_{s,l,t} \exp\left(-Q_{voxel}(s, j, l) R_s(j, t) V_s(l, t)\right).    (1)

3 Learning and Prediction

We learn the parameters of the model above from a data set consisting of all of the voxels and all the subjective ratings for all three subjects.
We train the parameters discriminatively [10], to maximize the conditional likelihood of the observed ratings given the observed voxel measurements, as specified in Eq. (1). Conditional training is appropriate in our setting, as our task is precisely to predict the ratings given the voxels; importantly, this form of training allows us to avoid modeling the highly noisy, high-dimensional voxel activation distribution. We split parameter learning into two phases, first learning the dependence of ratings on voxels, and then learning the parameters between rating nodes. The entire joint precision matrix over all voxels and ratings would be prohibitively large for our learning procedure, and this approximation was computationally much more efficient. In the first phase, we learn linear models to predict each rating given only the voxel activations. We then modify our graph, replacing the very large set of voxel nodes with a new, much smaller set of nodes representing the linear combinations of the voxel activations which we just learned. Using the reduced graph, we learn a much smaller precision matrix. We describe each of these steps below.

3.1 From Voxels to Ratings

To learn the dependencies of ratings on voxels for a single subject s, we find parameters w_s(j), using linear regression, which optimize

P(R_s(j, ·) | V_s(·, ·)) \propto \prod_t \exp\left(-\frac{1}{2\sigma_s^2} \left(R_s(j, t) - w_s(j)^T V_s(·, t)\right)^2\right).    (2)

However, to deal with the high dimensionality of the feature space relative to the number of training instances, we utilize feature selection; we also introduce regularization terms into the objective that can be viewed as a spatially-based prior over w_s(j). First, we reduce the number of voxels involved in the objective for each rating using a simple feature selection method: we compute the Pearson correlation coefficient for each voxel and each rating, and select the most highly correlated features.
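This first screening step might look as follows (a sketch with hypothetical names; the paper does not specify an implementation). It computes the Pearson correlation of every voxel time series with the rating sequence and keeps the m voxels with the largest absolute correlation:

```python
import numpy as np

def select_voxels(V, r, m):
    """Pick the m voxels whose time series correlate most strongly (in
    absolute value) with rating sequence r.  V has shape (T, num_voxels)."""
    Vc = V - V.mean(axis=0)          # center each voxel time series
    rc = r - r.mean()                # center the rating sequence
    num = Vc.T @ rc                  # covariance numerators, per voxel
    den = np.sqrt((Vc ** 2).sum(axis=0) * (rc ** 2).sum())
    corr = num / den                 # Pearson correlation, per voxel
    return np.argsort(-np.abs(corr))[:m]
```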
The number of voxels to select is a setting which we tuned, for each rating type individually, using five-fold cross-validation on the training set. We chose to use the same number of voxels across subjects, which is more restrictive but increases the amount of data available for cross-validation. Even following this feature selection process, we often still have a large number (perhaps hundreds) of relevant voxels as features, and these features are quite noisy. We therefore employ additional regularization over the parameters associated with these voxels. We explored both L2 (ridge) and L1 (Lasso) regularization, corresponding to a Gaussian and a Laplacian prior respectively. Introducing both types of regularization, we end up with a log-likelihood objective of the form:

\sum_t \left(R_s(j, t) - w_s(j)^T V_s(·, t)\right)^2 + \alpha \sum_i w_s(j, i)^2 + \beta \sum_i |w_s(j, i)|    (3)

Finally, we introduce a novel form of regularization, intended to model spatial regularities. Brain activity associated with some types of stimuli, such as language, is believed to be localized to some number of coherent regions, each of which contains multiple activated voxels. We therefore want to bias our feature selection process in favor of selecting multiple voxels that are nearby in space; more precisely, we would prefer to select a voxel which is in the vicinity of other correlated voxels, over a more strongly correlated voxel which is isolated in the brain, as the latter is more likely to result from noise. We therefore define a robust "hinge-loss"-like distance function for voxels. Letting \|v_i - v_k\|_2 denote the Euclidean distance between voxels v_i and v_k in the brain, we define:

D(i, k) = 1 if \|v_i - v_k\|_2 < d_min;  0 if \|v_i - v_k\|_2 > d_max;  (d_max - \|v_i - v_k\|_2) / (d_max - d_min) otherwise.

We now introduce an additional regularization term

-\lambda \sum_{i,k} |w_s(j, i)| \, D(i, k) \, |w_s(j, k)|

into the objective Eq. (3). This term can offset the L1 term by co-activating voxels that are spatially nearby.
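Concretely, the hinge distance and the penalty it induces might be implemented as below (a sketch; the d_min/d_max defaults are illustrative placeholders, since the paper does not report the values used):

```python
import numpy as np

def hinge_distance(pos, dmin=1.0, dmax=3.0):
    """Pairwise hinge-loss distance D(i, k): 1 for voxels closer than
    dmin, 0 beyond dmax, and a linear ramp in between.  pos has shape
    (num_voxels, 3); dmin/dmax are hypothetical defaults."""
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return np.clip((dmax - dist) / (dmax - dmin), 0.0, 1.0)

def spatial_penalty(w, D, lam):
    """The extra term -lam * sum_{i,k} |w_i| D(i, k) |w_k| added to Eq. (3)."""
    a = np.abs(w)
    return -lam * a @ D @ a
```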
Thus, it encourages, but does not force, co-selection of nearby voxels. Note that this regularization term is applied to the absolute values of the voxel weights, hence allowing nearby voxels to have opposite effects on the rating; we do observe such cases in our learned model. Note that, according to our definition, the spatial prior uses simple Euclidean distance in the brain. This is clearly too simplistic, as it ignores the structure of the brain, particularly the complex folding of the cortex. A promising extension of this idea would be to apply a geodesic version of distance instead, measuring distance over gray matter only.

3.2 Training the Joint Model

We now describe the use of the regression parameters, as learned in Sec. 3.1, to reduce the size of our joint precision matrix and learn the final parameters, including the inter-rating edge weights. Given w_s(j), which we consider the optimal linear combination of V_s(·, t) for predicting R_s(j, t), we remove the voxel nodes V_s(·, t) from our model, and introduce new 'summary' nodes U_j(t) = w_s(j)^T V_s(·, t). Now, instead of finding Q_voxel(s, j, v) parameters for every voxel v individually, we only have to find a single parameter Q_u(s, j). Given the structure of our original linear Gaussian model, there is a direct relationship between optimization in the reduced formulation and optimization in the original formulation. Assuming w_s(j) is the optimal set of regression parameters, the optimal Q_voxel(s, j, l) in the full form would be proportional to Q_u(s, j) w_s(j, l), optimized in the reduced form. This does not guarantee that our two-phase learning results in globally optimal parameter settings, but simply that, given w_s(j), the reduction described is valid. The joint optimization of Q_u(s, j), Q_node(s, j), Q_time(s, j), and Q_subj(s, s', j) is performed according to the reduced conditional likelihood. The reduced form of Eq.
(1) simply replaces the final terms containing Q_voxel(s, j, l) with:

\prod_{s,t} \exp\left(-Q_u(s, j) R_s(j, t) U_j(t)\right).    (4)

The final objective is computationally feasible due to the reduced parameter space. The log likelihood is a convex function of all our parameters, with the final joint precision matrix constrained to be positive semi-definite to ensure a legal Gaussian distribution. Thus, we can solve the problem with semi-definite programming using a standard convex optimization package [1]. Last, we combine all learned parameters from both steps, repeated across time steps, for the final joint model.

3.3 Prediction

Prediction of unseen ratings given new fMRI scans can be obtained through probabilistic inference on the models learned for each rating type. We incorporate the observed voxel data from all three subjects as observed values in our GMRF, which induces a Gaussian posterior over the joint set of ratings. We only need to predict the most likely assignment to ratings, which is the mean (or mode) of this Gaussian posterior. The mean can be easily computed using coordinate ascent over the log likelihood of our joint Gaussian model. More precisely, we iterate over the nodes (recall there is one node for each subject at each time step), and update each node's mean to the most likely value given the current estimated means of its neighbors in the GMRF. Let Q_RR be the joint precision matrix, over all nodes over time and subjects, constructed from Q_u(·, ·), Q_time(·, ·), Q_subj(·, ·, ·), and Q_node(·, ·). Then for each node k with neighbors N_k according to the graph structure of our GMRF, we update

\mu_k \Leftarrow -\sum_{j \in N_k} \mu_j Q_RR(k, j).

As the objective is convex, this process is guaranteed to converge to the mode of the posterior Gaussian, providing the most likely ratings for all subjects, at all time points, given the functional data from scans during a new movie.
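A minimal sketch of this coordinate-ascent (Gauss-Seidel) procedure on a toy precision matrix follows. The linear term b is our stand-in for the contribution of the observed voxel summaries, and each update divides by the node's own precision, i.e. sets the node to its conditional mean given its neighbors:

```python
import numpy as np

def posterior_mean(Q, b, iters=100):
    """Coordinate ascent on a Gaussian log-density -0.5 x'Qx + b'x:
    each pass sets every coordinate to its conditional mean given the
    current values of the others (Gauss-Seidel), converging to the
    mode Q^{-1} b for positive-definite Q."""
    x = np.zeros(len(b))
    for _ in range(iters):
        for k in range(len(b)):
            # b[k] minus the off-diagonal contributions, over Q[k, k].
            x[k] = (b[k] - Q[k].dot(x) + Q[k, k] * x[k]) / Q[k, k]
    return x

# Toy chain precision matrix (e.g. time edges between three rating
# nodes) and a hypothetical bias from the observed evidence:
Q = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
b = np.array([1.0, 0.0, 1.0])
mu = posterior_mean(Q, b)  # agrees with np.linalg.solve(Q, b)
```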
4 Experimental Results

As described, the fMRI data collected for the PBAIC included fMRI scans of three different subjects, with three sessions each. In each of the sessions, a subject viewed a movie approximately 20 minutes in length, constructed from clips of the Home Improvement sitcom. All three subjects watched the same movies, referred to as Movie1, Movie2 and Movie3. The scans produced volumes with approximately 30,000 brain voxels, each approximately 3.28mm by 3.28mm by 3.5mm, with one volume produced every 1.75 seconds. Subsequently, the subject watched the movie again multiple times (not during an fMRI session), and rated a variety of characteristics at time intervals corresponding to the fMRI volume rate. Before use in prediction, the rating sequences were convolved with a standard hemodynamic response function [4]. The core ratings used in the competition were Amusement, Attention, Arousal, Body Parts, Environmental Sounds, Faces, Food, Language, Laughter, Motion, Music, Sadness, and Tools. Since the ratings are continuous values, competition scoring was based on the correlation (for frames where the movie is playing) of predicted ratings with true ratings, across rating types and all subjects, combined using a z'-transform. For consistency, we adhere to the use of correlation as our performance metric.

Figure 2: (a) Average correlation of predicted ratings and true ratings, for simple models, the full GMRF, and finally including the spatial (Sp) prior. (b) Correlations for individual ratings, for subject 3. (c) Effect of varying the number of voxels used for Language, Amusement, and BodyParts.
To train our model, we use the fMRI measurements along with all ratings from all subjects' sessions for some set of movies, holding out other movies for testing. We chose to use an entire held-out movie session because the additional variance between fMRI sessions is an important aspect of the prediction task. The training set is used both to learn the model parameters and for the cross-validation step used to select regularization settings. The learned model is then used to predict ratings for the held-out test movies for all subjects, from fMRI data alone. Our GMRF model shows significant improvement over simpler models on the prediction task, and a version of this model was used in our submission to the PBAIC. We also evaluate the results of our feature selection steps, examining which regions of the brain are used for each rating prediction.

4.1 Rating Prediction

For our own evaluation outside of the competition, given that we did not have access to Movie3 ratings for testing, we trained a full model using functional data and ratings from the three subjects viewing Movie1, and then made predictions using the scans from all subjects for Movie2. The predictions made by the dynamic GMRF model were highly correlated with the true ratings. The best overall average correlation achieved for the held-out Movie2 was 0.482. For all subjects, the correlation for both Language and Faces was above 0.7, and we achieved correlations above 0.5 on 19 of the 39 core tasks (three subjects times 13 ratings). To evaluate the contribution of various components of our model, we also tested simpler versions, beginning with a regularized linear regression model. We also constructed two simplified versions of our GMRF, one which includes edges between subjects but not time interactions, and conversely one which includes time interactions but removes subject edges. Finally, we tested our full GMRF model, plus our GMRF model along with the spatial prior. As shown in Fig.
2(a), both the time dependencies and the cross-subject interactions help greatly over the linear regression model. The final combined model, which includes both time and subject edges, demonstrates significant improvement over including either alone. We also see that the addition of a spatial prior (using cross-validation to select which ratings to apply it to) results in a small additional improvement, which we explore further in Sec. 4.2. Performance on each of the rating types individually is shown in Fig. 2(b) for subject 3, for both linear regression and our GMRF. One interesting note is that the relative ordering of rating type accuracy for the different models is surprisingly consistent. As mentioned, we submitted the third place entry to the 2006 PBAIC. For the competition, we used our joint GMRF model, but had not developed the spatial prior presented here. We trained the model using data from Movie1 and Movie2 and the corresponding ratings from all three subjects. We submitted predictions for the unseen Movie3. Around 40 groups made final submissions. Our final score in the competition was 0.493, whereas 80% of the entries fell below 0.400. The first place group, Olivetti et al. [14], who employed recurrent neural networks with mutual-information-based feature selection, scored 0.515. The second group, scoring 0.509, was Chigirev et al. [5]; they applied regularized linear models with smoothing across time and spatially nearby voxels, and averaging across subjects. Some groups employed machine learning techniques such as Support Vector Regression, while others focused on defined Regions of Interest as features in prediction.

Figure 3: Voxels selected for various rating predictions, all for Subject 3. (a) Motion, (b) Faces, (c) Arousal.

4.2 Voxel Selection and Regularization

We also examined the results of feature selection and regularization, looking at the location of voxels used for each rating, and the differences resulting from various techniques.
Starting with the approximately 30,000 brain voxels per subject, we apply our feature selection techniques, using cross-validation on training sessions to determine the number of voxels used to predict each rating. The optimal number did vary significantly by rating, as the graph of performance in Fig. 2(c) demonstrates. For instance, a small voxel set (fewer than 100) performs well for the Body Parts rating, while the Language rating does well with several hundred voxels, and Amusement uses an intermediate number. This may reflect the actual size and number of brain regions activated by such stimuli, but likely also reflects voxel noise and the difficulty of the individual predictions. Visualization demonstrates that our selected voxels often occur in regions known to be responsive to relevant stimuli. For instance, voxels selected for Motion in all subjects include voxels in cortical areas known to respond to motion in the visual field (Fig. 3(a)). Likewise, many voxels selected for Language occur in regions linked to language processing (Fig. 4(b)). However, many other voxels were not from expected brain regions, attributable in part to noise in the data, but also to the intermixed and correlated stimuli in the videos. For instance, the ratings Language and Faces for subject 1 in Movie1 have correlation 0.68, and we observed that the voxels selected for Faces and Language overlapped significantly. Voxels in the language centers of the brain improve the prediction of Faces since the two stimuli are causally related, but it might be preferable to capture this correlation by adding edges between the rating nodes of our GMRF. Interestingly, there was some consistency in voxel selection between subjects, even though our model did not incorporate cross-subject voxel selection. Comparing the Faces voxels for Subject 3 (Fig. 3(b)) to those for Subject 2 (Fig. 4(a)), we see that the respective voxels do come from similar regions.
This provides further evidence that the feature selection methods are finding real patterns in the fMRI data. Finally, we discuss the results of applying our spatial prior. We added the prior for the ratings which it improved in cross-validation trials for all subjects — Motion, Language, and Faces. Comparison of the voxels selected with and without our spatial prior reveals that it does result in more spatially coherent groups of voxels. Note that the total number of voxels selected does not rise in general. As shown in Fig. 4(a), the voxels for Faces for subject 2 include a relevant group of voxels even without the prior, but including the spatial prior results in including additional voxels near this region. Similar results for Language are shown for subject 1. Arousal prediction was actually hurt by including the spatial prior, and looking at the voxels selected for Arousal (Fig. 3(c)), we see that there is almost no spatial grouping originally, so perhaps here the spatial prior is implausible.

5 Discussion

This work, and the other PBAIC entries, demonstrated that a wide range of subjective experiences can be predicted from fMRI data collected during subjects' exposure to rich stimuli. Our probabilistic model in particular demonstrated the value of time-series and multi-subject data, as the use of edges representing correlations across time and correlations between subjects each improved the accuracy of our predictions significantly.

Figure 4: Effect of applying the spatial prior. (a) Faces, Subject 2. (b) Language, Subject 1. In each pair, the left image is without and the right is with the prior applied.

Further, while voxels are very noisy, with appropriate regularization and the use of a spatially-based prior, reliable prediction was possible using individual voxels as features. Although voxels were selected from the whole brain, many of the voxels selected as features in our model were located in brain regions known to be activated by relevant stimuli.
One natural extension to our work would be the addition of interactions between distinct rating types, such as Language and Faces, which are likely to be correlated. This may improve predictions, and could also result in more targeted voxel selection for each rating. More broadly, though, the PBAIC experiments provided an extremely rich data set, including complex spatial and temporal interactions among brain voxels and among features of the stimuli. There are many aspects of this data we have yet to explore, including modeling the relationships between the voxels themselves across time, perhaps identifying interesting cascading patterns of voxel activity. Another interesting direction would be to determine which temporal aspects of the semantic ratings are best encoded by brain activity — for instance, it is possible that brain activity may respond more strongly to changes in some stimuli rather than simply stimulus presence. Such investigations could provide further insight into brain activity in response to complex stimuli, in addition to improving our ability to make accurate predictions from fMRI data.

Acknowledgments

This work was supported by NSF grant DBI-0345474.

References

[1] CVX Matlab software. http://www.stanford.edu/~boyd/cvx/.
[2] Pittsburgh brain activity interpretation competition: inferring experience-based cognition from fMRI. http://www.ebc.pitt.edu/competition.html.
[3] What's on your mind? Nature Neuroscience, 9(981), 2006.
[4] G. M. Boynton, S. A. Engel, G. H. Glover, and D. J. Heeger. Linear systems analysis of functional magnetic resonance imaging in human V1. J. Neurosci., 16:4207–4221, 1996.
[5] D. Chigirev, G. Stephens, and T. P. E. team. Predicting base features with supervoxels. Abstract presented, 12th HBM meeting, Florence, Italy, 2006.
[6] D. D. Cox and R. L. Savoy. Functional magnetic resonance imaging (fMRI) brain reading: detecting and classifying distributed patterns of fMRI activity in human visual cortex.
NeuroImage, 19:261–270, 2003.
[7] K. J. Friston, A. P. Holmes, K. J. Worsley, J. P. Poline, C. D. Frith, and R. S. J. Frackowiak. Statistical parametric maps in functional imaging: A general linear approach. HBM, 2(4):189–210, 1995.
[8] U. Hasson, Y. Nir, I. Levy, G. Fuhrmann, and R. Malach. Intersubject synchronization of cortical activity during natural vision. Science, 303(1634), 2004.
[9] Y. Kamitani and F. Tong. Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8:679–685, 2005.
[10] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[11] S. Lauritzen. Graphical Models. Oxford University Press, New York, 1996.
[12] N. K. Logothetis. The underpinnings of the BOLD functional magnetic resonance imaging signal. The Journal of Neuroscience, 23(10):3963–3971, 2003.
[13] T. Mitchell, R. Hutchinson, R. Niculescu, F. Pereira, X. Wang, M. Just, and S. Newman. Learning to decode cognitive states from brain images. Machine Learning, 57(1–2):145–175, 2004.
[14] E. Olivetti, D. Sona, and S. Veeramachaneni. Gaussian process regression and recurrent neural networks for fMRI image classification. Abstract presented, 12th HBM meeting, Florence, Italy, 2006.
[15] T. P. Speed and H. T. Kiiveri. Gaussian Markov distributions over finite graphs. Annals of Statistics, 14.
Particle Filtering for Nonparametric Bayesian Matrix Factorization

Frank Wood
Department of Computer Science, Brown University, Providence, RI 02912
fwood@cs.brown.edu

Thomas L. Griffiths
Department of Psychology, University of California, Berkeley, Berkeley, CA 94720
tom_griffiths@berkeley.edu

Abstract

Many unsupervised learning problems can be expressed as a form of matrix factorization, reconstructing an observed data matrix as the product of two matrices of latent variables. A standard challenge in solving these problems is determining the dimensionality of the latent matrices. Nonparametric Bayesian matrix factorization is one way of dealing with this challenge, yielding a posterior distribution over possible factorizations of unbounded dimensionality. A drawback to this approach is that posterior estimation is typically done using Gibbs sampling, which can be slow for large problems and when conjugate priors cannot be used. As an alternative, we present a particle filter for posterior estimation in nonparametric Bayesian matrix factorization models. We illustrate this approach with two matrix factorization models and show favorable performance relative to Gibbs sampling.

1 Introduction

One of the goals of unsupervised learning is to discover the latent structure expressed in observed data. The nature of the learning problem will vary depending on the form of the data and the kind of latent structure it expresses, but many unsupervised learning problems can be viewed as a form of matrix factorization — i.e., decomposing an observed data matrix, X, into the product of two or more matrices of latent variables. If X is an N × D matrix, where N is the number of D-dimensional observations, the goal is to find a low-dimensional latent feature space capturing the variation in the observations making up X.
This can be done by assuming that X ≈ ZY, where Z is an N × K matrix indicating which of (and perhaps the extent to which) K latent features are expressed in each of the N observations, and Y is a K × D matrix indicating how those K latent features are manifest in the D-dimensional observation space. Typically, K is less than D, meaning that Z and Y provide an efficient summary of the structure of X. A standard problem for unsupervised learning algorithms based on matrix factorization is determining the dimensionality of the latent matrices, K. Nonparametric Bayesian statistics offers a way to address this problem: instead of specifying K a priori and searching for a "best" factorization, nonparametric Bayesian matrix factorization approaches such as those in [1] and [2] estimate a posterior distribution over factorizations with unbounded dimensionality (i.e., letting K → ∞). This remains computationally tractable because each model uses a prior that ensures that Z is sparse, based on the Indian Buffet Process (IBP) [1]. The search for the dimensionality of the latent feature matrices thus becomes a problem of posterior inference over the number of non-empty columns in Z. Previous work on nonparametric Bayesian matrix factorization has used Gibbs sampling for posterior estimation [1, 2]. Indeed, Gibbs sampling is the standard inference algorithm used in nonparametric Bayesian methods, most of which are based on the Dirichlet process [3, 4]. However, recent work has suggested that sequential Monte Carlo methods such as particle filtering can provide an efficient alternative to Gibbs sampling in Dirichlet process mixture models [5, 6]. In this paper we develop a novel particle filtering algorithm for posterior estimation in matrix factorization models that use the IBP, and illustrate its applicability to two specific models — one with a conjugate prior, and the other without a conjugate prior but tractable in other ways.
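As a toy illustration of this setup, the following NumPy sketch builds an X that is approximately ZY, with a binary Z indicating which latent features each observation expresses. All dimensions and parameter values here are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N observations, K latent features, D observed dimensions.
N, K, D = 100, 4, 36
Y = rng.normal(size=(K, D))        # how each latent feature appears in observation space
Z = rng.random((N, K)) < 0.5       # binary indicators of which features are expressed
X = Z @ Y + 0.1 * rng.normal(size=(N, D))   # observed data, approximately equal to ZY
```

Recovering Z and Y (and K itself) from X alone is the inference problem the rest of the paper addresses.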
Our particle filtering algorithm is by nature an "on-line" procedure, in which each row of X is processed only once, in sequence. This stands in contrast to Gibbs sampling, which must revisit each row many times to converge to a reasonable representation of the posterior distribution. We present simulation results showing that our particle filtering algorithm can be significantly more efficient than Gibbs sampling for each of the two models, and discuss its applicability to the broad class of nonparametric matrix factorization models based on the IBP.

2 Nonparametric Bayesian Matrix Factorization

Let X be an observed N × D matrix. Our goal is to find a representation of the structure expressed in this matrix in terms of the latent matrices Z (N × K) and Y (K × D). This can be formulated as a statistical problem if we view X as being produced by a probabilistic generative process, resulting in a probability distribution P(X|Z, Y). The critical assumption necessary to make this a matrix factorization problem is that the distribution of X is conditionally dependent on Z and Y only through the product ZY. Although defining P(X|Z, Y) allows us to use methods such as maximum-likelihood estimation to find a point estimate, our goal is instead to compute a posterior distribution over possible values of Z and Y. To do so we need to specify a prior over the latent matrices, P(Z, Y), after which we can use Bayes' rule to find the posterior distribution over Z and Y:

P(Z, Y|X) ∝ P(X|Z, Y) P(Z, Y).   (1)

This constitutes Bayesian matrix factorization, but two problems remain: the choice of K, and the computational cost of estimating the posterior distribution. Unlike standard matrix factorization methods that require an a priori choice of K, nonparametric Bayesian approaches allow us to estimate a posterior distribution over Z and Y where the size of these matrices is unbounded.
The models we discuss in this paper place a prior on Z that gives each "left-ordered" binary matrix (see [1] for details) probability

P(Z) = \frac{\alpha^{K_+}}{\prod_{h=1}^{2^N-1} K_h!} \exp\{-\alpha H_N\} \prod_{k=1}^{K_+} \frac{(N - m_k)!\,(m_k - 1)!}{N!}   (2)

where K_+ is the number of columns of Z with non-zero entries, m_k is the number of 1's in column k, N is the number of rows, H_N = \sum_{i=1}^{N} 1/i is the Nth harmonic number, and K_h is the number of columns in Z that, when read top-to-bottom, form a sequence of 1's and 0's corresponding to the binary representation of the number h. This prior on Z is a distribution on sparse binary matrices that favors those that have few columns with many ones, with the rest of the columns being all zeros. This distribution can be derived as the outcome of a sequential generative process called the Indian buffet process (IBP) [1]. Imagine an Indian restaurant into which N customers arrive one by one and serve themselves from the buffet. The first customer loads her plate from the first Poisson(α) dishes. The ith customer chooses dishes in proportion to their popularity, taking dish k with probability m_k/i, where m_k is the number of people who have chosen the kth dish previously, and then chooses Poisson(α/i) new dishes. If we record the choices of each customer on one row of a matrix whose columns correspond to dishes on the buffet (1 if chosen, 0 if not), then (the left-ordered form of) that matrix constitutes a draw from the distribution in Eqn. 2. The order in which the customers enter the restaurant has no bearing on the distribution of Z (up to permutation of the columns), making this distribution exchangeable. In this work we assume that Z and Y are independent, with P(Z, Y) = P(Z)P(Y). As shown in Fig. 1, since we use the IBP prior for P(Z), Y is a matrix with an infinite number of rows and D columns. We can take any appropriate distribution for P(Y), and the infinite number of rows will not pose a problem because only K_+ rows will interact with non-zero elements of Z.
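The generative process just described, and the probability in Eqn. 2, translate directly into code. The following Python sketch (function names are ours) draws Z one customer at a time and evaluates log P(Z):

```python
import math
from collections import Counter

import numpy as np

def ibp_sample(N, alpha, rng):
    """Simulate the Indian buffet process: N customers, strength parameter alpha."""
    Z = []   # rows sampled so far; columns (dishes) grow as new ones appear
    m = []   # m[k] = number of previous customers who chose dish k
    for i in range(1, N + 1):
        row = []
        for k in range(len(m)):
            take = rng.random() < m[k] / i   # existing dish k with probability m_k / i
            row.append(int(take))
            m[k] += int(take)
        k_new = int(rng.poisson(alpha / i))  # then Poisson(alpha / i) new dishes
        row.extend([1] * k_new)
        m.extend([1] * k_new)
        Z = [r + [0] * k_new for r in Z]     # pad earlier customers with the new columns
        Z.append(row)
    return np.array(Z, dtype=int)

def ibp_log_prob(Z, alpha):
    """Log of Eqn. 2 for a left-ordered binary matrix Z."""
    N, K = Z.shape
    m = Z.sum(axis=0)
    K_plus = int((m > 0).sum())
    H_N = sum(1.0 / i for i in range(1, N + 1))   # Nth harmonic number
    # K_h: multiplicity of each column "history" h (column read top-to-bottom as binary)
    hist = Counter(int("".join(str(int(z)) for z in Z[:, k]), 2)
                   for k in range(K) if m[k] > 0)
    logp = K_plus * math.log(alpha) - alpha * H_N
    logp -= sum(math.lgamma(c + 1) for c in hist.values())   # -sum_h log K_h!
    for mk in m[m > 0]:
        logp += math.lgamma(N - mk + 1) + math.lgamma(mk) - math.lgamma(N + 1)
    return logp
```

As a sanity check, for N = 1 a single-dish matrix has probability Poisson(α) evaluated at 1, i.e. α e^{-α}, which the formula reproduces.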
A posterior distribution over Z and Y implicitly defines a distribution over the effective dimensionality of these matrices, through K_+.

Figure 1: Nonparametric Bayesian matrix factorization. The data matrix X is the product of Z and Y, which have an unbounded number of columns and rows respectively.

This approach to nonparametric Bayesian matrix factorization has been used for both continuous [1, 7] and binary [2] data matrices X. Since the posterior distribution defined in Eqn. 1 is generally intractable, Gibbs sampling has previously been employed to construct a sample-based representation of this distribution. However, generally speaking, Gibbs sampling is slow, requiring each entry in Z and Y to be repeatedly updated conditioned on all of the others. This problem is compounded in contexts where the number of rows of X increases as new observations are introduced, since the Gibbs sampler would need to be restarted after the introduction of each new observation.

3 Particle Filter Posterior Estimation

Our approach addresses the problems faced by the Gibbs sampler by exploiting the fact that the prior on Z is recursively decomposable. To explain this we need some new notation: let X^(i) be the ith row of X, and let X^(1:i) and Z^(1:i) be all the rows of X and Z up to i, respectively. Because the IBP prior is recursively decomposable, it is easy to sample from P(Z^(1:i)|Z^(1:i-1)); to do so, simply follow the IBP in choosing dishes for the ith customer given the record of which dishes were chosen by the first i − 1 customers (see Algorithm 1). Applying Bayes' rule, we can write the posterior on Z^(1:i) and Y given X^(1:i) in the following form:

P(Z^(1:i), Y|X^(1:i)) ∝ P(X^(i)|Z^(1:i), Y, X^(1:i-1)) P(Z^(1:i), Y|X^(1:i-1)).
(3)

Here we do not index Y as it is always an infinite matrix.* If we could evaluate P(Z^(1:i-1), Y|X^(1:i-1)), we could obtain weighted samples (or "particles") from P(Z^(1:i), Y|X^(1:i)) using importance sampling, with a proposal distribution of

P(Z^(1:i), Y|X^(1:i-1)) = \sum_{Z^{(1:i-1)}} P(Z^{(1:i)}|Z^{(1:i-1)}) P(Z^{(1:i-1)}, Y|X^{(1:i-1)})   (4)

and taking

w_ℓ ∝ P(X^{(i)}|Z^{(1:i)}_{(ℓ)}, Y_{(ℓ)}, X^{(1:i-1)})   (5)

as the weight associated with the ℓth particle. However, we could also use a similar scheme to approximate P(Z^(1:i-1), Y|X^(1:i-1)) if we could evaluate P(Z^(1:i-2), Y|X^(1:i-2)). Following Eqn. 4, we could then approximately generate a set of weighted particles from P(Z^(1:i), Y|X^(1:i-1)) by using the IBP to sample a value from P(Z^(1:i)|Z^(1:i-1)_(ℓ)) for each particle from P(Z^(1:i-1), Y|X^(1:i-1)) and carrying forward the weights associated with those particles. This "particle filtering" procedure defines a recursive importance sampling scheme for the full posterior P(Z, Y|X), and is known as sequential importance sampling [8]. When applied in its basic form this procedure can produce particles with extreme weights, so we resample the particles at each iteration of the recursion from the distribution given by their normalized weights and set w_ℓ = 1/L for all ℓ, a standard method known as sequential importance resampling [8]. The procedure defined in the previous paragraphs is a general-purpose particle filter for matrix-factorization models based on the IBP. This procedure will work even when the prior defined on

*In practice, we need only keep track of the rows of Y that correspond to the non-empty columns of Z, as the posterior distribution for the remaining entries is just the prior. Thus, if new non-empty columns are added in moving from Z^(i-1) to Z^(i), we need to expand the number of rows of Y that we represent accordingly.
Y is not conjugate to the likelihood (and is much simpler than other algorithms for using the IBP with non-conjugate priors, e.g. [9]). However, the procedure can be simplified further in special cases. The following example applications illustrate the particle filtering approach for two different models. In the first case, the prior over Y is conjugate to the likelihood, which means that Y need not be represented. In the other case, although the prior is not conjugate and thus Y does need to be explicitly represented, we present a way to improve the efficiency of this general particle filtering approach by taking advantage of certain analytic conditionals. The particle filtering approach results in significant improvements in performance over Gibbs sampling in both models.

Algorithm 1 Sample P(Z^(1:i)|Z^(1:i-1), α) using the Indian buffet process
1: Z ← Z^(1:i-1)
2: if i = 1 then
3:   sample K_i^new ~ Poisson(α)
4:   Z_{i, 1:K_i^new} ← 1
5: else
6:   K_+ ← number of non-zero columns in Z
7:   for k = 1, ..., K_+ do
8:     sample z_{i,k} according to P(z_{i,k} = 1) = m_{-i,k}/i
9:   end for
10:  sample K_i^new ~ Poisson(α/i)
11:  Z_{i, K_+ + 1 : K_+ + K_i^new} ← 1
12: end if
13: Z^(1:i) ← Z

4 A Conjugate Model: Infinite Linear-Gaussian Matrix Factorization

In this model, explained in detail in [1], the entries of both X and Y are continuous. We report results on the modeling of image data of the same kind as was originally used to demonstrate the model in [1]. Here each row of X is an image, each row of Z indicates the "latent features" present in that image, such as the objects it contains, and each row of Y indicates the pixel values associated with a latent feature. The likelihood for this image model is matrix Gaussian,

P(X|Z, Y, σ_X) = \frac{1}{(2\pi\sigma_X^2)^{ND/2}} \exp\left\{-\frac{1}{2\sigma_X^2} \mathrm{tr}\left((X - ZY)^T (X - ZY)\right)\right\}

where σ_X^2 is the noise variance. The prior on the parameters of the latent features is also Gaussian,

P(Y|σ_Y) = \frac{1}{(2\pi\sigma_Y^2)^{KD/2}} \exp\left\{-\frac{1}{2\sigma_Y^2} \mathrm{tr}(Y^T Y)\right\}

with each element having variance σ_Y^2.
Because both the likelihood and the prior are matrix Gaussian, they form a conjugate pair, and Y can be integrated out to yield the collapsed likelihood

P(X|Z, σ_X) = \frac{1}{(2\pi)^{ND/2} \sigma_X^{(N-K_+)D} \sigma_Y^{K_+ D} \left|Z_+^T Z_+ + \frac{\sigma_X^2}{\sigma_Y^2} I_{K_+}\right|^{D/2}} \exp\left\{-\frac{1}{2\sigma_X^2} \mathrm{tr}(X^T \Sigma^{-1} X)\right\}   (6)

which is matrix Gaussian with \Sigma^{-1} = I - Z_+ \left(Z_+^T Z_+ + \frac{\sigma_X^2}{\sigma_Y^2} I_{K_+}\right)^{-1} Z_+^T. Here Z_+ = Z_{1:i, 1:K_+} is the first K_+ columns of Z, and K_+ is the number of non-zero columns of Z.

4.1 Particle Filter

The use of a conjugate prior means that we do not need to represent Y explicitly in our particle filter. In this case the particle filter recursion shown in Eqns. 3 and 4 reduces to

P(Z^(1:i)|X^(1:i)) ∝ P(X^(i)|Z^(1:i), X^(1:i-1)) \sum_{Z^{(1:i-1)}} P(Z^{(1:i)}|Z^{(1:i-1)}) P(Z^{(1:i-1)}|X^{(1:i-1)})

and may be implemented as shown in Algorithm 2.

Algorithm 2 Particle filter for the infinite linear-Gaussian model
1: initialize L particles Z^(0)_ℓ, ℓ = 1, ..., L
2: for i = 1, ..., N do
3:   for ℓ = 1, ..., L do
4:     sample Z^(1:i)_ℓ from Z^(1:i-1)_ℓ using Algorithm 1
5:     calculate w_ℓ using Eqns. 5 and 7
6:   end for
7:   normalize particle weights
8:   resample particles according to the weight cumulative distribution
9: end for

Figure 2: Generation of X under the linear Gaussian model. The first four images (left to right) correspond to the true latent features, i.e. rows of Y. The fifth shows how the images get combined, with two source images added together by multiplying by a single row of Z, z_{i,:} = [1 0 0 1]. The sixth is Gaussian noise. The seventh image is the resulting row of X.

Reweighting the particles requires computing P(X^(i)|Z^(1:i), X^(1:i-1)), the conditional probability of the most recent row of X given all the previous rows and Z. Since P(X^(1:i)|Z^(1:i)) is matrix Gaussian, we can find the required conditional distribution by following the standard rules for conditioning in Gaussians.
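Eqn. 6 can be checked numerically: integrating out Y is equivalent to each column of X being marginally Gaussian with covariance σ_Y² Z₊Z₊ᵀ + σ_X² I. The following sketch (function names are ours; it assumes the columns of Z passed in are the non-empty ones) computes the collapsed log-likelihood both ways:

```python
import numpy as np

def collapsed_loglik(X, Z, sx, sy):
    """Log of Eqn. 6: P(X | Z, sigma_X) with Y integrated out."""
    N, D = X.shape
    K = Z.shape[1]                       # assumed: all K columns of Z are non-empty
    M = Z.T @ Z + (sx ** 2 / sy ** 2) * np.eye(K)
    Sinv = np.eye(N) - Z @ np.linalg.solve(M, Z.T)   # the Sigma^{-1} of Eqn. 6
    _, logdetM = np.linalg.slogdet(M)
    log_norm = (N * D / 2) * np.log(2 * np.pi) + (N - K) * D * np.log(sx) \
               + K * D * np.log(sy) + (D / 2) * logdetM
    return -log_norm - np.trace(X.T @ Sinv @ X) / (2 * sx ** 2)

def marginal_loglik(X, Z, sx, sy):
    """Same quantity computed directly: each column of X is N(0, sy^2 Z Z^T + sx^2 I)."""
    N, D = X.shape
    C = sy ** 2 * Z @ Z.T + sx ** 2 * np.eye(N)
    _, logdetC = np.linalg.slogdet(C)
    quad = np.trace(X.T @ np.linalg.solve(C, X))
    return -0.5 * (N * D * np.log(2 * np.pi) + D * logdetC + quad)
```

The two agree by the matrix determinant lemma and the Woodbury identity, which is exactly how Eqn. 6 is derived.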
Letting \Sigma_* = (\Sigma^{-1}/\sigma_X^2)^{-1} = \sigma_X^2 \Sigma be the covariance matrix for X^(1:i) given Z^(1:i), we can partition this matrix into four parts,

\Sigma_* = \begin{pmatrix} A & c \\ c^T & b \end{pmatrix}

where A is a matrix, c is a vector, and b is a scalar. Then the conditional distribution of X^(i) is

X^{(i)}|Z^{(1:i)}, X^{(1:i-1)} \sim \mathrm{Gaussian}(c^T A^{-1} X^{(1:i-1)},\; b - c^T A^{-1} c).   (7)

This requires inverting the matrix A, which grows linearly with the size of the data; however, A is highly structured and this can be exploited to reduce the cost of this inversion [10].

4.2 Experiments

We compared the particle filter in Algorithm 2 with Gibbs sampling on an image dataset similar to that used in [1]. Due to space limitations we refer the reader to [1] for the details of the Gibbs sampler for this model. As illustrated in Fig. 2, our ground-truth Y consisted of four different 6 × 6 latent images. A 100 × 4 binary ground-truth matrix Z was generated by sampling each entry from P(z_{i,k} = 1) = 0.5. The observed matrix X was generated by adding Gaussian noise with σ_X = 0.5 to each entry of ZY. Fig. 3 compares results from the particle filter and Gibbs sampler for this model. The performance of the models was measured by comparing a general error metric computed over the posterior distributions estimated by each approach. The error metric (the vertical axis in Figs. 3 and 5) was computed by taking the expectation of the matrix ZZ^T over the posterior samples produced by each algorithm and taking the summed absolute difference (i.e., L1 norm) between the upper triangular portion of E[ZZ^T] computed over the samples and the upper triangular portion of the true ZZ^T (including the diagonal). See Fig. 4 for an illustration of the information conveyed by ZZ^T. This error metric measures the distance of the mean of the posterior from the ground truth. It is zero if the mean of the distribution matches the ground truth.
It grows as a function of the difference between the ground truth and the posterior mean, accounting both for any difference in the number of latent factors that are present in each observation and for any difference in the number of latent factors that are shared between all pairs of observations. The particle filter was run using many different numbers of particles, P. For each value of P, the particle filter was run 10 times. The horizontal axis location of each errorbar in the plot is the mean wall-clock computation time on 2 GHz Athlon 64 processors running Matlab for the corresponding number of particles P, while the error bars indicate the standard deviation of the error. The Gibbs sampler was run for varying numbers of sweeps, with the initial 10% of samples being discarded. The number of Gibbs sampler sweeps was varied and the results are displayed in the same way as described for the particle filter above.

Figure 3: Performance results for particle filter vs. Gibbs sampling posterior estimation for the infinite linear Gaussian matrix factorization. Each point is an average over 10 runs with a particular number of particles or sweeps of the sampler, P = [1, 10, 100, 500, 1000, 2500, 5000] left to right, and error bars indicate the standard deviation of the error.

The results show that the particle filter attains low error in significantly less time than the Gibbs sampler, with the difference being an order of magnitude or more in most cases. This is a result of the fact that the particle filter considers only a single row of X on each iteration, reducing the cost of computing the likelihood.

5 A Semi-Conjugate Model: Infinite Binary Matrix Factorization

In this model, first presented in the context of learning hidden causal structure [2], the entries of both X and Y are binary.
Each row of X represents the values of a single observed variable across D trials or cases, each row of Y gives the values of a latent variable (a "hidden cause") across those trials or cases, and Z is the adjacency matrix of a bipartite Bayesian network indicating which latent variables influence which observed variables. Learning the hidden causal structure then corresponds to inferring Z and Y from X. The model fits our schema for nonparametric Bayesian matrix factorization (and hence is amenable to the use of our particle filter) since the likelihood function it uses depends only on the product ZY. The likelihood function for this model assumes that each entry of X is generated independently, P(X|Z, Y) = \prod_{i,d} P(x_{i,d}|Z, Y), with its probability given by the "noisy-OR" [11] of the causes that influence that variable (identified by the corresponding row of Z) and are active for that case or trial (expressed in Y). The probability that x_{i,d} takes the value 1 is thus

P(x_{i,d} = 1|Z, Y) = 1 - (1 - \lambda)^{z_{i,:} \cdot y_{:,d}} (1 - \epsilon)   (8)

where z_{i,:} is the ith row of Z, y_{:,d} is the dth column of Y, and z_{i,:} \cdot y_{:,d} = \sum_{k=1}^{K} z_{i,k} y_{k,d}. The parameter ε sets the probability that x_{i,d} = 1 when no relevant causes are active, and λ determines how this probability changes as the number of relevant active hidden causes increases. To complete the model, we assume that the entries of Y are generated independently from a Bernoulli process with parameter p, giving P(Y) = \prod_{k,d} p^{y_{k,d}} (1 - p)^{1 - y_{k,d}}, and use the IBP prior for Z.

5.1 Particle Filter

In this model the prior over Y is not conjugate to the likelihood, so we are forced to explicitly represent Y in our particle filter state, as outlined in Eqns. 3 and 4. However, we can define a more efficient algorithm than the basic particle filter due to the tractability of some integrals. This is why we call this model a "semi-conjugate" model.
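The noisy-OR likelihood in Eqn. 8 is a one-liner; here is a sketch (the function name is ours):

```python
import numpy as np

def noisy_or_prob(z_row, y_col, lam, eps):
    """Eqn. 8: P(x_{i,d} = 1) under the noisy-OR of the relevant active hidden causes."""
    active = int(np.dot(z_row, y_col))    # number of relevant active hidden causes
    return 1.0 - (1.0 - lam) ** active * (1.0 - eps)
```

With no active causes the expression reduces to ε, and each additional active cause multiplies the probability of x_{i,d} = 0 by a further factor of (1 − λ), matching the description above.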
The basic particle filter defined in Section 3 requires drawing the new rows of Y from the prior when we generate new columns of Z. This can be problematic, since the chance of producing an assignment of values to Y that has high probability under the likelihood can be quite low, in effect wasting many particles. However, if we can analytically marginalize out the new rows of Y, we can avoid sampling those values from the prior and instead sample them from the posterior, in effect saving many of the potentially wasted particles.

Algorithm 3 Particle filter for infinite binary matrix factorization
1: initialize L particles [Z^(0)_ℓ, Y^(0)_ℓ], ℓ = 1, ..., L
2: for i = 1, ..., N do
3:   for ℓ = 1, ..., L do
4:     sample Z^(i)_ℓ from Z^(i-1)_ℓ using Algorithm 1
5:     calculate w_ℓ using Eqns. 5 and 8
6:   end for
7:   normalize particle weights
8:   resample particles according to the weight CDF
9:   for ℓ = 1, ..., L do
10:    sample Y^(i)_ℓ from P(Y^(i)_ℓ|Z^(1:i)_ℓ, Y^(1:i-1)_ℓ, X^(1:i))
11:  end for
12: end for

Figure 4: Infinite binary matrix factorization results. On the left is ground truth, the causal graph representation of Z and ZZ^T. The middle and right are particle filtering results: a single random particle Z and E[ZZ^T] from a 500 and a 10000 particle run, middle and right respectively.

If we let Y^(1:i) denote the rows of Y that correspond to the first i columns of Z and Y^(i) denote the rows (potentially more than one) of Y that are introduced to match the new columns appearing in Z^(i), then we can write

P(Z^{(1:i)}, Y^{(1:i)}|X^{(1:i)}) = P(Y^{(i)}|Z^{(1:i)}, Y^{(1:i-1)}, X^{(1:i)})\, P(Z^{(1:i)}, Y^{(1:i-1)}|X^{(1:i)})   (9)

where

P(Z^{(1:i)}, Y^{(1:i-1)}|X^{(1:i)}) ∝ P(X^{(i)}|Z^{(1:i)}, Y^{(1:i-1)}, X^{(1:i-1)})\, P(Z^{(1:i)}, Y^{(1:i-1)}|X^{(1:i-1)}).   (10)

Thus, we can use the particle filter to estimate P(Z^(1:i), Y^(1:i-1)|X^(1:i)) (vs. P(Z^(1:i), Y^(1:i)|X^(1:i))) provided that we can find a way to compute P(X^(i)|Z^(1:i), Y^(1:i-1)) and sample from the distribution P(Y^(i)|Z^(1:i), Y^(1:i-1), X^(1:i)) to complete our particles.
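For concreteness, here is a minimal Python sketch of this sequential scheme for the noisy-OR model, in the basic Section 3 form that draws new rows of Y from the prior (Algorithm 3 improves on this by completing Y from its posterior). The function names are ours and the code is illustrative, not the authors' implementation:

```python
import numpy as np

def ibp_extend(Z_prev, alpha, rng):
    """One step of Algorithm 1: sample row i of Z given the previous i-1 rows."""
    i = Z_prev.shape[0] + 1
    m = Z_prev.sum(axis=0)
    row = (rng.random(Z_prev.shape[1]) < m / i).astype(int)   # existing dishes
    k_new = int(rng.poisson(alpha / i))                       # new dishes
    top = np.zeros((i - 1, k_new), dtype=int)
    Z = np.hstack([np.vstack([Z_prev, row]),
                   np.vstack([top, np.ones((1, k_new), dtype=int)])])
    return Z, k_new

def particle_filter(X, alpha, p, lam, eps, L, rng):
    """Basic SIR particle filter (Section 3 form): new rows of Y come from the prior."""
    D = X.shape[1]
    particles = [(np.zeros((0, 0), dtype=int), np.zeros((0, D), dtype=int))] * L
    for i in range(X.shape[0]):
        proposed, w = [], np.empty(L)
        for ell, (Z, Y) in enumerate(particles):
            Z, k_new = ibp_extend(Z, alpha, rng)
            Y = np.vstack([Y, (rng.random((k_new, D)) < p).astype(int)])
            counts = Z[-1] @ Y                               # active relevant causes per d
            px1 = 1.0 - (1.0 - lam) ** counts * (1.0 - eps)  # Eqn. 8
            w[ell] = np.prod(np.where(X[i] == 1, px1, 1.0 - px1))
            proposed.append((Z, Y))
        w /= w.sum()
        idx = rng.choice(L, size=L, p=w)    # resample; weights implicitly reset to 1/L
        particles = [proposed[j] for j in idx]
    return particles
```

Each observation is processed once, and the per-step cost depends only on the current number of non-empty columns, which is the source of the speedups reported below.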
The procedure described in the previous paragraph is possible in this model because, while our prior on Y is not conjugate to the likelihood, it is still possible to compute P(X^(i)|Z^(1:i), Y^(1:i-1)). The entries of X^(i) are independent given Z^(1:i) and Y^(1:i). Since the entries in each column of Y^(i) will influence only a single entry in X^(i), this independence is maintained when we sum out Y^(i). So we can derive an analytic solution to P(X^(i)|Z^(1:i), Y^(1:i-1)) = \prod_d P(x_{i,d}|Z^(1:i), Y^(1:i-1)) where

P(x_{i,d} = 1|Z^{(1:i)}, Y^{(1:i-1)}) = 1 - (1 - \epsilon)(1 - \lambda)^{\eta} (1 - \lambda p)^{K_i^{new}}   (11)

with K_i^new being the number of new columns in Z^(i), and \eta = z_{i, 1:K_+^{(1:i)}} \cdot y_{1:K_+^{(1:i)}, d}. For a detailed derivation see [2]. This gives us the likelihood we need for reweighting particles Z^(1:i) and Y^(1:i-1). The posterior distribution on Y^(i) is straightforward to compute by combining the likelihood in Eqn. 8 with the prior P(Y). The particle filtering algorithm for this model is given in Algorithm 3.

5.2 Experiments

We compared the particle filter in Algorithm 3 with Gibbs sampling on a dataset generated from the model described above, using the same Gibbs sampling algorithm and data generation procedure as developed in [2]. We took K_+ = 4 and N = 6, running the IBP multiple times with α = 3 until a matrix Z of the correct dimensionality (6 × 4) was produced. This matrix is shown in Fig. 4 as a bipartite graph, where the observed variables are shaded. A 4 × 250 random matrix Y was generated with p = 0.1. The observed matrix X was then sampled from Eqn. 8 with parameters λ = 0.9 and ε = 0.01. Comparison of the particle filter and Gibbs sampling was done using the procedure outlined in Section 4.2, producing similar results: the particle filter gave a better approximation to the posterior distribution in less time, as shown in Fig. 5.

Figure 5: Performance results for particle filter vs.
Gibbs sampling posterior estimation for the infinite binary matrix factorization model. Each point is an average over 10 runs with a particular number of particles or sweeps of the sampler, P = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000] from left to right, and error bars indicate the standard deviation of the error.

6 Conclusion

In this paper we have introduced particle filter posterior estimation for non-parametric Bayesian matrix factorization models based on the Indian buffet process. This approach is applicable to any Bayesian matrix factorization model with a sparse recursively decomposable prior. We have applied this approach with two different models, one with a conjugate prior and one with a non-conjugate prior, finding significant computational savings over Gibbs sampling for each. However, more work needs to be done to explore the strengths and weaknesses of these algorithms. In particular, simple sequential importance resampling is known to break down when applied to datasets with many observations, although we are optimistic that methods for addressing this problem that have been developed for Dirichlet process mixture models (e.g., [5]) will also be applicable in this setting. By exploring the strengths and weaknesses of different methods for approximate inference in these models, we hope to come closer to our ultimate goal of making nonparametric Bayesian matrix factorization into a tool that can be applied on the scale of real world problems.

Acknowledgements

This work was supported by both NIH-NINDS R01 NS 50967-01 as part of the NSF/NIH Collaborative Research in Computational Neuroscience Program and NSF grant 0631518.

References

[1] T. L. Griffiths and Z. Ghahramani, "Infinite latent feature models and the Indian buffet process," Gatsby Computational Neuroscience Unit, Tech. Rep. 2005-001, 2005.
[2] F. Wood, T. L. Griffiths, and Z.
Ghahramani, "A non-parametric Bayesian method for inferring hidden causes," in Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, in press, 2006.
[3] T. Ferguson, "A Bayesian analysis of some nonparametric problems," The Annals of Statistics, vol. 1, pp. 209–230, 1973.
[4] R. M. Neal, "Markov chain sampling methods for Dirichlet process mixture models," Department of Statistics, University of Toronto, Tech. Rep. 9815, 1998.
[5] P. Fearnhead, "Particle filters for mixture models with an unknown number of components," Statistics and Computing, vol. 14, pp. 11–21, 2004.
[6] S. N. MacEachern, M. Clyde, and J. Liu, "Sequential importance sampling for nonparametric Bayes models: the next generation," The Canadian Journal of Statistics, vol. 27, pp. 251–267, 1999.
[7] T. Griffiths and Z. Ghahramani, "Infinite latent feature models and the Indian buffet process," in Advances in Neural Information Processing Systems 18, Y. Weiss, B. Schölkopf, and J. Platt, Eds. Cambridge, MA: MIT Press, 2006.
[8] A. Doucet, N. de Freitas, and N. Gordon, Sequential Monte Carlo Methods in Practice. Springer, 2001.
[9] D. Görür, F. Jäkel, and C. E. Rasmussen, "A choice model with infinitely many latent features," in Proceedings of the 23rd International Conference on Machine Learning, 2006.
[10] S. Barnett, Matrix Methods for Engineers and Scientists. McGraw-Hill, 1979.
[11] J. Pearl, Probabilistic Reasoning in Intelligent Systems. San Francisco, CA: Morgan Kaufmann, 1988.
2006
44
3,064
Large-Scale Sparsified Manifold Regularization

Ivor W. Tsang  James T. Kwok
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
{ivor,jamesk}@cse.ust.hk

Abstract

Semi-supervised learning can be more powerful than supervised learning because it uses both labeled and unlabeled data. In particular, the manifold regularization framework, together with kernel methods, leads to the Laplacian SVM (LapSVM), which has demonstrated state-of-the-art performance. However, the LapSVM solution typically involves kernel expansions over all the labeled and unlabeled examples, and so is slow on testing. Moreover, existing semi-supervised learning methods, including the LapSVM, can only handle a small number of unlabeled examples. In this paper, we integrate manifold regularization with the core vector machine, which has been used for large-scale supervised and unsupervised learning. By using a sparsified manifold regularizer and formulating the problem as a center-constrained minimum enclosing ball problem, the proposed method produces sparse solutions with low time and space complexities. Experimental results show that it is much faster than the LapSVM and can handle a million unlabeled examples on a standard PC, while the LapSVM can only handle several thousand patterns.

1 Introduction

In many real-world applications, the collection of labeled data is both time-consuming and expensive. On the other hand, a large amount of unlabeled data is often readily available. While traditional supervised learning methods can only learn from the limited amount of labeled data, semi-supervised learning [2] aims at improving the generalization performance by utilizing both the labeled and unlabeled data. The label dependencies among patterns are captured by exploiting the intrinsic geometric structure of the data. The underlying smoothness assumption is that two nearby patterns in a high-density region should share similar labels [2].
When the data lie on a manifold, it is common to approximate this manifold by a weighted graph, leading to graph-based semi-supervised learning methods. However, many of these are designed for transductive learning, and thus cannot be easily extended to out-of-sample patterns. Recently, attention has been drawn to the development of inductive methods, such as harmonic mixtures [15] and Nyström-based methods [3]. In this paper, we focus on the manifold regularization framework proposed in [1]. By defining a data-dependent reproducing kernel Hilbert space (RKHS), manifold regularization incorporates an additional regularizer to ensure that the learned function is smooth on the manifold. Kernel methods, which have been highly successful in supervised learning, can then be integrated with this RKHS. The resultant Laplacian SVM (LapSVM) demonstrates state-of-the-art semi-supervised learning performance [10]. However, a deficiency of the LapSVM is that its solution, unlike that of the SVM, is not sparse, and so it is much slower on testing. Moreover, while the original motivation of semi-supervised learning is to utilize the large amount of unlabeled data available, existing algorithms are only capable of handling a small to moderate amount of unlabeled data. Recently, attempts have been made to scale up these methods. Sindhwani et al. [9] speeded up manifold regularization by restricting it to linear models, which, however, may not be flexible enough for complicated target functions. Garcke and Griebel [5] proposed to use discretization with a sparse grid. Though it scales linearly with the sample size, its time complexity grows exponentially with the data dimensionality. As reported in a recent survey [14], most semi-supervised learning methods can only handle 100 – 10,000 unlabeled examples. More recently, Gärtner et al. [6] presented a solution in the more restrictive transductive setting. The largest graph they worked with involves 75,888 labeled and unlabeled examples.
Thus, to our knowledge, no method has been experimentally demonstrated on massive data sets with, say, one million unlabeled examples. On the other hand, the Core Vector Machine (CVM) has recently been proposed for scaling up kernel methods in both supervised (including classification [12] and regression [13]) and unsupervised learning (e.g., novelty detection). Its main idea is to formulate the learning problem as a minimum enclosing ball (MEB) problem in computational geometry, and then use a (1 + ϵ)-approximation algorithm to obtain a close-to-optimal solution efficiently. Given m samples, the CVM has an asymptotic time complexity that is only linear in m and a space complexity that is even independent of m for a fixed ϵ. Experimental results on real-world data sets with millions of patterns demonstrated that the CVM is much faster than existing SVM implementations and can handle much larger data sets. In this paper, we extend the CVM to semi-supervised learning. To restore sparsity of the LapSVM solution, we first introduce a sparsified manifold regularizer based on the ϵ-insensitive loss. Then, we incorporate manifold regularization into the CVM. It turns out that the resultant QP can be cast as a center-constrained MEB problem introduced in [13]. The rest of this paper is organized as follows. In Section 2, we first give a brief review of manifold regularization. Section 3 then describes the proposed algorithm for semi-supervised classification and regression. Experimental results on very large data sets are presented in Section 4, and the last section gives some concluding remarks.

2 Manifold Regularization

Given a training set {(x_i, y_i)}_{i=1}^m with inputs x_i ∈ X and outputs y_i ∈ R, the regularized risk functional is the sum of the empirical risk (corresponding to a loss function ℓ) and a regularizer Ω. Given a kernel k and its RKHS H_k, we minimize the regularized risk over functions f in H_k:

min_{f∈H_k} (1/m) Σ_{i=1}^m ℓ(x_i, y_i, f(x_i)) + λ Ω(‖f‖_{H_k}).
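The MEB connection can be illustrated with the classical core-set iteration of Badoiu and Clarkson, the geometric primitive underlying the CVM's (1 + ϵ)-approximation guarantee. This is an illustrative Euclidean sketch under our own simplifications, not the kernelized CVM itself:

```python
import numpy as np

def approx_meb(X, eps):
    """(1+eps)-approximate minimum enclosing ball of the rows of X (floats)
    via the classical core-set iteration: for ~1/eps^2 steps, pull the
    current center toward the farthest point."""
    c = X[0].copy()
    for t in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        far = X[np.argmax(np.linalg.norm(X - c, axis=1))]  # farthest point from c
        c += (far - c) / (t + 1)                           # shrinking step toward it
    r = np.linalg.norm(X - c, axis=1).max()                # radius covering all points
    return c, r
```

The number of iterations, and hence the core-set size, depends only on ϵ and not on the number of points, which is the source of the m-independent space complexity cited for the CVM.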
(1)

Here, ‖·‖_{H_k} denotes the RKHS norm and λ > 0 is a regularization parameter. By the representer theorem, the minimizer f admits the representation f(x) = Σ_{i=1}^m α_i k(x_i, x), where α_i ∈ R. Therefore, the problem is reduced to an optimization over the finite-dimensional space of the α_i's. In semi-supervised learning, we have both labeled examples {(x_i, y_i)}_{i=1}^m and unlabeled examples {x_i}_{i=m+1}^{m+n}. Manifold regularization uses an additional regularizer ‖f‖_I^2 to ensure that the function f is smooth on the intrinsic structure of the input. The objective function in (1) is then modified to:

(1/m) Σ_{i=1}^m ℓ(x_i, y_i, f(x_i)) + λ Ω(‖f‖_{H_k}) + λ_I ‖f‖_I^2,   (2)

where λ_I is another tradeoff parameter. It can be shown that the minimizer f is of the form f(x) = Σ_{i=1}^m α_i k(x_i, x) + ∫_M α(x′) k(x, x′) dP_X(x′), where M is the support of the marginal distribution P_X of X [1]. In practice, we do not have access to P_X. Now, assume that the support M of P_X is a compact submanifold, and take ‖f‖_I^2 = ∫_M ⟨∇f, ∇f⟩, where ∇ is the gradient of f. It is common to approximate this manifold by a weighted graph defined on all the labeled and unlabeled data, as G = (V, E) with V and E being the sets of vertices and edges respectively. Denote the weight function w and degree d(u) = Σ_{v∼u} w(u, v), where v ∼ u means that u and v are adjacent. Then, ‖f‖_I^2 is approximated as¹

‖f‖_I^2 = Σ_{e∈E} ( √w(u_e, v_e) ( f(x_{u_e})/s(u_e) − f(x_{v_e})/s(v_e) ) )²,   (3)

where u_e and v_e are the vertices of edge e, and s(u) = √d(u) when the normalized graph Laplacian is used, and s(u) = 1 with the unnormalized one. As shown in [1], the minimizer of (2) becomes f(x) = Σ_{i=1}^{m+n} α_i k(x_i, x), which depends on both labeled and unlabeled examples.

¹When the set of labeled and unlabeled data is small, a function that is smooth on this small set may not be interesting. However, this is not an issue here as our focus is on massive data sets.
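With the unnormalized Laplacian (all s(u) = 1), the edge sum in (3) is exactly the quadratic form f′Lf with L = D − W, which is what makes the regularizer easy to compute from the graph. A small numerical check of that identity (illustrative only):

```python
import numpy as np

def smoothness_edge_sum(W, f):
    """Manifold regularizer of Eqn. (3) with the unnormalized Laplacian
    (all s(u) = 1): sum over edges of w(u,v) * (f(u) - f(v))**2."""
    n = len(f)
    total = 0.0
    for u in range(n):
        for v in range(u + 1, n):          # count each undirected edge once
            total += W[u, v] * (f[u] - f[v]) ** 2
    return total

def smoothness_laplacian(W, f):
    """The same quantity computed as f' L f with L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    return float(f @ L @ f)
```

For any symmetric weight matrix W with zero diagonal, the two functions agree, so implementations can use whichever form is cheaper for their data layout.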
2.1 Laplacian SVM

Using the hinge loss ℓ(x_i, y_i, f(x_i)) = max(0, 1 − y_i f(x_i)) in (2), we obtain the Laplacian SVM (LapSVM) [1]. Its training involves two steps. First, solve the quadratic program (QP)

max β′1 − (1/2) β′Qβ : β′y = 0, 0 ≤ β ≤ (1/m) 1

to obtain β*. Here, β = [β_1, ..., β_m]′, 1 = [1, ..., 1]′, Q_{m×m} = YJK(2λI + 2λ_I LK)^{−1} J′Y, Y_{m×m} is the diagonal matrix with Y_ii = y_i, K_{(m+n)×(m+n)} is the kernel matrix over both the labeled and unlabeled data, L_{(m+n)×(m+n)} is the graph Laplacian, and J_{m×(m+n)} has J_ij = 1 if i = j and x_i is a labeled example, and J_ij = 0 otherwise. The optimal α = [α_1, ..., α_{m+n}]′ solution is then obtained by solving the linear system α* = (2λI + 2λ_I LK)^{−1} J′Yβ*. Note that the matrix 2λI + 2λ_I LK is of size (m+n) × (m+n), and so its inversion can be very expensive when n is large. Moreover, unlike the standard SVM, the α* obtained is not sparse, and so evaluation of f(x) is slow.

3 Proposed Algorithm

3.1 Sparsified Manifold Regularizer

To restore sparsity of the LapSVM solution, we replace the square function in the manifold regularizer (3) by the ϵ-insensitive loss function², as

‖f‖_I^2 = Σ_{e∈E} | √w(u_e, v_e) ( f(x_{u_e})/s(u_e) − f(x_{v_e})/s(v_e) ) |_ε̄²,   (4)

where |z|_ε̄ = 0 if |z| ≤ ε̄, and |z| − ε̄ otherwise. Obviously, it reduces to (3) when ε̄ = 0. As will be shown in Section 3.3, the α solution obtained will be sparse. Substituting (4) into (2), we have:

min_{f∈H_k} { (1/m) Σ_{i=1}^m ℓ(x_i, y_i, f(x_i)) + λ_I Σ_{e∈E} | √w(u_e, v_e) ( f(x_{u_e})/s(u_e) − f(x_{v_e})/s(v_e) ) |_ε̄² } + λ Ω(‖f‖_{H_k}).

By treating the terms inside the braces as the "loss function", this can be regarded as regularized risk minimization and, using the standard representer theorem, the minimizer f then admits the form f(x) = Σ_{i=1}^{m+n} α_i k(x_i, x), the same as that of the original manifold regularization. Moreover, putting f(x) = w′φ(x) + b into (4), we obtain ‖f‖_I^2 = Σ_{e∈E} |w′ψ_e + bτ_e|_ε̄², where ψ_e = √w(u_e, v_e) ( φ(x_{u_e})/s(u_e) − φ(x_{v_e})/s(v_e) ), and τ_e = √w(u_e, v_e) ( 1/s(u_e) − 1/s(v_e) ).
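The ϵ-insensitive loss |z|_ε̄ used in (4) is one line of code; its flat zero region around the origin is what later zeroes out the dual variables of edges where f is already smooth (illustrative sketch):

```python
def eps_insensitive(z, eps_bar):
    """|z|_eps_bar from Eqn. (4): zero inside the tube of half-width
    eps_bar, and |z| - eps_bar outside it."""
    return max(0.0, abs(z) - eps_bar)
```

Setting eps_bar = 0 recovers the plain absolute value, matching the remark that (4) reduces to (3) when ε̄ = 0.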
The primal of the LapSVM can then be formulated as:

min ‖w‖² + b² + (C/(mµ)) Σ_{i=1}^m ξ_i² + 2Cε̄ + (Cθ/(|E|µ)) Σ_{e∈E} (ζ_e² + ζ*_e²)   (5)
s.t. y_i (w′φ(x_i) + b) ≥ 1 − ε̄ − ξ_i, i = 1, ..., m,   (6)
     −(w′ψ_e + bτ_e) ≤ ε̄ + ζ_e,  w′ψ_e + bτ_e ≤ ε̄ + ζ*_e, e ∈ E.   (7)

Here, |E| is the number of edges in the graph, ξ_i is the slack variable for the error on the ith labeled example, ζ_e and ζ*_e are slack variables for edge e, and C, µ, θ are user-defined parameters. As in previous CVM formulations [12, 13], the bias b is penalized and the two-norm errors (ξ_i², ζ_e² and ζ*_e²) are used. Moreover, the constraints ξ_i, ζ_e, ζ*_e, ε̄ ≥ 0 are automatically satisfied. When ε̄ = 0, (5) reduces to the original LapSVM (using two-norm errors). When θ is also zero, it becomes the Lagrangian SVM. The dual can be easily obtained as the following QP:

max [β′ γ′ γ*′] [(2/C)1′ 0′ 0′]′ − [β′ γ′ γ*′] K̃ [β′ γ′ γ*′]′ : [β′ γ′ γ*′] 1 = 1, β, γ, γ* ≥ 0,   (8)

where β = [β_1, ..., β_m]′, γ = [γ_1, ..., γ_{|E|}]′, γ* = [γ*_1, ..., γ*_{|E|}]′ are the dual variables, and

K̃ = [ (K_ℓ + 11′ + (µm/C) I) ⊙ yy′ ,  V ,                   −V ;
      V′ ,                            U + (|E|µ/(Cθ)) I ,   −U ;
      −V′ ,                           −U ,                  U + (|E|µ/(Cθ)) I ]

is the transformed "kernel matrix". Here, K_ℓ is the kernel matrix defined using kernel k on the m labeled examples, U_{|E|×|E|} = [ψ′_e ψ_f + τ_e τ_f], and V_{m×|E|} = [y_i φ(x_i)′ψ_e + τ_e]. Note that while each entry of the matrix Q in LapSVM (Section 2.1) requires O((m+n)²) kernel evaluations k(x_i, x_j), each entry in K̃ here takes only O(1) kernel evaluations. This is particularly favorable to decomposition methods such as SMO, as most of the CPU computations are typically dominated by kernel evaluations. Moreover, it can be shown that µ is a parameter that controls the size of ε̄, analogous to the ν parameter in ν-SVR. Hence, only µ, but not ε̄, appears in (8). Moreover, the primal variables can be easily recovered from the dual variables by the KKT conditions.

²To avoid confusion with the ϵ in the (1 + ϵ)-approximation, we add a bar to the ε here.
In particular, w = C ( Σ_{i=1}^m β_i y_i φ(x_i) + Σ_{e∈E} (γ_e − γ*_e) ψ_e ) and b = C ( Σ_{i=1}^m β_i y_i + Σ_{e∈E} (γ_e − γ*_e) τ_e ). Subsequently, the decision function f(x) = w′φ(x) + b is a linear combination of the k(x_i, x)'s defined on both the labeled and unlabeled examples, as in standard manifold regularization.

3.2 Transforming to a MEB Problem

We now show that the CVM can be used for solving the possibly very large QP in (8). In particular, we will transform this QP to the dual of a center-constrained MEB problem [13], which is of the form:

max α′(diag(K) + ∆ − η1) − α′Kα : α ≥ 0, α′1 = 1,   (9)

for some 0 ≤ ∆ ∈ R^m and η ∈ R. From the variables in (8), define α̃ = [β′ γ′ γ*′]′ and ∆ = −diag(K̃) + η1 + (2/C)[1′ 0′ 0′]′, with η sufficiently large that ∆ ≥ 0. (8) can then be written as

max α̃′(diag(K̃) + ∆ − η1) − α̃′K̃α̃ : α̃ ≥ 0, α̃′1 = 1,

which is of the form in (9). The above formulation can be easily extended to the regression case, with the pattern outputs changed from ±1 to y_i ∈ R, and the hinge loss replaced by the ϵ-insensitive loss. Converting the resultant QP to the form in (9) is also straightforward.

3.3 Sparsity

In Section 3.3.1, we first explain why a sparse solution can be obtained by using the KKT conditions. Alternatively, building on [7], we show in Section 3.3.2 that the ϵ-insensitive loss achieves a similar effect as the ℓ1 penalty in LASSO [11], which is known to produce sparse approximations.

3.3.1 KKT Perspective

Basically, this follows from the standard argument for sparse solutions with the ϵ-insensitive loss in SVR. From the KKT condition associated with (6): β_i ( y_i(w′φ(x_i) + b) − 1 + ε̄ + ξ_i ) = 0. As for the SVM, most patterns are expected to lie outside the margin (i.e., y_i(w′φ(x_i) + b) > 1 − ε̄) and so most β_i's are zero. Similarly, manifold regularization finds an f that is locally smooth. Hence, from the definitions of ψ_e and τ_e, many of the values (w′ψ_e + bτ_e) will be inside the ε̄-tube. By the KKT conditions associated with (7), the corresponding γ_e's and γ*_e's are zero.
As f(x) is a linear combination of the k(x_i, x)'s weighted by β_i and γ_e − γ*_e (Section 3.1), f is thus sparse.

3.3.2 LASSO Perspective

Our exposition will be along the line pioneered by Girosi [7], who established a connection between the ϵ-insensitive loss in SVR and sparse approximation. Given a predictor f(x) = Σ_{i=1}^m α_i k(x_i, x), i.e., f = Kα, we consider minimizing the error between f = [f(x_1), ..., f(x_m)]′ and y = [y_1, ..., y_m]′. While sparse approximation techniques such as basis pursuit typically use the L2 norm for the error, Girosi argued that the norm of the RKHS H_k is a better measure of smoothness. However, the RKHS norm operates on functions, while here we have vectors f and y w.r.t. x_1, ..., x_m. Hence, we will use the kernel PCA map with ‖y − f‖_K² ≡ (y − f)′K^{−1}(y − f). First, consider the simpler case where the manifold regularizer is replaced by the simple regularizer ‖α‖_2². As in LASSO, we also add an ℓ1 penalty on α. The optimization problem is formulated as:

min ‖y − f‖_K² + (µm/C) α′α : ‖α‖_1 = C,   (10)

where C and µ are constants. As in [7], we decompose α as β − β*, where β, β* ≥ 0 and β_i β*_i = 0. Then, (10) can be rewritten as:

max [β′ β*′][2y′ −2y′]′ − [β′ β*′] K̃ [β′ β*′]′ : β, β* ≥ 0, β′1 + β*′1 = C,   (11)

where³

K̃ = [ K + (µm/C) I ,  −K ;
      −K ,             K + (µm/C) I ].

On the other hand, consider the following variant of SVR using the ϵ-insensitive loss:

min ‖w‖² + (C/(mµ)) Σ_{i=1}^m (ξ_i² + ξ*_i²) + 2Cε̄ : y_i − w′φ(x_i) ≤ ε̄ + ξ_i, w′φ(x_i) − y_i ≤ ε̄ + ξ*_i.   (12)

It can be shown that its dual is identical to (11), with β, β* as the dual variables. Moreover, the LASSO penalty (i.e., the equality constraint in (11)) is induced from the ε̄ in (12). Hence, the ϵ-insensitive loss in SVR achieves a similar effect as using the error ‖y − f‖_K² and the LASSO penalty. We now add back the manifold regularizer. The derivation is similar, though more involved, and so the details are skipped. As above, the key steps are replacing the ℓ2 norm by the kernel PCA map, and adding an ℓ1 penalty on the variables.
It can then be shown that the sparsified manifold regularizer (based on the ϵ-insensitive loss) can again be recovered by using the LASSO penalty.

3.4 Complexities

As the proposed algorithm is an extension of the CVM, its properties are analogous to those in [12]. For example, its approximation ratio is (1 + ϵ)², and so the approximate solution obtained is very close to the exact optimal solution. As for the computational complexities, it can be shown that the SLapCVM takes only O(1/ϵ⁸) time and O(1/ϵ²) space when probabilistic speedup is used. (Here, we ignore the O(m + |E|) space required for storing the m training patterns and the 2|E| edge constraints, as these may be stored outside the core memory.) They are thus independent of the numbers of labeled and unlabeled examples for a fixed ϵ. In contrast, the LapSVM involves an expensive matrix inversion for the (m+n) × (m+n) matrix K and requires O((m+n)³) time and O((m+n)²) space.

3.5 Remarks

The reduced SVM [8] has been used to scale up the standard SVM. Hence, another natural alternative is to extend it to the LapSVM. This "reduced LapSVM" solves a smaller optimization problem that involves a random r × (m+n) rectangular subset of the kernel matrix, where the r patterns are chosen from both the labeled and unlabeled data. It can be easily shown that it requires O((m+n)²r) time and O((m+n)r) space. Experimental comparisons based on this will be made in Section 4. Note that the CVM [12] is in many aspects similar to the column generation technique [4] commonly used in large-scale linear or integer programs. Both start with only a small number of nonzero variables, and the restricted master problem in column generation corresponds to the inner QP that is solved at each CVM iteration. Moreover, both can be regarded as primal methods that maintain primal⁴ feasibility and work towards dual feasibility. Also, as is typical in column generation, the dual variable whose KKT condition is most violated is added at each iteration.
The key difference⁵, however, is that the CVM exploits the "approximateness", as in other approximation algorithms. Instead of requiring the dual solution to be strictly feasible, the CVM only requires it to be feasible within a factor of (1 + ϵ). This, together with the fact that its dual is a MEB problem, allows its number of iterations for convergence to be bounded and thus the total time and space complexities to be guaranteed. On the other hand, we are not aware of any similar results for column generation. By regarding the CVM as the approximation-algorithm counterpart of column generation, this suggests that the CVM can also be used in the same way as column generation in speeding up other optimization problems. For example, the CVM can also be used for SVM training with other loss functions (e.g., the 1-norm error). However, as the dual may no longer be a MEB problem, the downside is that its convergence bound and the complexity results in Section 3.4 may no longer be available.

³For simplicity, here we have only considered the case where f does not have a bias. In the presence of a bias, it can be easily shown that K (in the expression of K̃) has to be replaced by K + 11′.
⁴By convention, column generation takes the optimization problem to be solved as the primal. Hence, in this section, we also regard the QP to be solved as CVM's primal, and the MEB problem as its dual. Note that each dual variable then corresponds to a training pattern.
⁵Another difference is that an entire column is added at each iteration of column generation. However, in CVM, the dual variable added is just a pattern and the extra space required for the QP is much smaller. Besides, there are other implementation tricks (such as probabilistic speedup) that further improve the speed of the CVM.

4 Experiments

In this section, we perform experiments on some massive data sets⁶ (Table 1).
The graph (for the manifold) is constructed by using the 6 nearest neighbors of each pattern, and the weight w(u_e, v_e) in (3) is defined as exp(−‖x_{u_e} − x_{v_e}‖²/β_g), where β_g = (1/|E|) Σ_{e∈E} ‖x_{u_e} − x_{v_e}‖². For simplicity, we use the unnormalized Laplacian, and so all s(·)'s in (3) are 1. The value of mµ in (5) is always fixed at 1, and the other parameters are tuned on a small validation set. Unless otherwise specified, we use the Gaussian kernel exp(−‖x − z‖²/β), with β = (1/m) Σ_{i=1}^m ‖x_i − x̄‖². For comparison, we also run the LapSVM⁷ and another LapSVM implementation based on the reduced SVM [8] (Section 3.5). All the experiments are performed on a 3.2GHz Pentium-4 PC with 1GB RAM.

Table 1: A summary of the data sets used.

                                    #training patns
data set           #attrib  class   labeled  unlabeled   #test patns
two-moons          2        +       1        500,000     2,500
                            −       1        500,000     2,500
extended USPS      676      +       1        144,473     43,439
                            −       1        121,604     31,944
extended MIT face  361      +       5        408,067     472
                            −       5        481,909     23,573

4.1 Two-Moons Data Set

We first perform experiments on the popular two-moons data set, using one labeled example for each class (Figure 1(a)). To better illustrate the scaling behavior, we vary the number of unlabeled patterns used for training (from 1,000 up to a maximum of 1 million). Following [1], the width of the Gaussian kernel is set to β = 0.25. For the reduced LapSVM implementation, we fix r = 200.

Figure 1: Results on the two-moons data set (some abscissas and ordinates are in log scale): (a) data distribution; (b) typical decision boundary obtained by SLapCVM; (c) CPU time; (d) number of kernel expansions. The two labeled examples are labeled in red in Figure 1(a).

Results are shown in Figure 1.
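The graph construction described at the start of this section (6-nearest-neighbor edges, Gaussian weights with β_g set to the mean squared edge length) can be sketched as follows. This brute-force O(n²) version is for illustration only; at the scale of a million points one would need an approximate nearest-neighbor search instead:

```python
import numpy as np

def build_knn_graph(X, k=6):
    """Symmetric affinity graph as in Section 4: connect each point to its
    k nearest neighbors, then weight each edge e = (u, v) by
    exp(-||x_u - x_v||^2 / beta_g), where beta_g is the mean squared
    length over all edges."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared dists
    np.fill_diagonal(d2, np.inf)                          # exclude self-edges
    edges = set()
    for u in range(n):
        for v in np.argsort(d2[u])[:k]:
            edges.add((min(u, v), max(u, v)))             # symmetrize the kNN graph
    beta_g = np.mean([d2[u, v] for u, v in edges])        # mean squared edge length
    W = np.zeros((n, n))
    for u, v in edges:
        W[u, v] = W[v, u] = np.exp(-d2[u, v] / beta_g)
    return W
```

The resulting W feeds directly into the edge sum of (3), with all s(·) = 1 for the unnormalized Laplacian used here.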
Both the LapSVM and SLapCVM always attain 100% accuracy on the test set, even with only two labeled examples (Figure 1(b)). However, the SLapCVM is faster than the LapSVM (Figure 1(c)). Moreover, as mentioned in Section 2.1, the LapSVM solution is non-sparse, and all the labeled and unlabeled examples are involved in the solution (Figure 1(d)). On the other hand, the SLapCVM uses only a small fraction of the examples. As can be seen from Figures 1(c) and 1(d), both the time and space required by the SLapCVM are almost constant, even when the unlabeled data set gets very large. The reduced LapSVM, though also fast, is slightly inferior to both the SLapCVM and LapSVM. Moreover, note that both the standard and reduced LapSVMs cannot be run on the full data set on our PC because of their large memory requirements.

⁶Both the USPS and MIT face data sets are downloaded from http://www.cs.ust.hk/∼ivor/cvm.html.
⁷http://manifold.cs.uchicago.edu/manifold regularization/.

4.2 Extended USPS Data Set

The second experiment is performed on the USPS data from [12]. One labeled example is randomly sampled from each class for training. To achieve comparable accuracy, we use r = 2,000 for the reduced LapSVM. For comparison, we also train a standard SVM with the two labeled examples. Results are shown in Figure 2. As can be seen, the SLapCVM is again faster (Figure 2(a)) and produces a sparser solution than the LapSVM (Figure 2(b)). For the SLapCVM, both the time required and the number of kernel expansions involved grow only sublinearly with the number of unlabeled examples. Figure 2(c) demonstrates that semi-supervised learning (using either the LapSVMs or the SLapCVM) can have much better generalization performance than supervised learning using the labeled examples only. Note that although the use of the 2-norm error in the SLapCVM could in theory be less robust than the use of the 1-norm error in the LapSVM, the SLapCVM solution is indeed always more accurate than that of the LapSVM.
On the other hand, the reduced LapSVM has comparable speed with the SLapCVM, but its performance is inferior and it cannot handle large data sets.

Figure 2: Results on the extended USPS data set (some abscissas and ordinates are in log scale): (a) CPU time; (b) number of kernel expansions; (c) test error.

4.3 Extended MIT Face Data Set

In this section, we perform face detection using the extended MIT face database in [12]. Five labeled examples are randomly sampled from each class and used in training. Because of the imbalanced nature of the test set (Table 1), the classification error is inappropriate for performance evaluation here. Instead, we will use the area under the ROC curve (AUC) and the balanced loss 1 − (TP + TN)/2, where TP and TN are the true positive and true negative rates respectively. Here, faces are treated as positives while non-faces are treated as negatives. For the reduced LapSVM, we again use r = 2,000. For comparison, we also train two SVMs: one uses the 10 labeled examples only, while the other uses all the labeled examples (a total of 889,986) in the original training set of [12]. Figure 3 shows the results. Again, the SLapCVM is faster and produces a sparser solution than the LapSVM. Note that the SLapCVM, using only 10 labeled examples, can attain comparable AUC and even better balanced loss than the SVM trained on the original, massive training set (Figures 3(c) and 3(d)). This clearly demonstrates the usefulness of semi-supervised learning when a large amount of unlabeled data can be utilized.
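The balanced loss 1 − (TP + TN)/2 used above is straightforward to compute from hard ±1 predictions (illustrative sketch; the ±1 label convention is ours):

```python
def balanced_loss(y_true, y_pred):
    """Balanced loss 1 - (TP + TN)/2 from Section 4.3, where TP and TN are
    the true positive and true negative *rates* (here, faces = +1)."""
    pos = [p for t, p in zip(y_true, y_pred) if t == 1]
    neg = [p for t, p in zip(y_true, y_pred) if t == -1]
    tp = sum(p == 1 for p in pos) / len(pos)    # true positive rate
    tn = sum(p == -1 for p in neg) / len(neg)   # true negative rate
    return 1.0 - (tp + tn) / 2.0
```

Because each class contributes its own rate, a classifier that predicts everything as the majority class scores 0.5 rather than near 0, which is why this loss is preferred over plain error on the highly imbalanced face test set.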
On the other hand, note that the LapSVM again cannot be run with more than 3,000 unlabeled examples on our PC because of its high space requirement. The reduced LapSVM performs very poorly here, possibly because this data set is highly imbalanced.

Figure 3: Results on the extended MIT face data (some abscissas and ordinates are in log scale): (a) CPU time; (b) number of kernel expansions; (c) balanced loss; (d) AUC.

5 Conclusion

In this paper, we addressed two issues associated with the Laplacian SVM: 1) How to obtain a sparse solution for fast testing? 2) How to handle data sets with millions of unlabeled examples? For the first issue, we introduce a sparsified manifold regularizer based on the ϵ-insensitive loss. For the second issue, we integrate manifold regularization with the CVM. The resultant algorithm has low time and space complexities. Moreover, by avoiding the underlying matrix inversion in the original LapSVM, a sparse solution can also be recovered. Experiments on a number of massive data sets show that the SLapCVM is much faster than the LapSVM. Moreover, while the LapSVM can only handle several thousand unlabeled examples, the SLapCVM can handle one million unlabeled examples on the same machine. On one data set, this produces comparable or even better performance than the (supervised) CVM trained on 900K labeled examples.
This clearly demonstrates the usefulness of semi-supervised learning when a large amount of unlabeled data can be utilized.

References

[1] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
[2] O. Chapelle, B. Schölkopf, and A. Zien. Semi-Supervised Learning. MIT Press, Cambridge, MA, USA, 2006.
[3] O. Delalleau, Y. Bengio, and N. Le Roux. Efficient non-parametric function induction in semi-supervised learning. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, Barbados, January 2005.
[4] G. Desaulniers, J. Desrosiers, and M. M. Solomon. Column Generation. Springer, 2005.
[5] J. Garcke and M. Griebel. Semi-supervised learning with sparse grids. In Proceedings of the ICML Workshop on Learning with Partially Classified Training Data, Bonn, Germany, August 2005.
[6] T. Gärtner, Q. V. Le, S. Burton, A. Smola, and S. V. N. Vishwanathan. Large-scale multiclass transduction. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18. MIT Press, Cambridge, MA, 2006.
[7] F. Girosi. An equivalence between sparse approximation and support vector machines. Neural Computation, 10(6):1455–1480, 1998.
[8] Y.-J. Lee and O. L. Mangasarian. RSVM: Reduced support vector machines. In Proceedings of the First SIAM International Conference on Data Mining, 2001.
[9] V. Sindhwani, M. Belkin, and P. Niyogi. The geometric basis of semi-supervised learning. In Semi-Supervised Learning. MIT Press, 2005.
[10] V. Sindhwani, P. Niyogi, and M. Belkin. Beyond the point cloud: from transductive to semi-supervised learning. In Proceedings of the Twenty-Second International Conference on Machine Learning, pages 825–832, Bonn, Germany, August 2005.
[11] R. Tibshirani. Regression shrinkage and selection via the Lasso.
Journal of the Royal Statistical Society: Series B, 58:267–288, 1996. [12] I. W. Tsang, J. T. Kwok, and P.-M. Cheung. Core vector machines: Fast SVM training on very large data sets. Journal of Machine Learning Research, 6:363–392, 2005. [13] I. W. Tsang, J. T. Kwok, and K. T. Lai. Core vector regression for very large regression problems. In Proceedings of the Twenty-Second International Conference on Machine Learning, pages 913–920, Bonn, Germany, August 2005. [14] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, Department of Computer Sciences, University of Wisconsin - Madison, 2005. [15] X. Zhu and J. Lafferty. Harmonic mixtures: Combining mixture models and graph-based methods. In Proceedings of the Twenty-Second International Conference on Machine Learning, Bonn, Germany, August 2005.
2006
Non-rigid point set registration: Coherent Point Drift

Andriy Myronenko, Xubo Song, Miguel Á. Carreira-Perpiñán
Department of Computer Science and Electrical Engineering, OGI School of Science and Engineering, Oregon Health and Science University, Beaverton, OR, USA, 97006
{myron, xubosong, miguel}@csee.ogi.edu

Abstract

We introduce Coherent Point Drift (CPD), a novel probabilistic method for non-rigid registration of point sets. The registration is treated as a Maximum Likelihood (ML) estimation problem with a motion coherence constraint over the velocity field, such that one point set moves coherently to align with the second set. We formulate the motion coherence constraint and derive a solution of regularized ML estimation through a variational approach, which leads to an elegant kernel form. We also derive the EM algorithm for the penalized ML optimization with deterministic annealing. The CPD method simultaneously finds both the non-rigid transformation and the correspondence between two point sets without making any prior assumption about the transformation model except that of motion coherence. The method can estimate complex non-linear non-rigid transformations, and is shown to be accurate on 2D and 3D examples and robust in the presence of outliers and missing points.

1 Introduction

Registration of point sets is an important problem for many computer vision applications such as robot navigation, image-guided surgery, motion tracking, and face recognition. In fact, it is the key component in tasks such as object alignment, stereo matching, point set correspondence, image segmentation and shape/pattern matching. The registration problem is to find meaningful correspondence between two point sets and to recover the underlying transformation that maps one point set to the second. The "points" in the point set are features, most often the locations of interest points extracted from an image.
Other common geometrical features include line segments, implicit and parametric curves, and surfaces. Any geometrical feature can be represented as a point set; in this sense, point location is the most general of all features. Registration techniques can be rigid or non-rigid depending on the underlying transformation model. The key characteristic of a rigid transformation is that all distances are preserved. The simplest non-rigid transformation is affine, which also allows anisotropic scaling and skews. Effective algorithms exist for rigid and affine registration. However, the need for more general non-rigid registration arises in many tasks where complex non-linear transformation models are required. Non-linear non-rigid registration remains a challenge in computer vision. Many algorithms exist for point set registration. A direct way of associating points of two arbitrary patterns is proposed in [1]. The algorithm exploits properties of the singular value decomposition and works well with translation, shearing and scaling deformations; for non-rigid transformations, however, it performs poorly. Another popular method is the Iterative Closest Point (ICP) algorithm [2], which iteratively assigns correspondences and finds the least-squares transformation (usually rigid) relating the two point sets; it then re-determines the closest points and continues until it reaches a local minimum. Many variants of ICP have been proposed that affect all phases of the algorithm, from the selection and matching of points to the minimization strategy [3]. Nonetheless, ICP requires that the initial pose of the two point sets be adequately close, which is not always possible, especially when the transformation is non-rigid [3]. Several non-rigid registration methods have been introduced [4, 5]. The Robust Point Matching (RPM) method [4] allows a global-to-local search and soft assignment of correspondences between two point sets.
In [5] it is further shown that the RPM algorithm is similar to Expectation Maximization (EM) algorithms for mixture models, where one point set represents data points and the other represents the centroids of the mixture. In both papers the non-rigid transformation is parameterized by a Thin Plate Spline (TPS) [6], leading to the TPS-RPM algorithm [4]. According to regularization theory, the TPS parameterization is the solution of an interpolation problem in 2D that penalizes the second-order derivatives of the transformation. In 3D the solution is not differentiable at the point locations, and in four or higher dimensions the generalization collapses completely [7]. The M-step of the EM algorithm in [5] is approximated for simplicity; as a result, the approach is not truly probabilistic and does not, in general, lead to the true Maximum Likelihood solution. A correlation-based approach to point set registration is proposed in [8]: the two data sets are represented as probability densities estimated by kernel density estimation, and registration is cast as the alignment between the two distributions that minimizes a similarity function defined by the L2 norm. This approach is further extended in [9], where both densities are represented as Gaussian Mixture Models (GMMs); once again a thin-plate spline is used to parameterize the smooth non-linear underlying transformation. In this paper we introduce a probabilistic method for point set registration that we call the Coherent Point Drift (CPD) method. Similar to [5], given two point sets, we fit a GMM to the first point set, with Gaussian centroids initialized from the points in the second set. However, unlike [4, 5, 9], which assume a thin-plate spline transformation, we do not make any explicit assumption about the transformation model.
Instead, we consider the process of adapting the Gaussian centroids from their initial positions to their final positions as a temporal motion process, and impose a motion coherence constraint on the velocity field. Velocity coherence is a particular way of imposing smoothness on the underlying transformation. The concept of motion coherence was proposed in the Motion Coherence Theory [10]; the intuition is that points close to one another tend to move coherently. This motion coherence constraint penalizes derivatives of all orders of the underlying velocity field (the thin-plate spline penalizes only the second-order derivative). Examples of velocity fields with different levels of motion coherence for different point correspondences are illustrated in Fig. 1.

Figure 1: (a) Two given point sets. (b) A coherent velocity field. (c, d) Velocity fields that are less coherent for the given correspondences.

We derive a solution for the velocity field through a variational approach by maximizing the likelihood of the GMM penalized by motion coherence. We show that the final transformation has an elegant kernel form. We also derive an EM algorithm for the penalized ML optimization with deterministic annealing. Once we have the final positions of the GMM centroids, the correspondence between the two point sets can be easily inferred through the posterior probability of the Gaussian mixture components given the first point set. Our method is a true probabilistic approach, is shown to be accurate and robust in the presence of outliers and missing points, and is effective for the estimation of complex non-linear non-rigid transformations. The rest of the paper is organized as follows. In Section 2 we formulate the problem and derive the CPD algorithm. In Section 3 we present the results of the CPD algorithm and compare its performance with that of RPM [4] and ICP [2]. In Section 4 we summarize the properties of CPD and discuss the results.
2 Method

Assume two point sets are given, where the template point set Y = (y_1, \ldots, y_M)^T (an M × D matrix) should be aligned with the reference point set X = (x_1, \ldots, x_N)^T (an N × D matrix), and D is the dimension of the points. We consider the points in Y as the centroids of a Gaussian Mixture Model and fit it to the data points X by maximizing the likelihood function. We denote by Y_0 the initial centroid positions and define a continuous velocity function v for the template point set, so that the current centroid positions are Y = v(Y_0) + Y_0. Consider a Gaussian-mixture density

p(x) = \sum_{m=1}^{M} \frac{1}{M} p(x \mid m), \quad x \mid m \sim \mathcal{N}(y_m, \sigma^2 I_D),

where Y represents the D-dimensional centroids of equally weighted Gaussians with equal isotropic covariance matrices, and X represents the data points. In order to enforce a smooth-motion constraint, we define the prior p(Y \mid \lambda) \propto \exp(-\frac{\lambda}{2} \phi(Y)), where \lambda is a weighting constant and \phi(Y) is a function that regularizes the motion to be smooth. Using Bayes' theorem, we find the parameters Y by maximizing the posterior probability, or equivalently by minimizing the energy function

E(Y) = -\sum_{n=1}^{N} \log \sum_{m=1}^{M} e^{-\frac{1}{2}\left\|\frac{x_n - y_m}{\sigma}\right\|^2} + \frac{\lambda}{2} \phi(Y)   (1)

where we make the i.i.d. data assumption and ignore terms independent of Y. Equation 1 has a form similar to that of the Generalized Elastic Net (GEN) [11], which has shown good performance in non-rigid image registration [12]; note that there the penalty is directly on Y, while here we penalize the transformation v. The function \phi represents our prior knowledge that the motion should be smooth. Specifically, we want the velocity field v generated by the template point set displacement to be smooth. According to [13], smoothness is a measure of the "oscillatory" behavior of a function. Within the class of differentiable functions, one function is said to be smoother than another if it oscillates less; in other words, if it has less energy at high frequency.
The high-frequency content of a function can be measured by first high-pass filtering the function and then measuring the resulting power. This can be written as

\phi(v) = \int_{\mathbb{R}^d} \frac{|\tilde{v}(s)|^2}{\tilde{G}(s)}\, ds,

where \tilde{v} denotes the Fourier transform of the velocity and \tilde{G} is some positive function that approaches zero as \|s\| \to \infty. Here \tilde{G} represents a symmetric low-pass filter, so that its Fourier transform G is real and symmetric. Following this formulation, we rewrite the energy function as:

E(\tilde{v}) = -\sum_{n=1}^{N} \log \sum_{m=1}^{M} e^{-\frac{1}{2}\left\|\frac{x_n - y_m}{\sigma}\right\|^2} + \frac{\lambda}{2} \int_{\mathbb{R}^d} \frac{|\tilde{v}(s)|^2}{\tilde{G}(s)}\, ds   (2)

It can be shown using a variational approach (see Appendix A for a sketch of the proof) that the function minimizing the energy in Eq. 2 has the radial basis function form:

v(z) = \sum_{m=1}^{M} w_m G(z - y_{0m})   (3)

We choose a Gaussian kernel for G (note that this choice is unrelated to the Gaussian form of the mixture-model distribution). There are several motivations for this choice. First, it satisfies the required properties: it is symmetric and positive definite, and \tilde{G} approaches zero as \|s\| \to \infty. Second, a Gaussian low-pass filter has the Gaussian form in both the frequency and time domains, without oscillations; by choosing an appropriately sized Gaussian filter we can control the range of filtered frequencies and thus the amount of spatial smoothness. Third, the Gaussian choice makes our regularization term equivalent to the one in Motion Coherence Theory (MCT) [10]: the term \int_{\mathbb{R}^d} |\tilde{v}(s)|^2 / \tilde{G}(s)\, ds with a Gaussian G is equivalent to a sum of weighted squares of derivatives of all orders of the velocity field, \int_{\mathbb{R}^d} \sum_{m=1}^{\infty} \frac{\beta^{2m}}{m!\, 2^m} (D^m v)^2 [10, 13], where D is a derivative operator such that D^{2m} v = \nabla^{2m} v and D^{2m+1} v = \nabla(\nabla^{2m} v).
The equivalence of the regularization term with that of the Motion Coherence Theory implies that we are imposing motion coherence among the points; we thus call our method the Coherent Point Drift (CPD) method. A detailed discussion of MCT can be found in [10]. Substituting the solution of Eq. 3 back into Eq. 2, we obtain:

E(W) = -\sum_{n=1}^{N} \log \sum_{m=1}^{M} e^{-\frac{1}{2}\left\|\frac{x_n - y_{0m} - \sum_{k=1}^{M} w_k G(y_{0k} - y_{0m})}{\sigma}\right\|^2} + \frac{\lambda}{2} \operatorname{tr}(W^T G W)   (4)

where G is the M × M square symmetric Gram matrix with elements g_{ij} = e^{-\frac{1}{2}\left\|\frac{y_{0i} - y_{0j}}{\beta}\right\|^2} and W = (w_1, \ldots, w_M)^T is the M × D matrix of the Gaussian kernel weights in Eq. 3.

Figure 2: Pseudo-code of the CPD algorithm.
• Initialize the parameters λ, β, σ
• Construct the matrix G; initialize Y = Y0
• Deterministic annealing:
  • EM optimization, until convergence:
    · E-step: compute P
    · M-step: solve for W from Eq. 7
    · Update Y = Y0 + GW
  • Anneal σ = ασ
• Compute the velocity field: v(z) = G(z, ·)W

Optimization. Following the EM algorithm derivation for clustering with a Gaussian Mixture Model [14], we can find an upper bound of the function in Eq. 4 (E-step):

Q(W) = \sum_{n=1}^{N} \sum_{m=1}^{M} P^{\text{old}}(m \mid x_n) \frac{\|x_n - y_{0m} - G(m, \cdot) W\|^2}{2\sigma^2} + \frac{\lambda}{2} \operatorname{tr}(W^T G W)   (5)

where P^{\text{old}} denotes the posterior probabilities computed with the previous parameter values, and G(m, \cdot) denotes the m-th row of G. Minimizing the upper bound Q decreases the value of the energy function E in Eq. 4 unless it is already at a local minimum. Taking the derivative of Eq. 5 with respect to W and rewriting in matrix form, we obtain (M-step):

\frac{\partial Q}{\partial W} = \frac{1}{\sigma^2} G \big( \operatorname{diag}(P\mathbf{1})(Y_0 + GW) - PX \big) + \lambda G W = 0   (6)

where P is the matrix of posterior probabilities with entries

p_{mn} = e^{-\frac{1}{2}\left\|\frac{y_m^{\text{old}} - x_n}{\sigma}\right\|^2} \Big/ \sum_{k=1}^{M} e^{-\frac{1}{2}\left\|\frac{y_k^{\text{old}} - x_n}{\sigma}\right\|^2}.

The \operatorname{diag}(\cdot) notation indicates a diagonal matrix and \mathbf{1} is a column vector of all ones. Multiplying Eq.
6 by \sigma^2 G^{-1} (which exists for a Gaussian kernel), we obtain a linear system of equations:

\big( \operatorname{diag}(P\mathbf{1})\, G + \lambda \sigma^2 I \big) W = PX - \operatorname{diag}(P\mathbf{1})\, Y_0   (7)

Solving this system for W is the M-step of the EM algorithm; the E-step requires the computation of the posterior probability matrix P. The EM algorithm is guaranteed to converge to a local optimum from almost any starting point. Eq. 7 can also be obtained directly by setting the derivative of Eq. 4 with respect to W to zero. This yields a system of nonlinear equations that can be solved iteratively with fixed-point updates, which is exactly the EM algorithm above. The computational complexity of each EM iteration is dominated by the linear system in Eq. 7, which takes O(M^3); with a truncated Gaussian kernel and/or linear conjugate gradients, this can be reduced to O(M^2).

Robustness to Noise. The probabilistic assignment of correspondences between point sets is innately more robust than the binary assignment used in ICP. However, the GMM requires that each data point be explained by the model. To account for outliers, we add a uniform pdf component to the mixture model. This changes the posterior probability matrix P in Eq. 7, whose entries become

p_{mn} = e^{-\frac{1}{2}\left\|\frac{y_m^{\text{old}} - x_n}{\sigma}\right\|^2} \Big/ \Big( \frac{(2\pi\sigma^2)^{D/2}}{a} + \sum_{k=1}^{M} e^{-\frac{1}{2}\left\|\frac{y_k^{\text{old}} - x_n}{\sigma}\right\|^2} \Big),

where a defines the support of the uniform pdf. The uniform component greatly improves robustness to noise.

Free Parameters. There are three free parameters in the method: λ, β and σ. Parameter λ represents the trade-off between data fitting and smoothness regularization. Parameter β reflects the strength of interaction between points: small values of β produce locally smooth transformations, while large values of β correspond to a nearly pure translation. The value of σ serves as a capture range for each Gaussian mixture component.
Smaller σ indicates a smaller, more localized capture range for each Gaussian component in the mixture model. We use deterministic annealing on σ, starting with a large value and gradually reducing it according to σ = ασ, where α is the annealing rate (normally in [0.92, 0.98]), so that the annealing process is slow enough for the algorithm to be robust. The gradual reduction of σ yields a coarse-to-fine matching strategy. We summarize the CPD algorithm in Fig. 2.

3 Experimental Results

We show the performance of CPD on artificial data with non-rigid deformations. The algorithm is implemented in Matlab and tested on a Pentium 4 CPU at 3 GHz with 4 GB RAM. The code is available at www.csee.ogi.edu/˜myron/matlab/cpd. The initial values of λ and β are set to 1.0 in all experiments. The starting value of σ is 3.0, gradually annealed with α = 0.97. The iteration stops either when the change in parameters drops below a threshold of 10^{-6} or when the number of iterations reaches the maximum of 150.

Figure 3: Registration results for the CPD, RPM and ICP algorithms (top to bottom). The first column shows the template (◦) and reference (+) point sets. The second column shows the registered template superimposed on the reference set. The third column shows the recovered underlying deformation. The last column links the initial and final template point positions (only every second point's displacement is shown).

On average the algorithm converges in a few seconds and requires around 80 iterations. All point sets are preprocessed to have zero mean and unit variance (which normalizes translation and scaling). We compare our method on non-rigid point registration with RPM and ICP. The RPM and ICP implementations and the 2D point sets used for comparison are taken from the TPS-RPM Matlab package [4]. For the first experiment (Fig. 3) we use two clean point sets.
Both the CPD and RPM algorithms produce accurate results for non-rigid registration; the ICP algorithm is unable to escape a local minimum. We visualize the velocity field through the deformation of a regular grid. The deformation field for RPM corresponds to the parameterized TPS transformation, while that for CPD represents a motion-coherent non-linear deformation. For the second experiment (Fig. 4) we make the registration problem more challenging: the fish head is removed from the reference point set and random noise is added, while the tail is removed from the template point set. CPD remains robust even in the areas of missing points and corrupted data. RPM incorrectly warps points to the middle of the figure; we also tried different values of the smoothness parameters for RPM without much success, and show only the best result. ICP again performs poorly and gets stuck in a local minimum. For the 3D experiment (Fig. 5) we show the performance of CPD on 3D faces. The face surface is defined by a set of control points. We artificially deform the control point positions non-rigidly and use the result as the template point set; the original control point positions serve as the reference point set. CPD is effective and accurate on this 3D non-rigid registration problem.

Figure 4: The reference point set is corrupted to make the registration task more challenging: noise is added and the fish head is removed in the reference point set, and the tail is removed in the template point set. Registration results for the CPD, RPM and ICP algorithms. The first column shows the template (◦) and reference (+) point sets. The second column shows the registered template superimposed on the reference set. The third column shows the recovered underlying deformation. The last column links the initial and final template point positions.
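The complete annealed EM loop of Fig. 2 and Eqs. 4–7, as used in the experiments above, can be sketched in a few lines of NumPy. This is a minimal illustrative re-implementation (the authors' reference code is in Matlab), omitting the uniform outlier component; all function and parameter names here are our own:

```python
import numpy as np

def gaussian_kernel(A, B, scale):
    """Matrix with entries exp(-||a_i - b_j||^2 / (2*scale^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * scale ** 2))

def cpd_register(X, Y0, lam=1.0, beta=1.0, sigma=1.0,
                 alpha=0.97, n_anneal=80, n_em=5):
    """Align template Y0 (M x D) to reference X (N x D) by coherent drift."""
    G = gaussian_kernel(Y0, Y0, beta)       # M x M motion-coherence Gram matrix
    M = Y0.shape[0]
    W = np.zeros_like(Y0)                   # kernel weights of Eq. 3
    for _ in range(n_anneal):               # deterministic annealing on sigma
        for _ in range(n_em):
            Y = Y0 + G @ W
            # E-step: posterior responsibilities p_mn (M x N),
            # normalized over centroids for each data point
            P = gaussian_kernel(Y, X, sigma)
            P /= P.sum(axis=0, keepdims=True) + np.finfo(float).tiny
            # M-step: solve (diag(P1) G + lam*sigma^2 I) W = P X - diag(P1) Y0  (Eq. 7)
            d = P.sum(axis=1)               # entries of diag(P1)
            A = d[:, None] * G + lam * sigma ** 2 * np.eye(M)
            W = np.linalg.solve(A, P @ X - d[:, None] * Y0)
        sigma *= alpha
    return Y0 + G @ W, W
```

On a toy problem such as registering a translated copy of a point set, the registered template should land close to the reference points, since a pure translation is the smoothest possible coherent field.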
4 Discussion and Conclusion

We introduce Coherent Point Drift, a new probabilistic method for non-rigid registration of two point sets. The registration is cast as a Maximum Likelihood estimation problem, where one point set provides the centroids of a GMM and the other provides the data. We regularize the velocity field over the point domain to enforce coherent motion and give the mathematical formulation of this constraint. We derive the solution of the penalized ML estimation through a variational approach and show that the final transformation has an elegant kernel form. We also derive the EM optimization algorithm with deterministic annealing. The estimated velocity field represents the underlying non-rigid transformation. Once we have the final positions of the GMM centroids, the correspondence between the two point sets can be easily inferred through the posterior probability of the GMM components given the data. The computational complexity of CPD is O(M^3), where M is the number of points in the template point set. It is worth noting that the components of the point vectors are not limited to spatial coordinates: they can also represent geometrical characteristics of an object (e.g., curvature, moments) or features extracted from the intensity image (e.g., color, gradient). We compare the performance of the CPD algorithm on 2D and 3D data against the ICP and RPM algorithms, and show that CPD outperforms both methods in the presence of noise and outliers.

Figure 5: Results of CPD non-rigid registration on 3D point sets. (a, d) The reference face and its control point set. (b, e) The template face and its control point set. (c, f) The result of registering the template point set onto the reference point set using CPD.
It should be noted that CPD does not work well for large in-plane rotations. Typically such transformations can first be compensated by other well-known global registration techniques before the CPD algorithm is carried out. The CPD method is most effective when estimating smooth non-rigid transformations.

Appendix A

E(\tilde{v}) = -\sum_{n=1}^{N} \log \sum_{m=1}^{M} e^{-\frac{1}{2}\left\|\frac{x_n - y_m}{\sigma}\right\|^2} + \frac{\lambda}{2} \int_{\mathbb{R}^d} \frac{|\tilde{v}(s)|^2}{\tilde{G}(s)}\, ds   (8)

Consider the functional in Eq. 8, where y_m = y_{0m} + v(y_{0m}) and y_{0m} is the initial position of the point y_m; v is a continuous velocity function with v(y_{0m}) = \int_{\mathbb{R}^d} \tilde{v}(s) e^{2\pi i \langle y_{0m}, s \rangle}\, ds in terms of its Fourier transform \tilde{v}. The following derivation follows [13]. Substituting v into Eq. 8, we obtain:

E(\tilde{v}) = -\sum_{n=1}^{N} \log \sum_{m=1}^{M} e^{-\frac{1}{2}\left\|\frac{x_n - y_{0m} - \int_{\mathbb{R}^d} \tilde{v}(s) e^{2\pi i \langle y_{0m}, s \rangle} ds}{\sigma}\right\|^2} + \frac{\lambda}{2} \int_{\mathbb{R}^d} \frac{|\tilde{v}(s)|^2}{\tilde{G}(s)}\, ds   (9)

To find the minimum of this functional we take its functional derivative with respect to \tilde{v} and require \frac{\delta E(\tilde{v})}{\delta \tilde{v}(t)} = 0 for all t \in \mathbb{R}^d:

\frac{\delta E(\tilde{v})}{\delta \tilde{v}(t)} = -\sum_{n=1}^{N} \frac{\sum_{m=1}^{M} e^{-\frac{1}{2}\left\|\frac{x_n - y_m}{\sigma}\right\|^2} \frac{1}{\sigma^2}(x_n - y_m) \int_{\mathbb{R}^d} \frac{\delta \tilde{v}(s)}{\delta \tilde{v}(t)} e^{2\pi i \langle y_{0m}, s \rangle}\, ds}{\sum_{m=1}^{M} e^{-\frac{1}{2}\left\|\frac{x_n - y_m}{\sigma}\right\|^2}} + \frac{\lambda}{2} \int_{\mathbb{R}^d} \frac{\delta}{\delta \tilde{v}(t)} \frac{|\tilde{v}(s)|^2}{\tilde{G}(s)}\, ds
= -\sum_{n=1}^{N} \frac{\sum_{m=1}^{M} e^{-\frac{1}{2}\left\|\frac{x_n - y_m}{\sigma}\right\|^2} \frac{1}{\sigma^2}(x_n - y_m) e^{2\pi i \langle y_{0m}, t \rangle}}{\sum_{m=1}^{M} e^{-\frac{1}{2}\left\|\frac{x_n - y_m}{\sigma}\right\|^2}} + \lambda \frac{\tilde{v}(-t)}{\tilde{G}(t)} = 0

We now define the coefficients

a_{mn} = \frac{e^{-\frac{1}{2}\left\|\frac{x_n - y_m}{\sigma}\right\|^2} \frac{1}{\sigma^2}(x_n - y_m)}{\sum_{k=1}^{M} e^{-\frac{1}{2}\left\|\frac{x_n - y_k}{\sigma}\right\|^2}}

and rewrite the functional derivative as:

-\sum_{n=1}^{N} \sum_{m=1}^{M} a_{mn} e^{2\pi i \langle y_{0m}, t \rangle} + \lambda \frac{\tilde{v}(-t)}{\tilde{G}(t)} = -\sum_{m=1}^{M} \Big( \sum_{n=1}^{N} a_{mn} \Big) e^{2\pi i \langle y_{0m}, t \rangle} + \lambda \frac{\tilde{v}(-t)}{\tilde{G}(t)} = 0   (10)

Denoting the new coefficients w_m = \frac{1}{\lambda} \sum_{n=1}^{N} a_{mn}, changing t to -t, and multiplying both sides by \tilde{G}(t), we obtain:

\tilde{v}(t) = \tilde{G}(-t) \sum_{m=1}^{M} w_m e^{-2\pi i \langle y_{0m}, t \rangle}   (11)

Assuming that \tilde{G} is symmetric (so that its Fourier transform is real), and taking the inverse Fourier transform of the last equation, we obtain:

v(z) = G(z) * \sum_{m=1}^{M} w_m \delta(z - y_{0m}) = \sum_{m=1}^{M} w_m G(z - y_{0m})   (12)

Since the w_m depend on v through a_{mn} and y_m, the w_m that solve Eq. 12 must satisfy a self-consistency equation equivalent to Eq. 7.
A specific form of the regularizer \tilde{G} results in a specific basis function G.

Acknowledgment

This work is partially supported by NIH grant NEI R01 EY013093, NSF grant IIS-0313350 (awarded to X. Song) and NSF CAREER award IIS-0546857 (awarded to Miguel Á. Carreira-Perpiñán).

References

[1] G.L. Scott and H.C. Longuet-Higgins. An algorithm for associating the features of two images. Royal Society London Proc., B-244:21–26, 1991.
[2] P.J. Besl and N.D. McKay. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell., 14(2):239–256, 1992.
[3] S. Rusinkiewicz and M. Levoy. Efficient variants of the ICP algorithm. Third International Conference on 3D Digital Imaging and Modeling, page 145, 2001.
[4] H. Chui and A. Rangarajan. A new algorithm for non-rigid point matching. CVPR, 2:44–51, 2000.
[5] H. Chui and A. Rangarajan. A feature registration framework using mixture models. IEEE Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA), pages 190–197, 2000.
[6] F.L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell., 11(6):567–585, 1989.
[7] R. Sibson and G. Stone. Computation of thin-plate splines. SIAM J. Sci. Stat. Comput., 12(6):1304–1313, 1991.
[8] Y. Tsin and T. Kanade. A correlation-based approach to robust point set registration. ECCV, 3:558–569, 2004.
[9] B. Jian and B.C. Vemuri. A robust algorithm for point set registration using mixture of Gaussians. ICCV, pages 1246–1251, 2005.
[10] A.L. Yuille and N.M. Grzywacz. The motion coherence theory. Int. J. Computer Vision, 3:344–353, 1988.
[11] M. Á. Carreira-Perpiñán, P. Dayan, and G.J. Goodhill. Differential priors for elastic nets. In Proc. of the 6th Int. Conf. on Intelligent Data Engineering and Automated Learning (IDEAL'05), pages 335–342, 2005.
[12] A. Myronenko, X. Song, and M. Á. Carreira-Perpiñán. Non-parametric image registration using generalized elastic nets. Int. Workshop on Math. Foundations of Comp. Anatomy: Geom. and Stat. Methods in Non-Linear Image Registration, MICCAI, pages 156–163, 2006.
[13] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219–269, 1995.
[14] C.M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
2006
Learning Nonparametric Models for Probabilistic Imitation David B. Grimes Daniel R. Rashid Rajesh P.N. Rao Department of Computer Science University of Washington Seattle, WA 98195 grimes,rashid8,rao@cs.washington.edu Abstract Learning by imitation represents an important mechanism for rapid acquisition of new behaviors in humans and robots. A critical requirement for learning by imitation is the ability to handle uncertainty arising from the observation process as well as the imitator’s own dynamics and interactions with the environment. In this paper, we present a new probabilistic method for inferring imitative actions that takes into account both the observations of the teacher as well as the imitator’s dynamics. Our key contribution is a nonparametric learning method which generalizes to systems with very different dynamics. Rather than relying on a known forward model of the dynamics, our approach learns a nonparametric forward model via exploration. Leveraging advances in approximate inference in graphical models, we show how the learned forward model can be directly used to plan an imitating sequence. We provide experimental results for two systems: a biomechanical model of the human arm and a 25-degrees-of-freedom humanoid robot. We demonstrate that the proposed method can be used to learn appropriate motor inputs to the model arm which imitates the desired movements. A second set of results demonstrates dynamically stable full-body imitation of a human teacher by the humanoid robot. 1 Introduction A fundamental and versatile mechanism for learning in humans is imitation. Infants as young as 42 minutes of age have been found to imitate facial acts such as tongue protrusion while older children can perform complicated forms of imitation ranging from learning to manipulate novel objects in particular ways to imitation based on inference of goals from unsuccessful demonstrations (see [11] for a review). 
Robotics researchers have become increasingly interested in learning by imitation (also called "learning by watching" or "learning from demonstration") as an attractive alternative to manually programming robots [5, 8, 19]. However, most of these approaches do not take uncertainty into account. Uncertainty in imitation arises from many sources, including the internal dynamics of the robot, the robot's interactions with its environment, and observations of the teacher. Being able to handle uncertainty is especially critical in robotic imitation because executing actions that have high uncertainty during imitation could lead to potentially disastrous consequences. In this paper, we propose a new technique for imitation that explicitly handles uncertainty using a probabilistic model of actions and their sensory consequences. Rather than relying on a physics-based parametric model of system dynamics as in traditional methods, our approach learns a nonparametric model of the imitator's internal dynamics during a constrained exploration period. The learned model is then used to infer appropriate actions for imitation using probabilistic inference in a dynamic Bayesian network (DBN) with teacher observations as evidence. We demonstrate the viability of the approach using two systems: a biomechanical model of the human arm and a 25-degrees-of-freedom humanoid robot.

Figure 1: Graphical model and systems for imitation learning. (a) Dynamic Bayesian network over states s_t, actions a_t, teacher observations o_t, and constraints c_t, for inferring a sequence of imitative actions a_{1:T-1} from a sequence of observations of the teacher o_{1:T}. The model also allows for probabilistic constraint variables c_t on the imitator's states s_t. Nonparametric model learning constructs the model P(s_{t+1} | s_t, a_t) from empirical data. (b) The two-link biomechanical model of the human arm (from [10]) used in the experiments on learning reaching movements via imitation. (c) The Fujitsu Hoap-2 humanoid robot used in our experiments on full-body, dynamic imitation.

Our first set of results illustrates how the proposed method can be used to learn appropriate motor commands for producing imitative movements in the model human arm. The second set of results demonstrates dynamically stable full-body imitation of a human teacher by the humanoid robot. Taken together, the results suggest that a probabilistic approach to imitation based on nonparametric model learning could provide a powerful and flexible platform for acquiring new behaviors in complex robotic systems.

2 Imitation via Inference and Constrained Exploration

In this section we present our inference-based approach to selecting a set of actions based on observations of another agent's state during demonstration and a set of probabilistic constraints. We present our algorithms within the framework of the graphical model shown in Fig. 1(a). We denote the sequence of continuous action variables by a_1, \ldots, a_t, \ldots, a_{T-1}. We use the convention that the agent starts in an initial state s_1 and, as a result of executing the actions, visits the continuous states s_2, \ldots, s_t, \ldots, s_T. (An initial action a_0 can be trivially included.) In our imitation learning framework the agent observes a sequence of continuous variables o_1, \ldots, o_t, \ldots, o_T providing partial information about the state of the teacher during demonstration. The conditional probability density P(o_t | s_t) encodes how likely an observation of the teacher (o_t) agrees with the agent's state (s_t) while performing the same motion or task. This marks a key difference from the Partially Observable Markov Decision Process (POMDP) framework: here the observations are of the demonstrator (generally with a different embodiment), and we currently assume that the learner can observe its own state.
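The nonparametric forward model P(s_{t+1} | s_t, a_t) is learned from exploration data. As a simple illustration of this idea (a hedged sketch, not necessarily the estimator used by the authors), a kernel-weighted Nadaraya-Watson regressor over logged (state, action, next-state) triples yields a predictive mean and a crude local uncertainty estimate; the class and parameter names below are our own:

```python
import numpy as np

class KernelForwardModel:
    """Nonparametric forward model learned from exploration triples
    (s_t, a_t, s_{t+1}); a Nadaraya-Watson sketch, not the authors' estimator."""

    def __init__(self, states, actions, next_states, bandwidth=0.5):
        self.Z = np.hstack([states, actions])   # joint (state, action) inputs
        self.Y = next_states
        self.h = bandwidth

    def predict(self, s, a):
        """Return a kernel-weighted mean and local variance of s_{t+1}."""
        z = np.concatenate([s, a])
        d2 = ((self.Z - z) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * self.h ** 2))
        w /= w.sum()
        mean = w @ self.Y                       # weighted mean of observed next states
        var = w @ ((self.Y - mean) ** 2)        # crude local predictive variance
        return mean, var
```

Trained on trajectories from a known toy system (e.g. s_{t+1} = s_t + a_t), the model's predictions approach the true dynamics as exploration data accumulates, without any parametric assumption about the plant.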
Probabilistic constraints on state variables are included within the graphical model by a set of variables $c_t$. The corresponding constraint models $P(c_t|s_t)$ encode the likelihood of satisfying the constraint in state $s_t$. Constraint variables are used in our framework to represent goals such as reaching a desired goal state ($c_T = s_G$) or passing through a waypoint ($c_t = s_W$). The choice of the constraint model is domain dependent; here we utilize a central Gaussian density $P(c_t|s_t) = \mathcal{N}(c_t - s_t; 0, \Sigma_c)$. The variance parameter for each constraint may be set by hand using domain knowledge, or could be learned using feedback from the environment. Given a set of evidence $E \subseteq \{o_1, \dots, o_T, c_1, \dots, c_T\}$ we desire actions which maximize the likelihood of the evidence. Although space limitations rule out a thorough discussion, to achieve tractable inference we focus here on computing marginal posteriors over each action rather than the maximum a posteriori (MAP) sequence. While in principle any algorithm for computing the marginal posterior distributions of the action variables could be used, we find it convenient here to use Pearl's belief propagation (BP) algorithm [13]. BP was originally restricted to tree-structured graphical models with discrete variables. Several advances have broadened its applicability to general graph structures [18] and to continuous variables in undirected graph structures [16]. Here we derive belief propagation for the directed case, though we note that the difference is largely a semantic convenience, as any Bayesian network can be represented as a Markov random field or, more generally, a factor graph [9]. Our approach is most similar to Nonparametric Belief Propagation (NBP) [16], with key differences highlighted throughout this section. The result of performing belief propagation is a set of marginal belief distributions $B(x) = P(x|E) = \pi(x)\lambda(x)$.
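As a concrete illustration of the constraint model above, the snippet below evaluates a diagonal-covariance version of $P(c_t|s_t) = \mathcal{N}(c_t - s_t; 0, \Sigma_c)$. The function name and the restriction to a diagonal covariance are our own simplifications, not from the paper.

```python
import numpy as np

def constraint_likelihood(c, s, var):
    """Evaluate P(c_t | s_t) = N(c_t - s_t; 0, Sigma_c) for a diagonal
    Sigma_c given as a vector of variances."""
    d = np.asarray(c, float) - np.asarray(s, float)
    var = np.asarray(var, float)
    log_norm = -0.5 * (d.size * np.log(2 * np.pi) + np.sum(np.log(var)))
    return float(np.exp(log_norm - 0.5 * np.sum(d * d / var)))

# A goal constraint c_T = s_G: likelihood peaks when the state reaches the goal.
goal = np.array([0.5, -0.2])
at_goal = constraint_likelihood(goal, goal, [0.01, 0.01])
off_goal = constraint_likelihood(goal, goal + 0.3, [0.01, 0.01])
assert at_goal > off_goal
```

Tightening the variances sharpens the constraint; as the paper notes, these parameters may be hand-set from domain knowledge or learned from environmental feedback.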
This belief distribution is the product of two sets of messages, $\pi(x)$ and $\lambda(x)$, which represent the information coming from neighboring parent and child variable nodes respectively. Beliefs are computed via messages passed along the edges of the graphical model, which are distributions over single variables. The $i$-th parent of variable $x$ passes to $x$ the distribution $\pi_x(u_i)$; child $j$ of variable $x$ passes to $x$ the distribution $\lambda_{Y_j}(x)$. In the discrete (finite space) case, messages are easily represented by discrete distributions. For the case of arbitrary continuous densities, message representation is in itself a challenge. As we propose a nonparametric, model-free approach to learning system dynamics, it follows that we also want to allow for (approximately) representing the multi-modal, non-Gaussian distributions that arise during inference. As in the NBP approach [16] we adopt a mixture of Gaussian kernels (Eq. 5) to represent arbitrary message and belief distributions. For convenience we treat observed and hidden variables in the graph identically by allowing a node $X$ to send itself the message $\lambda_X(x)$. This "self message", represented using a Dirac delta distribution about the observed value, is included in the product of all messages from the $m$ children (denoted $Y_j$) of $X$:

$$\lambda(x) = \lambda_X(x) \prod_{j=1}^{m} \lambda_{Y_j}(x). \qquad (1)$$

Messages from parent variables are incorporated by integrating the conditional probability of $x$ over all possible values of the $n$ parents, times the probability of that combination of values as evaluated in the corresponding messages from the parent nodes:

$$\pi(x) = \int_{u_{1:n}} P(x|u_1, \dots, u_n) \prod_{i=1}^{n} \pi_x(u_i)\, du_{1:n}. \qquad (2)$$

Messages are updated according to the following two equations:

$$\lambda_X(u_j) = \int_x \lambda(x) \int_{u_{1:n/j}} P(x|u_1, \dots, u_n) \prod_{i \neq j} \pi_x(u_i)\, du_{1:n/j}\, dx \qquad (3)$$

$$\pi_{Y_j}(x) = \pi(x)\, \lambda_X(x) \prod_{i \neq j} \lambda_{Y_i}(x). \qquad (4)$$

The main operations in Eqs. 1-4 are integration and multiplication of mixtures of Gaussians.
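To make the multiplication step concrete: the exact product of an $M$-component and an $N$-component Gaussian mixture is an $MN$-component mixture, but when modes are well separated the cross terms receive negligible weight. A 1-D sketch (helper names are ours; the paper works with multivariate densities):

```python
import numpy as np

def gaussian_product_1d(m1, v1, m2, v2):
    """N(m1,v1) * N(m2,v2) = z * N(m,v), where z is the integral of the product."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    m = v * (m1 / v1 + m2 / v2)
    z = np.exp(-0.5 * (m1 - m2) ** 2 / (v1 + v2)) / np.sqrt(2 * np.pi * (v1 + v2))
    return z, m, v

def mixture_product(w1, mu1, v1, w2, mu2, v2):
    """Exact product of two 1-D Gaussian mixtures: M*N output components."""
    out = []
    for wa, ma, va in zip(w1, mu1, v1):
        for wb, mb, vb in zip(w2, mu2, v2):
            z, m, v = gaussian_product_1d(ma, va, mb, vb)
            out.append((wa * wb * z, m, v))
    w = np.array([c[0] for c in out])
    return w / w.sum(), out

# Two mixtures with well-separated modes: 2 x 2 = 4 product components,
# of which the two "cross" components are many orders of magnitude lighter.
w, comps = mixture_product([0.5, 0.5], [0.0, 8.0], [1.0, 1.0],
                           [0.4, 0.6], [0.1, 8.1], [1.0, 1.0])
assert len(w) == 4 and abs(w.sum() - 1.0) < 1e-12
assert np.sort(w)[1] < 1e-6  # cross terms: natural candidates for pruning
```

This empirical sparsity of the product weights is exactly what motivates the fixed-size pruning heuristic discussed next.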
The evaluation of the integrals will be discussed after first introducing Gaussian Mixture Regression in Sec. 3. Although the product of a set of mixtures of Gaussians is simply another mixture of Gaussians, the complexity (in terms of the number of components in the output mixture) grows exponentially in the number of input mixtures. Thus an approximation is needed to keep inference tractable in the action sequence length $T$. Rather than use a multiscale sampling method to obtain a set of representative particles from the product as in [7], we first assume that we can compute the exact product density for a given set of input mixtures. We then apply the simple heuristic of keeping a fixed number of mixture components, which through experimentation we found to be highly effective. This heuristic is based on the empirical sparseness of the product mixture components' prior probabilities. For example, when the message $\pi_{s_{t+1}}(s_t)$ coming from a previous state has $M = 10$ components, the message from the action $\pi_{a_{t-1}}(a_t)$ has $N = 1$ component (based on a unimodal Gaussian prior), and the GMR model has $P = 67$ components, the conditional product has $MNP = 670$ components. However, we see experimentally that fewer than ten components have a weight within five orders of magnitude of the maximal weight. Thus we can simply select the top $K' = 10$ components. This sparsity should not be surprising, as the $P$ model components represent localized data, and only a few of these components tend to have overlap with the belief state being propagated. Currently we fix $K'$, although an adaptive mechanism could further speed inference.

Figure 2: Nonparametric GMR model selection. a) The value of our model selection criterion rises from the initial model with $K = L$ components to a peak around 67 components, after which it falls off.
b) The three series of plots show the current parameters of the model (blue ellipses), layered over the set of regression test points (in green) and the minimum spanning tree (red lines) between neighboring components. Shown here is a projection of the 14-dimensional data onto the first two principal components of the training data.

We now briefly describe our algorithm¹ for action inference and constrained exploration. The inputs to the action inference algorithm are the set of evidence $E$ and an instance of the dynamic Bayesian network $M = \{P_S, P_A, P_F, P_O, P_C\}$, composed of the prior on the initial state, the prior on actions, the forward model, the imitation observation model, and the probabilistic constraint model respectively. Inference proceeds by first "inverting" the evidence from the observation and constraint variables, yielding the messages $\lambda_{o_t}(s_t)$ and $\lambda_{c_t}(s_t)$. After initialization from the priors $P_S, P_A$ we perform a forward planning pass, thereby computing the forward state messages $\pi_{s_{t+1}}(s_t)$. Similarly, a backward planning sweep produces the messages $\lambda_{s_t}(s_{t-1})$. The algorithm then combines information from forward and backward messages (via Eq. 3) to compute belief distributions over actions. We then select the maximum marginal belief action $\hat{a}_t$ from the belief distribution using the mode finding algorithm described in [4]. Our algorithm to iteratively explore the state and action spaces while satisfying the constraints placed on the system builds on the inference-based action selection algorithm described above. The inputs are an initial model $M_0$, a set of evidence $E$, and a number of iterations $N$ to be performed. At each iteration we infer a sequence of maximum marginal actions and execute them. Execution yields a sequence of states, which are used to update the learned forward model (see Section 3). Using the new (ideally more accurate) forward model, we are able to obtain a better imitation of the teacher via the newly inferred actions.
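The control flow of this infer-execute-refit loop can be shown on a deliberately trivial system. In the sketch below the true dynamics are $s' = \theta a$, least-squares refitting stands in for GMR model learning, and a single goal constraint stands in for the evidence set; all names and the toy system are ours, not the paper's:

```python
import numpy as np

def constrained_exploration_toy(goal, theta_true, n_trials, theta0=0.5):
    """Toy version of the constrained-exploration loop for 1-D dynamics
    s' = theta * a. The paper infers actions by belief propagation in a DBN
    over a learned GMR forward model; here "inference" is just inverting
    the current scalar model, and "learning" is least squares."""
    theta, A, S = theta0, [], []
    for _ in range(n_trials):
        a = goal / theta             # infer the action satisfying the constraint
        s_next = theta_true * a      # execute on the (unknown) true system
        A.append(a)
        S.append(s_next)
        A_, S_ = np.array(A), np.array(S)
        theta = float(A_ @ S_ / (A_ @ A_))  # refit the forward model
    return theta, s_next

theta, s_final = constrained_exploration_toy(goal=2.0, theta_true=3.0, n_trials=5)
assert abs(theta - 3.0) < 1e-9    # learned model matches the true dynamics
assert abs(s_final - 2.0) < 1e-9  # final execution satisfies the constraint
```

Even in this trivial setting the key property of the loop is visible: each executed trial improves the forward model, which in turn improves the inferred imitative actions.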
The final model and sequence of actions are then returned after $N$ constrained exploration trials. For simplicity, we currently assume that the state and action prior distributions and the observation and constraint models are pre-specified. Evidence from our experiments shows that specifying these parts of the model is not overly cumbersome, even in the real-world domains we have studied. The focus of the algorithms presented here is to learn the forward model, which in many real-world domains is extremely complex to derive analytically. Sections 4.1 and 4.2 describe the results of our algorithms applied in the human arm model and humanoid robot domains respectively.

3 Nonparametric Model Learning

In this section we investigate an algorithm for learning a nonparametric forward model via Gaussian Mixture Regression (GMR) [17]. The motivation behind selecting GMR is that it allows for closed-form evaluation of the integrals found in Eqs. 1-4. Thus it allows efficient inference without the need to resort to Monte Carlo (sample-based) approximations.

¹For detailed algorithms please refer to the technical report available at http://www.cs.washington.edu/homes/grimes/dil

The common Gaussian Mixture Model (GMM) forms the basis of Gaussian Mixture Regression:

$$p(x|\theta) = \sum_k p(k|\theta_k)\, p(x|k, \theta_k) = \sum_k w_k\, \mathcal{N}(x; \mu_k, \Sigma_k). \qquad (5)$$

We now assume that the random variable $X$ is formed via the concatenation of the $n$ random variables $X_1, X_2, \dots, X_n$, such that $x = [x_1^\top x_2^\top \cdots x_n^\top]^\top$. The theorem of Gaussian conditioning states that if $x \sim \mathcal{N}(\mu, \Sigma)$ where $\mu = [(\mu_i)]$ and $\Sigma = [(\Sigma_{ij})]$, then the variable $X_i$ is normally distributed given $X_j$:

$$p(X_i = x_i | X_j = x_j) = \mathcal{N}\!\left(\mu_i + \Sigma_{ij}\Sigma_{jj}^{-1}(x_j - \mu_j),\; \Sigma_{ii} - \Sigma_{ij}\Sigma_{jj}^{-1}\Sigma_{ji}\right). \qquad (6)$$

Gaussian mixture regression is derived by applying this theorem to Eq. 5:

$$p(x_i | x_j, \theta) = \sum_k w_{kj}(x_j)\, \mathcal{N}(x_i; \mu_{kij}(x_j), \Sigma_{kij}). \qquad (7)$$

We use $\mu_{kj}$ to denote the mean of the $j$-th variable in the $k$-th component of the mixture model.
Likewise, $\Sigma_{kij}$ denotes the covariance between the variables $x_i$ and $x_j$ in the $k$-th component. Instead of a fixed weight and mean for each component, we now have a weight function dependent on the conditioning variable $x_j$:

$$w_{kj}(x) = \frac{w_k\, \mathcal{N}(x; \mu_{kj}, \Sigma_{kjj})}{\sum_{k'} w_{k'}\, \mathcal{N}(x; \mu_{k'j}, \Sigma_{k'jj})}. \qquad (8)$$

Likewise, the mean of the $k$-th conditioned component of $x_i$ given $x_j$ is a function of $x_j$:

$$\mu_{kij}(x) = \mu_{ki} + \Sigma_{kij}\Sigma_{kjj}^{-1}(x - \mu_{kj}). \qquad (9)$$

Belief propagation requires the evaluation of integrals convolving the conditional distribution of one variable $x_i$ given a GMM distribution $\gamma(\cdot; \theta')$ of another variable $x_j$:

$$\int p(x_i | x_j, \theta)\, \gamma(x_j; \theta')\, dx_j. \qquad (10)$$

Fortunately, rearranging the terms in the densities reduces the product of the two GMMs to a third GMM, which is then marginalized with respect to $x_j$ under the integral operator. We now turn to the problem of learning a GMR model from data. As the learning methodology we wish to adopt is nonparametric, we do not want to select the number of components $K$ a priori. This rules out the common strategy of using the well-known expectation maximization (EM) algorithm for learning a model of the full joint density $p(x)$. Although Bayesian strategies exist for selecting the number of components, as pointed out by [17] a joint density modeling approach rarely yields the best model under a regression loss function. Thus we adopt an algorithm very similar to the Iterative Pairwise Replace Algorithm (IPRA) [15, 17], which simultaneously performs model fitting and selection of the GMR model parameters $\theta$. We assume that a set of state and action histories has been observed during the $N$ trials: $\{[s_1^i, a_1^i, s_2^i, a_2^i, \dots, a_{T-1}^i, s_T^i]\}_{i=1}^N$. To learn a GMR forward model we first construct the joint variable space $x = [s^\top a^\top (s')^\top]^\top$, where $s'$ denotes the resulting state when executing action $a$ in state $s$.
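Eqs. 6-9 are mechanical to implement. The sketch below conditions a joint Gaussian (Eq. 6) and computes the GMR conditional mean via the input-dependent weights of Eq. 8 and conditioned component means of Eq. 9, restricted to scalar inputs and outputs; the function names and the toy numbers are ours:

```python
import numpy as np

def gaussian_condition(mu, Sigma, i, j, xj):
    """Eq. 6: mean and covariance of x_i | x_j under a joint Gaussian,
    with i and j given as index lists into the joint variable."""
    K = Sigma[np.ix_(i, j)] @ np.linalg.inv(Sigma[np.ix_(j, j)])
    mean = mu[i] + K @ (np.asarray(xj) - mu[j])
    cov = Sigma[np.ix_(i, i)] - K @ Sigma[np.ix_(i, j)].T
    return mean, cov

def gmr_mean(xj, w, mu_j, mu_i, S_jj, S_ij):
    """Conditional mean sum_k w_kj(x_j) * mu_kij(x_j) for scalar x_i, x_j."""
    dens = w * np.exp(-0.5 * (xj - mu_j) ** 2 / S_jj) / np.sqrt(2 * np.pi * S_jj)
    wk = dens / dens.sum()                   # Eq. 8: input-dependent weights
    mk = mu_i + S_ij / S_jj * (xj - mu_j)    # Eq. 9: conditioned component means
    return float(wk @ mk)

# Conditioning a correlated 2-D Gaussian: mean 0.8, variance 1 - 0.8^2 = 0.36.
m, C = gaussian_condition(np.zeros(2), np.array([[1.0, 0.8], [0.8, 1.0]]),
                          [0], [1], [1.0])
assert abs(m[0] - 0.8) < 1e-12 and abs(C[0, 0] - 0.36) < 1e-12

# GMR with two components: near x_j = 0 the first component dominates.
y = gmr_mean(0.0, np.array([0.5, 0.5]), np.array([0.0, 4.0]),
             np.array([1.0, -1.0]), np.array([0.25, 0.25]), np.array([0.0, 0.0]))
assert abs(y - 1.0) < 1e-3
```

The same conditioning operation, applied blockwise to $x = [s^\top a^\top (s')^\top]^\top$, is what turns the learned joint GMM into the forward model $P(s'|s,a)$.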
The time-invariant dataset is then represented by the matrix $X_{tr} = [x_1, \dots, x_L]$. Model learning and selection first constructs a fully nonparametric representation of the training set, with $K = L$ isotropic mixture components centered on the data points ($\mu_k = x_k$). This parametrization makes exact predictions at points within the training set, but generalizes extremely poorly. The algorithm proceeds by merging components which are very similar, as determined by a symmetric similarity metric between two mixture components. Following [17], we use the Hellinger distance metric. To perform efficient merging we first compute the minimum spanning tree of all mixture components. Iteratively, the algorithm merges the closest pair in the minimum spanning tree. Merging continues until there is only a single Gaussian component left. Merging two components requires computing new local mixture parameters (to fit the data covered by both). Rather than the "method of moments" (MoM) approach to merging components followed by expectation maximization to fine-tune the selected model, we found that performing local maximum likelihood estimation (MLE) within model selection is more effective at finding an accurate model. In order to perform MLE merges effectively, we first randomly partition the training data into two sets: one of "basis" vectors on which we compute the minimum spanning tree, and one of regression data points. In our experiments we used a random fifth of the data for basis vectors. The goal of our modified IPRA algorithm is to find the model which best describes the regression points. We then define the regression likelihood over the current GMR model parameters $\theta$:

$$\mathcal{L}(\theta, X_{tr}) = \sum_{l=1}^{L} \sum_{i=1}^{n} p(x_i^l \,|\, x_{1,\dots,i-1,i+1,\dots,n}^l, \theta). \qquad (11)$$

The model of size $K$ which maximizes this criterion is returned for use in our inference procedure. Fig. 2 demonstrates the learning of a forward model for the biomechanical arm model from Section 4.1.
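The pairwise similarity that drives the merge order has a closed form for Gaussians. Below is the squared Hellinger distance between two univariate Gaussians; the paper, following [17], uses the Hellinger metric between components, while the restriction to 1-D and the function name are our simplifications:

```python
import numpy as np

def hellinger_sq_1d(m1, s1, m2, s2):
    """Squared Hellinger distance between N(m1, s1^2) and N(m2, s2^2).
    bc is the Bhattacharyya coefficient; H^2 = 1 - bc lies in [0, 1]."""
    bc = np.sqrt(2.0 * s1 * s2 / (s1 ** 2 + s2 ** 2)) * \
         np.exp(-0.25 * (m1 - m2) ** 2 / (s1 ** 2 + s2 ** 2))
    return 1.0 - bc

assert hellinger_sq_1d(0.0, 1.0, 0.0, 1.0) == 0.0  # identical components
# Closer components are more similar, hence merged earlier:
assert hellinger_sq_1d(0.0, 1.0, 0.5, 1.0) < hellinger_sq_1d(0.0, 1.0, 5.0, 1.0)
```

Because the distance is bounded and symmetric, it is well suited to building the minimum spanning tree over components that the merge loop walks.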
We found the regression-based model selection criterion to be effective at generalizing well outside both the basis and regression sets.

4 Results

4.1 Imitating Reaching Movements

Figure 3: Learning to imitate a reaching motion. a) The first row shows the motion of the teacher's hand (in black) in Cartesian space, along with the target position. The imitator explores the space via body babbling (second plot, in red). From this data a GMR model is learned, and constrained exploration is performed to find an imitative reaching motion (shown every 5 iterations). b) The velocities of the two joints during the imitation learning process. By trial number 20 the imitator's velocities (thin lines) closely match the demonstrator's velocities (thick, light-colored lines), and meet the zero final velocity constraint. c) The teacher's original muscle torques, followed by the babbling torques and the torques computed during constrained exploration.

In the first set of experiments we learn reaching movements via imitation in a complex nonlinear model of the human arm. The arm simulator we use is a biomechanical arm model [10] consisting of two degrees of freedom (denoted $\theta$) representing the shoulder and elbow joints. The arm is controlled via two torque inputs (denoted $\tau$) for the two degrees of freedom. The dynamics of the arm are described by the following differential equation:

$$M(\theta)\ddot{\theta} + C(\theta, \dot{\theta}) + B(\dot{\theta}) = \tau \qquad (12)$$

where $M$ is the inertial force matrix, $C$ is a vector of centripetal and Coriolis forces, and $B$ is the matrix of friction forces at the joints. Fig. 3 shows the process of learning to perform a reaching motion via imitation. First we compute the teacher's simulated arm motion using the model-based iLQG algorithm [10], based on start and target positions of the hand. By executing the sequence of computed torque inputs $[\hat{a}_{1:T-1}]$ from a specified initial state $s_1$, we obtain the state history of the demonstrator $[\hat{s}_{1:T}]$. To simulate the natural partial observability of a human demonstrator and a human learner, we provide our inference algorithm with noisy measurements of the kinematic state only (not the torques). A probabilistic constraint dictates that the final joint velocities be very close to zero.

Figure 4: Humanoid robot dynamic imitation. a) The first row consists of frames from an IK fit to the marker data during observation. The second row shows the result of performing a kinematic imitation in the simulator. The third and fourth rows show the final imitation result obtained by our method of constrained exploration, in the simulator and on the Hoap-2 robot. b) The duration for which the executed imitation remained balanced (out of a total of $T = 63$) versus the trial number. The random exploration trials are shown in red, and the inferred imitative trials in blue. Note that the balanced duration rises rapidly, and by the 15th inferred sequence the robot performs the imitation without falling.

4.2 Dynamic Humanoid Imitation

We applied our algorithms for nonparametric action selection, model learning, and constrained exploration to the problem of full-body dynamic imitation in a humanoid robot. The experiment consisted of a human demonstrator performing motions such as squatting and standing on one leg. Due to space limitations we only briefly describe the experiments; for more details see [6].
First, the demonstrator's kinematics were obtained using a commercially available retroreflective marker-based optical motion capture system together with inverse kinematics (IK). The IK skeletal model of the human was restricted to have the same degrees of freedom as the Fujitsu Hoap-2 humanoid robot. Representing humanoid motion using a full kinematic configuration is problematic (due to the curse of dimensionality). Fortunately, with respect to a wide class of motions (such as walking, kicking, and squatting), the full number of degrees of freedom (25 in the Hoap-2) is highly redundant. For simplicity we use linear principal components analysis (PCA) here, but we are investigating the use of nonlinear embedding techniques. Using PCA we were able to represent the observed instructor's kinematics in a compact four-dimensional space, thus forming the first four dimensions of the state space. The goal of the experiment is to perform dynamic imitation, i.e., imitation that accounts for the dynamic balance involved in stably imitating the human demonstrator. Dynamic balance is handled using a sensor-based model. The Hoap-2 robot's sensors provide measurements of the angular rotation $g_t$ (via a gyroscope in the torso) and foot pressure $f_t$ (at eight points on the feet) every millisecond. By computing four differential features of the pressure sensors and extracting the two horizontal gyroscope axes, we form a six-dimensional representation of the dynamic state of the robot. Concatenating the four-dimensional kinematic state and the six-dimensional dynamic state, we form the full ten-dimensional state representation $s_t$. Robot actions $a_t$ are then simply points in the embedded kinematic space. We bootstrap the forward model (of robot kinematics and dynamics) by first performing random exploration (body babbling) about the instructor's trajectory. Once we have collected sufficient data (around 20 trials) we learn an initial forward model.
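The kinematic embedding step described above can be sketched with plain SVD-based PCA; the synthetic data below stand in for real joint-angle recordings, and all names are ours:

```python
import numpy as np

def pca_embed(X, n_dims):
    """Project joint-angle frames X (T x d) onto the top n_dims principal
    components, giving the compact kinematic state used for planning."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_dims]
    return Xc @ basis.T, basis, mean

rng = np.random.default_rng(0)
# Synthetic "25-DOF" trajectory that genuinely lives in a 4-D subspace.
latent = rng.standard_normal((200, 4))
X = latent @ rng.standard_normal((4, 25)) + rng.standard_normal(25)
Z, basis, mean = pca_embed(X, 4)
recon = Z @ basis + mean
assert Z.shape == (200, 4)
assert np.allclose(recon, X, atol=1e-8)  # 4 components reconstruct it exactly
```

Real motion data are only approximately low-rank, so in practice a small reconstruction error remains; the point is that a handful of components can stand in for all 25 joint angles.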
Subsequently, we place a probabilistic constraint on the dynamic configuration of the robot (using a tight, central Gaussian distribution around zero angular velocity and zero pressure differentials). Using this constraint on dynamics, we perform constrained exploration until we obtain a stable motion for the Hoap-2 which imitates the human motion. The results we obtained in imitating a difficult one-legged balance motion are shown in Fig. 4.

5 Conclusion

Our results demonstrate that probabilistic inference and learning techniques can be used to successfully acquire new behaviors in complex robotic systems such as a humanoid robot. In particular, we showed how a nonparametric model of forward dynamics can be learned from constrained exploration and used to infer actions for imitating a teacher while simultaneously taking the imitator's dynamics into account. There exists a large body of previous work on robotic imitation learning (see, for example, [2, 5, 14, 19]). Some approaches produce imitative behaviors using nonlinear dynamical systems (e.g., [8]) while others focus on biologically motivated algorithms (e.g., [3]). In the field of reinforcement learning, techniques such as inverse reinforcement learning [12] and apprenticeship learning [1] have been proposed to learn controllers for complex systems by observing an expert and learning their reward function. However, the role of this type of expert and that of our human demonstrator must be distinguished. In the former case, the teacher is directly controlling the artificial system. In the imitation learning paradigm, one can only observe the teacher controlling their own body. Further, despite kinematic similarities between the human and humanoid robot, the dynamic properties of the robot and human are very different.
Finally, the fact that our approach is based on inference in graphical models confers two major advantages: (1) we can continue to leverage algorithmic advances in the rapidly developing area of inference in graphical models, and (2) the approach promises generalization to graphical models of more complex systems, such as those with semi-Markov dynamics and hierarchical structure.

References

[1] P. Abbeel and A. Y. Ng. Exploration and apprenticeship learning in reinforcement learning. In Proceedings of the Twenty-first International Conference on Machine Learning, 2005.
[2] C. Atkeson and S. Schaal. Robot learning from demonstration. pages 12-20, 1997.
[3] A. Billard and M. Mataric. Learning human arm movements by imitation: Evaluation of a biologically-inspired connectionist architecture. Robotics and Autonomous Systems, (941), 2001.
[4] M. A. Carreira-Perpinan. Mode-finding for mixtures of Gaussian distributions. IEEE Trans. Pattern Anal. Mach. Intell., 22(11):1318-1323, 2000.
[5] J. Demiris and G. Hayes. A robot controller using learning by imitation, 1994.
[6] D. B. Grimes, R. Chalodhorn, and R. P. N. Rao. Dynamic imitation in a humanoid robot through nonparametric probabilistic inference. In Proceedings of Robotics: Science and Systems (RSS'06), Cambridge, MA, 2006. MIT Press.
[7] A. T. Ihler, E. B. Sudderth, W. T. Freeman, and A. S. Willsky. Efficient multiscale sampling from products of Gaussian mixtures. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[8] A. J. Ijspeert, J. Nakanishi, and S. Schaal. Trajectory formation for imitation with nonlinear dynamical systems. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 752-757, 2001.
[9] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498-519, 2001.
[10] W. Li and E. Todorov.
Iterative linear-quadratic regulator design for nonlinear biological movement systems. In Proceedings of the 1st Int. Conf. on Informatics in Control, Automation and Robotics, volume 1, pages 222-229, 2004.
[11] A. N. Meltzoff. Elements of a developmental theory of imitation. pages 19-41, 2002.
[12] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. In Proc. 17th International Conf. on Machine Learning, pages 663-670, 2000.
[13] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[14] S. Schaal, A. Ijspeert, and A. Billard. Computational approaches to motor learning by imitation. 1431:199-218, 2004.
[15] D. Scott and W. Szewczyk. From kernels to mixtures. Technometrics, 43(3):323-335.
[16] E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In CVPR (1), pages 605-612, 2003.
[17] H.-G. Sung. Gaussian Mixture Regression and Classification. PhD thesis, Rice University, 2004.
[18] Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12(1):1-41, 2000.
[19] Y. Kuniyoshi, M. Inaba, and H. Inoue. Learning by watching: Extracting reusable task knowledge from visual observation of human performance. IEEE Transactions on Robotics and Automation, 10(6):799-822, 1994.
Active learning for misspecified generalized linear models

Francis R. Bach
Centre de Morphologie Mathématique, Ecole des Mines de Paris, Fontainebleau, France
francis.bach@mines.org

Abstract

Active learning refers to algorithmic frameworks aimed at selecting training data points in order to reduce their required number and/or improve the generalization performance of a learning method. In this paper, we present an asymptotic analysis of active learning for generalized linear models. Our analysis holds under the common practical situation of model misspecification, and is based on realistic assumptions regarding the nature of the sampling distributions, which are usually neither independent nor identical. We derive unbiased estimators of generalization performance, as well as estimators of the expected reduction in generalization error after adding a new training data point, that allow us to optimize its sampling distribution through a convex optimization problem. Our analysis naturally leads to an algorithm for sequential active learning which is applicable to all tasks supported by generalized linear models (e.g., binary classification, multi-class classification, regression) and can be applied in nonlinear settings through the use of Mercer kernels.

1 Introduction

The goal of active learning is to select training data points so that the number required for a given performance is smaller than the number required when randomly sampling those points. Active learning has emerged as a dynamic field of research in machine learning and statistics [1], from early work in optimal experimental design [2, 3] to recent theoretical results [4] and applications in text retrieval [5], image retrieval [6], and bioinformatics [7].
Despite the numerous successful applications of active learning to reduce the number of required training data points, many authors have also reported cases where widely applied active learning heuristics, such as maximum uncertainty sampling, perform worse than random selection [8, 9], casting doubt on the practical applicability of active learning: why would a practitioner use an active learning strategy that does not ensure, unless the data satisfy possibly unrealistic and usually unverifiable assumptions, that it performs better than random sampling? The objectives of this paper are (1) to provide a theoretical analysis of active learning with realistic assumptions and (2) to derive a principled algorithm for active learning with guaranteed consistency. In this paper, we consider generalized linear models [10], which provide flexible and widely used tools for many supervised learning tasks (Section 2). Our analysis is based on asymptotic arguments and follows previous asymptotic analyses of active learning [11, 12, 9, 13]; however, as shown in Section 4, we do not rely on correct model specification, and we assume that the data are not identically distributed and may not be independent. As shown in Section 5, our theoretical results naturally lead to convex optimization problems for selecting training data points in a sequential design. In Section 6, we present simulations on synthetic data, illustrating our algorithms and comparing them favorably to usual active learning schemes.

2 Generalized linear models

Given data $x \in \mathbb{R}^d$ and targets $y$ in a set $\mathcal{Y}$, we consider the problem of modeling the conditional probability $p(y|x)$ through a generalized linear model (GLIM) [10]. We assume that we are given an exponential family adapted to our prediction task, of the form $p(y|\eta) = \exp(\eta^\top T(y) - \psi(\eta))$, where $T(y)$ is a $k$-dimensional vector of sufficient statistics, $\eta \in \mathbb{R}^k$ is a vector of natural parameters, and $\psi(\eta)$ is the convex log-partition function.
We then consider the generalized linear model defined as $p(y|x, \theta) = \exp(\mathrm{tr}(\theta^\top x T(y)^\top) - \psi(\theta^\top x))$, where $\theta \in \Theta \subset \mathbb{R}^{d \times k}$. The framework of GLIMs is general enough to accommodate many supervised learning tasks [10], in particular:

• Binary classification: the Bernoulli distribution leads to logistic regression, with $\mathcal{Y} = \{0, 1\}$, $T(y) = y$ and $\psi(\eta) = \log(1 + e^\eta)$.
• $k$-class classification: the multinomial distribution leads to softmax regression, with $\mathcal{Y} = \{y \in \{0,1\}^k, \sum_{i=1}^k y_i = 1\}$, $T(y) = y$ and $\psi(\eta) = \log(\sum_{i=1}^k e^{\eta_i})$.
• Regression: the normal distribution leads to $\mathcal{Y} = \mathbb{R}$, $T(y) = (y, -\frac{1}{2}y^2)^\top \in \mathbb{R}^2$, and $\psi(\eta_1, \eta_2) = -\frac{1}{2}\log\eta_2 + \frac{1}{2}\log 2\pi + \frac{\eta_1^2}{2\eta_2}$. When both $\eta_1$ and $\eta_2$ depend linearly on $x$, we have a heteroscedastic model, while if $\eta_2$ is constant for all $x$, we obtain homoscedastic regression (constant noise variance).

Maximum likelihood estimation. We assume that we are given independent and identically distributed (i.i.d.) data sampled from the distribution $p_0(x, y) = p_0(x)p_0(y|x)$. The maximum likelihood population estimator $\theta_0$ is defined as the minimizer of the expectation under $p_0$ of the negative log-likelihood $\ell(y, x, \theta) = -\mathrm{tr}(\theta^\top x T(y)^\top) + \psi(\theta^\top x)$. The function $\ell(y, x, \theta)$ is convex in $\theta$, and by taking derivatives and using the classical relationship between the derivative of the log-partition function and the expected sufficient statistics [10], the population maximum likelihood estimate is defined by:

$$E_{p_0(x,y)} \nabla\ell(y, x, \theta_0) = E_{p_0(x)}\left[ x\,\big(E_{p(y|x,\theta_0)} T(y) - E_{p_0(y|x)} T(y)\big)^\top \right] = 0. \qquad (1)$$

Given i.i.d. data $(x_i, y_i)$, $i = 1, \dots, n$, we use the penalized maximum likelihood estimator, which minimizes $\sum_{i=1}^n \ell(y_i, x_i, \theta) + \frac{\lambda}{2} \mathrm{tr}\,\theta^\top\theta$. The minimization is performed by Newton's method [14].

Model specification. A GLIM is said to be well-specified if there exists $\theta \in \mathbb{R}^{d \times k}$ such that for all $x \in \mathbb{R}^d$, $E_{p(y|x,\theta)} T(y) = E_{p_0(y|x)} T(y)$. A sufficient condition for correct specification is that there exists $\theta \in \mathbb{R}^{d \times k}$ such that for all $x \in \mathbb{R}^d$, $y \in \mathcal{Y}$, $p(y|x, \theta) = p_0(y|x)$.
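For the Bernoulli case, the negative log-likelihood and its gradient take the simple closed form that the rest of the analysis builds on. A minimal sketch, vectorized over a small dataset (function names and the toy data are ours):

```python
import numpy as np

def logistic_nll(theta, X, y):
    """Negative log-likelihood of the Bernoulli GLIM (logistic regression):
    l(y, x, theta) = -y * (theta^T x) + log(1 + exp(theta^T x)),
    summed over the dataset."""
    eta = X @ theta
    return float(np.sum(np.logaddexp(0.0, eta) - y * eta))

def logistic_grad(theta, X, y):
    """Gradient of the summed loss: X^T (sigmoid(X theta) - y)."""
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return X.T @ (p - y)

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 1.0, 1.0])
theta = np.zeros(2)
g = logistic_grad(theta, X, y)
# At theta = 0 the sigmoid is 0.5 everywhere, so the gradient is X^T (0.5 - y)
assert np.allclose(g, X.T @ (0.5 - y))
assert abs(logistic_nll(theta, X, y) - 3 * np.log(2)) < 1e-12
```

The `logaddexp` form of $\log(1 + e^\eta)$ avoids overflow for large natural parameters, which matters once Newton iterations start producing large $\theta^\top x$.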
This sufficient condition is in fact also necessary for the Bernoulli and multinomial exponential families, but not, for example, for the normal distribution. In practice, the model is often misspecified, and it is thus important to account for potential misspecification when deriving asymptotic expansions.

Kernels. The theoretical results of this paper mainly focus on generalized linear models; however, they can be readily generalized to nonlinear settings by using Mercer kernels [15], for example leading to kernel logistic regression or kernel ridge regression. When the data are given by a kernel matrix, we can use the incomplete Cholesky decomposition [16] to find an approximate basis of the feature space, on which the usual linear methods can be applied. Note that our asymptotic results do not hold when the number of parameters may grow with the data (which is the case for kernels such as the Gaussian kernel). However, our dimensionality reduction procedure uses a nonparametric method on the entire (usually large) training dataset, and we then consider a finite-dimensional problem on a much smaller sample. If the whole training dataset is large enough, then the dimension reduction procedure may be considered deterministic and our criteria may apply.

3 Active learning set-up

We consider the following "pool-based" active learning scenario: we have a large set of i.i.d. data points $x_i \in \mathbb{R}^d$, $i = 1, \dots, m$, sampled from $p_0(x)$. The goal of active learning is to select the points to label, i.e., the points for which the corresponding $y_i$ will be observed. We assume that given $x_i$, $i = 1, \dots, n$, the targets $y_i$, $i = 1, \dots, n$, are independent and sampled from the corresponding conditional distribution $p_0(y_i|x_i)$. This active learning set-up is well studied and appears naturally in many applications where the input distribution $p_0(x)$ is only known through i.i.d. samples [5, 17]. For alternative scenarios where the density $p_0(x)$ is known, see e.g. [18, 19, 20].
More precisely, we assume that the points $x_i$ are selected sequentially, and we let $q_i(x_i|x_1,\dots,x_{i-1})$ denote the sampling distribution of $x_i$ given the previously observed points. In situations where the data are not sampled from the testing distribution, it has proved advantageous to consider likelihood weighting techniques [13, 19], and we thus consider weights $w_i = w_i(x_i|x_1,\dots,x_{i-1})$. We let $\hat\theta_n$ denote the weighted penalized ML estimator, defined as the minimizer with respect to $\theta$ of

$\sum_{i=1}^n w_i \ell(y_i, x_i, \theta) + \frac{\lambda}{2}\mathrm{tr}\,\theta^\top\theta.$ (2)

In this paper, we work with two different assumptions regarding the sequential sampling distributions: (1) the variables $x_i$ are independent, i.e., $q_i(x_i|x_1,\dots,x_{i-1}) = q_i(x_i)$; (2) the variable $x_i$ depends on $x_1,\dots,x_{i-1}$ only through the current empirical ML estimator $\hat\theta_i$, i.e., $q_i(x_i|x_1,\dots,x_{i-1}) = q(x_i|\hat\theta_i)$, where $q(x_i|\theta)$ is a pre-specified sampling distribution. The first assumption is not realistic, but readily leads to asymptotic expansions. The second assumption is more realistic, as most heuristic schemes for sequential active learning satisfy it. It turns out that, under certain assumptions, the asymptotic expansions of the expected generalization performance are identical for both sets of assumptions.

4 Asymptotic expansions

In this section, we derive the asymptotic expansions that will lead to active learning algorithms in Section 5. Throughout this section, we assume that $p_0(x)$ has a compact support $K$ and a twice differentiable density with respect to the Lebesgue measure, and that all sampling distributions have compact support included in that of $p_0(x)$ and twice differentiable densities. We first make the assumption that the variables $x_i$ are independent, i.e., we have sampling distributions $q_i(x_i)$ and weights $w_i(x_i)$, both measurable, such that $w_i(x_i) > 0$ for all $x_i \in K$. In Section 4.4, we extend some of our results to the dependent case.
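For the homoscedastic normal GLIM, the weighted penalized estimator of Eq. (2) reduces to weighted ridge regression, which has a closed form in one dimension. The sketch below (with made-up data and weights, not from the paper) shows that a reweighted fit under a shifted sampling distribution still recovers roughly the same slope when the model is well-specified.

```python
def weighted_ridge_1d(xs, ys, ws, lam):
    # minimizes sum_i w_i (y_i - theta * x_i)^2 / 2 + (lam/2) theta^2 in closed form
    num = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    den = sum(w * x * x for w, x in zip(ws, xs)) + lam
    return num / den

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]                      # made-up data, roughly y = 2x
theta_unit = weighted_ridge_1d(xs, ys, [1.0, 1.0, 1.0, 1.0], lam=0.0)
theta_shift = weighted_ridge_1d(xs, ys, [4.0, 2.0, 1.0, 0.5], lam=0.0)
```

Both estimates land near the true slope 2; under misspecification the two would generally disagree, which is exactly the bias issue analyzed in Section 4.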
4.1 Bias and variance of the ML estimator

The following proposition is a simple extension, to non-identically distributed observations, of classical results on maximum likelihood for misspecified generalized linear models [21, 13]. We let $\mathbb{E}_D$ and $\mathrm{var}_D$ denote the expectation and variance with respect to the data $D = \{(x_i, y_i),\ i = 1,\dots,n\}$.

Proposition 1. Let $\theta_n$ denote the minimizer of $\sum_{i=1}^n \mathbb{E}_{q_i(x_i)p_0(y_i|x_i)} w_i(x_i)\,\ell(y_i, x_i, \theta)$. If (a) the weight functions $w_n$ and the sampling densities $q_n$ are pointwise strictly positive and such that $w_n(x)q_n(x)$ converges in the $L^\infty$-norm, and (b) $\mathbb{E}_{q_n(x)} w_n^2(x)$ is bounded, then $\hat\theta_n - \theta_n$ converges to zero in probability and we have

$\mathbb{E}_D \hat\theta_n = \theta_n + O(n^{-1})$ and $\mathrm{var}_D \hat\theta_n = \frac{1}{n} J_n^{-1} I_n J_n^{-1} + O(n^{-2})$ (3)

where $J_n = \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{q_i(x)} w_i(x)\nabla^2\ell(x, \theta_n)$ can be consistently estimated by $\hat J_n = \frac{1}{n}\sum_{i=1}^n w_i h_i$, and $I_n = \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{q_i(x)p_0(y|x)} w_i(x)^2 \nabla\ell(y,x,\theta_n)\nabla\ell(y,x,\theta_n)^\top$ can be consistently estimated by $\hat I_n = \frac{1}{n}\sum_{i=1}^n w_i^2 g_i g_i^\top$, where $g_i = \nabla\ell(y_i, x_i, \hat\theta_n)$ and $h_i = \nabla^2\ell(x_i, \hat\theta_n)$.

From Proposition 1, it is worth noting that in general $\theta_n$ will not converge to the population maximum likelihood estimate $\theta_0$; i.e., using a sampling distribution different from $p_0(x)$ may introduce a bias in estimating $\theta_0$ that does not vanish asymptotically. Thus, active learning requires ensuring (a) that our estimators have low bias and variance in estimating $\theta_n$, and (b) that $\theta_n$ does actually converge to $\theta_0$. This double objective is taken care of by our estimates of generalization performance in Propositions 2 and 3. There are two situations, however, where $\theta_n$ is equal to $\theta_0$. First, if the model is well-specified, then whatever the sampling distributions are, $\theta_n$ is the population ML estimate (a simple consequence of the fact that $\mathbb{E}_{p(y|x,\theta_0)}T(y) = \mathbb{E}_{p_0(y|x)}T(y)$ for all $x$ implies that, for all $q(x)$, $\mathbb{E}_{q(x)p_0(y|x)}\nabla\ell(y,x,\theta_0) = \mathbb{E}_{q(x)}\big[x(\mathbb{E}_{p(y|x,\theta_0)}T(y) - \mathbb{E}_{p_0(y|x)}T(y))^\top\big] = 0$).
Second, when $w_n(x) = p_0(x)/q_n(x)$, $\theta_n$ is also equal to $\theta_0$; we refer to this weighting scheme as the unbiased reweighting scheme, which was used by [19] in the context of active learning. We refer to the weights $w_n^u = p_0(x_n)/q_n(x_n)$ as the importance weights. Note, however, that restricting ourselves to such unbiased estimators, as done in [19], might not be optimal, because they may lead to higher variance [13], in particular due to the potentially high variance of the importance weights (see simulations in Section 6).

4.2 Expected generalization performance

We let $L^u(\theta) = \mathbb{E}_{p_0(x)p_0(y|x)}\ell(y,x,\theta)$ denote the generalization performance¹ of the parameter $\theta$. We now provide an asymptotically unbiased estimator of the expected generalization error of $\hat\theta_n$, which generalizes the Akaike information criterion [22] (for a proof, see [23]):

Proposition 2. In addition to the assumptions of Proposition 1, we assume that $\mathbb{E}_{q_n(x)}(p_0(x)/q_n(x))^2$ is bounded. Let

$\hat G = \frac{1}{n}\sum_{i=1}^n w_i^u\, \ell(y_i, x_i, \hat\theta_n) + \frac{1}{n}\Big(\frac{1}{n}\sum_{i=1}^n w_i^u w_i\, g_i^\top \hat J_n^{-1} g_i\Big)$, (4)

where $w_i^u = p_0(x_i)/q_i(x_i)$. Then $\hat G$ is an asymptotically unbiased estimator of $\mathbb{E}_D L^u(\hat\theta_n)$, i.e., $\mathbb{E}_D \hat G = \mathbb{E}_D L^u(\hat\theta_n) + O(n^{-2})$.

The criterion $\hat G$ is a sum of two terms: the second term corresponds to a variance term and will converge to zero in probability at rate $O(n^{-1})$; the first term, however, which corresponds to a selection bias induced by a specific choice of sampling distributions, will not always converge to the minimum possible value $L^u(\theta_0)$. Thus, in order to ensure that our active learning methods are consistent, we have to ensure that this first term converges to its minimum value. One simple way to achieve this is to always optimize our weights so that the estimate $\hat G$ is smaller than the estimate for the unbiased reweighting scheme (see Section 5).
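The criterion $\hat G$ of Eq. (4) only needs per-sample losses, gradients, Hessians and the two sets of weights. A minimal sketch for a scalar parameter, where all these quantities are numbers (the per-sample values below are made up for illustration):

```python
def G_hat(losses, grads, hess, w, wu):
    # Eq. (4) for a scalar parameter: weighted empirical loss plus a
    # variance-correction term built from gradients and the estimate J_hat
    n = len(losses)
    J_hat = sum(wi * hi for wi, hi in zip(w, hess)) / n
    fit = sum(wui * li for wui, li in zip(wu, losses)) / n
    correction = sum(wui * wi * gi * gi / J_hat
                     for wui, wi, gi in zip(wu, w, grads)) / (n * n)
    return fit + correction

# made-up per-sample losses, gradients, Hessians and weights (n = 4)
losses = [0.7, 0.4, 0.9, 0.5]
grads = [0.3, -0.2, 0.5, -0.1]
hess = [0.25, 0.20, 0.24, 0.22]
w = [1.0, 1.0, 1.0, 1.0]
wu = [1.2, 0.8, 1.5, 0.5]
g = G_hat(losses, grads, hess, w, wu)
```

The correction term carries the extra factor $1/n$, matching the $O(n^{-1})$ rate stated after Proposition 2.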
4.3 Expected performance gain

We now consider the following situation: we are given the first $n$ data points $(x_i, y_i)$ and the current estimate $\hat\theta_n$, the gradients $g_i = \nabla\ell(y_i, x_i, \hat\theta_n)$, the Hessians $h_i = \nabla^2\ell(x_i, \hat\theta_n)$ and the third derivatives $T_i = \nabla^3\ell(x_i, \hat\theta_n)$. We consider the following criterion, which depends on the sampling distribution and weight of the $(n+1)$-th point:

$\hat H(q_{n+1}, w_{n+1}|\alpha, \beta) = \frac{1}{n^3}\Big[\sum_{i=1}^n \alpha_i w_i^u\, w_{n+1}(x_i)\,\frac{q_{n+1}(x_i)}{p_0(x_i)} + \sum_{i=1}^n \beta_i w_i^u\, w_{n+1}(x_i)^2\,\frac{q_{n+1}(x_i)}{p_0(x_i)}\Big]$ (5)

where

$\alpha_i = -(n+1)n\,\tilde g_i^\top \hat J_n A - w_i w_i^u\,\tilde g_i^\top h_i \tilde g_i + w_i^u\,\tilde g_i^\top \hat J_n \tilde g_i - 2\,\tilde g_i^\top B - w_i\,\tilde g_i^\top \hat J_n^u \tilde g_i + T_i[\tilde g_i, C] - 2 w_i\,\tilde g_i^\top h_i A + T_i[A, \tilde g_i, \tilde g_i]$ (6)

$\beta_i = \frac{1}{2}\tilde g_i^\top \hat J_n^u \tilde g_i + A^\top h_i \tilde g_i$ (7)

with $\tilde g_i = \hat J_n^{-1} g_i$, $A = \hat J_n^{-1}\,\frac{1}{n}\sum_{i=1}^n w_i^u g_i$, $B = \sum_{i=1}^n w_i^u w_i h_i \tilde g_i$, $C = \sum_{i=1}^n w_i w_i^u \tilde g_i \tilde g_i^\top$, and $\hat J_n^u = \frac{1}{n}\sum_{i=1}^n w_i^u h_i$.

The following proposition shows that $\hat H(q_{n+1}, w_{n+1}|\alpha, \beta)$ is an estimate of the expected performance gain from choosing a point $x_{n+1}$ according to the distribution $q_{n+1}$ with weight $w_{n+1}$ (marginalizing over $y_{n+1}$), and may be used as an objective function for learning the distribution $q_{n+1}$ and weight $w_{n+1}$ (for a proof, see [23]). In Section 5, we show that if the distributions and weights are properly parameterized, this leads to a convex optimization problem.

Proposition 3. We assume that $\mathbb{E}_{q_n(x)} w_n^2(x)$ and $\mathbb{E}_{q_n(x)}(p_0(x)/q_n(x))^2$ are bounded. Let $\hat\theta_n$ denote the weighted ML estimator obtained from the first $n$ points, and $\hat\theta_{n+1}$ the one-step estimator obtained from the first $n+1$ points, i.e., $\hat\theta_{n+1}$ is obtained by one Newton step from $\hat\theta_n$ [24]. Then the criterion defined in Eq. (5) is such that $\mathbb{E}_D \hat H(q_{n+1}, w_{n+1}) = \mathbb{E}_D L^u(\hat\theta_n) - \mathbb{E}_D L^u(\hat\theta_{n+1}) + O(n^{-3})$, where $\mathbb{E}_D$ denotes the expectation with respect to the first $n+1$ data points and their labels. Moreover, for $n$ large enough, all values of $\beta_i$ are positive.
¹In this paper, we use the negative log-likelihood as a measure of performance, which allows simple asymptotic expansions; the focus of the paper is on the differences between testing and training sampling distributions. The study of potentially different costs for testing and training is beyond the scope of this paper.

Note that many of the terms in Eq. (6) and Eq. (7) are dedicated to weighting schemes for the first $n$ points other than the unbiased reweighting scheme. For the unbiased reweighting scheme, where $w_i = w_i^u$ for $i = 1,\dots,n$, we have $A = 0$ and the equations may be simplified.

4.4 Dependent observations

In this section, we show that under a certain form of weak dependence between the data points $x_i$, $i = 1,\dots,n$, the results presented in Propositions 1 and 2 still hold. For simplicity and brevity, we restrict ourselves to the unbiased reweighting scheme, i.e., $w_n(x_n|x_1,\dots,x_{n-1}) = p_0(x_n)/q_n(x_n|x_1,\dots,x_{n-1})$ for all $n$, and we assume that these weights are uniformly bounded away from zero and infinity. In addition, we only prove our result in the well-specified case, which leads to a simpler argument for the consistency of the estimator. Many sequential active learning schemes select a training data point with a distribution or criterion that depends on the estimate obtained so far (see Section 6 for details). We thus assume that the sampling distribution $q_n$ is of the form $q(x_n|\hat\theta_n)$, where $q(x|\theta)$ is a fixed set of smooth parameterized densities.

Proposition 4 (for a proof, see [23]). Let

$\hat G = \frac{1}{n}\sum_{i=1}^n w_i\, \ell(y_i, x_i, \hat\theta_n) + \frac{1}{n}\Big(\frac{1}{n}\sum_{i=1}^n w_i^2\, g_i^\top \hat J_n^{-1} g_i\Big)$, (8)

where $w_i = w_i^u = p_0(x_i)/q(x_i|\hat\theta_i)$. Then $\hat G$ is an asymptotically unbiased estimator of $\mathbb{E}_D L^u(\hat\theta_n)$, i.e., $\mathbb{E}_D \hat G = \mathbb{E}_D L^u(\hat\theta_n) + O(\log(n)\,n^{-2})$.

The estimator is the same as in Proposition 2. The effect of the dependence is asymptotically negligible and only impacts the result through the presence of an additional $\log(n)$ term.
In the algorithms presented in Section 5, the distribution $q_n$ is obtained as the solution of a convex optimization problem, and thus the previous theorem does not readily apply. However, when $n$ gets large, $q_n$ depends on the previous data points only through the first two derivatives of the objective function of the convex problem, which are empirical averages of certain functions of all currently observed data points; we are currently working out a generalization of Proposition 4 that allows dependence on certain empirical moments and potential misspecification.

5 Algorithms

In Section 4, we derived a criterion $\hat H$ in Eq. (5) that makes it possible to optimize the sampling density of the $(n+1)$-th point, and an estimate $\hat G$ in Eq. (4) and Eq. (8) of the generalization error. Our algorithms are composed of the following three ingredients:

1. These criteria assume that the variance of the importance weights $w_n^u = p_0(x_n)/q_n(x_n)$ is controlled. In order to make sure that the results apply, our algorithms will ensure that this condition is met.

2. The sampling density $q_{n+1}$ will be obtained by minimizing $\hat H(w_{n+1}, q_{n+1}|\alpha, \beta)$ for a certain parameterization of $q_{n+1}$ and $w_{n+1}$. It turns out that these minimization problems are convex, and can thus be solved efficiently, without local minima.

3. Once a new sample has been selected and its label observed, Proposition 4 is used, in a way similar to [13], to search for the best mixture between the current weights $(w_i)$ and the importance weights $(w_i^u)$: we consider weights of the form $w_i^\gamma (w_i^u)^{1-\gamma}$ and perform a grid search on $\gamma$ to find the value for which $\hat G$ in Eq. (4) is minimal.

The main interest of the first and third points is that we obtain a final estimator of $\theta_0$ which is at least provably consistent: indeed, although our criteria are obtained from an assumption of independence, the generalization performance result also holds for "weakly" dependent observations and thus ensures the consistency of our approach.
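The third ingredient can be sketched directly: mix the two weight vectors geometrically and keep the $\gamma$ with the best score. The scoring function below is a hypothetical stand-in for $\hat G$ (here it simply penalizes weight dispersion), and the weight values are made up.

```python
def best_gamma_mix(w, wu, score, grid=11):
    # grid search over gamma in [0, 1] for mixed weights w_i^gamma * (w_i^u)^(1-gamma)
    best = None
    for k in range(grid):
        gamma = k / (grid - 1)
        mixed = [wi ** gamma * wui ** (1.0 - gamma) for wi, wui in zip(w, wu)]
        s = score(mixed)
        if best is None or s < best[0]:
            best = (s, gamma, mixed)
    return best[1], best[2]

def spread(ws):
    # hypothetical stand-in for the G_hat of Eq. (4): penalize weight dispersion
    mean = sum(ws) / len(ws)
    return sum((x - mean) ** 2 for x in ws)

w = [1.0, 1.0, 1.0, 1.0]            # current weights
wu = [2.0, 0.5, 1.5, 0.4]           # importance weights
gamma, mixed = best_gamma_mix(w, wu, spread)
```

With this toy score the search settles on $\gamma = 1$ (pure current weights); with the real $\hat G$ the chosen $\gamma$ trades bias against importance-weight variance.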
Thus, as opposed to most previous active learning heuristics, our estimator will always converge (in probability) to the ML estimator. In Section 6, we show empirically that the usual heuristic schemes do not share this property.

Convex optimization problem. We assume that we have a fixed set of candidate distributions $s_k(x)$ of the form $s_k(x) = p_0(x) r_k(x)$. Note that the multiplicative form of our candidate distributions allows efficient sampling from a pool of samples of $p_0$. We consider distributions $q_{n+1}(x)$ with mixture density of the form $s(x|\eta) = \sum_k \eta_k s_k(x) = p_0(x) r(x)$, where the weights $\eta$ are non-negative and sum to one. The criterion $\hat H(q_{n+1}, w_{n+1}|\alpha, \beta)$ in Eq. (5) is thus a function $H(\eta|\alpha, \beta)$ of $\eta$. We consider two weighting schemes: (a) one with all weights equal to one (the unit weighting scheme), which leads to $H_0(\eta|\alpha,\beta)$, and (b) the unbiased reweighting scheme, where $w_{n+1}(x) = p_0(x)/q_{n+1}(x)$, which leads to $H_1(\eta|\alpha,\beta)$. We have

$H_0(\eta|\alpha,\beta) = \frac{1}{n^3}\sum_k \eta_k \Big(\sum_{i=1}^n (\alpha_i + \beta_i)\, w_i^u\, s_k(x_i)\Big)$, (9)

$H_1(\eta|\alpha,\beta) = \frac{1}{n^3}\Big[\sum_{i=1}^n \alpha_i w_i^u + \sum_{i=1}^n \frac{\beta_i w_i^u}{\sum_k \eta_k s_k(x_i)}\Big]$. (10)

The function $H_0(\eta)$ is linear in $\eta$, while the function $H_1(\eta)$ is the sum of a constant and of positive inverse functions, and is thus convex [14]. Unless natural candidate distributions $s_k(x)$ can be defined for the active learning problem, we use the set of distributions obtained as follows: we perform K-means clustering with a large number $p$ of clusters (e.g., 100 or 200), and then consider functions $r_k(x)$ of the form $r_k(x) = \frac{1}{Z_k} e^{-\alpha_k\|x - \mu_k\|^2}$, where $\alpha_k$ is one element of a given finite set of parameters, and $\mu_k$ is one of the $p$ centroids $y_1,\dots,y_p$ obtained from K-means. We let $\tilde w_i$ denote the number of data points assigned to the centroid $y_i$. We normalize by $Z_k = \sum_{i=1}^p \tilde w_i e^{-\alpha_k\|y_i - \mu_k\|^2} / \sum_{i=1}^p \tilde w_i$. We thus obtain $O(p)$ candidate distributions $r_k(x)$, which, if $p$ is large enough, provide a flexible yet tractable set of mixture distributions.
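The K-means-based construction of the candidate multipliers $r_k$ can be sketched in one dimension as follows. The centroids, assignment counts and widths below are made up; the normalization $Z_k$ makes the count-weighted average of each $r_k$ over the centroids equal to one, mirroring the definition above.

```python
import math

def make_candidates(centroids, counts, alphas):
    # r_k(x) = exp(-alpha_k * |x - mu_k|^2) / Z_k, with Z_k the count-weighted
    # average of the Gaussian bump over the centroids (1-D sketch)
    total = sum(counts)
    cands = []
    for mu in centroids:
        for a in alphas:
            Z = sum(c * math.exp(-a * (y - mu) ** 2)
                    for y, c in zip(centroids, counts)) / total
            cands.append(lambda x, a=a, mu=mu, Z=Z: math.exp(-a * (x - mu) ** 2) / Z)
    return cands

centroids = [-1.0, 1.0]      # two K-means centroids (made up)
counts = [10, 10]            # number of points assigned to each centroid
cands = make_candidates(centroids, counts, alphas=[1.0, 4.0])
# sanity check: the count-weighted average of r_0 over the centroids is 1
avg = sum(c * cands[0](y) for y, c in zip(centroids, counts)) / sum(counts)
```

Each candidate is a multiplier on $p_0(x)$, so sampling from $s_k = p_0 r_k$ only requires reweighting the unlabelled pool.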
One additional element is the constraint on the variance of the importance weights. The variance of $w_{n+1}^u$ can be estimated as $\widehat{\mathrm{var}}\, w_{n+1}^u = \sum_{i=1}^m \frac{\tilde w_i}{r(x_i)} - 1 = \sum_{i=1}^m \frac{\tilde w_i}{\sum_k \eta_k r_k(x_i)} - 1 = V(\eta)$, which is convex in $\eta$. Thus constraining the variance of the new weights leads to a convex optimization problem, with convex objective and convex constraints, which can be solved efficiently by the log-barrier method [14], with cubic complexity in the number of candidate distributions.

Algorithms. We have three versions of our algorithm: one with unit weights (referred to as "no weight"), which optimizes $H_0(\eta|\alpha,\beta)$ at each iteration; one with the unbiased reweighting scheme, which optimizes $H_1(\eta|\alpha,\beta)$ (referred to as "unbiased"); and one which does both and chooses the better of the two, as measured by $\hat H$ (referred to as "full"). In the initialization phase, K-means is run to generate the candidate distributions that will be used throughout the sampling of new points. Then, in order to select the new training data point $x_{n+1}$, the scores $\alpha$ and $\beta$ are computed from Eq. (6) and Eq. (7); the appropriate cost function, $H_0(\eta|\alpha,\beta)$, $H_1(\eta|\alpha,\beta)$ (or both), is minimized; and once $\eta$ is obtained, we sample $x_{n+1}$ from the corresponding distribution and compute the weights $w_{n+1}$ and $w_{n+1}^u$. As described earlier, we then find $\gamma$ such that $\hat G\big((w_i^\gamma (w_i^u)^{1-\gamma})_i\big)$ in Eq. (4) is minimized, and update the weights accordingly.

Regularization parameter. In the active learning set-up, the number of samples used for learning varies a lot, so it is not possible to use a constant regularization parameter. We thus learn it by cross-validation every 10 new samples.

6 Simulation experiments

In this section, we present simulation experiments on synthetic examples (sampled from Gaussian mixtures in two dimensions), for the tasks of binary and 3-class classification. We compare our algorithms to the following three active learning frameworks.
In the maximum uncertainty framework (referred to as "maxunc"), the next training data point is selected such that the entropy of $p(y|x, \hat\theta_n)$ is maximal [17]. In the maximum variance reduction framework [25, 9] (referred to as "varred"), the next point is selected so that the variance of the resulting estimator has the lowest determinant, which is equivalent to finding $x$ such that $\mathrm{tr}\,\nabla^2\ell(x, \hat\theta_n)\,\hat J_n^{-1}$ is minimum. Note that this criterion has a theoretical justification under correct model specification. In the minimum prediction error framework (referred to as "minpred"), the next point is selected so that it reduces the expected log-loss the most, with the current model used as an estimate of the unknown conditional probability $p_0(y|x)$ [5, 8].

Sampling densities. In Figure 1, we look at the limit selected sampling densities, i.e., we assume that a large number of points has been sampled, and we look at the criterion $\hat H$ in Eq. (5). We show the density obtained from the unbiased reweighting scheme (middle of Figure 1), as well as the function $\gamma(x)$ (right of Figure 1) such that, for the unit weighting scheme, $\hat H(q_{n+1}(x), 1) = \int \gamma(x)\, q_{n+1}(x)\, dx$.

Figure 1: Proposal distributions: (Left) density $p_0(x)$ with the two different classes (red and blue); (Middle) best density with unbiased reweighting; (Right) function $\gamma(x)$ such that $\hat H(q_{n+1}(x), 1) = \int \gamma(x)\, q_{n+1}(x)\, dx$ (see text for details).

Figure 2: Error rates vs. number of samples, averaged over 10 replications sampled from the same distribution as in Figure 1: (Left) random sampling and active learning "full", with standard deviations; (Middle) comparison of the two schemes "unbiased" and "no weight"; (Right) comparison with other methods.
In this framework, minimizing the cost without any constraint leads to a Dirac at the maximum of $\gamma(x)$, while minimizing with a constraint on the variance of the corresponding importance weights selects points with high values of $\gamma(x)$. We also show the line $\theta_0^\top x = 0$. From Figure 1, we see (a) that the unit weighting scheme tends to be more selective (i.e., finer-grained) than the unbiased scheme, and (b) that the modes of the optimal densities are close to the maximum uncertainty hyperplane, but some parts of this hyperplane in fact lead to negative cost gains (e.g., the part of the hyperplane crossing the central blob), hinting at the potentially bad behavior of the maximum uncertainty framework.

Comparison with other algorithms. In Figure 2 and Figure 3, we compare the performance of our active learning algorithms. In the left of Figure 2, we see that our active learning framework not only performs better on average but also leads to smaller variance. In the middle of Figure 2, we compare the two schemes "no weight" and "unbiased", showing the superiority of the unit weighting scheme and the significance of our asymptotic results in Propositions 2 and 3, which extend the unbiased framework of [13]. In the right of Figure 2 and in Figure 3, we compare with the other usual heuristic schemes: our "full" algorithm outperforms the other schemes; moreover, in those experiments, the other schemes perform worse than random sampling and converge to the wrong estimator, a bad situation that our algorithms provably avoid.

7 Conclusion

We have presented a theoretical asymptotic analysis of active learning for generalized linear models, under realistic sampling assumptions. From this analysis, we obtain convex criteria which can be optimized to provide algorithms for online optimization of the sampling distributions. This work naturally leads to several extensions.
First, our framework is not limited to generalized linear models, but can be readily extended to any convex differentiable M-estimator [24]. Second, it seems advantageous to combine our active learning analysis with semi-supervised learning frameworks, in particular ones based on data-dependent regularization [26]. Finally, we are currently investigating applications to large-scale image retrieval tasks, where unlabelled data are abundant but labelled data are scarce.

Figure 3: Error rates vs. number of samples, averaged over 10 replications, for 3 classes: (left) data; (right) comparison of methods.

References

[1] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. J. Art. Intel. Res., 4:129–145, 1996.
[2] V. V. Fedorov. Theory of Optimal Experiments. Academic Press, 1972.
[3] P. Chaudhuri and P. A. Mykland. On efficient designing of nonlinear experiments. Stat. Sin., 5:421–440, 1995.
[4] S. Dasgupta. Coarse sample complexity bounds for active learning. In Adv. NIPS 18, 2006.
[5] N. Roy and A. McCallum. Toward optimal active learning through sampling estimation of error reduction. In Proc. ICML, 2001.
[6] S. Tong and E. Chang. Support vector machine active learning for image retrieval. In Proc. ACM Multimedia, 2001.
[7] M. Warmuth, G. Rätsch, M. Mathieson, J. Liao, and C. Lemmen. Active learning in the drug discovery process. In Adv. NIPS 14, 2002.
[8] X. Zhu, J. Lafferty, and Z. Ghahramani. Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions. In Proc. ICML, 2003.
[9] A. I. Schein. Active Learning for Logistic Regression. Ph.D. thesis, CIS Dept., University of Pennsylvania, 2005.
[10] P. McCullagh and J. A. Nelder. Generalized Linear Models. Chapman and Hall, 1989.
[11] T. Zhang and F. J. Oles. A probability analysis on the value of unlabeled data for classification problems. In Proc. ICML, 2000.
[12] O. Chapelle.
Active learning for Parzen window classifiers. In Proc. AISTATS, 2005.
[13] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. J. Stat. Plan. Inf., 90:227–244, 2000.
[14] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2003.
[15] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge Univ. Press, 2004.
[16] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. J. Mach. Learn. Res., 2:243–264, 2001.
[17] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. In Proc. ICML, 2000.
[18] K. Fukumizu. Active learning in multilayer perceptrons. In Adv. NIPS 8, 1996.
[19] T. Kanamori and H. Shimodaira. Active learning algorithm using the maximum weighted log-likelihood estimator. J. Stat. Plan. Inf., 116:149–162, 2003.
[20] T. Kanamori. Statistical asymptotic theory of active learning. Ann. Inst. Stat. Math., 54(3):459–475, 2002.
[21] H. White. Maximum likelihood estimation of misspecified models. Econometrica, 50(1):1–26, 1982.
[22] H. Akaike. A new look at the statistical model identification. IEEE Trans. Aut. Cont., 19:716–722, 1974.
[23] F. R. Bach. Active learning for misspecified generalized linear models. Technical Report N15/06/MM, Ecole des Mines de Paris, 2006.
[24] A. W. van der Vaart. Asymptotic Statistics. Cambridge Univ. Press, 1998.
[25] D. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590–604, 1992.
[26] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In Adv. NIPS 17, 2005.
2006
Tighter PAC-Bayes Bounds

Amiran Ambroladze, Dep. of Mathematics, Lund University/LTH, Box 118, S-221 00 Lund, Sweden. amiran.ambroladze@math.lth.se
Emilio Parrado-Hernández, Dep. of Signal Processing and Communications, University Carlos III of Madrid, Leganés, 28911, Spain. emipar@tsc.uc3m.es
John Shawe-Taylor, Dep. of Computer Science, University College London, Gower Street, London WC1E 6BT, UK. jst@cs.ucl.ac.uk

Abstract

This paper proposes a PAC-Bayes bound to measure the performance of Support Vector Machine (SVM) classifiers. The bound is based on learning a prior over the distribution of classifiers with a part of the training samples. Experimental work shows that this bound is tighter than the original PAC-Bayes bound, resulting in an enhancement of the predictive capabilities of the PAC-Bayes bound. In addition, it is shown that using this bound to estimate the hyperparameters of the classifier compares favourably with cross-validation in terms of model accuracy, while saving a large amount of computation.

1 Introduction

Support vector machines (SVMs) implement linear classifiers in a high-dimensional feature space, using the kernel trick to enable a dual representation and efficient computation. The danger of overfitting in such high-dimensional spaces is countered by maximising the margin of the classifier on the training examples. For this reason there has been considerable interest in bounds on the generalisation error in terms of the margin. Early bounds relied on covering number computations [7], while later bounds have considered Rademacher complexity. The tightest bounds for practical applications appear to be the PAC-Bayes bounds [4, 5]. In particular, the form given in [3] is especially attractive for margin classifiers such as the SVM. PAC-Bayesian bounds are also available for other machine learning models, such as Gaussian processes [6].
The aim of this paper is to consider a refinement of the PAC-Bayes approach and to investigate whether it can improve on the original PAC-Bayes bound while upholding its ability to deliver reliable model selection. The standard PAC-Bayes bound uses a Gaussian prior centred at the origin in weight space. The key to the new bound is to use part of the training set to compute a more informative prior, and then to compute the bound on the remainder of the examples relative to this prior. The bounds are tested experimentally in several classification tasks, including model selection, on common benchmark datasets.

The rest of the document is organised as follows. Section 2 briefly reviews the PAC-Bayes bound for SVMs obtained in [3]. The new bound, obtained by refining the prior, is presented in Section 3. The experimental work, included in Section 4, compares the tightness of the new bound with that of the original one and illustrates its usability for model selection. Finally, the main conclusions of this work are outlined in Section 5.

2 PAC-Bayes Bound

This section is devoted to a brief review of the PAC-Bayes bound theorem of [3]. Let us consider a distribution $D$ of patterns $x$ lying in a certain input space $X$, with their corresponding output labels $y$, $y \in \{-1, 1\}$. In addition, let us also consider a distribution $Q$ over the classifiers $c$. For every classifier $c$, the following two error measures are defined:

Definition (True error). The true error $c_D$ of a classifier $c$ is defined as the probability of misclassifying a pattern-label pair $(x, y)$ selected at random from $D$:

$c_D \equiv \Pr_{(x,y)\sim D}(c(x) \neq y)$

Definition (Empirical error). The empirical error $\hat c_S$ of a classifier $c$ on a sample $S$ of size $m$ is defined as the rate of errors on the set $S$:

$\hat c_S \equiv \Pr_{(x,y)\sim S}(c(x) \neq y) = \frac{1}{m}\sum_{i=1}^m I(c(x_i) \neq y_i)$

where $I(\cdot)$ is equal to 1 if its argument is true and 0 otherwise.
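The empirical error is just the indicator average above; a minimal sketch with a made-up one-dimensional sample and a threshold classifier:

```python
def empirical_error(c, sample):
    # rate of errors of classifier c on the sample S (the indicator average above)
    return sum(1 for x, y in sample if c(x) != y) / len(sample)

# made-up 1-D sample with labels in {-1, +1} and a sign-threshold classifier
sample = [(-2.0, -1), (-1.0, -1), (0.5, 1), (1.5, 1), (-0.2, 1)]
c = lambda x: 1 if x > 0 else -1
err = empirical_error(c, sample)
```

Here one of the five points is misclassified, giving an empirical error of 0.2.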
Now we can define two error measures on the distribution of classifiers: the true error, $Q_D \equiv \mathbb{E}_{c\sim Q}\, c_D$, as the probability of misclassifying an instance $x$ chosen from $D$ with a classifier $c$ chosen according to $Q$; and the empirical error, $\hat Q_S \equiv \mathbb{E}_{c\sim Q}\, \hat c_S$, as the probability that a classifier $c$ chosen according to $Q$ misclassifies an instance $x$ chosen from a sample $S$. For these two quantities we can derive the PAC-Bayes bound on the true error of the distribution of classifiers:

Theorem 2.1 (PAC-Bayes Bound). For all prior distributions $P(c)$ over the classifiers $c$, and for any $\delta \in (0, 1]$,

$\Pr_{S\sim D^m}\Big(\forall Q(c):\ \mathrm{KL}(\hat Q_S \| Q_D) \le \frac{\mathrm{KL}(Q(c)\|P(c)) + \ln\frac{m+1}{\delta}}{m}\Big) \ge 1 - \delta$,

where KL is the Kullback-Leibler divergence, $\mathrm{KL}(q\|p) = q\ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p}$ for Bernoulli parameters, and $\mathrm{KL}(Q(c)\|P(c)) = \mathbb{E}_{c\sim Q}\ln\frac{Q(c)}{P(c)}$.

The proof of the theorem can be found in [3]. This bound can be particularised to linear classifiers in the following way. The $m$ training patterns define a linear classifier that can be represented by the following equation¹:

$c(x) = \mathrm{sign}(w^\top \phi(x))$ (1)

where $\phi(x)$ is a nonlinear projection to a certain feature space where the linear classification actually takes place, and $w$ is a vector in that feature space that determines the separating hyperplane. For any vector $w$ we can define a stochastic classifier in the following way: we choose the distribution $Q = Q(w, \mu)$ to be a spherical Gaussian with identity covariance matrix centred on the direction given by $w$, at a distance $\mu$ from the origin. Moreover, we can choose the prior $P(c)$ to be a spherical Gaussian with identity covariance matrix centred on the origin. Then, for classifiers of the form in equation (1), performance can be bounded by

¹We consider here unbiased classifiers, i.e., with $b = 0$.

Corollary 2.2 (PAC-Bayes Bound for margin classifiers [3]). For all distributions $D$, for all classifiers given by $w$ and $\mu > 0$, for all $\delta \in (0, 1]$, we have

$\Pr\Big(\mathrm{KL}(\hat Q_S(w,\mu)\,\|\,Q_D(w,\mu)) \le \frac{\frac{\mu^2}{2} + \ln\frac{m+1}{\delta}}{m}\Big) \ge 1 - \delta.$
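Reading a bound on $Q_D$ off Theorem 2.1 requires inverting the binary KL divergence: given $\hat Q_S$ and the right-hand side $B$, one seeks the largest $q$ with $\mathrm{KL}(\hat Q_S\|q) \le B$. Since the divergence is increasing in $q$ above $\hat Q_S$, bisection suffices; a sketch (the numbers are made up):

```python
import math

def binary_kl(q_hat, q):
    # KL divergence between Bernoulli(q_hat) and Bernoulli(q)
    eps = 1e-12
    q_hat = min(max(q_hat, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return q_hat * math.log(q_hat / q) + (1 - q_hat) * math.log((1 - q_hat) / (1 - q))

def kl_inverse(q_hat, B, iters=60):
    # largest q in [q_hat, 1] with KL(q_hat || q) <= B, found by bisection
    lo, hi = q_hat, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if binary_kl(q_hat, mid) <= B:
            lo = mid
        else:
            hi = mid
    return lo

# example: empirical stochastic error 0.1 and complexity term B = 0.05 (made up)
q_bound = kl_inverse(0.1, 0.05)
```

The returned value is the tightest bound on the true stochastic error $Q_D$ compatible with the theorem for these inputs.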
It can be shown (see [3]) that

$\hat Q_S(w,\mu) = \mathbb{E}_m\big[\tilde F(\mu\,\gamma(x,y))\big]$ (2)

where $\mathbb{E}_m$ is the average over the $m$ training examples, $\gamma(x,y)$ is the normalised margin of a training pattern,

$\gamma(x,y) = \frac{y\, w^\top\phi(x)}{\|\phi(x)\|\,\|w\|}$ (3)

and $\tilde F = 1 - F$, where $F$ is the cumulative normal distribution,

$F(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt.$ (4)

Note that the SVM is a thresholded linear classifier of the form (1), computed by means of the kernel trick [2]. The generalisation error of such a classifier can be bounded by at most twice the true (stochastic) error $Q_D(w,\mu)$ of Corollary 2.2 (see [4]):

$\Pr_{(x,y)\sim D}\big(\mathrm{sign}(w^\top\phi(x)) \neq y\big) \le 2\, Q_D(w,\mu)$ for all $\mu$.

3 Choosing a prior for the PAC-Bayes Bound

Our first contribution is motivated by the fact that the PAC-Bayes bound allows us to choose the prior distribution $P(c)$. In the standard application of the bound this is chosen to be a Gaussian centred at the origin. We now consider learning a different prior, based on training an SVM on a subset $R$ of the training set comprising $r$ training patterns and labels. In the experiments this is taken to be a random subset, but for simplicity of presentation we will assume these to be the last $r$ examples, $\{x_k, y_k\}_{k=m-r+1}^m$, in the description below. With these $r$ examples we can determine an SVM classifier $w_r$ and form a prior $P(w|w_r)$ consisting of a Gaussian distribution with identity covariance matrix centred on $w_r$. The introduction of this prior $P(w|w_r)$ in Theorem 2.1 results in the following new bound.

Corollary 3.1 (Single-Prior PAC-Bayes Bound for margin classifiers). Let us consider a prior on the distribution of classifiers consisting of a spherical Gaussian with identity covariance, centred along the direction given by $w_r$ at a distance $\eta$ from the origin. Then, for all distributions $D$, for all classifiers $w_m$ and $\mu > 0$, for all $\delta \in (0, 1]$, we have

$\Pr_{S\sim D}\Big(\mathrm{KL}(\hat Q_{S\setminus R}(w_m,\mu)\,\|\,Q_D(w_m,\mu)) \le \frac{\frac{1}{2}\|\eta w_r - \mu w_m\|^2 + \ln\frac{m-r+1}{\delta}}{m-r}\Big)$
$\ge 1 - \delta$,

where $\hat Q_{S\setminus R}$ is a stochastic measure of the error of the classifier on the $m - r$ samples not used to learn the prior. This stochastic error is computed as indicated in equation (2), averaged over $S\setminus R$.

Proof. Since we set aside $r$ instances to learn the prior, the actual size of the training set to which we apply the bound is $m - r$. In addition, the stochastic error must be computed only on the instances not used to learn the prior, i.e., the subset $S\setminus R$. The KL divergence between prior and posterior is computed as follows:

$\mathrm{KL}(Q(w)\|P(w)) = \mathbb{E}_{w\sim Q}\ln\frac{Q(w)}{P(w)} = \mathbb{E}_{w\sim Q}\ln\frac{\exp\big(-\frac{1}{2}(w-\mu w_m)^\top(w-\mu w_m)\big)}{\exp\big(-\frac{1}{2}(w-\eta w_r)^\top(w-\eta w_r)\big)}$
$= \mathbb{E}_{w\sim Q}\big[-\tfrac{1}{2}(w-\mu w_m)^\top(w-\mu w_m) + \tfrac{1}{2}(w-\eta w_r)^\top(w-\eta w_r)\big]$
$= \mathbb{E}_{w\sim Q}\big[\mu w_m^\top w\big] - \tfrac{1}{2}\mu^2 w_m^\top w_m - \mathbb{E}_{w\sim Q}\big[\eta w_r^\top w\big] + \tfrac{1}{2}\eta^2 w_r^\top w_r$

Taking expectations using $\mathbb{E}_{w\sim Q}\, w = \mu w_m$, we arrive at $\tfrac{1}{2}\|\mu w_m - \eta w_r\|^2$.

Intuitively, if the selection of the prior is appropriate, the bound can be tighter than that of Corollary 2.2 applied to the SVM weight vector trained on the whole training set. It is perhaps worth stressing that the bound holds for all $w_m$, and so can be applied to the SVM trained on the whole set. This might at first appear to be 'cheating', but the critical point is that the bound is evaluated on the set $S\setminus R$, which is not involved in generating the prior. The experimental work illustrates how this bound can in fact be tighter than the standard PAC-Bayes bound. Moreover, the selection of the prior may be further refined in exchange for a very small increase in the penalty term. This can be achieved with the application of the following result.

Theorem 3.2 (Bound for several priors). Let $\{P_j(c)\}_{j=1}^J$ be a set of possible priors that can be selected with positive weights $\{\pi_j\}_{j=1}^J$ such that $\sum_{j=1}^J \pi_j = 1$. Then, for all priors $P(c) \in \{P_j(c)\}_{j=1}^J$, for all posterior distributions $Q(c)$, for all $\delta \in (0,1]$,

$\Pr_{S\sim D^m}\Big(\forall Q(c), \forall j:\ \mathrm{KL}(\hat Q_S\|Q_D) \le \frac{\mathrm{KL}(Q(c)\|P_j(c)) + \ln\frac{m+1}{\delta} + \ln\frac{1}{\pi_j}}{m}\Big)$
$\geq 1 - \delta$.

Proof. The bound in Theorem 2.1 can be particularised for a given $P_j(c)$, with associated weight $\pi_j$, at confidence $\delta\pi_j$:

$$\Pr_{S\sim D^m}\left(\exists Q(c): \mathrm{KL}(\hat{Q}_S\|Q_D) > \frac{\mathrm{KL}(Q(c)\|P_j(c)) + \ln\frac{m+1}{\delta\pi_j}}{m}\right) < \delta\pi_j.$$

Now let us combine the bounds for all the priors $\{P_j(c)\}_{j=1}^{J}$ with the union bound (using the fact that $P(a\cup b) \leq P(a) + P(b)$):

$$\Pr_{S\sim D^m}\left(\exists Q(c),\ \exists j: \mathrm{KL}(\hat{Q}_S\|Q_D) > \frac{\mathrm{KL}(Q(c)\|P_j(c)) + \ln\frac{m+1}{\delta} + \ln\frac{1}{\pi_j}}{m}\right) < \delta. \qquad (5)$$

Finally, taking the negation of (5) yields the result.

This result can also be particularised for the case of SVM classifiers. The set of priors is constructed by placing Gaussian distributions with identity covariance matrix along the direction given by $\mathbf{w}_r$ at distances $\{\eta_j\}_{j=1}^{J}$ from the origin, where the $\{\eta_j\}_{j=1}^{J}$ are real numbers. In this case we obtain:

Corollary 3.3 (Multiple-prior PAC-Bayes bound for linear classifiers) Let $\{P_j(\mathbf{w}|\mathbf{w}_r, \eta_j)\}_{j=1}^{J}$ be a set of prior distributions over classifiers consisting of spherical Gaussians with identity covariance matrix centred on $\eta_j\mathbf{w}_r$, where the $\{\eta_j\}_{j=1}^{J}$ are real numbers. Then, for all distributions D, for all classifiers $\mathbf{w}$, for all $\mu > 0$, and for all $\delta \in (0, 1]$, we have

$$\Pr_{S\sim D^m}\left(\forall j: \mathrm{KL}\big(\hat{Q}_{S\setminus R}(\mathbf{w}, \mu)\,\big\|\,Q_D(\mathbf{w}, \mu)\big) \leq \frac{\tfrac{1}{2}\|\eta_j\mathbf{w}_r - \mu\mathbf{w}\|^2 + \ln\frac{m-r+1}{\delta} + \ln J}{m-r}\right) \geq 1 - \delta.$$

Proof. The proof is straightforward, substituting $\pi_j = \frac{1}{J}$ for all j in Theorem 3.2 and computing the KL divergence between prior and posterior as in the proof of Corollary 3.1.

Note that the $\{\eta_j\}_{j=1}^{J}$ must be chosen before the posterior is computed. However, the bound holds for all $\mu$, so a linear search can be carried out for the value of $\mu$ that gives the tightest bound. In the case of several priors, the search is repeated for every prior and the tightest resulting bound is reported. In Section 4 we present experimental results comparing this new bound to the standard PAC-Bayes bound and using it to guide model selection.
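To make the quantities in these bounds concrete, the following sketch evaluates the stochastic error of Eq. (2) from normalised margins, evaluates the right-hand side of Corollary 3.3, and inverts the binary KL divergence to obtain an explicit upper bound on Q_D. This is an illustration we add here, not code from the paper; all function names are our own.

```python
import math

def normal_cdf(x):
    # F(x): standard normal cumulative distribution, via the error function (eq. 4)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def stochastic_error(margins, mu):
    # Q^_S(w, mu) = E_m[ F~(mu * gamma(x, y)) ], with F~ = 1 - F (eq. 2)
    return sum(1.0 - normal_cdf(mu * g) for g in margins) / len(margins)

def kl_bernoulli(q, p):
    # KL divergence between Bernoulli(q) and Bernoulli(p)
    def term(a, b):
        return 0.0 if a == 0.0 else a * math.log(a / b)
    return term(q, p) + term(1.0 - q, 1.0 - p)

def kl_inverse(q_hat, bound, tol=1e-10):
    # largest p with KL(q_hat || p) <= bound, found by bisection on [q_hat, 1)
    lo, hi = q_hat, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(q_hat, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo

def prior_bound_rhs(eta_wr, mu_wm, m, r, delta, J=1):
    # (||eta*w_r - mu*w_m||^2 / 2 + ln((m-r+1)/delta) + ln J) / (m - r), as in Corollary 3.3
    diff2 = sum((a - b) ** 2 for a, b in zip(eta_wr, mu_wm))
    return (diff2 / 2.0 + math.log((m - r + 1) / delta) + math.log(J)) / (m - r)
```

The bound on Q_D is then `kl_inverse(stochastic_error(margins, mu), prior_bound_rhs(...))`; the linear search over µ (and over the η_j) simply repeats this evaluation and keeps the tightest value.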
4 Experiments

The tightness of the new bound is evaluated in a model selection and classification task on several UCI [1] datasets, described in Table 1.

Problem  | # samples | input dim. | Pos/Neg
Wdbc     | 569       | 30         | 357 / 212
Image    | 2310      | 18         | 1320 / 990
Waveform | 5000      | 21         | 1647 / 3353
Ringnorm | 7400      | 20         | 3664 / 3736

Table 1: Description of the datasets: for every set we give the number of patterns, the number of input variables and the number of positive/negative examples.

For every dataset we generate 50 different training/test partitions, with 80% of the samples forming the training set and the remaining 20% forming the test set. On each partition we learn an SVM classifier with Gaussian RBF kernel, preceded by a model selection step. The model selection consists of determining an optimal pair of hyperparameters $(C, \sigma)$: C is the SVM trade-off between maximising the margin and minimising the hinge loss on the training samples, while $\sigma$ is the width of the Gaussian kernel. The best pair is sought on a $15\times 15$ grid with $C \in \{0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000\}$ and $\sigma \in \{\frac{\sqrt{d}}{8}, \frac{\sqrt{d}}{7}, \frac{\sqrt{d}}{6}, \frac{\sqrt{d}}{5}, \frac{\sqrt{d}}{4}, \frac{\sqrt{d}}{3}, \frac{\sqrt{d}}{2}, \sqrt{d}, 2\sqrt{d}, 3\sqrt{d}, 4\sqrt{d}, 5\sqrt{d}, 6\sqrt{d}, 7\sqrt{d}, 8\sqrt{d}\}$, where d is the input space dimension. This model selection is guided by the PAC-Bayes bound: we select the model corresponding to the pair that yields the lowest value of $Q_D$ in the bound. Table 2 shows the value of the PAC-Bayes bound averaged over the 50 training/test partitions; for every partition we use the minimum value of the bound over all pairs $(C, \sigma)$ in the grid. Note that this procedure is computationally cheaper than the commonly used N-fold cross-validation model selection, since it saves the training of N classifiers (one per fold) for each parameter combination.
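The hyperparameter grid above is easy to reproduce. The sketch below (our own illustration, with hypothetical names; the SVM training and bound evaluation that would consume these pairs are not shown) builds it for a dataset of input dimension d:

```python
import math

C_GRID = [0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]

def sigma_grid(d):
    # sqrt(d)/8, ..., sqrt(d)/2, sqrt(d), 2*sqrt(d), ..., 8*sqrt(d): 15 widths in total
    root = math.sqrt(d)
    return [root / k for k in range(8, 1, -1)] + [root] + [k * root for k in range(2, 9)]

def parameter_grid(d):
    # the full 15 x 15 = 225 candidate (C, sigma) pairs searched during model selection
    return [(C, s) for C in C_GRID for s in sigma_grid(d)]
```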
Problem  | PAC-Bayes Bound | Test error rate
Wdbc     | 0.334 ± 0.005   | 0.073 ± 0.021
Image    | 0.254 ± 0.003   | 0.074 ± 0.014
Waveform | 0.198 ± 0.002   | 0.089 ± 0.008
Ringnorm | 0.212 ± 0.002   | 0.026 ± 0.005

Table 2: Averaged PAC-Bayes bound and test error rate obtained by the model that yielded the lowest bound in each of the 50 training/test partitions.

We repeated this experiment using the Prior PAC-Bayes bound with different configurations for learning the prior distribution over classifiers. These configurations vary the percentage of training patterns set aside to compute the prior and the number of scalings of the prior's magnitude; the scalings correspond to lengths $\eta$ of $\|\mathbf{w}_r\|$ equally spaced between $\eta = 1$ and $\eta = 100$. To summarise: for every training/test partition and every pair (% of patterns, # of scalings), we pick the pair $(C, \sigma)$ that yields the smallest value of $Q_D$. In this case, using the Prior PAC-Bayes bound for model selection adds to the cost of using the standard PAC-Bayes bound only the training of one extra classifier (the one used to learn the prior), compared with the extra N classifiers needed by N-fold cross-validation. Table 3 displays both the average value and the sample standard deviation over the 50 realisations. Ten scalings of the prior appear to be enough to obtain tighter bounds, since using 100 or 500 scalings does not improve the best results. Regarding the percentage of training instances set aside to learn the prior, values close to 50% of the training set work well on the problems considered. It is worth mentioning that we treat each position in the table as a separate experiment.
Wisconsin Database of Breast Cancer (PAC-Bayes Bound = 0.334 ± 0.005)

Scalings | 10%           | 20%           | 30%           | 40%           | 50%
1        | 0.341 ± 0.006 | 0.351 ± 0.007 | 0.364 ± 0.009 | 0.379 ± 0.011 | 0.398 ± 0.013
10       | 0.337 ± 0.010 | 0.323 ± 0.012 | 0.314 ± 0.012 | 0.310 ± 0.013 | 0.306 ± 0.018
100      | 0.319 ± 0.007 | 0.315 ± 0.010 | 0.313 ± 0.011 | 0.315 ± 0.013 | 0.315 ± 0.017
500      | 0.324 ± 0.007 | 0.320 ± 0.009 | 0.319 ± 0.011 | 0.321 ± 0.013 | 0.322 ± 0.017

Image Segmentation (PAC-Bayes Bound = 0.254 ± 0.003)

Scalings | 10%           | 20%           | 30%           | 40%           | 50%
1        | 0.255 ± 0.003 | 0.262 ± 0.005 | 0.274 ± 0.003 | 0.284 ± 0.005 | 0.300 ± 0.008
10       | 0.215 ± 0.004 | 0.203 ± 0.006 | 0.200 ± 0.005 | 0.188 ± 0.007 | 0.184 ± 0.010
100      | 0.217 ± 0.004 | 0.203 ± 0.007 | 0.196 ± 0.005 | 0.187 ± 0.007 | 0.186 ± 0.009
500      | 0.218 ± 0.004 | 0.204 ± 0.007 | 0.198 ± 0.005 | 0.189 ± 0.007 | 0.188 ± 0.009

Waveform (PAC-Bayes Bound = 0.198 ± 0.002)

Scalings | 10%           | 20%           | 30%           | 40%           | 50%
1        | 0.197 ± 0.003 | 0.201 ± 0.003 | 0.207 ± 0.003 | 0.214 ± 0.004 | 0.222 ± 0.005
10       | 0.161 ± 0.004 | 0.156 ± 0.004 | 0.153 ± 0.004 | 0.150 ± 0.005 | 0.151 ± 0.005
100      | 0.161 ± 0.004 | 0.155 ± 0.004 | 0.153 ± 0.004 | 0.152 ± 0.005 | 0.153 ± 0.005
500      | 0.162 ± 0.004 | 0.157 ± 0.004 | 0.155 ± 0.004 | 0.154 ± 0.005 | 0.155 ± 0.005

Ringnorm (PAC-Bayes Bound = 0.212 ± 0.002)

Scalings | 10%           | 20%           | 30%           | 40%           | 50%
1        | 0.216 ± 0.001 | 0.225 ± 0.002 | 0.236 ± 0.002 | 0.249 ± 0.004 | 0.265 ± 0.002
10       | 0.172 ± 0.068 | 0.140 ± 0.047 | 0.126 ± 0.037 | 0.116 ± 0.030 | 0.109 ± 0.024
100      | 0.173 ± 0.068 | 0.139 ± 0.047 | 0.126 ± 0.037 | 0.117 ± 0.030 | 0.110 ± 0.024
500      | 0.173 ± 0.068 | 0.140 ± 0.047 | 0.127 ± 0.037 | 0.117 ± 0.030 | 0.110 ± 0.024

Table 3: Averaged Prior PAC-Bayes bound for different settings of the percentage of training instances reserved to compute the prior (columns) and of the number of scalings of the normalised prior (rows).

However, one could have included the tuning of the pair (% of patterns, # of scalings) in the model selection.
This would have involved a further application of the union bound over the 20 entries of the table for each problem, at the cost of adding an extra $\ln(20)/m$ (0.0053 for Wdbc, and less for the other datasets) to the right-hand side of Theorem 3.2. We decided to fix the number of scalings and the amount of training patterns used to compute the prior, since exploring all the different options would increase the computational burden of the model selection. In order to evaluate the ability of the Prior PAC-Bayes bound to select models with low test error rate, Table 4 displays the averaged test error corresponding to the models selected in the previous experiment (note that in this case the computational cost of determining the model is increased by the training of the SVM that learns the prior $\mathbf{w}_r$). Table 5 displays the test error rate obtained by SVMs with their hyperparameters tuned on the same grid by means of ten-fold cross-validation, which serves as a baseline for comparison. According to the values shown in the tables, the Prior PAC-Bayes bound achieves tighter predictions of the generalization error of the randomized classifier in almost all cases. Notice that the length of the prior is less critical than its direction, whose quality depends on the subset of samples set aside to learn the prior classifier. Moreover, this tightening of the bound does not appear to degrade the ability to select a good model: rather than merely predicting a larger error rate more accurately, our bound accurately predicts the same error rate selected by the PAC-Bayes bound.
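The extra union-bound penalty quoted above is a one-line computation. The following check is our own, taking m = 569 for Wdbc (the full sample count, which is what the quoted 0.0053 implies):

```python
import math

def union_penalty(n_options, m):
    # extra ln(n_options)/m added to the bound when tuning over n_options configurations
    return math.log(n_options) / m

# 20 (% of patterns, # of scalings) entries per problem; for Wdbc with m = 569:
penalty = union_penalty(20, 569)   # about 0.0053
```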
Wisconsin Database of Breast Cancer (PAC-Bayes Test Error = 0.073 ± 0.021)

Scalings | 10%           | 20%           | 30%           | 40%           | 50%
1        | 0.076 ± 0.020 | 0.076 ± 0.021 | 0.076 ± 0.021 | 0.076 ± 0.021 | 0.076 ± 0.021
10       | 0.075 ± 0.021 | 0.076 ± 0.021 | 0.075 ± 0.021 | 0.074 ± 0.021 | 0.072 ± 0.021
100      | 0.076 ± 0.021 | 0.076 ± 0.021 | 0.074 ± 0.021 | 0.074 ± 0.020 | 0.072 ± 0.021
500      | 0.076 ± 0.020 | 0.076 ± 0.021 | 0.074 ± 0.020 | 0.073 ± 0.020 | 0.072 ± 0.021

Image Segmentation (PAC-Bayes Test Error = 0.074 ± 0.014)

Scalings | 10%           | 20%           | 30%           | 40%           | 50%
1        | 0.078 ± 0.011 | 0.078 ± 0.011 | 0.078 ± 0.011 | 0.083 ± 0.019 | 0.100 ± 0.019
10       | 0.064 ± 0.011 | 0.066 ± 0.011 | 0.063 ± 0.014 | 0.054 ± 0.010 | 0.056 ± 0.011
100      | 0.064 ± 0.011 | 0.063 ± 0.011 | 0.061 ± 0.011 | 0.059 ± 0.011 | 0.057 ± 0.012
500      | 0.064 ± 0.011 | 0.063 ± 0.011 | 0.061 ± 0.011 | 0.059 ± 0.011 | 0.057 ± 0.012

Waveform (PAC-Bayes Test Error = 0.089 ± 0.008)

Scalings | 10%           | 20%           | 30%           | 40%           | 50%
1        | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.090 ± 0.009 | 0.091 ± 0.009 | 0.091 ± 0.009
10       | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.009
100      | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.009
500      | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.009

Ringnorm (PAC-Bayes Test Error = 0.026 ± 0.005)

Scalings | 10%           | 20%           | 30%           | 40%           | 50%
1        | 0.025 ± 0.004 | 0.030 ± 0.007 | 0.038 ± 0.005 | 0.036 ± 0.007 | 0.038 ± 0.005
10       | 0.020 ± 0.007 | 0.021 ± 0.007 | 0.021 ± 0.007 | 0.025 ± 0.008 | 0.026 ± 0.008
100      | 0.020 ± 0.007 | 0.021 ± 0.007 | 0.021 ± 0.007 | 0.025 ± 0.008 | 0.026 ± 0.008
500      | 0.020 ± 0.007 | 0.021 ± 0.007 | 0.021 ± 0.007 | 0.025 ± 0.008 | 0.025 ± 0.005

Table 4: Averaged test error rate corresponding to the model determined by the bound, for the different settings of Table 3.
Problem  | Cross-validation error rate | Test error rate
Wdbc     | 0.060 ± 0.006               | 0.072 ± 0.024
Image    | 0.022 ± 0.002               | 0.024 ± 0.008
Waveform | 0.079 ± 0.011               | 0.085 ± 0.009
Ringnorm | 0.015 ± 0.001               | 0.017 ± 0.004

Table 5: Averaged test error rate. For every partition we select the test error rate corresponding to the model with the smallest cross-validation error.

However, the comparison with Table 5 shows that the PAC-Bayes bound is not as accurate as ten-fold cross-validation when it comes to selecting a model with a low test error rate. Nevertheless, in two out of the four problems (Waveform and Wdbc) the bound provided a model as good as the one found by cross-validation, and on Ringnorm the error bars overlap. We conclude the discussion by pointing out that the cross-validation error rate cannot be used directly as a worst-case prediction of the expected test error rate. Of course the cross-validation error rate and the test error rate are close in value, but it is difficult to predict how close they will be.

5 Conclusions and ongoing research

In this paper we have presented a version of the PAC-Bayes bound for linear classifiers that incorporates learning of the prior distribution over classifiers. This prior distribution is a Gaussian with identity covariance matrix whose mean weight vector is learnt as follows: its direction is determined from a separate subset of the training examples, while its length is chosen from an a priori fixed set of lengths. The experimental work shows that this new version of the bound achieves tighter predictions of the generalization error of the stochastic classifier than the original PAC-Bayes bound. Moreover, when model selection is driven by the bound, the Prior PAC-Bayes bound does not degrade the quality of the model selected by the original bound.
Furthermore, in some of our experiments the model selected by the bounds was as accurate, in terms of test error rate on a separate test set, as the one selected by ten-fold cross-validation. This is remarkable because including model selection in the training of the classifier roughly multiplies the computational burden by ten when using ten-fold cross-validation, but only roughly by two when using the Prior PAC-Bayes bound. Of course the original PAC-Bayes bound provides an even cheaper model selection, but its predictions of the generalization capabilities are more pessimistic. The amount of training patterns used to learn the prior seems to be a key factor in the quality of the prior, and thus in the tightness of the bound; ongoing research therefore includes methods to systematically determine an amount of patterns that yields suitable priors. Another line of research explores the use of these bounds to reinforce desirable properties in the design of classifiers, such as sparsity. Finally, a deeper study of which dataset structures cause differences between the performance of cross-validation-driven and bound-driven model selection is also being carried out.

Acknowledgments

This work has been supported by the IST Programme of the European Community under the PASCAL Network of Excellence IST2002-506788. E. P-H. acknowledges support from Spanish CICYT grant TEC2005-04264/TCM.

References

[1] C. L. Blake and C. J. Merz. UCI Repository of machine learning databases. University of California, Irvine, Dept. of Information and Computer Sciences, [http://www.ics.uci.edu/~mlearn/MLRepository.html], 1998.

[2] Bernhard E. Boser, Isabelle Guyon, and Vladimir Vapnik. A training algorithm for optimal margin classifiers. In Computational Learning Theory, pages 144-152, 1992.

[3] J. Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6(Mar):273-306, 2005.
[4] J. Langford and J. Shawe-Taylor. PAC-Bayes & margins. In Advances in Neural Information Processing Systems, volume 14, Cambridge, MA, 2002. MIT Press.

[5] D. McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51(1):5-21, 2003.

[6] M. Seeger. PAC-Bayesian generalization error bounds for Gaussian process classification. Journal of Machine Learning Research, 3:233-269, 2002.

[7] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Trans. Information Theory, 44(5):1926-1940, 1998.
2006
Unified Inference for Variational Bayesian Linear Gaussian State-Space Models

David Barber, IDIAP Research Institute, rue du Simplon 4, Martigny, Switzerland. david.barber@idiap.ch
Silvia Chiappa, IDIAP Research Institute, rue du Simplon 4, Martigny, Switzerland. silvia.chiappa@idiap.ch

Abstract

Linear Gaussian State-Space Models are widely used and a Bayesian treatment of parameters is therefore of considerable interest. The approximate Variational Bayesian method applied to these models is an attractive approach, used successfully in applications ranging from acoustics to bioinformatics. The most challenging aspect of implementing the method is in performing inference on the hidden state sequence of the model. We show how to convert the inference problem so that standard Kalman Filtering/Smoothing recursions from the literature may be applied. This is in contrast to previously published approaches based on Belief Propagation. Our framework both simplifies and unifies the inference problem, so that future applications may be more easily developed. We demonstrate the elegance of the approach on Bayesian temporal ICA, with an application to finding independent dynamical processes underlying noisy EEG signals.

1 Linear Gaussian State-Space Models

Linear Gaussian State-Space Models (LGSSMs)¹ are fundamental in time-series analysis [1, 2, 3]. In these models the observations $v_{1:T}$² are generated from an underlying dynamical system on $h_{1:T}$ according to:

$$v_t = Bh_t + \eta_t^v, \quad \eta_t^v \sim \mathcal{N}(0_V, \Sigma_V), \qquad h_t = Ah_{t-1} + \eta_t^h, \quad \eta_t^h \sim \mathcal{N}(0_H, \Sigma_H),$$

where $\mathcal{N}(\mu, \Sigma)$ denotes a Gaussian with mean $\mu$ and covariance $\Sigma$, and $0_X$ denotes an X-dimensional zero vector. The observation $v_t$ has dimension V and the hidden state $h_t$ has dimension H. Probabilistically, the LGSSM is defined by:

$$p(v_{1:T}, h_{1:T}|\Theta) = p(v_1|h_1)\,p(h_1)\prod_{t=2}^{T} p(v_t|h_t)\,p(h_t|h_{t-1}),$$

with $p(v_t|h_t) = \mathcal{N}(Bh_t, \Sigma_V)$, $p(h_t|h_{t-1}) = \mathcal{N}(Ah_{t-1}, \Sigma_H)$, $p(h_1) = \mathcal{N}(\mu, \Sigma)$, and where $\Theta = \{A, B, \Sigma_H, \Sigma_V, \mu, \Sigma\}$ denotes the model parameters.
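As a concrete, deliberately minimal illustration of this generative process, the following sketch samples from a one-dimensional LGSSM. It is our own illustration, not code from the paper: A and B are scalars and the noise terms are given as standard deviations rather than covariance matrices.

```python
import random

def sample_lgssm(A, B, sd_h, sd_v, mu, sd0, T, seed=0):
    # h_1 ~ N(mu, sd0^2); h_t = A h_{t-1} + N(0, sd_h^2); v_t = B h_t + N(0, sd_v^2)
    rng = random.Random(seed)
    h = rng.gauss(mu, sd0)
    hs, vs = [], []
    for _ in range(T):
        hs.append(h)
        vs.append(B * h + rng.gauss(0.0, sd_v))
        h = A * h + rng.gauss(0.0, sd_h)
    return vs, hs
```

With all noise standard deviations set to zero the recursion becomes deterministic, which makes the model's structure easy to inspect.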
Because of the widespread use of these models, a Bayesian treatment of parameters is of considerable interest [4, 5, 6, 7, 8]. An exact implementation of the Bayesian LGSSM is formally intractable [8], and recently a Variational Bayesian (VB) approximation has been studied [4, 5, 6, 7, 9]. The most challenging part of implementing the VB method is performing inference over $h_{1:T}$, and previous authors have developed their own specialized routines, based on Belief Propagation, since standard LGSSM inference routines appear, at first sight, not to be applicable.

¹ Also called Kalman Filters/Smoothers, Linear Dynamical Systems.
² $v_{1:T}$ denotes $v_1, \ldots, v_T$.

A key contribution of this paper is to show how the Variational Bayesian treatment of the LGSSM can be implemented using standard LGSSM inference routines. Based on the insight we provide, any standard inference method may be applied, including those specifically designed to improve numerical stability [2, 10, 11]. In this article we describe the predictor-corrector and Rauch-Tung-Striebel recursions [2], and also suggest a small modification that reduces computational cost.

The Bayesian LGSSM is of particular interest when strong prior constraints are needed to find adequate solutions. One such case is EEG signal analysis, in which we wish to extract sources that evolve independently through time. Since EEG is particularly noisy [12], a prior that encourages sources to have preferential dynamics is advantageous. This application is discussed in Section 4 and demonstrates the ease of applying our VB framework.

2 Bayesian Linear Gaussian State-Space Models

In the Bayesian treatment of the LGSSM, instead of considering the model parameters $\Theta$ as fixed, we define a prior distribution $p(\Theta|\hat{\Theta})$, where $\hat{\Theta}$ is a set of hyperparameters. Then:

$$p(v_{1:T}|\hat{\Theta}) = \int_{\Theta} p(v_{1:T}|\Theta)\,p(\Theta|\hat{\Theta}). \qquad (1)$$

In a full Bayesian treatment we would define additional prior distributions over the hyperparameters $\hat{\Theta}$.
Here we instead take the ML-II ('evidence') framework, in which the optimal set of hyperparameters is found by maximizing $p(v_{1:T}|\hat{\Theta})$ with respect to $\hat{\Theta}$ [6, 7, 9]. For the parameter priors, here we define Gaussians on the columns of A and B³:

$$p(A|\alpha, \Sigma_H) \propto \prod_{j=1}^{H} e^{-\frac{\alpha_j}{2}(A_j - \hat{A}_j)^T\Sigma_H^{-1}(A_j - \hat{A}_j)}, \qquad p(B|\beta, \Sigma_V) \propto \prod_{j=1}^{H} e^{-\frac{\beta_j}{2}(B_j - \hat{B}_j)^T\Sigma_V^{-1}(B_j - \hat{B}_j)},$$

which has the effect of biasing the transition and emission matrices towards desired forms $\hat{A}$ and $\hat{B}$. The conjugate priors for general inverse covariances $\Sigma_H^{-1}$ and $\Sigma_V^{-1}$ are Wishart distributions [7]⁴. In the simpler case assumed here of diagonal covariances, these become Gamma distributions [5, 7]. The hyperparameters are then $\hat{\Theta} = \{\alpha, \beta\}$⁵.

Variational Bayes

Optimizing Eq. (1) with respect to $\hat{\Theta}$ is difficult due to the intractability of the integrals. Instead, in VB, one considers the lower bound⁶ [6, 7, 9]:

$$L = \log p(v_{1:T}|\hat{\Theta}) \geq H_q(\Theta, h_{1:T}) + \big\langle \log p(\Theta|\hat{\Theta}) \big\rangle_{q(\Theta)} + \big\langle E(h_{1:T}, \Theta) \big\rangle_{q(\Theta, h_{1:T})} \equiv F,$$

where $E(h_{1:T}, \Theta) \equiv \log p(v_{1:T}, h_{1:T}|\Theta)$, $H_d(x)$ signifies the entropy of the distribution $d(x)$, and $\langle\cdot\rangle_{d(x)}$ denotes the expectation operator. The key approximation in VB is $q(\Theta, h_{1:T}) \equiv q(\Theta)\,q(h_{1:T})$, from which one may show that, for optimality of F,

$$q(h_{1:T}) \propto e^{\langle E(h_{1:T}, \Theta)\rangle_{q(\Theta)}}, \qquad q(\Theta) \propto p(\Theta|\hat{\Theta})\, e^{\langle E(h_{1:T}, \Theta)\rangle_{q(h_{1:T})}}.$$

These coupled equations need to be iterated to convergence. The updates for the parameters $q(\Theta)$ are straightforward and are given in Appendices A and B. Once converged, the hyperparameters are updated by maximizing F with respect to $\hat{\Theta}$, which leads to simple update formulae [7]. Our main concern is the update for $q(h_{1:T})$, for which this paper departs from previously presented treatments.

³ More general Gaussian priors may be more suitable depending on the application.
⁴ For expositional simplicity, we do not put priors on $\mu$ and $\Sigma$.
⁵ For simplicity, we keep the parameters of the Gamma priors fixed.
⁶ Strictly we should write $q(\cdot|v_{1:T})$ throughout. We omit the dependence on $v_{1:T}$ for notational convenience.
3 Unified Inference on q(h_{1:T})

Optimally, $q(h_{1:T})$ is Gaussian since, up to a constant, $\langle E(h_{1:T}, \Theta)\rangle_{q(\Theta)}$ is quadratic in $h_{1:T}$⁷:

$$-\frac{1}{2}\sum_{t=1}^{T}\Big[\big\langle (v_t - Bh_t)^T\Sigma_V^{-1}(v_t - Bh_t)\big\rangle_{q(B, \Sigma_V)} + \big\langle (h_t - Ah_{t-1})^T\Sigma_H^{-1}(h_t - Ah_{t-1})\big\rangle_{q(A, \Sigma_H)}\Big]. \qquad (2)$$

In addition, optimally, $q(A|\Sigma_H)$ and $q(B|\Sigma_V)$ are Gaussians (see Appendix A), so we can easily carry out the averages in Eq. (2). The further averages over $q(\Sigma_H)$ and $q(\Sigma_V)$ are also easy due to conjugacy. Whilst this defines the distribution $q(h_{1:T})$, quantities such as $q(h_t)$, required for example for the parameter updates (see the Appendices), need to be inferred from this distribution. Clearly, in the non-Bayesian case, the averages over the parameters are not present, and the above simply represents the posterior distribution of an LGSSM whose visible variables have been clamped to their evidential states; in that case, inference can be performed using any standard LGSSM routine. Our aim, therefore, is to represent the averaged Eq. (2) directly as the posterior distribution $\tilde{q}(h_{1:T}|\tilde{v}_{1:T})$ of an LGSSM, for some suitable parameter settings.

Mean + Fluctuation Decomposition

A useful decomposition is to write

$$\big\langle (v_t - Bh_t)^T\Sigma_V^{-1}(v_t - Bh_t)\big\rangle_{q(B, \Sigma_V)} = \underbrace{(v_t - \langle B\rangle h_t)^T\big\langle\Sigma_V^{-1}\big\rangle(v_t - \langle B\rangle h_t)}_{\text{mean}} + \underbrace{h_t^T S_B h_t}_{\text{fluctuation}},$$

and similarly

$$\big\langle (h_t - Ah_{t-1})^T\Sigma_H^{-1}(h_t - Ah_{t-1})\big\rangle_{q(A, \Sigma_H)} = \underbrace{(h_t - \langle A\rangle h_{t-1})^T\big\langle\Sigma_H^{-1}\big\rangle(h_t - \langle A\rangle h_{t-1})}_{\text{mean}} + \underbrace{h_{t-1}^T S_A h_{t-1}}_{\text{fluctuation}},$$

where the parameter covariances are $S_B \equiv \langle B^T\Sigma_V^{-1}B\rangle - \langle B\rangle^T\langle\Sigma_V^{-1}\rangle\langle B\rangle = V H_B^{-1}$ and $S_A \equiv \langle A^T\Sigma_H^{-1}A\rangle - \langle A\rangle^T\langle\Sigma_H^{-1}\rangle\langle A\rangle = H H_A^{-1}$ (for $H_A$ and $H_B$ defined in Appendix A). The mean terms simply represent a clamped LGSSM with averaged parameters. However, the extra contributions from the fluctuations mean that Eq. (2) cannot be written as a clamped LGSSM with averaged parameters alone. In order to deal with these extra terms, our idea is to treat the fluctuations as arising from an augmented visible variable, for which Eq. (2) can then be considered as a clamped LGSSM.

⁷ For simplicity of exposition, we ignore the first time-point here.
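The decomposition is the standard mean-plus-variance split of a quadratic form. For a scalar emission b with mean ⟨b⟩ and variance s_B (and Σ_V fixed at 1), it reads E_b[(v − bh)²] = (v − ⟨b⟩h)² + s_B h². The sketch below is our own numerical check of this identity, not code from the paper:

```python
import random

def mean_plus_fluctuation(v, h, b_mean, b_var):
    # scalar version of the decomposition: mean term + fluctuation term
    return (v - b_mean * h) ** 2 + b_var * h * h

def monte_carlo(v, h, b_mean, b_var, n=200000, seed=1):
    # direct estimate of E_b[(v - b*h)^2] with b ~ N(b_mean, b_var)
    rng = random.Random(seed)
    sd = b_var ** 0.5
    total = 0.0
    for _ in range(n):
        b = rng.gauss(b_mean, sd)
        total += (v - b * h) ** 2
    return total / n
```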
Inference Using an Augmented LGSSM

To represent Eq. (2) as an LGSSM $\tilde{q}(h_{1:T}|\tilde{v}_{1:T})$, we may augment $v_t$ and B as⁸:

$$\tilde{v}_t = \mathrm{vert}(v_t, 0_H, 0_H), \qquad \tilde{B} = \mathrm{vert}(\langle B\rangle, U_A, U_B),$$

where $U_A$ is the Cholesky decomposition of $S_A$, so that $U_A^T U_A = S_A$; similarly, $U_B$ is the Cholesky decomposition of $S_B$. The equivalent LGSSM $\tilde{q}(h_{1:T}|\tilde{v}_{1:T})$ is then completed by specifying⁹

$$\tilde{A} \equiv \langle A\rangle, \quad \tilde{\Sigma}_H \equiv \big\langle\Sigma_H^{-1}\big\rangle^{-1}, \quad \tilde{\Sigma}_V \equiv \mathrm{diag}\big(\big\langle\Sigma_V^{-1}\big\rangle^{-1}, I_H, I_H\big), \quad \tilde{\mu} \equiv \mu, \quad \tilde{\Sigma} \equiv \Sigma.$$

The validity of this parameter assignment can be checked by showing that, up to negligible constants, the exponent of this augmented LGSSM has the same form as Eq. (2)¹⁰. Now that the problem has been written as an LGSSM $\tilde{q}(h_{1:T}|\tilde{v}_{1:T})$, standard inference routines in the literature may be applied to compute $q(h_t|v_{1:T}) = \tilde{q}(h_t|\tilde{v}_{1:T})$ [1, 2, 11]¹¹.

⁸ The notation $\mathrm{vert}(x_1, \ldots, x_n)$ stands for vertically concatenating the arguments $x_1, \ldots, x_n$.
⁹ Strictly, we need a time-dependent emission $\tilde{B}_t = \tilde{B}$ for $t = 1, \ldots, T-1$; for time T, $\tilde{B}_T$ has the Cholesky factor $U_A$ replaced by $0_{H,H}$.
¹⁰ There are several ways of achieving a similar augmentation. We chose this one since, in the non-Bayesian limit $U_A = U_B = 0_{H,H}$, no numerical instabilities are introduced.
¹¹ Note that, since the augmented LGSSM $\tilde{q}(h_{1:T}|\tilde{v}_{1:T})$ is designed to match the fully clamped distribution $q(h_{1:T}|v_{1:T})$, the filtered posterior $\tilde{q}(h_t|\tilde{v}_{1:t})$ does not correspond to $q(h_t|v_{1:t})$.

Algorithm 1 (LGSSM: forward and backward recursive updates). The smoothed posterior $p(h_t|v_{1:T})$ is returned in the mean $\hat{h}_t^T$ and covariance $P_t^T$.
procedure FORWARD
  1a: P ← Σ
  1b: P ← DΣ, where D ≡ I − Σ U_AB (I + U_AB^T Σ U_AB)^{−1} U_AB^T
  2a: ĥ_1^0 ← µ
  2b: ĥ_1^0 ← Dµ
  3:  K ← P B^T (B P B^T + Σ_V)^{−1},  P_1^1 ← (I − K B) P,  ĥ_1^1 ← ĥ_1^0 + K (v_1 − B ĥ_1^0)
  for t ← 2, T do
    4:  P_t^{t−1} ← A P_{t−1}^{t−1} A^T + Σ_H
    5a: P ← P_t^{t−1}
    5b: P ← D_t P_t^{t−1}, where D_t ≡ I − P_t^{t−1} U_AB (I + U_AB^T P_t^{t−1} U_AB)^{−1} U_AB^T
    6a: ĥ_t^{t−1} ← A ĥ_{t−1}^{t−1}
    6b: ĥ_t^{t−1} ← D_t A ĥ_{t−1}^{t−1}
    7:  K ← P B^T (B P B^T + Σ_V)^{−1},  P_t^t ← (I − K B) P,  ĥ_t^t ← ĥ_t^{t−1} + K (v_t − B ĥ_t^{t−1})
  end for
end procedure

procedure BACKWARD
  for t ← T−1, 1 do
    A̅_t ← P_t^t A^T (P_{t+1}^t)^{−1}
    P_t^T ← P_t^t + A̅_t (P_{t+1}^T − P_{t+1}^t) A̅_t^T
    ĥ_t^T ← ĥ_t^t + A̅_t (ĥ_{t+1}^T − A ĥ_t^t)
  end for
end procedure

For completeness, we describe the standard predictor-corrector form of the Kalman Filter together with the Rauch-Tung-Striebel Smoother [2]. These are given in Algorithm 1, where $\tilde{q}(h_t|\tilde{v}_{1:T})$ is computed by calling the FORWARD and BACKWARD procedures. We present two variants of the FORWARD pass. Either we may call procedure FORWARD with parameters $\tilde{A}, \tilde{B}, \tilde{\Sigma}_H, \tilde{\Sigma}_V, \tilde{\mu}, \tilde{\Sigma}$ and the augmented visible variables $\tilde{v}_t$, using steps 1a, 2a, 5a and 6a; this is exactly the predictor-corrector form of a Kalman Filter [2]. Or, in order to reduce the computational cost, we may call procedure FORWARD with the parameters $\tilde{A}, \langle B\rangle, \tilde{\Sigma}_H, \langle\Sigma_V^{-1}\rangle^{-1}, \tilde{\mu}, \tilde{\Sigma}$ and the original visible variable $v_t$, using steps 1b (where $U_{AB}^T U_{AB} \equiv S_A + S_B$), 2b, 5b and 6b. The two algorithms are mathematically equivalent. Computing $q(h_t|v_{1:T}) = \tilde{q}(h_t|\tilde{v}_{1:T})$ is then completed by calling the common BACKWARD pass. The important point is that the reader may supply any standard Kalman Filtering/Smoothing routine and simply call it with the appropriate parameters. In some parameter regimes, or in very long time-series, numerical stability may be a serious concern, for which several stabilized algorithms have been developed over the years, for example the square-root forms [2, 10, 11].
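To make the recursions concrete, here is a deliberately minimal scalar (1-D) version of the predictor-corrector filter and Rauch-Tung-Striebel smoother of Algorithm 1, written by us for illustration. It uses no augmentation (it corresponds to steps 1a/2a/5a/6a with scalar parameters):

```python
def kalman_rts(vs, A, B, var_h, var_v, mu, var0):
    # forward: predictor-corrector Kalman filter (scalar case)
    mf, vf = [], []                                  # filtered means / variances
    h_pred, P_pred = mu, var0
    for v in vs:
        K = P_pred * B / (B * P_pred * B + var_v)    # Kalman gain
        P = (1.0 - K * B) * P_pred
        h = h_pred + K * (v - B * h_pred)
        mf.append(h)
        vf.append(P)
        h_pred, P_pred = A * h, A * P * A + var_h    # one-step prediction
    # backward: Rauch-Tung-Striebel smoother
    ms, Ps = mf[:], vf[:]
    for t in range(len(vs) - 2, -1, -1):
        P_next_pred = A * vf[t] * A + var_h
        G = vf[t] * A / P_next_pred                  # smoother gain
        ms[t] = mf[t] + G * (ms[t + 1] - A * mf[t])
        Ps[t] = vf[t] + G * (Ps[t + 1] - P_next_pred) * G
    return ms, Ps
```

In the VB setting one would call such a routine (or a library implementation with the same interface) with the averaged/augmented parameters of Section 3.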
By converting the problem to a standard form, we have therefore unified and simplified inference, so that future applications may be more readily developed¹².

3.1 Relation to Previous Approaches

An alternative approach to the one above, taken in [5, 7], is to write the posterior as

$$\log q(h_{1:T}) = \sum_{t=2}^{T}\phi_t(h_{t-1}, h_t) + \mathrm{const.}$$

for suitably defined quadratic forms $\phi_t(h_{t-1}, h_t)$, which encode the averaging over the parameters $A, B, \Sigma_H, \Sigma_V$. The approach taken in [7] is to recognize this as a pairwise Markov chain, to which the Belief Propagation recursions may be applied. The approach in [5] is based on a Kullback-Leibler minimization of the posterior with a chain structure, which is algorithmically equivalent to Belief Propagation. Whilst these are mathematically valid procedures, the resulting algorithms do not correspond to any of the standard forms in the Kalman Filtering/Smoothing literature, whose properties have been well studied [14].

¹² The computation of the log-likelihood bound does not require any augmentation.

4 An Application to Bayesian ICA

Figure 1: The structure of the LGSSM for ICA.

A particular case for which the Bayesian LGSSM is of interest is in extracting independent source signals underlying a multivariate time-series [5, 15]; this will demonstrate how the approach developed in Section 3 makes VB easy to apply. The sources $s^i$ are modeled as independent in the following sense:

$$p(s_{1:T}^i, s_{1:T}^j) = p(s_{1:T}^i)\,p(s_{1:T}^j), \quad \text{for } i \neq j,\ i, j = 1, \ldots, C.$$

Independence implies block-diagonal transition and state noise matrices $A$, $\Sigma_H$ and $\Sigma$, where each block c has dimension $H_c$. A one-dimensional source $s_t^c$ for each independent dynamical subsystem is then formed from $s_t^c = \mathbf{1}_c^T h_t^c$, where $\mathbf{1}_c$ is a unit vector and $h_t^c$ is the state of dynamical system c. Combining the sources, we can write $s_t = Ph_t$, where $P = \mathrm{diag}(\mathbf{1}_1^T, \ldots, \mathbf{1}_C^T)$ and $h_t = \mathrm{vert}(h_t^1, \ldots, h_t^C)$.
The resulting emission matrix is constrained to be of the form $B = WP$, where W is the $V \times C$ mixing matrix; this means that the observations are formed by linearly mixing the sources, $v_t = Ws_t + \eta_t^v$. The graphical structure of this model is presented in Fig 1. To encourage redundant components to be removed, we place a zero-mean Gaussian prior on W. In this case, we do not define priors on the parameters $\Sigma_H$ and $\Sigma_V$, which are instead considered as hyperparameters. More details of the model are given in [15]. The constraint $B = WP$ requires a minor modification of Section 3, as we discuss below.

Inference on q(h_{1:T})

A small modification of the mean + fluctuation decomposition for B occurs, namely:

$$\big\langle (v_t - Bh_t)^T\Sigma_V^{-1}(v_t - Bh_t)\big\rangle_{q(W)} = (v_t - \langle B\rangle h_t)^T\Sigma_V^{-1}(v_t - \langle B\rangle h_t) + h_t^T P^T S_W P h_t,$$

where $\langle B\rangle \equiv \langle W\rangle P$ and $S_W = V H_W^{-1}$. The quantities $\langle W\rangle$ and $H_W$ are obtained as in Appendix A.1 with the replacement $h_t \leftarrow Ph_t$. To represent the above as an LGSSM, we augment $v_t$ and B as

$$\tilde{v}_t = \mathrm{vert}(v_t, 0_H, 0_C), \qquad \tilde{B} = \mathrm{vert}(\langle B\rangle, U_A, U_W P),$$

where $U_W$ is the Cholesky decomposition of $S_W$. The equivalent LGSSM is then completed by specifying

$$\tilde{A} \equiv \langle A\rangle, \quad \tilde{\Sigma}_H \equiv \Sigma_H, \quad \tilde{\Sigma}_V \equiv \mathrm{diag}(\Sigma_V, I_H, I_C), \quad \tilde{\mu} \equiv \mu, \quad \tilde{\Sigma} \equiv \Sigma,$$

and inference for $q(h_{1:T})$ is performed using Algorithm 1. This demonstrates the elegance and unity of the approach in Section 3: no new algorithm needs to be developed to perform inference, even in this special constrained-parameter case.

4.1 Demonstration

As a simple demonstration, we used an LGSSM to generate 3 sources $s_t^c$ with random $5\times 5$ transition matrices $A^c$, $\mu = 0_H$ and $\Sigma \equiv \Sigma_H \equiv I_H$. The sources were mixed into three observations $v_t = Ws_t + \eta_t^v$, with the elements of W drawn from a zero-mean unit-variance Gaussian distribution, and $\Sigma_V = I_V$. We then trained a Bayesian LGSSM with 5 sources and $7\times 7$ transition matrices $A^c$. To bias the model towards the simplest sources, we used $\hat{A}^c \equiv 0_{H_c, H_c}$ for all sources. In Fig 2a and Fig 2b we show the original sources and the noisy observations respectively.
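The pooling matrix P = diag(1_1ᵀ, ..., 1_Cᵀ) that collapses each subsystem's state into a scalar source is simple to construct. The sketch below is our own illustration (plain lists in place of a matrix type); we take each 1_c to be a vector of ones that sums the block's states, so if 1_c is instead meant as a coordinate unit vector, only one entry per row would be set:

```python
def pooling_matrix(block_dims):
    # row c carries ones over the H_c states of subsystem c, so s_t = P h_t pools each block
    C, H = len(block_dims), sum(block_dims)
    P = [[0.0] * H for _ in range(C)]
    offset = 0
    for c, Hc in enumerate(block_dims):
        for j in range(Hc):
            P[c][offset + j] = 1.0
        offset += Hc
    return P

def apply(P, h):
    # s = P h
    return [sum(p * x for p, x in zip(row, h)) for row in P]
```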
In Fig 2c we show the estimated sources from our method after convergence of the hyperparameter updates. Two of the 5 sources have been removed, and the remaining three are a reasonable estimate of the original sources.

Another possible approach to introducing prior knowledge is to use a Maximum a Posteriori (MAP) procedure, adding a prior term to the original log-likelihood:

$$\log p(v_{1:T}|A, W, \Sigma_H, \Sigma_V, \mu, \Sigma) + \log p(A|\alpha) + \log p(W|\beta).$$

However, it is not clear how to reliably find the hyperparameters $\alpha$ and $\beta$ in this case. One solution is to estimate them by optimizing the new objective function jointly with respect to the parameters and hyperparameters (the so-called joint MAP estimation; see for example [16]). A typical result of using this joint MAP approach on the artificial data is presented in Fig 2d: the joint MAP does not estimate the hyperparameters well, and the incorrect number of sources is identified.

Figure 2: (a) Original sources $s_t$. (b) Observations resulting from mixing the original sources, $v_t = Ws_t + \eta_t^v$, $\eta_t^v \sim \mathcal{N}(0, I)$. (c) Recovered sources using the Bayesian LGSSM. (d) Sources found with MAP LGSSM.

Figure 3: (a) Original raw EEG recordings from 4 channels. (b-e) 16 sources $s_t$ estimated by the Bayesian LGSSM.

4.2 Application to EEG Analysis

In Fig 3a we plot three seconds of EEG data recorded from 4 channels (located in the right hemisphere) while a person performs imagined movement of the right hand. As is typical in EEG, each channel shows drift terms below 1 Hz, which correspond to artifacts of the instrumentation, together with 50 Hz mains contamination; this masks the rhythmical activity related to the mental task, mainly centered at 10 and 20 Hz [17].
We would therefore like a method which enables us to extract components in these information-rich 10 and 20 Hz frequency bands. Standard ICA methods such as FastICA do not find satisfactory sources based on raw ‘noisy’ data, and preprocessing with band-pass filters is usually required. Additionally, in EEG research, flexibility in the number of recovered sources is important, since there may be many independent oscillators of interest underlying the observations, and we would like some way to automatically determine their effective number. To preferentially find sources at particular frequencies, we specified a block-diagonal matrix Â^c for each source c, where each block is a 2 × 2 rotation matrix at the desired frequency. We defined the following 16 groups of frequencies: [0.5], [0.5], [0.5], [0.5]; [10,11], [10,11], [10,11], [10,11]; [20,21], [20,21], [20,21], [20,21]; [50], [50], [50], [50]. The temporal evolution of the sources obtained after training the Bayesian LGSSM is given in Fig. 3(b-e), grouped by frequency range. The Bayesian LGSSM removed 4 unnecessary sources from the mixing matrix W, namely one [10,11] Hz and three [20,21] Hz sources. The first 4 sources contain the dominant low-frequency drift; sources 5, 6 and 8 contain [10,11] Hz activity, while source 10 contains activity centered at [20,21] Hz. Of the 4 sources initialized to 50 Hz, only 2 retained 50 Hz activity, while the Â^c of the other two changed to model other frequencies present in the EEG. This demonstrates the usefulness and applicability of the VB method in a real-world situation.

5 Conclusion

We considered the application of Variational Bayesian learning to Linear Gaussian State-Space Models. This is an important class of models with widespread application, and finding a simple way to implement this approximate Bayesian procedure is of considerable interest. The most demanding part of the procedure is inference of the hidden states of the model.
Previously, this has been achieved using Belief Propagation, which differs from inference in the Kalman Filtering/Smoothing literature, for which highly efficient and stabilized procedures exist. A central contribution of this paper is to show how inference can be written using the standard Kalman Filtering/Smoothing recursions by augmenting the original model. Additionally, a minor modification to the standard Kalman Filtering routine may be applied for computational efficiency. We demonstrated the elegance and unity of our approach by showing how to easily apply a Variational Bayes analysis of temporal ICA. Specifically, our Bayesian ICA approach successfully extracts independent processes underlying EEG signals, biased towards preferred frequency ranges. We hope that this simple and unifying interpretation of Variational Bayesian LGSSMs may facilitate their further application to related models.

A Parameter Updates for A and B

A.1 Determining q(B|Σ_V)

By examining F, the contribution of q(B|Σ_V) can be interpreted as the negative KL divergence between q(B|Σ_V) and a Gaussian. Hence, optimally, q(B|Σ_V) is a Gaussian. The covariance [Σ_B]_{ij,kl} ≡ ⟨(B_{ij} − ⟨B_{ij}⟩)(B_{kl} − ⟨B_{kl}⟩)⟩ (averages with respect to q(B|Σ_V)) is given by

[Σ_B]_{ij,kl} = [H_B^{-1}]_{jl} [Σ_V]_{ik},  where  [H_B]_{jl} ≡ ∑_{t=1}^T ⟨h_t^j h_t^l⟩_{q(h_t)} + β_j δ_{jl}.

The mean is given by ⟨B⟩ = N_B H_B^{-1}, where [N_B]_{ij} ≡ ∑_{t=1}^T ⟨h_t^j⟩_{q(h_t)} v_t^i + β_j B̂_{ij}.

Determining q(A|Σ_H)

Optimally, q(A|Σ_H) is a Gaussian with covariance

[Σ_A]_{ij,kl} = [H_A^{-1}]_{jl} [Σ_H]_{ik},  where  [H_A]_{jl} ≡ ∑_{t=1}^{T−1} ⟨h_t^j h_t^l⟩_{q(h_t)} + α_j δ_{jl}.

The mean is given by ⟨A⟩ = N_A H_A^{-1}, where [N_A]_{ij} ≡ ∑_{t=2}^T ⟨h_{t−1}^j h_t^i⟩_{q(h_{t−1:t})} + α_j Â_{ij}.

B Covariance Updates

By specifying a Wishart prior for the inverse of the covariances, conjugate update formulae are possible. In practice, it is more common to specify diagonal inverse covariances, for which the corresponding priors are simply Gamma distributions [7, 5]. For this simple diagonal case, the explicit updates are given below.
Determining q(Σ_V)

For the constraint Σ_V^{-1} = diag(ρ), where each diagonal element follows a Gamma prior Ga(b_1, b_2) [7], q(ρ) factorizes and the optimal updates are

q(ρ_i) = Ga( b_1 + T/2,  b_2 + ½ [ ∑_{t=1}^T (v_t^i)^2 − [G_B]_{ii} + ∑_j β_j B̂_{ij}^2 ] ),  where G_B ≡ N_B H_B^{-1} N_B^T.

Determining q(Σ_H)

Analogously, for Σ_H^{-1} = diag(τ) with prior Ga(a_1, a_2) [5], the updates are

q(τ_i) = Ga( a_1 + (T−1)/2,  a_2 + ½ [ ∑_{t=2}^T ⟨(h_t^i)^2⟩ − [G_A]_{ii} + ∑_j α_j Â_{ij}^2 ] ),  where G_A ≡ N_A H_A^{-1} N_A^T.

Acknowledgments

This work is supported by the European DIRAC Project FP6-0027787. This paper only reflects the authors’ views, and funding agencies are not liable for any use that may be made of the information contained herein.

References

[1] Y. Bar-Shalom and X.-R. Li. Estimation and Tracking: Principles, Techniques and Software. Artech House, 1998.
[2] M. S. Grewal and A. P. Andrews. Kalman Filtering: Theory and Practice Using MATLAB. John Wiley and Sons, Inc., 2001.
[3] R. H. Shumway and D. S. Stoffer. Time Series Analysis and Its Applications. Springer, 2000.
[4] M. J. Beal, F. Falciani, Z. Ghahramani, C. Rangel, and D. L. Wild. A Bayesian approach to reconstructing genetic regulatory networks with hidden factors. Bioinformatics, 21:349–356, 2005.
[5] A. T. Cemgil and S. J. Godsill. Probabilistic phase vocoder and its application to interpolation of missing values in audio signals. In 13th European Signal Processing Conference, 2005.
[6] H. Valpola and J. Karhunen. An unsupervised ensemble learning method for nonlinear dynamic state-space models. Neural Computation, 14:2647–2692, 2002.
[7] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. Ph.D. thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[8] M. Davy and S. J. Godsill. Bayesian harmonic models for musical signal analysis (with discussion). In J. O. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics VII. Oxford University Press, 2003.
[9] D. J. C.
MacKay. Ensemble learning and evidence maximisation. Unpublished manuscript: www.variational-bayes.org, 1995.
[10] M. Morf and T. Kailath. Square-root algorithms for least-squares estimation. IEEE Transactions on Automatic Control, 20:487–497, 1975.
[11] P. Park and T. Kailath. New square-root smoothing algorithms. IEEE Transactions on Automatic Control, 41:727–732, 1996.
[12] E. Niedermeyer and F. Lopes da Silva. Electroencephalography: Basic Principles, Clinical Applications and Related Fields. Lippincott Williams and Wilkins, 1999.
[13] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11:305–345, 1999.
[14] M. Verhaegen and P. Van Dooren. Numerical aspects of different Kalman filter implementations. IEEE Transactions on Automatic Control, 31:907–917, 1986.
[15] S. Chiappa and D. Barber. Bayesian linear Gaussian state-space models for biosignal decomposition. Signal Processing Letters, 14, 2007.
[16] S. S. Saquib, C. A. Bouman, and K. Sauer. ML parameter estimation for Markov random fields with applications to Bayesian tomography. IEEE Transactions on Image Processing, 7:1029–1044, 1998.
[17] G. Pfurtscheller and F. H. Lopes da Silva. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology, pages 1842–1857, 1999.
Statistical Modeling of Images with Fields of Gaussian Scale Mixtures

Siwei Lyu and Eero P. Simoncelli
Howard Hughes Medical Institute, Center for Neural Science, and Courant Institute of Mathematical Sciences
New York University, New York, NY 10003

Abstract

The local statistical properties of photographic images, when represented in a multi-scale basis, have been described using Gaussian scale mixtures (GSMs). Here, we use this local description to construct a global field of Gaussian scale mixtures (FoGSM). Specifically, we model subbands of wavelet coefficients as a product of an exponentiated homogeneous Gaussian Markov random field (hGMRF) and a second independent hGMRF. We show that parameter estimation for FoGSM is feasible, and that samples drawn from an estimated FoGSM model have marginal and joint statistics similar to wavelet coefficients of photographic images. We develop an algorithm for image denoising based on the FoGSM model, and demonstrate substantial improvements over the current state-of-the-art denoising method based on the local GSM model.

Many successful methods in image processing and computer vision rely on statistical models for images, and it is thus of continuing interest to develop improved models, both in terms of their ability to precisely capture image structures and in terms of their tractability when used in applications. Constructing such a model is difficult, primarily because of the intrinsic high dimensionality of the space of images. Two simplifying assumptions are usually made to reduce model complexity. The first is Markovianity: the density of a pixel conditioned on a small neighborhood is assumed to be independent of the rest of the image. The second assumption is homogeneity: the local density is assumed to be independent of its absolute position within the image. The set of models satisfying both of these assumptions constitutes the class of homogeneous Markov random fields (hMRFs).
Over the past two decades, studies of photographic images represented with multi-scale multi-orientation image decompositions (loosely referred to as “wavelets”) have revealed striking non-Gaussian regularities and inter- and intra-subband dependencies. For instance, wavelet coefficients generally have highly kurtotic marginal distributions [1, 2], and their amplitudes exhibit strong correlations with the amplitudes of nearby coefficients [3, 4]. One model that can capture the non-Gaussian marginal behaviors is a product of non-Gaussian scalar variables [5]. A number of authors have developed non-Gaussian MRF models based on this sort of local description [6, 7, 8], among which the recently developed fields-of-experts model [7] has demonstrated impressive denoising performance (albeit at an extremely high computational cost for learning the model parameters). An alternative model that can capture non-Gaussian local structure is a scale mixture model [9, 10, 11]. An important special case is the Gaussian scale mixture (GSM), which consists of a Gaussian random vector whose amplitude is modulated by a hidden scaling variable. The GSM model provides a particularly good description of local image statistics, and the Gaussian substructure of the model leads to efficient algorithms for parameter estimation and inference. Local GSM-based methods represent the current state-of-the-art in image denoising [12]. The power of GSM models should be substantially improved when extended to describe more than a small neighborhood of wavelet coefficients. To this end, several authors have embedded local Gaussian mixtures into tree-structured MRF models [e.g., 13, 14]. In order to maintain tractability, these models are arranged such that coefficients are grouped in non-overlapping clusters, allowing a graphical probability model with no loops. Despite their global consistency, the artificially imposed cluster boundaries lead to substantial artifacts in applications such as denoising.
In this paper, we use a local GSM as a basis for a globally consistent and spatially homogeneous field of Gaussian scale mixtures (FoGSM). Specifically, the FoGSM is formulated as the product of two mutually independent MRFs: a positive multiplier field obtained by exponentiating a homogeneous Gaussian MRF (hGMRF), and a second hGMRF. We develop a parameter estimation procedure, and show that the model is able to capture important statistical regularities in the marginal and joint wavelet statistics of a photographic image. We apply the FoGSM to image denoising, demonstrating substantial improvement over the previous state-of-the-art results obtained with a local GSM model.

1 Gaussian scale mixtures

A GSM random vector x is formed as the product of a zero-mean Gaussian random vector u and an independent random variable z, as x =_d √z u, where =_d denotes equality in distribution. The density of x is determined by the covariance of the Gaussian vector, Σ, and the density of the multiplier, p_z(z), through the integral

p(x) = ∫_z N_x(0, zΣ) p_z(z) dz ∝ ∫_z (1/√(z|Σ|)) exp(−x^T Σ^{-1} x / 2z) p_z(z) dz. (1)

A key property of GSMs is that z determines the scale of the conditional variance of x given z, which is a Gaussian variable with zero mean and covariance zΣ. In addition, the normalized variable x/√z is a zero-mean Gaussian with covariance matrix Σ. The GSM model has been used to describe the marginal and joint densities of local clusters of wavelet coefficients, both within and across subbands [9], where the embedded Gaussian structure affords simple and efficient computation. This local GSM model has been used for denoising, by independently estimating each coefficient conditioned on its surrounding cluster [12]. This method achieves state-of-the-art performance, despite the fact that treating overlapping clusters as independent does not give rise to a globally consistent statistical model that satisfies all the local constraints.
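The kurtotic marginal behavior of a GSM is easy to reproduce numerically. A minimal sketch for the scalar case, assuming a lognormal multiplier (the prior adopted for the multiplier later in the paper); all names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

u = rng.normal(size=n)             # Gaussian substrate u ~ N(0, 1)
z = np.exp(rng.normal(size=n))     # lognormal multiplier: log z ~ N(0, 1)
x = np.sqrt(z) * u                 # GSM sample: x = sqrt(z) u

def excess_kurtosis(a):
    a = a - a.mean()
    return (a ** 4).mean() / (a ** 2).mean() ** 2 - 3.0

# The mixture is symmetric and zero-mean, but far more kurtotic than a
# Gaussian (analytically, excess kurtosis 3e - 3, roughly 5.15, here).
print(excess_kurtosis(u), excess_kurtosis(x))
```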
2 Fields of Gaussian scale mixtures

In this section, we develop fields of Gaussian scale mixtures (FoGSM) as a framework for modeling wavelet coefficients of photographic images. Analogous to the local GSM model, we use a latent multiplier field to modulate a homogeneous Gaussian MRF (hGMRF). Formally, we define a FoGSM x as the product of two mutually independent MRFs,

x =_d u ⊗ √z, (2)

where u is a zero-mean hGMRF, and z is a field of positive multipliers that control the local coefficient variances. The operator ⊗ denotes element-wise multiplication, and the square root is applied element-wise. Note that x has one-dimensional GSM marginal distributions, while its components have dependencies captured by the MRF structures of u and z. Analogous to the local GSM, when conditioned on z, x is an inhomogeneous GMRF:

p(x|z) ∝ √(|Q_u| / ∏_i z_i) exp(−½ x^T D(√z)^{-1} Q_u D(√z)^{-1} x) = √(|Q_u| / ∏_i z_i) exp(−½ (x ⊘ √z)^T Q_u (x ⊘ √z)), (3)

where Q_u is the inverse covariance matrix of u (also known as the precision matrix), and D(·) denotes the operator that forms a diagonal matrix from an input vector. Note also that the element-wise division of the two fields, x ⊘ √z, yields a hGMRF with precision matrix Q_u. To complete the FoGSM model, we need to specify the structure of the multiplier field z. For tractability, we use another hGMRF as a substrate, and map it into positive values by exponentiation, as was done in [10].

Fig. 1. Decomposition of a subband from image “boat” (left) into the normalized subband u (middle) and the multiplier field z (right, in the logarithm domain). Each image is rescaled individually to fill the full range of grayscale intensities.

To be more specific, we model log(z) as a hGMRF with mean µ and precision matrix Q_z, where the log operator is applied element-wise, from which the density of z follows as:

p_z(z) ∝ (√|Q_z| / ∏_i z_i) exp(−½ (log z − µ)^T Q_z (log z − µ)). (4)
This is a natural extension of the univariate lognormal prior used previously for the scalar multiplier in the local GSM model [12]. The restriction to hGMRFs greatly simplifies computation with FoGSM. In particular, we take advantage of the fact that a 2D hGMRF with circular boundary handling has a sparse block-circulant precision matrix with a generating kernel θ specifying its nonzero elements. A block-circulant matrix is diagonalized by the Fourier transform, and its multiplication with a vector corresponds to convolution with the kernel θ. The diagonalizability with a fixed and efficiently computed transform makes parameter estimation, sampling, and inference with a hGMRF substantially more tractable than with a general MRF. Readers are referred to [15] for a detailed description of hGMRFs.

Parameter estimation: The estimation of the latent multiplier field z and the model parameters (µ, Q_z, Q_u) may be achieved by maximizing log p(x, z; Q_u, Q_z, µ) with an iterative coordinate-ascent method, which is guaranteed to converge. Specifically, based on the statistical dependency structure of the FoGSM model, the following three steps are repeated until convergence:

(i) z^{(t+1)} = argmax_z [log p(x|z; Q_u^{(t)}) + log p(z; Q_z^{(t)}, µ^{(t)})]
(ii) Q_u^{(t+1)} = argmax_{Q_u} log p(x|z^{(t+1)}; Q_u)
(iii) (Q_z^{(t+1)}, µ^{(t+1)}) = argmax_{Q_z,µ} log p(z^{(t+1)}; Q_z, µ) (5)

According to the FoGSM model structure, steps (ii) and (iii) correspond to maximum likelihood estimates of the parameters of the hGMRFs x ⊘ √z^{(t+1)} and log z^{(t+1)}, respectively. Because of this, both steps may be efficiently implemented by exploiting the diagonalization of the precision matrices with 2D Fourier transforms [15]. Step (i) in (5) may be implemented with conjugate gradient ascent [16]. To simplify the description and computation, we introduce a new variable for the element-wise inverse square root of the multiplier: s = 1 ⊘ √z.
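Before continuing, the block-circulant property invoked above can be verified directly on a small grid: applying the precision matrix is a circular convolution with the generating kernel θ, and the matrix's eigenvalues are exactly the 2D DFT of θ. A sketch with an assumed symmetric 5-point stencil (the kernel values are illustrative):

```python
import numpy as np

n = 8

# Generating kernel theta: symmetric, diagonally dominant 5-point stencil,
# so the induced block-circulant matrix Q is a valid positive definite precision.
theta = np.zeros((n, n))
theta[0, 0] = 4.0
theta[1, 0] = theta[-1, 0] = theta[0, 1] = theta[0, -1] = -0.9

def apply_Q(v, theta):
    """Multiply the field v by the block-circulant matrix generated by theta,
    i.e. circularly convolve v with the kernel (via the 2D FFT)."""
    return np.real(np.fft.ifft2(np.fft.fft2(theta) * np.fft.fft2(v)))

# Build Q explicitly, column by column, from its action on basis fields ...
Q = np.column_stack([apply_Q(np.eye(n * n)[:, k].reshape(n, n), theta).ravel()
                     for k in range(n * n)])

# ... and check that its eigenvalues are exactly the 2D DFT of the kernel.
eigs = np.sort(np.linalg.eigvalsh(Q))
dft = np.sort(np.real(np.fft.fft2(theta)).ravel())
print(np.allclose(eigs, dft))   # block-circulant <=> diagonalized by the DFT
```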
The likelihood in (3) is then changed to:

p(x|s) ∝ ∏_i s_i exp(−½ (x ⊗ s)^T Q_u (x ⊗ s)) = ∏_i s_i exp(−½ s^T D(x) Q_u D(x) s). (6)

The joint density of s is obtained from (4), using the relation between densities of transformed variables, as

p(s) ∝ (1/∏_i s_i) exp(−½ (2 log s + µ)^T Q_z (2 log s + µ)). (7)

Combining (6) and (7), step (i) in (5) is equivalent to computing ŝ ≜ argmax_s [log p(x|s; Q_u) + log p(s; Q_z, µ)], which is further simplified into:

argmin_s [½ s^T D(x) Q_u D(x) s + ½ (2 log s + µ)^T Q_z (2 log s + µ)], (8)

and the optimal ẑ is then recovered as ẑ = 1 ⊘ (ŝ ⊗ ŝ). We then optimize (8) with conjugate gradient [16]. Specifically, the negative gradient of the objective function in (8) with respect to s is

−∂ log[p(x|s) p(s)]/∂s = D(x) Q_u D(x) s + 2 D(s)^{-1} Q_z (2 log s + µ) = x ⊗ (θ_u ⋆ (x ⊗ s)) + 2 (θ_z ⋆ (2 log s + µ)) ⊘ s,

and the multiplication of any vector h by the Hessian matrix can be computed as:

−∂² log[p(x|s) p(s)]/∂s² h = x ⊗ (θ_u ⋆ (x ⊗ h)) + 4 (θ_z ⋆ (h ⊘ s)) ⊘ s − 2 (θ_z ⋆ (2 log s + µ)) ⊗ h ⊘ (s ⊗ s).

Both operations can be expressed entirely in terms of element-wise operations (⊘ and ⊗) and 2D convolutions (⋆) with the generating kernels of the two precision matrices, θ_u and θ_z, which allows for efficient implementation.

Fig. 2. Empirical marginal log-distributions of coefficients from a multi-scale decomposition of photographic images (blue dot-dashed line), synthesized FoGSM samples from the same subband (red solid line), and a Gaussian with the same standard deviation (red dashed line); panels correspond to the images Barbara, boat, house and peppers.

3 Modeling photographic images

We have applied the FoGSM model to subbands of a multi-scale image representation known as a steerable pyramid [17]. This decomposition is a tight frame, constructed from oriented multi-scale derivative operators, and is overcomplete by a factor of 4K/3, where K is the number of orientation bands.
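The gradient expression above uses only element-wise operations and circular 2D convolutions, which makes it cheap to sanity-check against finite differences on a toy field. A sketch (the kernels θ_u, θ_z, the subband x and the mean field µ are illustrative assumptions, not values from the paper):

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)
n = 8

# Assumed symmetric generating kernels (diagonally dominant stencils, so both
# act as valid precision operators), plus a toy subband x and mean field mu.
theta_u = np.array([[0.0, -0.2, 0.0], [-0.2, 2.0, -0.2], [0.0, -0.2, 0.0]])
theta_z = np.array([[0.0, -0.1, 0.0], [-0.1, 1.0, -0.1], [0.0, -0.1, 0.0]])
x = rng.normal(size=(n, n))
mu = 0.1 * np.ones((n, n))

conv = lambda k, v: convolve(v, k, mode='wrap')   # circular 2D convolution

def objective(s):
    """Objective of (8): 0.5 s'D(x)Qu D(x)s + 0.5 (2 log s + mu)'Qz(2 log s + mu)."""
    a, b = x * s, 2.0 * np.log(s) + mu
    return 0.5 * np.sum(a * conv(theta_u, a)) + 0.5 * np.sum(b * conv(theta_z, b))

def gradient(s):
    """Its gradient, using only element-wise products/divisions and convolutions."""
    return x * conv(theta_u, x * s) + 2.0 * conv(theta_z, 2.0 * np.log(s) + mu) / s

# Central finite-difference check at one coordinate of a random positive s.
s = np.exp(0.1 * rng.normal(size=(n, n)))
i, j, eps = 3, 4, 1e-5
e = np.zeros((n, n)); e[i, j] = eps
fd = (objective(s + e) - objective(s - e)) / (2 * eps)
print(abs(fd - gradient(s)[i, j]))   # tiny: analytic and numeric gradients agree
```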
Note that the marginal and joint statistics we describe are not specific to this decomposition, and are similar for other multi-scale oriented representations. We fit a FoGSM model to each subband of a decomposed photographic image, using the algorithms described in the previous section. For the precision matrices Q_u and Q_z, we assumed a 5 × 5 Markov neighborhood (corresponding to a 5 × 5 convolution kernel), which was loosely chosen to optimize the trade-off between accuracy and overfitting. Figure 1 shows the result of fitting a FoGSM model to an example subband from the “boat” image (left panel). The subband is decomposed into the product of the u field (middle panel) and the z field (right panel, in the logarithm domain), along with model parameters Q_u, µ and Q_z (not shown). Visually, the changing spatial variances are represented in the estimated log z field, and the estimated u is much more homogeneous than the original subband, with a marginal distribution close to Gaussian.¹ However, the log z field still has a non-Gaussian marginal distribution and is spatially inhomogeneous, suggesting limitations of FoGSM for modeling photographic image wavelet coefficients (see Discussion). The statistical dependencies captured by the FoGSM model can be further revealed by examining marginal and joint statistics of samples synthesized with the estimated model parameters. A sample from FoGSM can be formed by multiplying samples of u and √z. The former is obtained by sampling from the hGMRF u, and the latter by element-wise exponentiation followed by an element-wise square root of a sample of the hGMRF log z. This procedure is again efficient for FoGSM due to the use of hGMRFs as building blocks [15].

Marginal distributions: We start by comparing the marginal distributions of the samples and the original subband.
Figure 2 shows empirical histograms in the log domain of a particular subband from four different photographic images (blue dot-dashed line), and those of the synthesized samples of FoGSM models learned from each corresponding subband (red solid line). For comparison, a Gaussian with the same standard deviation as the image subband is also displayed (red dashed line). Note that the synthesized samples have conspicuous non-Gaussian characteristics similar to the real subbands, exemplified by the high peak and heavy tails in the marginal distributions. On the other hand, they are typically less kurtotic than the real subbands. We believe this arises from the imprecise Gaussian approximation of log z (see Discussion).

¹ This “Gaussianizing” behavior was first noted in photographic images by Ruderman [18], who observed that image derivative measurements that were normalized by a local estimate of their standard deviation had approximately Gaussian marginal distributions.

Fig. 3. Examples of empirically observed distributions of wavelet coefficient pairs, compared with distributions from synthesized samples of the FoGSM model; pairs are separated spatially (close ∆ = 1, near ∆ = 4, far ∆ = 32), across orientation, and across scale. See text for details.

Joint distributions: In addition to one-dimensional marginal statistics, the FoGSM model is capable of capturing the joint behavior of wavelet coefficients. As described in [4, 9], wavelet coefficients of photographic images present non-Gaussian dependencies. Shown in the first and third rows of Fig. 3 are empirical joint and conditional histograms for one subband of the “boat” image, for five pairs of coefficients, corresponding to basis functions with spatial separations of ∆ = {1, 4, 32} samples, two orthogonal orientations and two adjacent scales. Contour lines in the joint histograms are drawn at equal intervals of log probability.
Intensities in the conditional histograms correspond to probability, except that each column is independently rescaled to fill the full range of intensity. For a pair of adjacent coefficients, we observe an elliptical joint distribution and a “bow-tie” shaped conditional distribution. The latter is indicative of strong non-Gaussian dependencies. For coefficients that are distant, the dependency becomes weaker and the corresponding joint and conditional histograms become more separable, as would be expected for two independent random variables. Random samples drawn from a FoGSM model, with parameters fitted to the corresponding subband, have statistical characteristics consistent with the general description of wavelet coefficients of photographic images. Shown in the second and fourth rows of Fig. 3 are the joint and conditional histograms of synthesized samples from the FoGSM model estimated from the same subband as in the first and third rows. Note that the joint and conditional histograms of the synthesized samples show a similar transition of spatial dependencies as the separation increases (columns 1, 2 and 3), suggesting that the FoGSM accounts well for pairwise joint dependencies of coefficients over a full range of spatial separations. On the other hand, the dependencies between subbands of different orientations and scales are not properly modeled by FoGSM (columns 4 and 5). This is especially true for subbands at different scales, which exhibit strong dependencies. The current FoGSM model does not exhibit those dependencies, since only spatial neighbors are used in the 2D hGMRFs (see Discussion).

Fig. 4. Denoising results using the local GSM [12] and FoGSM on the “boat” image: original image; noisy image (σ = 50, PSNR = 14.15 dB); GSM-BLS (PSNR = 26.34 dB); FoGSM (PSNR = 27.01 dB). Performance is evaluated in peak signal-to-noise ratio (PSNR), 20 log₁₀(255/σ_e), where σ_e is the standard deviation of the error.
4 Application to image denoising

Let y = x + w be a wavelet subband of an image that has been corrupted with white Gaussian noise of known variance. In an overcomplete wavelet domain such as the steerable pyramid, the white Gaussian noise is transformed into correlated Gaussian noise w ∼ N_w(0, Σ_w), whose covariance Σ_w can be derived from the basis functions of the pyramid transform. With FoGSM as a prior over x, commonly used denoising methods involve expensive high-dimensional integration: for instance, the maximum a posteriori estimate, x̂_MAP = argmax_x log p(x|y), requires a high-dimensional integral over z, and the Bayesian least squares estimate, x̂_BLS = E(x|y), requires a double high-dimensional integral over x and z. Although it is possible to optimize these criteria using Markov chain Monte Carlo sampling or other approximations, we instead develop a more efficient deterministic algorithm that takes advantage of the hGMRF structure in the FoGSM model. Specifically, we compute

(x̂, ẑ, Q̂_u, Q̂_z, µ̂) = argmax_{x,z,Q_u,Q_z,µ} log p(x, z|y; Q_u, Q_z, µ) (9)

and take x̂ as the optimal denoised subband. Note that the model parameters are learned within the inference process rather than in a separate parameter-learning step. This strategy, known as partial optimal solution [19], greatly reduces the computational complexity. We optimize (9) with coordinate ascent, iterating between maximizing each of (x, z, Q_u, Q_z, µ) while fixing the others. With fixed estimates of (z, Q_u, Q_z, µ), the optimization of x is

argmax_x log p(x, z|y; Q_u, Q_z, µ) = argmax_x [log p(x|z, y; Q_u, Q_z, µ) + log p(z|y; Q_u, Q_z, µ)],

which reduces to argmax_x log p(x|z, y; Q_u), the second term being independent of x and therefore dropped from the optimization. Given the Gaussian structure of x given z, this step is equivalent to a Wiener filter (linear in y).
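On a toy problem this Wiener-filter step is a single linear solve. The following dense sketch (all quantities are illustrative assumptions, using a 1-D "subband" so the algebra stays small) also checks the precision-domain update against the equivalent covariance-domain form:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16

# Assumed ingredients: precision Qu of the substrate u, a fixed multiplier
# field z, and (here, white) noise covariance Sigma_w.
Qu = 2.0 * np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)
z = np.exp(rng.normal(scale=0.3, size=n))
Sigma_w = 0.25 * np.eye(n)

# Precision of x given z: D(1/sqrt(z)) Qu D(1/sqrt(z)).
Dz = np.diag(1.0 / np.sqrt(z))
Qxz = Dz @ Qu @ Dz

# Simulate x ~ N(0, Qxz^-1) and a noisy observation y = x + w.
x = np.linalg.cholesky(np.linalg.inv(Qxz)) @ rng.normal(size=n)
y = x + np.linalg.cholesky(Sigma_w) @ rng.normal(size=n)

# Wiener filter: posterior mean of x given (y, z) -- linear in y.
Sw_inv = np.linalg.inv(Sigma_w)
x_hat = np.linalg.solve(Qxz + Sw_inv, Sw_inv @ y)

# Equivalent covariance-domain form of the same estimate.
C = np.linalg.inv(Qxz)
x_hat2 = C @ np.linalg.solve(C + Sigma_w, y)
print(np.allclose(x_hat, x_hat2))
```

In the paper's setting Qxz is block-circulant given z, so this solve would be carried out with FFTs rather than dense linear algebra.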
Fixing (x, Q_u, Q_z, µ), the optimization of z is

argmax_z log p(x, z|y; Q_u, Q_z, µ) = argmax_z [log p(y|x, z; Q_u) + log p(x, z; Q_u, Q_z, µ) − log p(y; Q_u, Q_z, µ)],

which further reduces to argmax_z log p(x, z; Q_u, Q_z, µ). Here, the first term was dropped since y is independent of z when conditioned on x; the last term was also dropped since it, too, is independent of z. Therefore, optimizing z given (x, Q_u, Q_z, µ) is equivalent to the first step of the algorithm in Section 2, which can be implemented with efficient gradient descent. Finally, given (x, z), the FoGSM model parameters (Q_u, Q_z, µ) are estimated from the hGMRFs x ⊘ √z^{(t+1)} and log z^{(t+1)}, similar to the second and third steps of the algorithm in Section 2. However, to reduce the overall computation time, instead of a complete maximum likelihood estimation, these parameters are estimated with a maximum pseudo-likelihood procedure [20], which finds the parameters maximizing the product of all conditional distributions (which are 1D Gaussians in the GMRF case), followed by a projection onto the subspace of FoGSM parameters that results in positive definite precision matrices. We tested this denoising method on a standard set of test images [12]. The noise-corrupted images were first decomposed into a steerable pyramid with multiple levels (5 levels for a 512 × 512 image and 4 levels for a 256 × 256 image) and 8 orientations. We assumed a FoGSM model for each subband, with a 5 × 5 neighborhood for the field u and a 1 × 1 neighborhood for the field log z. These sizes were chosen to provide a reasonable combination of performance and computational efficiency. We then estimated the optimal x with the algorithm described previously, with the initial values of x and z computed from subbands denoised with the local GSM model [12]. Shown in Fig. 4 is an example of denoising the “boat” image corrupted with simulated additive white Gaussian noise of strength σ = 50, corresponding to a peak signal-to-noise ratio (PSNR) of 14.15 dB.
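The PSNR figures quoted throughout follow directly from the stated definition; for instance, noise of standard deviation 50 on an 8-bit image gives the 14.15 dB above:

```python
import math

def psnr(sigma_e, peak=255.0):
    """Peak signal-to-noise ratio in dB: 20 log10(peak / sigma_e)."""
    return 20.0 * math.log10(peak / sigma_e)

print(round(psnr(50.0), 2))   # -> 14.15
```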
We compare this with the local GSM method in [12], which, assuming a local GSM model for the neighborhood consisting of 3 × 3 spatial neighbors plus the parent in the next coarsest scale, computes a Bayes least squares estimate of each coefficient conditioned on its surrounding neighborhood. The FoGSM denoising achieves a substantial improvement (+0.68 dB in PSNR) and is seen to exhibit better contrast and continuation of oriented features (see Fig. 4). On the other hand, FoGSM introduces some noticeable artifacts in low-contrast areas, caused by numerical instability at locations with small z. We find that the improvement in PSNR is consistent across photographic images and noise levels, as reported in Table 1. But even with a restricted neighborhood for the multiplier field, this PSNR improvement does come at a substantial computational cost. As a rough indication, running on a PowerPC G5 workstation with a 2.3 GHz processor and 16 GB of RAM, using unoptimized MATLAB (version R14) code, denoising a 512 × 512 image takes on average 4.5 hours (averaged over 5 images), and denoising a 256 × 256 image takes on average 2.4 hours (averaged over 2 images), to a convergence precision producing the reported results. Our preliminary investigation indicates that the slow running time is mainly due to the nature of coordinate ascent and the landscape of (9), which requires many iterations to converge.

5 Discussion

We have introduced fields of Gaussian scale mixtures as a flexible and efficient tool for modeling the statistics of wavelet coefficients of photographic images. We developed a feasible (although admittedly computationally costly) parameter estimation method, and showed that samples synthesized from the fitted FoGSM model are able to capture structures in the marginal and joint wavelet statistics of photographic images.
Preliminary results of applying FoGSM to image denoising indicate substantial improvements over the state-of-the-art methods based on the local GSM model. Although FoGSM has a structure similar to the local scale mixture models [9, 10], there is a fundamental difference between them. In FoGSM, hGMRF structures are enforced in u and log z, while the local scale mixture models impose minimal statistical structure on these variables. Because of this, our model easily extends to images of arbitrary size, while the local scale mixture models are essentially confined to describing small image patches (the curse of dimensionality and the increase in computational cost prevent one from scaling the patch size up). On the other hand, the close relation to Gaussian MRFs makes the analysis and computation of FoGSM significantly easier than for other non-Gaussian MRF-based image models [6, 7, 5]. We envision, and are currently working on, a number of model improvements. First, the model should benefit from the introduction of more general Markov neighborhoods, including wavelet coefficients from subbands at other scales and orientations [4, 12], since the current model is clearly not accounting for these dependencies (see Fig. 3). Second, the log transformation used to derive the multiplier field from a hGMRF is somewhat ad hoc, and we believe that substituting another nonlinear transformation (e.g., a power law [14]) might lead to a more accurate description of the image statistics. Third, the current denoising method estimates model parameters during the process of denoising, which produces image-adaptive model parameters. We are exploring the possibility of using a set of generic model parameters learned a priori on a large set of photographic images, so that a generic statistical model for all photographic images, based on FoGSM, can be built. Finally, there exist residual inhomogeneous structures in the log z field (see Fig.
1) that can likely be captured by explicitly incorporating local orientation [21] or phase into the model. Finding tractable models and algorithms for handling such circular variables is challenging, but we believe their inclusion will result in substantial improvements in modeling and in denoising performance.

σ/PSNR      Barbara        barco          boat           fingerprint
10/28.13    35.01 (34.01)  35.05 (34.42)  34.12 (33.58)  33.28 (32.45)
25/20.17    30.10 (29.07)  30.44 (29.73)  30.03 (29.34)  28.45 (27.44)
50/14.15    26.40 (25.45)  27.36 (26.63)  27.01 (26.35)  25.11 (24.13)
100/8.13    23.01 (22.61)  24.44 (23.84)  24.20 (23.79)  21.78 (21.21)

σ/PSNR      Flintstones    house          Lena           peppers
10/28.13    32.47 (31.78)  35.63 (35.27)  35.94 (35.60)  34.38 (33.73)
25/20.17    28.29 (27.48)  31.64 (31.32)  32.11 (31.70)  29.78 (29.18)
50/14.15    24.82 (24.02)  28.51 (28.23)  29.12 (28.62)  26.43 (25.93)
100/8.13    21.24 (20.49)  25.33 (25.31)  26.12 (25.77)  23.17 (22.80)

Table 1. Denoising results with FoGSM on different images and different noise levels. Each entry is the PSNR (20 log10(255/σe), where σe is the standard deviation of the error) of the denoised image; the value in parentheses is the PSNR of the same image denoised with the local GSM model [12].

References
[1] P. J. Burt. Fast filter transforms for image processing. Comp. Graph. Image Proc., 16:20–51, 1981.
[2] D. J. Field. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am., 4(12):2379–2394, 1987.
[3] B. Wegmann and C. Zetzsche. Statistical dependencies between orientation filter outputs used in human vision based image code. In Proc. Visual Comm. and Image Proc., volume 1360, pages 909–922, 1990.
[4] R. W. Buccigrossi and E. P. Simoncelli. Image compression via joint statistical characterization in the wavelet domain. IEEE Trans. on Image Proc., 8(12):1688–1701, 1999.
[5] Y. W. Teh, M. Welling, S. Osindero, and G. E. Hinton. Energy-based models for sparse overcomplete representations. J. of Machine Learning Res., 4:1235–1260, 2003.
[6] S. C. Zhu, Y. Wu, and D. Mumford. Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling. Int'l. J. Comp. Vis., 27(2):107–126, 1998.
[7] S. Roth and M. J. Black. Fields of experts: a framework for learning image priors. In IEEE Conf. on Comp. Vis. and Pat. Rec., volume 2, pages 860–867, 2005.
[8] P. Gehler and M. Welling. Products of "edge-perts". In Adv. in Neural Info. Proc. Systems (NIPS*05). MIT Press, 2006.
[9] M. J. Wainwright and E. P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In Adv. Neural Info. Proc. Sys. (NIPS*99), volume 12, pages 855–861, May 2000.
[10] Y. Karklin and M. S. Lewicki. A hierarchical Bayesian model for learning non-linear statistical regularities in non-stationary natural signals. Neural Computation, 17(2):397–423, 2005.
[11] A. Hyvärinen, P. O. Hoyer, and M. Inki. Topographic ICA as a model of natural image statistics. In the First IEEE Int'l. Workshop on Bio. Motivated Comp. Vis., London, UK, 2000.
[12] J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. on Image Proc., 12(11):1338–1351, 2003.
[13] J. Romberg, H. Choi, and R. G. Baraniuk. Bayesian tree-structured image modeling using wavelet domain hidden Markov models. IEEE Trans. on Image Proc., 10(7):303–347, 2001.
[14] M. J. Wainwright, E. P. Simoncelli, and A. S. Willsky. Random cascades on wavelet trees and their use in modeling and analyzing natural imagery. Appl. and Comp. Harm. Ana., 11(1):89–123, 2001.
[15] H. Rue and L. Held. Gaussian Markov Random Fields: Theory And Applications. Monographs on Statistics and Applied Probability. Chapman and Hall/CRC, 2005.
[16] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes. Cambridge, 2nd edition, 2002.
[17] E. P. Simoncelli and W. T. Freeman. The steerable pyramid: A flexible architecture for multi-scale derivative computation. In IEEE Int'l. Conf. on Image Proc., volume 3, pages 444–447, 1995.
[18] D. Ruderman. The statistics of natural images. Network: Comp. in Neural Sys., 5:598–605, 1994.
[19] M. Figueiredo and J. Leitão. Unsupervised image restoration and edge location using compound Gauss-Markov random fields and the MDL principle. IEEE Trans. on Image Proc., 6(8):1089–1122, 1997.
[20] J. Besag. On the statistical analysis of dirty pictures. J. of the Royal Stat. Soc., Series B, 48:259–302, 1986.
[21] D. K. Hammond and E. P. Simoncelli. Image denoising with an orientation-adaptive Gaussian scale mixture model. In Proc. 13th IEEE Int'l. Conf. on Image Proc., pages 1433–1436, October 2006.
Near-Uniform Sampling of Combinatorial Spaces Using XOR Constraints Carla P. Gomes Ashish Sabharwal Bart Selman Department of Computer Science Cornell University, Ithaca NY 14853-7501, USA {gomes,sabhar,selman}@cs.cornell.edu ∗ Abstract We propose a new technique for sampling the solutions of combinatorial problems in a near-uniform manner. We focus on problems specified as a Boolean formula, i.e., on SAT instances. Sampling for SAT problems has been shown to have interesting connections with probabilistic reasoning, making practical sampling algorithms for SAT highly desirable. The best current approaches are based on Markov Chain Monte Carlo methods, which have some practical limitations. Our approach exploits combinatorial properties of random parity (XOR) constraints to prune away solutions near-uniformly. The final sample is identified amongst the remaining ones using a state-of-the-art SAT solver. The resulting sampling distribution is provably arbitrarily close to uniform. Our experiments show that our technique achieves a significantly better sampling quality than the best alternative. 1 Introduction We present a new method, XORSample, for uniformly sampling from the solutions of hard combinatorial problems. Although our method is quite general, we focus on problems expressed in the Boolean Satisfiability (SAT) framework. Our work is motivated by the fact that efficient sampling for SAT can open up a range of interesting applications in probabilistic reasoning [6, 7, 8, 9, 10, 11]. There has also been a growing interest in combining logical and probabilistic constraints as in the work of Koller, Russell, Domingos, Bacchus, Halpern, Darwiche, and many others (see e.g. statistical relational learning and Markov logic networks [1]), and a recently proposed Markov logic system for this task uses efficient SAT sampling as its core reasoning mechanism [2]. 
Typical approaches for sampling from combinatorial spaces are based on Markov Chain Monte Carlo (MCMC) methods, such as the Metropolis algorithm and simulated annealing [3, 4, 5]. These methods construct a Markov chain with a predefined stationary distribution. One can draw samples from the stationary distribution by running the Markov chain for a sufficiently long time. Unfortunately, on many combinatorial problems, the time taken by the Markov chain to reach its stationary distribution scales exponentially with the problem size. MCMC methods can also be used to find (globally optimal) solutions to combinatorial problems. For example, simulated annealing (SA) uses the Boltzmann distribution as the stationary distribution. By lowering the temperature parameter to near zero, the distribution becomes highly concentrated around the minimum energy states, which correspond to the solutions of the combinatorial problem under consideration. SA has been successfully applied to a number of combinatorial search problems. However, many combinatorial problems, especially those with intricate constraint structure, are beyond the reach of SA and related MCMC methods. Not only does problem structure make reaching the stationary distribution take prohibitively long, even reaching a single (optimal) solution is often infeasible. Alternative combinatorial search techniques have been developed that are much more effective at finding solutions. These methods generally exploit clever search space pruning techniques, which quickly focus the search on small, but promising, parts of the overall combinatorial space. As a consequence, these techniques tend to be highly biased, and sample the set of solutions in an extremely non-uniform way. (Many are in fact deterministic and will only return one particular solution.) ∗This work was supported by the Intelligent Information Systems Institute (IISI) at Cornell University (AFOSR grant F49620-01-1-0076) and DARPA (REAL grant FA8750-04-2-0216).
In this paper, we introduce a general probabilistic technique for obtaining near-uniform samples from the set of all (globally optimal) solutions of combinatorial problems. Our method can use any state-of-the-art specialized combinatorial solver as a subroutine, without requiring any modifications to the solver. The solver can even be deterministic. Most importantly, the quality of our sampling method is not affected by the possible bias of the underlying specialized solver — all we need is a solver that is good at finding some solution or proving that none exists. We provide theoretical guarantees for the sampling quality of our approach. We also demonstrate the practical feasibility of our approach by sampling near-uniformly from instances of hard combinatorial problems. As mentioned earlier, to make our discussion more concrete, we will discuss our method in the context of SAT. In the SAT problem, we have a set of logical constraints on a set of Boolean (True/False) variables. The challenge is to find a setting of the variables such that all logical constraints are satisfied. SAT is the prototypical NP-complete problem, and quite likely the most widely studied combinatorial problem in computer science. There have been dramatic advances in recent years in the state-of-the-art of SAT solvers [e.g. 12, 13, 14]. Current solvers are able to solve problems with millions of variables and constraints. Many practical combinatorial problems can be effectively translated into SAT. As a consequence, one of the current most successful approaches to solving hard computational problems, arising in, e.g., hardware and software verification and planning and scheduling, is to first translate the problem into SAT, and then use a state-of-the-art SAT solver to find a solution (or show that it does not exist). As stated above, these specialized solvers derive much of their power from quickly focusing their search on a very small part of the combinatorial space. 
Many SAT solvers are deterministic, but even when the solvers incorporate some randomization, solutions will be sampled in a highly non-uniform manner. The central idea behind our approach can be summarized as follows. Assume for simplicity that our original SAT instance on n Boolean variables has 2^s solutions or satisfying assignments. How can we sample uniformly at random from the set of solutions? We add special randomly generated logical constraints to our SAT problem. Each random constraint is constructed in such a way that it rules out any given truth assignment with probability exactly 1/2. Therefore, in expectation, after adding s such constraints, we will have a SAT instance with exactly one solution.1 We then use a SAT solver to find the remaining satisfying assignment and output this as our first sample. We can repeat this process with a new set of s randomly generated constraints and in this way obtain another random solution. Note that to output each sample, we can use whatever off-the-shelf SAT solver is available, because all it needs to do is find the single remaining assignment.2 The randomization in the added constraints will guarantee that the assignment is selected uniformly at random. How do we implement this approach? For our added constraints, we use randomly generated parity or "exclusive-or" (XOR) constraints. In recent work, we introduced XOR constraints for the problem of counting the number of solutions using MBound [15]. Although the building blocks of MBound and XORSample are the same, this work relies much more heavily on the properties of XOR constraints, namely, pairwise and even 3-wise independence. As we will discuss below, an XOR constraint eliminates any given truth assignment with probability 1/2, and therefore, in expectation, cuts the set of satisfying assignments in half. For this expected behavior to hold reliably, the elimination of each assignment should ideally be fully independent of the elimination of other assignments.
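As a quick sanity check of this 1/2-elimination property, the following sketch (illustrative Python, not the authors' code; the helper names random_xor and satisfies are our own) draws XOR constraints from the distribution X(n, 1/2) defined in Section 2 and measures how often a fixed assignment survives:

```python
import random

def random_xor(n, q=0.5):
    """Draw an XOR constraint from X(n, q): each variable is included
    independently with probability q, the parity constant 1 with prob 1/2."""
    variables = {i for i in range(n) if random.random() < q}
    const = random.random() < 0.5
    return variables, const

def satisfies(sigma, xor):
    """sigma is a tuple of 0/1 values; the constraint holds iff the parity
    of the selected bits (plus the optional constant) is odd."""
    variables, const = xor
    return (sum(sigma[i] for i in variables) + int(const)) % 2 == 1

# Empirically, a fixed assignment survives a random XOR about half the time.
random.seed(0)
sigma = (1, 0, 1, 1, 0, 0, 1, 0)
trials = 20000
hits = sum(satisfies(sigma, random_xor(8)) for _ in range(trials))
print(hits / trials)  # ~0.5
```

The same 1/2 survival rate holds for every assignment, which is exactly why s added constraints shrink an expected 2^s solutions to one.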
Unfortunately, as far as is known, there are no compact (polynomial size) logical constraints that can achieve such complete independence. However, XOR constraints guarantee at least pairwise independence, i.e., if we know that an XOR constraint C eliminates assignment σ1, this provides no information as to whether C will remove any other assignment σ2. Remarkably, as we will see, such pairwise independence already leads to near-uniform sampling. Our sampling approach is inspired by earlier work in computational complexity theory by Valiant and Vazirani [16], who considered the question of whether having one or more assignments affects the hardness of combinatorial problems. They showed that, in essence, the number of solutions should not affect the hardness of the problem instances in the worst case [16]. This was received as a negative result because it shows that finding a solution to a Unique SAT problem (a SAT instance that is guaranteed to have at most one solution) is not any easier than finding a solution to an arbitrary SAT instance. Our sampling strategy turns this line of research into a positive direction by showing how a standard SAT solver, tailored to finding just one solution of a SAT problem, can now be used to sample near-uniformly from the set of solutions of an arbitrary SAT problem. In addition to introducing XORSample and deriving theoretical guarantees on the quality of the samples it generates, we also provide an empirical validation of our approach. 1 Of course, we don't know the true value of s. In practice, we use a binary-style search to obtain a rough estimate. As we will see, our algorithms work correctly even with over- and under-estimates for s. 2 The practical feasibility of our approach exploits the fact that current SAT solvers are very effective in finding such truth assignments in many real-world domains.
One question that arises is whether the state-of-the-art SAT solvers will perform well on problem instances with added XOR (or parity) constraints. Fortunately, as our experiments show, a careful addition of such constraints generally does not degrade the performance of the solvers. In fact, the addition of XOR constraints can be beneficial since the constraints lead to additional propagation that can be exploited by the solvers.3 Our experiments show that we can effectively sample near-uniformly from hard practical combinatorial problems. In comparison with the best current alternative method on such instances, our sampling quality is substantially better. 2 Preliminaries For the rest of this paper, fix the set of propositional variables in all formulas to be V, |V| = n. A variable assignment σ : V → {0,1} is a function that assigns a value in {0,1} to each variable in V. We may think of the value 0 as FALSE and the value 1 as TRUE. We will often abuse notation and write σ(i) for valuations of entities i ∉ V when the intended meaning is either already defined or is clear from the context. In particular, σ(1) = 1 and σ(0) = 0. When σ(i) = 1, we say that σ satisfies i. For x ∈ V, ¬x denotes the corresponding negated variable; σ(¬x) = 1 − σ(x). Let F be a formula over variables V. σ(F) denotes the valuation of F under σ. If σ satisfies F, i.e., σ(F) = 1, then σ is a model, solution, or satisfying assignment for F. Our goal in this paper is to sample uniformly from the set of all solutions of a given formula F. An XOR constraint D over variables V is the logical "xor" or parity of a subset of V ∪ {1}; σ satisfies D if it satisfies an odd number of elements in D. The value 1 allows us to express even parity. For instance, D = {a,b,c,1} represents the xor constraint a ⊕ b ⊕ c ⊕ 1, which is TRUE when an even number of a, b, c are TRUE. Note that it suffices to use only positive variables. E.g., ¬a ⊕ b ⊕ ¬c and ¬a ⊕ b are equivalent to D = {a,b,c} and D = {a,b,1}, respectively.
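These parity semantics are easy to check mechanically. The sketch below (illustrative Python, not from the paper; xor_sat is our own helper name) verifies both worked examples by enumerating all assignments:

```python
from itertools import product

def xor_sat(sigma, D):
    """σ satisfies D iff it satisfies an odd number of elements of D.
    sigma maps variable names to 0/1; the constant 1 always counts as satisfied."""
    return sum(1 if x == 1 else sigma[x] for x in D) % 2 == 1

# D = {a, b, c, 1} is TRUE exactly when an even number of a, b, c are TRUE.
for a, b, c in product((0, 1), repeat=3):
    sigma = {'a': a, 'b': b, 'c': c}
    assert xor_sat(sigma, {'a', 'b', 'c', 1}) == ((a + b + c) % 2 == 0)

# The negated form ¬a ⊕ b is equivalent to the positive-variable set {a, b, 1}.
for a, b in product((0, 1), repeat=2):
    assert (((1 - a) + b) % 2 == 1) == xor_sat({'a': a, 'b': b}, {'a', 'b', 1})

print("parity semantics check passed")
```

Representing ¬x by flipping the parity constant is what lets the algorithms below work with positive variables only.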
Our focus will be on formulas which are a logical conjunction of a formula in Conjunctive Normal Form (CNF) and some XOR constraints. In all our experiments, XOR constraints are translated into CNF using additional variables so that the full formula can be fed directly to standard (CNF-based) SAT solvers. We will need basic concepts from linear algebra. Let F2 denote the field of two elements, 0 and 1, and F2^n the vector space of dimension n over F2. An assignment σ can be thought of as an element of F2^n. Similarly, an XOR constraint D can be seen as a linear constraint a1x1 + a2x2 + ... + anxn + b = 1, where ai, b ∈ {0,1}, + denotes addition modulo 2 for F2, ai = 1 iff D has variable i, and b = 1 iff D has the parity constant 1. In this setting, we can talk about linear transformations of F2^n as well as linear independence of σ, σ′ ∈ F2^n (see standard texts for details). We will use two properties: every linear transformation maps the all-zeros vector to itself, and there exists a linear transformation that maps any k linearly independent vectors to any other k linearly independent vectors. Consider the set X of all XOR constraints over V. Since an XOR constraint is a subset of V ∪ {1}, |X| = 2^(n+1). Our method requires choosing XOR constraints from X at random. Let X(n,q) denote the probability distribution over X defined as follows: select each v ∈ V independently at random with probability q and include the constant 1 independently with probability 1/2. This produces XORs of average length nq. In particular, note that any two complementary XOR constraints involving the same subset of V (e.g., c ⊕ d and c ⊕ d ⊕ 1) are chosen with the same probability irrespective of q. Such complementary XOR constraints have the simple but useful property that any assignment σ satisfies exactly one of them. Finally, when the distribution X(n,1/2) is used, every XOR constraint in X is chosen with probability 2^(−(n+1)).
3 Note that there are certain classes of structured instances based on parity constraints that are designed to be hard for SAT solvers [17]. Our augmented problem instances appear to behave quite differently from these specially constructed instances because of the interaction between the constraints in the original instance and the added random parity constraints. We will be interested in random variables which are sums of indicator random variables: Y = ∑σ Yσ. Linearity of expectation says that E[Y] = ∑σ E[Yσ]. When the various Yσ are pairwise independent, i.e., knowing Yσ2 tells us nothing about Yσ1, even the variance behaves linearly: Var[Y] = ∑σ Var[Yσ]. We will also need conditional probabilities. Here, for a random event X, linearity of conditional expectation says that E[Y | X] = ∑σ E[Yσ | X]. Let X = Yσ0. When the various Yσ are 3-wise independent, i.e., knowing Yσ2 and Yσ3 tells us nothing about Yσ1, even the conditional variance behaves linearly: Var[Y | Yσ0] = ∑σ Var[Yσ | Yσ0]. This will be key to the analysis of our second algorithm. 3 Sampling using XOR constraints In this section, we describe and analyze two randomized algorithms, XORSample and XORSample', for sampling solutions of a given Boolean formula F near-uniformly using streamlining with random XOR constraints. Both algorithms are parameterized by two quantities: a positive integer s and a real number q ∈ (0,1), where s is the number of XORs added to F and X(n,q) is the distribution from which they are drawn. These parameters determine the degree of uniformity achieved by the algorithms, which we formalize as Theorems 1 and 2. The first algorithm, XORSample, uses a SAT solver as a subroutine on the randomly streamlined formula. It repeatedly performs the streamlining process until the resulting formula has a unique solution. When s is chosen appropriately, it takes XORSample a small number of iterations (on average) to successfully produce a sample. The second algorithm, XORSample', is non-iterative.
Here s is chosen to be relatively small so that a moderate number of solutions survive. XORSample' then uses stronger subroutines, namely a SAT model counter and a model selector, to output one of the surviving solutions uniformly at random. 3.1 XOR-based sampling using SAT solvers: XORSample Let F be a formula over n variables, and q and s be the parameters of XORSample. The algorithm works by adding to F, in each iteration, s random XOR constraints Qs drawn independently from the distribution X(n,q). This generates a streamlined formula F_s^q whose solutions (called the surviving solutions) are a subset of the solutions of F. If there is a unique surviving solution σ, XORSample outputs σ and stops. Otherwise, it discards Qs and F_s^q, and iterates the process (rejection sampling). The check for uniqueness of σ is done by adding the negation of σ as a constraint to F_s^q and testing whether the resulting formula is still satisfiable. See Algorithm 1 for a full description.

Algorithm 1: XORSample, sampling solutions with XORs using a SAT solver
  Params: q ∈ (0,1), a positive integer s
  Input: A CNF formula F
  Output: A solution of F
  begin
    iterationSuccessful ← FALSE
    while iterationSuccessful = FALSE do
      Qs ← {s random constraints independently drawn from X(n,q)}
      F_s^q ← F ∪ Qs                  // add s random XOR constraints to F
      result ← SATSolve(F_s^q)        // solve using a SAT solver
      if result = TRUE then
        σ ← solution returned by SATSolve(F_s^q)
        F′ ← F_s^q ∪ {¬σ}             // remove σ from the solution set
        result′ ← SATSolve(F′)
        if result′ = FALSE then
          iterationSuccessful ← TRUE
    return σ                          // output σ; it is the unique solution of F_s^q
  end

We now analyze how uniform the samples produced by XORSample are. For the rest of this section, fix q = 1/2. Let F be satisfiable and have exactly 2^(s*) solutions, s* ∈ [0,n]. Ideally, we would like each solution σ of F to be sampled with probability 2^(−s*). Let p_one,s(σ) be the probability that XORSample outputs σ in one iteration.
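For concreteness, here is a toy end-to-end sketch of Algorithm 1 in Python. It is not the authors' implementation: a brute-force enumerator stands in for the SAT solver (the paper uses MiniSat), and all names (random_xor, xor_holds, solve, xorsample) are our own.

```python
import random
from itertools import product

def random_xor(n, q):
    """Draw an XOR constraint from X(n, q): a variable subset plus an
    optional parity constant."""
    return ({i for i in range(n) if random.random() < q},
            random.random() < 0.5)

def xor_holds(sigma, xor):
    variables, const = xor
    return (sum(sigma[i] for i in variables) + int(const)) % 2 == 1

def solve(n, cnf, xors, forbidden=()):
    """Brute-force stand-in for a SAT solver: return a model of
    cnf plus xors, or None. Clauses are lists of (variable, wanted_value)."""
    for sigma in product((0, 1), repeat=n):
        if sigma in forbidden:
            continue
        if all(any(sigma[v] == w for v, w in cl) for cl in cnf) and \
           all(xor_holds(sigma, x) for x in xors):
            return sigma
    return None

def xorsample(n, cnf, s, q=0.5):
    """Algorithm 1: streamline with s random XORs until exactly one
    solution survives, then output it (rejection sampling)."""
    while True:
        Qs = [random_xor(n, q) for _ in range(s)]
        sigma = solve(n, cnf, Qs)
        if sigma is not None and solve(n, cnf, Qs, forbidden={sigma}) is None:
            return sigma  # unique surviving solution

random.seed(1)
# Toy CNF over 4 variables: (x0 or x1) and (not x0 or x2); it has 8 solutions.
cnf = [[(0, 1), (1, 1)], [(0, 0), (2, 1)]]
counts = {}
for _ in range(300):
    sol = xorsample(4, cnf, s=4)  # s = s* + 1 with s* = 3
    counts[sol] = counts.get(sol, 0) + 1
print(len(counts))
```

With s = s* + α the samples are provably within a factor c(α) of uniform, so all 8 solutions of the toy formula should appear with comparable frequencies.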
The per-iteration probability p_one,s(σ) is typically much lower than 2^(−s*), which is accounted for by rejection sampling. Nonetheless, we will show that when s is larger than s*, the variation in p_one,s(σ) over different σ is small. Let p_s(σ) be the overall probability that XORSample outputs σ. This, we will show, is very close to 2^(−s*), where "closeness" is formalized as being within a factor of c(α) which approaches 1 very fast. The proof closely follows the argument used by Valiant and Vazirani [16] in their complexity theory work on unique satisfiability. However, we give a different, non-combinatorial argument for the pairwise independence property of XORs needed in the proof, relying on linear algebra. This approach is insightful and will come in handy in Section 3.2. We describe the main idea below, deferring details to the full version of the paper. Lemma 1. Let α > 0, c(α) = 1 − 2^(−α), and s = s* + α. Then c(α)·2^(−s) < p_one,s(σ) ≤ 2^(−s). Proof sketch. We first prove the upper bound on p_one,s(σ). Recall that for any two complementary XORs (e.g., c ⊕ d and c ⊕ d ⊕ 1), σ satisfies exactly one XOR. Hence, the probability that σ satisfies an XOR chosen randomly from the distribution X(n,q) is 1/2. By independence of the s XORs in Qs in XORSample, σ survives with probability exactly 2^(−s), giving the desired upper bound on p_one,s(σ). For the lower bound, we resort to pairwise independence. Let σ ≠ σ′ be two solutions of F. Let D be an XOR chosen randomly from X(n,1/2). We use linear algebra arguments to show that the event σ(D) = 1 (i.e., σ satisfies D) is independent of the event σ′(D) = 1. Recall the interpretation of variable assignments and XOR constraints in the vector space F2^n (cf. Section 2). First suppose that σ and σ′ are linearly dependent. In F2^n, this can happen only if exactly one of σ and σ′ is the all-zeros vector. Suppose σ = (0,0,...,0) and σ′ is non-zero. Perform a linear transformation on F2^n so that σ′ = (1,0,...,0). Let D be the constraint a1x1 + a2x2 + ... + anxn + b = 1.
Then σ′(D) = a1 + b and σ(D) = b. Since a1 is chosen uniformly from {0,1} when D is drawn from X(n,1/2), knowing a1 + b gives us no information about b, proving independence. A similar argument works when σ is non-zero and σ′ = (0,0,...,0), and also when σ and σ′ are linearly independent to begin with. We skip the details. This proves that σ(D) and σ′(D) are independent when D is drawn from X(n,1/2). In particular, Pr[σ′(D) = 1 | σ(D) = 1] = 1/2. This reasoning easily extends to the s XORs in Qs, and we have that Pr[σ′(Qs) = 1 | σ(Qs) = 1] = 2^(−s). Now, p_one,s(σ) = Pr[σ(Qs) = 1 and σ′(Qs) = 0 for all other solutions σ′ of F] = Pr[σ(Qs) = 1] · (1 − Pr[σ′(Qs) = 1 for some solution σ′ ≠ σ | σ(Qs) = 1]). Evaluating this using the union bound and pairwise independence shows p_one,s(σ) > c(α)·2^(−s). Theorem 1. Let F be a formula with 2^(s*) solutions. Let α > 0, c(α) = 1 − 2^(−α), and s = s* + α. For any solution σ of F, the probability p_s(σ) with which XORSample with parameters q = 1/2 and s outputs σ satisfies c(α)·2^(−s*) < p_s(σ) < (1/c(α))·2^(−s*) and min_σ {p_s(σ)} > c(α)·max_σ {p_s(σ)}. Further, the number of iterations needed to produce one sample has a geometric distribution with expectation between 2^α and 2^α/c(α). Proof. Let p̂ denote the probability that XORSample finds some unique solution in any single iteration. p_one,s(σ), as before, is the probability that σ is the unique surviving solution. p_s(σ), the overall probability of sampling σ, is given by the infinite geometric series p_s(σ) = p_one,s(σ) + (1 − p̂)·p_one,s(σ) + (1 − p̂)^2·p_one,s(σ) + ..., which sums to p_one,s(σ)/p̂. In particular, p_s(σ) is proportional to p_one,s(σ). Lemma 1 says that for any two solutions σ1 and σ2 of F, p_one,s(σ1) and p_one,s(σ2) are strictly within a factor of c(α) of each other. By the above discussion, p_s(σ1) and p_s(σ2) must also be strictly within a factor of c(α) of each other, already proving the min vs. max part of the result. Further, ∑σ p_s(σ) = 1 because of rejection sampling.
For the first part of the result, suppose for the sake of contradiction that p_s(σ0) ≤ c(α)·2^(−s*) for some σ0, violating the claimed lower bound. By the above argument, p_s(σ) is within a factor of c(α) of p_s(σ0) for every σ, and would therefore be at most 2^(−s*). This would make ∑σ p_s(σ) strictly less than one, a contradiction. A similar argument proves the upper bound on p_s(σ). Finally, the number of iterations needed to find a unique solution (thereby successfully producing a sample) is a geometric random variable with success parameter p̂ = ∑σ p_one,s(σ), and has expected value 1/p̂. Using the bounds on p_one,s(σ) from Lemma 1 and the fact that the unique survivals of each of the 2^(s*) solutions σ are disjoint events, we have p̂ ≤ 2^(s*)·2^(−s) = 2^(−α) and p̂ > 2^(s*)·c(α)·2^(−s) = c(α)·2^(−α). This proves the claimed bounds on the expected number of iterations, 1/p̂. 3.2 XOR-based sampling using model counters and selectors: XORSample' We now discuss our second parameterized algorithm, XORSample', which also works by adding s random XORs Qs, chosen independently from X(n,q), to F. However, now the resulting streamlined formula F_s^q is fed to an exact model counting subroutine to compute the number of surviving solutions, mc. If mc > 0, XORSample' succeeds and outputs the ith surviving solution using a model selector on F_s^q, where i is chosen uniformly from {1,2,...,mc}. Note that XORSample', in contrast to XORSample, is non-iterative. Also, the model counting and selecting subroutines it uses are more complex than SAT solvers; these work well in practice only because F_s^q is highly streamlined.
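The pairwise independence at the heart of Lemma 1 (and, in 3-wise form, of Lemma 2 below) can be verified exhaustively for small n, since under X(n,1/2) all 2^(n+1) XOR constraints are equiprobable. The following sketch (illustrative Python; all_xors and holds are our own helper names) counts the constraints satisfied by two fixed distinct assignments:

```python
from itertools import combinations

def all_xors(n):
    """Enumerate X for small n: every subset of the variables, with and
    without the parity constant 1 (2^(n+1) equiprobable constraints)."""
    for r in range(n + 1):
        for vs in combinations(range(n), r):
            for const in (False, True):
                yield set(vs), const

def holds(sigma, xor):
    vs, const = xor
    return (sum(sigma[i] for i in vs) + int(const)) % 2 == 1

n = 5
s1 = (1, 0, 1, 1, 0)
s2 = (0, 1, 1, 0, 0)
one = sum(holds(s1, d) for d in all_xors(n))
both = sum(holds(s1, d) and holds(s2, d) for d in all_xors(n))
# Exact pairwise independence: Pr[s2 sat | s1 sat] = both/one = 16/32 = 1/2
print(one, both)  # → 32 16
```

Each assignment satisfies exactly half of all 64 constraints, and conditioning on s1 surviving leaves the survival chance of s2 at exactly 1/2.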
Algorithm 2: XORSample', sampling with XORs using a model counter and selector
  Params: q ∈ (0,1), a positive integer s
  Input: A CNF formula F
  Output: A solution of F, or Failure
  begin
    Qs ← {s constraints randomly drawn from X(n,q)}
    F_s^q ← F ∪ Qs                    // add s random XOR constraints to F
    mc ← SATModelCount(F_s^q)         // compute the exact model count of F_s^q
    if mc ≠ 0 then
      i ← a random number chosen uniformly from {1,2,...,mc}
      σ ← SATFindSolution(F_s^q, i)   // compute the ith solution
      return σ                        // sampled successfully!
    else
      return Failure
  end

The sample-quality analysis of XORSample' requires somewhat more complex ideas than that of XORSample. Let F have 2^(s*) solutions as before. We again fix q = 1/2 and prove that if the parameter s is sufficiently smaller than s*, the sample quality is provably good. The proof relies on the fact that XORs chosen randomly from X(n,1/2) act 3-wise independently on different solutions, i.e., knowing the value of an XOR constraint on two variable assignments does not tell us anything about its value on a third assignment. We state this as the following lemma, which can be proved by extending the linear algebra arguments we used in the proof of Lemma 1 (see the full version for details). Lemma 2 (3-wise independence). Let σ1, σ2, and σ3 be three distinct assignments to n Boolean variables. Let D be an XOR constraint chosen at random from X(n,1/2). Then for i ∈ {0,1}, Pr[σ1(D) = i | σ2(D), σ3(D)] = Pr[σ1(D) = i]. Recall the discussion of expectation, variance, pairwise independence, and 3-wise independence in Section 2. In particular, when a number of random variables are 3-wise independent, the conditional variance of their sum (conditioned on one of these variables) equals the sum of their individual conditional variances. We use this to compute bounds on the sampling probability of XORSample'.
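A toy sketch of Algorithm 2 (illustrative Python, not the authors' code: brute-force enumeration replaces the exact model counter and selector, for which the paper uses Relsat, and all helper names are our own):

```python
import random
from itertools import product

def random_xor(n, q):
    return ({i for i in range(n) if random.random() < q},
            random.random() < 0.5)

def holds(sigma, xor):
    vs, const = xor
    return (sum(sigma[i] for i in vs) + int(const)) % 2 == 1

def surviving_solutions(n, is_model, xors):
    """Brute-force stand-in for the exact model counter plus selector:
    list all models of F ∪ Qs."""
    return [t for t in product((0, 1), repeat=n)
            if is_model(t) and all(holds(t, x) for x in xors)]

def xorsample_prime(n, is_model, s, q=0.5):
    """Algorithm 2: non-iterative; may return None (Failure)."""
    Qs = [random_xor(n, q) for _ in range(s)]
    models = surviving_solutions(n, is_model, Qs)
    if not models:
        return None  # Failure
    return random.choice(models)  # uniform among the mc survivors

random.seed(2)
# F: assignments of 6 variables with even parity (2^5 = 32 solutions, s* = 5)
is_model = lambda t: sum(t) % 2 == 0
samples = [xorsample_prime(6, is_model, s=3) for _ in range(500)]  # s = s* - 2
successes = [t for t in samples if t is not None]
print(len(successes), len(set(successes)))
```

Because mc is computed exactly, a successful run outputs each surviving solution with probability exactly 1/mc; Theorem 2 then guarantees overall near-uniformity when s is a few units below s*.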
The idea is to show that the number of solutions surviving, given that any fixed solution σ survives, is independent of σ in expectation and is highly likely to be very close to the expected value. As a result, the probability with which σ is output, which is inversely proportional to the number of solutions surviving along with σ, will be very close to the uniform probability. Here "closeness" is one-sided and is measured as being within a factor of c′(α) which approaches 1 very quickly. Theorem 2. Let F be a formula with 2^(s*) solutions. Let α > 0 and s = s* − α. For any solution σ of F, the probability p′_s(σ) with which XORSample' with parameters q = 1/2 and s outputs σ satisfies p′_s(σ) > c′(α)·2^(−s*), where c′(α) = (1 − 2^(−α/3)) / ((1 + 2^(−α))(1 + 2^(−α/3))). Further, XORSample' succeeds with probability larger than c′(α). Proof sketch. See the full version for a detailed proof. We begin by setting up a framework for analyzing the number of surviving solutions after s XORs Qs drawn from X(n,1/2) are added to F. Let Yσ′ be the indicator random variable which is 1 iff σ′(Qs) = 1, i.e., σ′ survives Qs. E[Yσ′] = 2^(−s) and Var[Yσ′] ≤ E[Yσ′] = 2^(−s). Further, a straightforward generalization of Lemma 2 from a single XOR constraint D to s independent XORs Qs implies that the random variables Yσ′ are 3-wise independent. The variable mc (see Algorithm 2), which is the number of surviving solutions, equals ∑σ′ Yσ′. Consider the distribution of mc conditioned on the fact that σ survives. Using pairwise independence, the corresponding conditional expectation can be shown to satisfy µ = E[mc | σ(Qs) = 1] = 1 + (2^(s*) − 1)·2^(−s). More interestingly, using 3-wise independence, the corresponding conditional variance can also be bounded: Var[mc | σ(Qs) = 1] < E[mc | σ(Qs) = 1]. Since s = s* − α, we have 2^α < µ < 1 + 2^α. We show that mc conditioned on σ(Qs) = 1 indeed lies very close to µ. Let β ≥ 0 be a parameter whose value we will fix later.
By Chebyshev's inequality, Pr[|mc − µ| ≥ µ/2^β | σ(Qs) = 1] ≤ 2^(2β)·Var[mc | σ(Qs) = 1] / (E[mc | σ(Qs) = 1])^2 < 2^(2β) / E[mc | σ(Qs) = 1] = 2^(2β)/µ. Therefore, conditioned on σ(Qs) = 1, with probability more than 1 − 2^(2β)/µ, mc lies between (1 − 2^(−β))µ and (1 + 2^(−β))µ. Recall that p′_s(σ) is the probability that XORSample' outputs σ. Then p′_s(σ) = Pr[σ(Qs) = 1] · ∑_{i≥1} Pr[mc = i | σ(Qs) = 1]·(1/i) ≥ 2^(−s) · Pr[mc ≤ (1 + 2^(−β))µ | σ(Qs) = 1] · 1/((1 + 2^(−β))µ) ≥ 2^(−s) · (1 − 2^(2β)/µ) / ((1 + 2^(−β))µ). Simplifying this expression and optimizing it by setting β = α/3 gives the desired bound on p′_s(σ). Lastly, the success probability of XORSample' is ∑σ p′_s(σ) > c′(α). Remark 1. Theorems 1 and 2 show that both XORSample and XORSample' can be used to sample arbitrarily close to the uniform distribution when q = 1/2. For example, as the number of XORs used in XORSample is increased, α increases, the deviation 1 − c(α) from the truly uniform sampling probability p* approaches 0 exponentially fast, and we get progressively smaller error bands around p*. However, for any fixed α, these algorithms, somewhat counter-intuitively, do not always sample truly uniformly (see the full version). As a result, we expect to see a fluctuation around p*, which, as we proved above, will be exponentially small in α. 4 Empirical validation To validate our XOR-sampling technique, we consider two kinds of formulas: a random 3-SAT instance generated near the SAT phase transition [18] and a structured instance derived from a logistics planning domain (data and code available from the authors). We used a complete model counter, Relsat [12], to find all solutions of our problem instances. Our random instance with 75 variables has a total of 48 satisfying assignments, and our logistics formula with 352 variables has 512 satisfying assignments. (We used formulas with a relatively small number of assignments in order to evaluate the quality of our sampling. Note that we need to draw many samples for each assignment.)
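Sampling quality below is measured by the KL divergence between the empirical sample distribution and the uniform distribution over solutions. A minimal sketch of that computation (illustrative Python; it follows the paper's convention of assigning a count of one to unsampled solutions, and the function name is our own):

```python
import math

def kl_from_uniform(counts, num_solutions):
    """KL divergence D(empirical || uniform) over the solution set,
    with unsampled solutions given a count of one."""
    counts = list(counts) + [1] * (num_solutions - len(counts))
    total = sum(counts)
    p_uniform = 1.0 / num_solutions
    return sum((c / total) * math.log2((c / total) / p_uniform)
               for c in counts)

# Perfectly uniform counts give divergence 0; skewed counts give more.
print(kl_from_uniform([100] * 48, 48))            # 0.0
print(round(kl_from_uniform([500] * 8 + [10] * 40, 48), 3))
```

The base of the logarithm is an assumption on our part (base 2 here); the qualitative comparison between samplers is unaffected by that choice.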
We used XORSample with MiniSat [14] as the underlying SAT solver to generate samples from the set of solutions of each formula. Each sample took a fraction of a second to generate on a 4GHz processor. For comparison, we also ran the best alternative method for sampling from SAT problems, SampleSAT [19, 2], allowing it roughly the same cumulative runtime as XORSample. Figure 1 depicts our results. In the left panel, we consider the random SAT instance, generating 200,000 samples total. In pure uniform sampling, in expectation we have 200,000/48 ≈4,167 samples for each solution. This level is indicated with the solid horizontal line. We see that the samples produced by XORSample all lie in a narrow band centered around this line. Contrast this with the results for SampleSAT: SampleSAT does sample quite uniformly from solutions that lie near each other in Hamming distance but different solution clusters are sampled with different frequencies. This SAT instance has two solution clusters: the first 32 solutions are sampled around 2,900 times each, i.e., not frequently enough, whereas the remaining 16 solutions are sampled too frequently, around 6,700 times each. (Although SampleSAT greatly improves on other sampling strategies for SAT, the split into disjoint sampling bands appears inherent in the approach.) The Kullback-Leibler (KL) divergence between the XORSample data and the uniform distribution is 0.002. For SampleSAT the KL-divergence from uniform is 0.085. It is clear that the XORSample approach leads to much more uniform sampling. The right panel in Figure 1 gives the results for our structured logistics planning instance. (To improve the readability of the figure, we plot the sample frequency only for every fifth assignment.) In this case, the difference between XORSample and SampleSAT is even more dramatic. SampleSAT in fact only found 256 of the 512 solutions in a total of 100,000 samples. 
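The KL-to-uniform numbers quoted here can be reproduced from raw per-solution sample counts in a few lines; the count-of-one substitution for never-sampled solutions is the same device the text applies to SampleSAT on the logistics instance. This is an illustrative sketch (base-2 logarithms are our assumption; the text does not state its log base).

```python
import math

def kl_from_uniform(counts, num_solutions):
    """KL divergence D(empirical || uniform) over a solution set of known size.
    Solutions that were never sampled receive a pseudo-count of one, since the
    divergence is otherwise infinite."""
    counts = list(counts) + [0] * (num_solutions - len(counts))
    counts = [c if c > 0 else 1 for c in counts]
    total = sum(counts)
    return sum((c / total) * math.log2((c / total) * num_solutions)
               for c in counts)

perfect = kl_from_uniform([25] * 8, 8)          # exactly uniform counts -> 0
skewed = kl_from_uniform([60000, 5, 3], 512)    # one dominant solution, most unseen
```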
We also see that one of these solutions is sampled nearly 60,000 times, whereas many other solutions are sampled fewer than five times. The KL divergence from uniform is 4.16. (Technically the KL divergence is infinite, but we assigned a count of one to the non-sampled solutions.) The expected number of samples for each assignment is 100,000/512 ≈ 195. The figure also shows that the sample counts from XORSample all lie around this value; their KL divergence from uniform is 0.013. These experiments show that XORSample is a promising practical technique (with theoretical guarantees) for obtaining near-uniform samples from intricate combinatorial spaces.

Figure 1: Results of XORSample and SampleSAT on a random 3-SAT instance (left panel) and a logistics planning problem (right panel). Both panels plot absolute sample frequency (log scale) against solution number. (See color figures in PDF.)

References
[1] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1-2):107–136, 2006.
[2] H. Poon and P. Domingos. Sound and efficient inference with probabilistic and deterministic dependencies. In 21st AAAI, pages 458–463, Boston, MA, July 2006.
[3] N. Madras. Lectures on Monte Carlo methods. In Field Institute Monographs, vol. 16. Amer. Math. Soc., 2002.
[4] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller. Equations of state calculations by fast computing machines. J. Chem. Phy., 21:1087–1092, 1953.
[5] S. Kirkpatrick, D. Gelatt Jr., and M. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
[6] D. Roth. On the hardness of approximate reasoning. J. AI, 82(1-2):273–302, 1996.
[7] M. L. Littman, S. M. Majercik, and T. Pitassi. Stochastic Boolean satisfiability. J. Auto. Reas., 27(3):251–296, 2001.
[8] J. D. Park. MAP complexity results and approximation methods.
In 18th UAI, pages 388–396, Edmonton, Canada, August 2002.
[9] A. Darwiche. The quest for efficient probabilistic inference, July 2005. Invited Talk, IJCAI-05.
[10] T. Sang, P. Beame, and H. A. Kautz. Performing Bayesian inference by weighted model counting. In 20th AAAI, pages 475–482, Pittsburgh, PA, July 2005.
[11] F. Bacchus, S. Dalmao, and T. Pitassi. Algorithms and complexity results for #SAT and Bayesian inference. In 44th FOCS, pages 340–351, Cambridge, MA, October 2003.
[12] R. J. Bayardo Jr. and R. C. Schrag. Using CSP look-back techniques to solve real-world SAT instances. In 14th AAAI, pages 203–208, Providence, RI, July 1997.
[13] L. Zhang, C. F. Madigan, M. H. Moskewicz, and S. Malik. Efficient conflict driven learning in a Boolean satisfiability solver. In ICCAD, pages 279–285, San Jose, CA, November 2001.
[14] N. Eén and N. Sörensson. MiniSat: A SAT solver with conflict-clause minimization. In 8th SAT, St. Andrews, U.K., June 2005. Poster.
[15] C. P. Gomes, A. Sabharwal, and B. Selman. Model counting: A new strategy for obtaining good bounds. In 21st AAAI, pages 54–61, Boston, MA, July 2006.
[16] L. G. Valiant and V. V. Vazirani. NP is as easy as detecting unique solutions. Theoretical Comput. Sci., 47(3):85–93, 1986.
[17] J. M. Crawford, M. J. Kearns, and R. E. Schapire. The minimal disagreement parity problem as a hard satisfiability problem. Technical report, AT&T Bell Labs., 1994.
[18] D. Achlioptas, A. Naor, and Y. Peres. Rigorous location of phase transitions in hard optimization problems. Nature, 435:759–764, 2005.
[19] W. Wei, J. Erenrich, and B. Selman. Towards efficient sampling: Exploiting random walk strategies. In 19th AAAI, pages 670–676, San Jose, CA, July 2004.
2006
Recursive Attribute Factoring

David Cohn, Google Inc., 1600 Amphitheatre Parkway, Mountain View, CA 94043, cohn@google.com
Deepak Verma, Dept. of CSE, Univ. of Washington, Seattle, WA 98195-2350, deepak@cs.washington.edu
Karl Pfleger, Google Inc., 1600 Amphitheatre Parkway, Mountain View, CA 94043, kpfleger@google.com

Abstract

Clustering, or factoring, of a document collection attempts to “explain” each observed document in terms of one or a small number of inferred prototypes. Prior work demonstrated that when links exist between documents in the corpus (as is the case with a collection of web pages or scientific papers), building a joint model of document contents and connections produces a better model than that built from contents or connections alone. Many problems arise when trying to apply these joint models to corpora at the scale of the World Wide Web, however; one of these is that the sheer overhead of representing a feature space on the order of billions of dimensions becomes impractical. We address this problem with a simple representational shift inspired by probabilistic relational models: instead of representing document linkage in terms of the identities of linking documents, we represent it by the explicit and inferred attributes of the linking documents. Several surprising results come with this shift: in addition to being computationally more tractable, the new model produces factors that more cleanly decompose the document collection. We discuss several variations on this model and show how some can be seen as exact generalizations of the PageRank algorithm.

1 Introduction

There is a long and successful history of decomposing collections of documents into factors or clusters to identify “similar” documents and principal themes. Collections have been factored on the basis of their textual contents [1, 2, 3], the connections between the documents [4, 5, 6], or both together [7].
A factored corpus model is usually composed of a small number of “prototype” documents along with a set of mixing coefficients (one for each document in the corpus). Each prototype corresponds to an abstract document whose features are, in some mathematical sense, “typical” of some subset of the corpus documents. The mixing coefficients for a document d indicate how the model’s prototypes can best be combined to approximate d. Many useful applications arise from factored models:

• Model prototypes may be used as “topics” or cluster centers in spectral clustering [8], serving as “typical” documents for a class or cluster.
• Given a topic, factored models of link corpora allow identifying authoritative documents on that topic [4, 5, 6].
• By exploiting correlations and “projecting out” uninformative terms, the space of a factored model’s mixing coefficients can provide a measure of semantic similarity between documents, regardless of the overlap in their actual terms [1].

The remainder of this paper is organized as follows: below, we first review the vector space model, formalize the factoring problem, and describe how factoring is applied to linked document collections. In Section 2 we point out limitations of current approaches and introduce Attribute Factoring (AF) to address them. In the following two sections, we identify limitations of AF and describe Recursive Attribute Factoring and several other variations to overcome them, before summarizing our conclusions in Section 5.

The Vector Space Model: The vector space model is a convention for representing a document corpus (ordinarily sets of strings of arbitrary length) as a matrix, in which each document is represented as a column vector. Let the number of documents in the corpus be N and the size of the vocabulary be M. Then T denotes the M × N term-document matrix such that column j represents document d_j, and T_ij indicates the number of times term t_i appears in document d_j.
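A minimal construction of T from raw text can make the convention concrete (tokenization here is a naive whitespace split; a real system would stem and filter terms):

```python
from collections import Counter

def term_document_matrix(docs):
    """Build the M x N term-document matrix: T[i][j] is the number of times
    term i appears in document j."""
    vocab = sorted({tok for doc in docs for tok in doc.split()})
    row = {term: i for i, term in enumerate(vocab)}
    T = [[0] * len(docs) for _ in vocab]
    for j, doc in enumerate(docs):
        for term, count in Counter(doc.split()).items():
            T[row[term]][j] = count
    return vocab, T

vocab, T = term_document_matrix(["factor the corpus", "factor factor model"])
```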
Geometrically, the columns of T can also be viewed as points in an M-dimensional space, where each dimension i indexes the number of times term t_i appears in the corresponding document. A link-based corpus may also be represented as a vector space, defining an N × N matrix L where L_ij = 1 if there is a link from document i to j and 0 otherwise. It is sometimes preferable to work with P, a normalized version of L in which P_ij = L_ij / Σ_{i′} L_{i′j}; that is, each document’s outlinks sum to 1.

Figure 1: Factoring decomposes matrix A into matrices U and V

Factoring: Let A represent a matrix to be factored (usually T, or T augmented with some other matrix) into K factors. Factoring decomposes A into two matrices U and V (each of rank K) such that A ≈ U V.¹ In the geometric interpretation, the columns of U contain the K prototypes, while the columns of V indicate what mixture of prototypes best approximates the columns in the original matrix. The definition of what constitutes a “best approximation” leads to the many different factoring algorithms in use today. Latent Semantic Analysis [1] minimizes the sum-squared reconstruction error of A, PLSA [2] maximizes the log-likelihood that a generative model using U as prototypes would produce the observed A, and Non-negative Matrix Factorization [3] adds the constraint that all components of U and V must be greater than or equal to zero. For the purposes of this paper, however, we are agnostic as to the factorization method used — our main concern is how A, the document matrix to be factored, is generated.

1.1 Factoring Text and Link Corpora

When factoring a text corpus (e.g. via LSA [1], PLSA [2], NMF [3] or some other technique), we directly factor the matrix T. Columns of the resulting M × K matrix U are often interpreted as the K “principal topics” of the corpus, while columns of the K × N matrix V are “topic memberships” of the corpus documents.
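As one concrete instance of the A ≈ U V decomposition, here is a sketch of Lee–Seung multiplicative-update NMF [3] in numpy; the small additive constants guarding against division by zero are our own choice, not part of the cited algorithm:

```python
import numpy as np

def nmf(A, K, iters=300, seed=0):
    """Factor a nonnegative matrix A (M x N) as U (M x K) @ V (K x N),
    with all entries nonnegative, via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    M, N = A.shape
    U = rng.random((M, K)) + 0.1
    V = rng.random((K, N)) + 0.1
    for _ in range(iters):
        V *= (U.T @ A) / (U.T @ U @ V + 1e-9)   # hold U fixed, improve V
        U *= (A @ V.T) / (U @ V @ V.T + 1e-9)   # hold V fixed, improve U
    return U, V

A = np.array([[1.0, 2.0, 4.0],
              [2.0, 4.0, 8.0],
              [3.0, 6.0, 12.0]])   # rank-1, so K = 1 can reconstruct it
U, V = nmf(A, K=1)
err = np.linalg.norm(A - U @ V)
```

On this rank-1 toy matrix a single prototype (column of U) recovers the data almost exactly; on a real term-document matrix K prototypes give the “principal topics” interpretation above.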
¹In general, A ≈ f(U, V), where f can be any function which takes the weights for a document and the document prototypes and generates the original vector.

When factoring a link corpus (e.g. via ACA [4] or PHITS [6]), we factor L or the normalized link matrix P. Columns of the resulting N × K matrix U are often interpreted as the K “citation communities” of the corpus, and columns of the K × N matrix V indicate to what extent each document belongs to the corresponding community. Additionally, U_ij, the degree of citation that community j accords to document d_i, can be interpreted as the “authority” of d_i in that community.

1.2 Factoring Text and Links Together

Many interesting corpora, such as scientific literature and the World Wide Web, contain both text content and links. Prior work [7] has demonstrated that building a single factored model of the joint term-link matrix produces a better model than that produced by using text or links alone. The naive way to produce such a joint model is to append L or P below T, and factor the joint matrix:

  [T ; L] ≈ [U_T ; U_L] × V. (1)

Figure 2: The naive joint model concatenates term and link matrices

When factored, the resulting U matrix can be seen as having two components, representing the two distinct types of information in [T ; L]. Column i of U_T indicates the expected term distribution of factor i, while the corresponding column of U_L indicates the distribution of documents that typically link to documents represented by that factor. In practice, L should be scaled by some factor λ to control the relative importance of the two types of information, but empirical evidence [7] suggests that performance is somewhat insensitive to its exact value. For clarity, we omit reference to λ in the equations below.

2 Beyond the Naive Joint Model

Joint models provide a systematic way of incorporating information from both the terms and link structure present in a corpus.
But the naive approach described above does not scale up to web-sized corpora, which may have millions of terms and tens of billions of documents. The matrix resulting from a naive representation of a web-scale problem would have N + M features with N ≈ 10^10 and M ≈ 10^6. Simply representing this matrix (let alone factoring it) is impractical on a modern workstation. Work on Probabilistic Relational Models (PRMs) [9] suggests another approach. The terms in a document are explicit attributes; links to the document provide additional attributes, represented (in the naive case) as the identities of the inlinking documents. In a PRM, however, entities are represented by their attributes, rather than their identities. By taking a similar tack, we arrive at Attribute Factoring — the approach of representing link information in terms of the attributes of the inlinking documents, rather than by their explicit identities.

2.1 Attribute Factoring

Each document d_j, along with an attribute for each term, has an attribute for each other document d_i in the corpus, signifying the presence (or absence) of a link from d_i to d_j. When N ≈ 10^10, keeping each document identity as a separate attribute is prohibitive. To create a more economical representation, we propose replacing the link attributes by a smaller set of attributes that “summarize” the information from the link matrix L, possibly in combination with the term matrix T. The most obvious attributes of a document are what terms it contains. Therefore, one simple way to represent the “attributes” of a document’s inlinks is to aggregate the terms in the documents that link to it. There are many possible ways to aggregate these terms, including Dirichlet and more sophisticated models. For computational and representational simplicity in this paper, however, we replace inlink identities with a sum of the terms in the inlinking documents. In matrix notation, this is just

  [T ; T × L] ≈ [U_T ; U_{T×L}] × V.
(2)

Figure 3: Representation for Attribute Factoring

Colloquially, we can look at this representation as saying that a document has “some distribution of terms” (T) and is linked to by documents that have “some other term distribution” (T × L). By substituting the aggregated attributes of the inlinks for their identities, we can reduce the size of the representation down from (M + N) × N to a much more manageable 2M × N. What is surprising is that, on the domains tested, this more compact representation actually improves factoring performance.

2.2 Attribute Factoring Experiments

Figure 4: Attribute Factoring outperforms the content-only and naive joint representations

We tested Attribute Factoring on two publicly available corpora of interlinked text documents. The Cora dataset [10] consists of abstracts and references of approximately 34,000 computer science research papers; of these we used the approximately 2000 papers categorized into the seven subfields of machine learning. The WebKB dataset [11] consists of approximately 6000 web pages from computer science departments, classified by school and category (student, course, faculty, etc.). For both datasets, we factored the content-only, naive joint, and AF joint representations using PLSA [2]. We varied K, the number of computed factors, from 2 to 16, and performed 10 factoring runs for each value of K tested. The factored models were evaluated by clustering each document to its dominant factor and measuring cluster precision: the fraction of documents in a cluster sharing the majority label. Figure 4 illustrates a typical result: adding explicit link information improves cluster precision, but abstracting the link information with Attribute Factoring improves it even more.
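Once T and L are in hand, the representational shift from the naive joint model (Eq. 1) to Attribute Factoring (Eq. 2) is a one-liner; λ below is the scaling factor the text mentions but omits from the equations:

```python
import numpy as np

def naive_joint(T, L, lam=1.0):
    """Naive joint representation (Eq. 1): stack T over L, an (M+N) x N matrix."""
    return np.vstack([T, lam * L])

def attribute_factoring(T, L, lam=1.0):
    """Attribute Factoring (Eq. 2): replace inlink identities with the summed
    terms of the inlinking documents, giving a 2M x N matrix."""
    return np.vstack([T, lam * (T @ L)])

T = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])          # M = 2 terms, N = 3 documents
L = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])          # L[i, j] = 1 iff document i links to j

A_naive = naive_joint(T, L)
A_af = attribute_factoring(T, L)
```

At web scale the difference between the two output shapes, (M+N) × N versus 2M × N, is the difference between an intractable and a manageable matrix.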
3 Beyond Simple Attribute Factoring

Figure 5: Attribute Factoring can be “spammed” by mirroring one level back

Attribute Factoring reduces the number of attributes from N + M to 2M, allowing existing factoring techniques to scale to web-sized corpora. This reduction in the number of attributes, however, comes at a cost. Since the identity of the document itself is replaced by its attributes, it is possible for unscrupulous authors (spammers) to “pose” as a legitimate page with high PageRank. Consider the example shown in Figure 5, showing two subgraphs present in the web. On the right is a legitimate page like the Yahoo! homepage, linked to by many pages, and linking to page RYL (Real Yahoo Link). A link from the Yahoo! homepage to RYL imparts a lot of authority and hence is highly desired by spammers. Failing that, a spammer might try to create a counterfeit copy of the Yahoo! homepage, boost its PageRank by means of a “link farm”, and create a link from it to his page FYL (Fake Yahoo Link). Without link information, our factoring cannot distinguish the counterfeit homepage from the real one. Using AF or the naive joint model allows us to distinguish them based on the distribution of documents that link to each. But with AF, that real/counterfeit distinction is not propagated to documents that they point to. All that AF tells us is that RYL and FYL are pointed to by pages that look a lot like the Yahoo! homepage.

3.1 Recursive Attribute Factoring

Spamming AF was simple because it only looks one link behind. That is, attributes for a document are either explicit terms in that document or explicit terms in documents linking to the current document. This lets us infer that the fake Yahoo! homepage is counterfeit, but provides no way to propagate this inference on to later pages.

Figure 6: Recursive Attribute Factoring aggregates the inferred attributes (columns of V) of inlinking documents

The AF representation introduced in the previous section can be easily fooled.
It makes inferences about a document based on explicit attributes propagated from the documents linking to it, but this inference only propagates one level. For example, it lets us infer that the fake Yahoo! homepage is counterfeit, but provides no way to propagate this inference on to later pages. This suggests that we need to propagate not only the explicit attributes of a document (its component terms), but its inferred attributes as well. A ready source of inferred attributes comes from the factoring process itself. Recall that when factoring T ≈ U × V, if we interpret the columns of U as factors or prototypes, then each column of V can be interpreted as the inferred factor memberships of its corresponding document. Therefore, we can propagate the inferred attributes of inlinking documents by aggregating the columns of V they correspond to (Figure 6). Numerically, this replaces T (the explicit document attributes) in the bottom half of the left matrix with V (the inferred document attributes):

  [T ; V × L] ≈ [U_T ; U_{V×L}] × V. (3)

There are some worrying aspects of this representation: the document representation is no longer statically defined, and the equation itself is recursive. In practice, there is a simple iterative procedure for solving the equation (see Algorithm 1), but it is computationally expensive, and carries no convergence guarantees. The “inferred” attributes (IA) are set initially to random values, which are then updated until they converge. Note that we need to use the normalized version of L, namely P.

Algorithm 1 Recursive Attribute Factoring
1: Initialize IA_0 with random entries.
2: while not converged do
3:   Factor A_t = [T ; IA_t] ≈ [U_T ; U_IA] × V
4:   Update IA_{t+1} = V × P.
5: end while

3.2 Recursive Attribute Factoring Experiments

To evaluate RAF, we used the same data sets and procedures as in Section 2.2, with results plotted in Figure 7.
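Algorithm 1 can be sketched with any factoring routine plugged in for the factor step. Below we use a truncated SVD as a stand-in (the paper itself uses PLSA), purely to show the alternation between factoring [T ; IA] and updating IA = V × P:

```python
import numpy as np

def svd_factor(A, K):
    """Stand-in factor step: rank-K truncated SVD, A ≈ U @ V."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :K] * S[:K], Vt[:K]

def recursive_attribute_factoring(T, P, K, iters=20, seed=0):
    """Algorithm 1: initialize IA randomly, then alternate factoring the
    augmented matrix [T; IA] and updating IA = V @ P (P column-normalized)."""
    rng = np.random.default_rng(seed)
    IA = rng.random((K, T.shape[1]))
    for _ in range(iters):
        U, V = svd_factor(np.vstack([T, IA]), K)
        IA = V @ P
    return U, V, IA

T = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])
P = np.array([[0.0, 1.0, 0.0],     # each column sums to 1
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
U, V, IA = recursive_attribute_factoring(T, P, K=2)
```

Note that an SVD factor step can produce negative inferred attributes; it serves here only to exercise the iteration, not to reproduce the paper's PLSA-based results.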
It is perhaps not surprising that RAF by itself does not perform as well as AF on the domains tested: when available, explicit information is arguably more powerful than inferred information. (We use L and P interchangeably to represent the contribution from inlinking documents, distinguishing them only in the case of “recursive” equations, where it is important to normalize L to facilitate convergence.) It is important to realize, however, that AF and RAF are in no way exclusive of each other; when we combine the two and propagate both explicit and implicit attributes, our performance is (satisfyingly) better than with either alone (top lines in Figures 7(a) and (b)).

Figure 7: RAF and AF+RAF results on (a) Cora and (b) WebKB datasets

4 Discussion: Other Forms of Attribute Factoring

Both Attribute Factoring and Recursive Attribute Factoring involve augmenting the term matrix with a matrix (call it IA) containing attributes of the inlinking documents, and then factoring the augmented matrix:

  [T ; IA] ≈ [U_T ; U_IA] × V. (4)

The traditional joint model set IA = L; in Attribute Factoring we set IA = T × L, and in Recursive Attribute Factoring IA = V × P. In general, though, we can set IA to be any matrix that aggregates attributes of a document’s inlinks. For AF we can replace the N-dimensional inlink vector with an M-dimensional inferred vector d′_i such that d′_i = Σ_{j: L_ji = 1} w_j d_j; IA is then the matrix of inferred attributes for each document, i.e., the ith column of IA is d′_i. Different choices for w_j lead to different weightings of the aggregated attributes from the incoming documents; some variations are summarized in Table 1.

Table 1: Variations on attribute weighting for Attribute Factoring.
  Variant                                   | w_j      | IA
  Attribute Factoring                       | 1        | T × L
  Outdegree-normalized Attribute Factoring  | P_ji     | T × P
  PageRank-weighted Attribute Factoring     | P_j      | T × diag(P) × L
  PageRank- and outdegree-normalized        | P_j P_ji | T × diag(P) × P
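The four weightings of Table 1 differ only in which normalization is applied before aggregation. A hedged sketch, reading the table's diag(P) as a diagonal matrix of per-document PageRank scores (pr below, assumed precomputed):

```python
import numpy as np

def inlink_attribute_variants(T, L, P, pr):
    """The IA variants of Table 1. T: M x N term matrix; L: raw link matrix;
    P: outdegree-normalized link matrix; pr: length-N PageRank vector."""
    D = np.diag(pr)
    return {
        "AF":                    T @ L,
        "outdegree-normalized":  T @ P,
        "PageRank-weighted":     T @ D @ L,
        "PageRank+outdegree":    T @ D @ P,
    }

T = np.array([[1.0, 0.0], [0.0, 2.0]])
L = np.array([[0.0, 1.0], [1.0, 0.0]])
P = L / L.sum(axis=0, keepdims=True)
variants = inlink_attribute_variants(T, L, P, pr=np.array([0.5, 0.5]))
```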
(P_j is the PageRank of document j.) It is somewhat surprising (and disappointing) that RAF performs worse than the content-only model, but other work [7] has posited situations where this may be expected. This approach can, of course, be extended to also include attributes of the outlinked documents, but bibliometric analysis has historically found that inlinks are more informative about the nature of a document than outlinks (echoing the Hollywood adage that “It’s not who you know that matters - it’s who knows you”).

Extended Attribute Factoring: Recursive Attribute Factoring was originally motivated by the “Fake Yahoo!” problem described in Section 3. While useful in conjunction with ordinary Attribute Factoring, its recursive nature and lack of convergence guarantees are troubling. One way to simulate the desired effect of RAF in closed form is to explicitly model the inlink attributes more than just one level back. For example, ordinary AF looks back one level at the (explicit) attributes of inlinking documents by setting IA = T × L. We can extend that “lookback” to two levels by defining IA = [T × L ; T × L²]. The IA matrix would have 2M features (M attributes for inlinking documents and another M for attributes of documents that linked to the inlinking documents). Still, it would be possible, albeit difficult, for a determined spammer to fool this Extended Attribute Factoring (EAF) by mimicking two levels of the web’s linkage. This can be combated by adding a third level to the model (IA = [T × L ; T × L² ; T × L³]), which increases the model complexity by only a linear factor, but (due to the web’s high branching) vastly increases the number of pages a spammer would need to duplicate. It should be pointed out that these extended attributes rapidly converge to the stationary distribution of terms on the web: T × L^∞ = T × eig(L), equivalent to weighting inlinking attributes by a version of PageRank that omits random restarts. (As in Algorithm 1, P needs to be used instead of L to achieve convergence.)

Another PageRank Connection: While the vanilla RAF(+AF) gives good results, one can imagine many variations with interesting properties; one in particular is worth mentioning. A smoothed version of the recursive equation can be written as

  [T ; ϵ + γ · V × P] ≈ [U_T ; U_{V×L}] × V. (5)

This is the same basic equation as RAF, but with the inferred-attribute block damped by a factor γ. This smoothed RAF gives further insight into the working of RAF itself once we look at a simpler version of it. Starting from the original equation, let us first remove the explicit attributes. This reduces the equation to ϵ + γ · V × P ≈ U_{V×L} × V. For the case where U_{V×L} has a single dimension, the above equation further simplifies to ϵ + γ · V × P ≈ u × V. For some constrained values of ϵ and γ, we get ϵ + (1 − ϵ) · V × P ≈ V, which is just the equation for PageRank [12]. This means that, in the absence of T’s term data, the inferred attributes V produced by smoothed RAF represent a sort of generalized, multi-dimensional PageRank, where each dimension corresponds to authority on one of the inferred topics of the corpus. With the terms of T added, the intuition is that V and the inferred attributes IA = V × P converge to a trade-off between the generalized PageRank of the link structure and factor values for T in terms of the prototypes U_T capturing term information.

5 Summary

We have described a representational methodology for factoring web-scale corpora, incorporating both content and link information. The main idea is to represent link information with attributes of the inlinking documents rather than their explicit identities. Preliminary results on small datasets demonstrate that the technique not only makes the computation more tractable but also significantly improves the quality of the resulting factors.
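The reduction to PageRank in Section 4 can be checked numerically: iterating the term-free, one-dimensional smoothed update v ← ϵ + (1 − ϵ) · v P reaches a fixed point of the PageRank recurrence. A toy check on a graph of our own construction:

```python
import numpy as np

def smoothed_fixed_point(P, eps=0.15, iters=200):
    """Iterate v = eps + (1 - eps) * v @ P, the one-dimensional, term-free
    special case of smoothed RAF, which is the PageRank recurrence."""
    v = np.zeros(P.shape[0])             # start far from the fixed point
    for _ in range(iters):
        v = eps + (1 - eps) * (v @ P)
    return v

# Tiny 3-page graph; each column of P sums to 1 as in the text's normalization.
# With column-stochastic P the uniform vector is the fixed point, so the
# iteration should converge to all-ones from any start.
P = np.array([[0.0, 1.0, 0.5],
              [0.0, 0.0, 0.5],
              [1.0, 0.0, 0.0]])
v = smoothed_fixed_point(P)
residual = np.abs(v - (0.15 + 0.85 * (v @ P))).max()
```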
We believe that we have only scratched the surface of this approach; many issues remain to be addressed, and undoubtedly many more remain to be discovered. We have no principled basis for weighting the different kinds of attributes in AF and EAF; while RAF seems to converge reliably in practice, we have no theoretical guarantees that it will always do so. Finally, in spite of our motivating example being the ability to factor very large corpora, we have only tested our algorithms on small “academic” data sets; applying AF, RAF and EAF to a web-scale corpus remains the real (and as yet untried) criterion for success.

(Many thanks to Daniel D. Lee for the insight behind Extended Attribute Factoring. The generalized, multi-dimensional PageRank discussed above is related to, but distinct from, the generalization of PageRank described by Richardson and Domingos [13], which is computed as a scalar quantity over each of the manually-specified lexical topics of the corpus.)

References
[1] Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391–407, 1990.
[2] Thomas Hofmann. Probabilistic latent semantic analysis. In Proc. of Uncertainty in Artificial Intelligence, UAI’99, Stockholm, 1999.
[3] Daniel D. Lee and H. Sebastian Seung. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems 12, pages 556–562. MIT Press, 2000.
[4] H. D. White and B. C. Griffith. Author cocitation: A literature measure of intellectual structure. Journal of the American Society for Information Science, 1981.
[5] Jon M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604–632, 1999.
[6] David Cohn and Huan Chang. Learning to probabilistically identify authoritative documents. In Proc. 17th International Conf. on Machine Learning, pages 167–174. Morgan Kaufmann, San Francisco, CA, 2000.
[7] David Cohn and Thomas Hofmann.
The missing link - a probabilistic model of document content and hypertext connectivity. In Neural Information Processing Systems 13, 2001.
[8] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems 14, 2002.
[9] N. Friedman, L. Getoor, D. Koller, and A. Pfeffer. Learning probabilistic relational models. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI-99), pages 1300–1309, Stockholm, Sweden, 1999. Morgan Kaufmann.
[10] Andrew K. McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. Automating the construction of internet portals with machine learning. Information Retrieval, 3(2):127–163, 2000.
[11] T. Mitchell et al. The World Wide Knowledge Base Project (available at http://cs.cmu.edu/∼WebKB), 1998.
[12] Sergey Brin and Lawrence Page. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1–7):107–117, 1998.
[13] Matthew Richardson and Pedro Domingos. The intelligent surfer: Probabilistic combination of link and content information in PageRank. In Advances in Neural Information Processing Systems 14. MIT Press, 2002.
2006
Information Bottleneck for Non Co-Occurrence Data

Yevgeny Seldin† Noam Slonim∗ Naftali Tishby†‡
†School of Computer Science and Engineering, ‡Interdisciplinary Center for Neural Computation, The Hebrew University of Jerusalem
∗The Lewis-Sigler Institute for Integrative Genomics, Princeton University
{seldin,tishby}@cs.huji.ac.il, nslonim@princeton.edu

Abstract

We present a general model-independent approach to the analysis of data in cases when these data do not appear in the form of co-occurrence of two variables X, Y, but rather as a sample of values of an unknown (stochastic) function Z(X, Y). For example, in gene expression data, the expression level Z is a function of gene X and condition Y; or in movie ratings data the rating Z is a function of viewer X and movie Y. The approach represents a consistent extension of the Information Bottleneck method, which has previously relied on the availability of co-occurrence statistics. By altering the relevance variable we eliminate the need for a sample of the joint distribution of all input variables. This new formulation also enables simple MDL-like model complexity control and prediction of missing values of Z. The approach is analyzed and shown to be on a par with the best known clustering algorithms for a wide range of domains. For the prediction of missing values (collaborative filtering) it improves the currently best known results.

1 Introduction

In the situation of information explosion that characterizes today’s world, the need for automatic tools for data analysis is more than obvious. Here, we focus on an unsupervised analysis of data that can be organized in matrix form. Clearly, this broad definition covers various types of data. For instance, in text analysis data, the rows of a matrix correspond to words, the columns to different documents, and entries indicate the number of occurrences of a particular word in a specific document.
In a matrix of gene expression data, rows correspond to genes, columns to various experimental conditions, and entries indicate expression levels of given genes in given conditions. In movie rating data, rows correspond to viewers, columns to movies, and entries indicate ratings made by the viewers. Finally, for financial data, rows correspond to stocks, columns to different time points, and each entry indicates a price change of a particular stock at a given time point. While the text analysis case is a classical example of co-occurrence data, the remaining examples are not naturally interpreted that way. Typically, a normalized words-documents table is used as an estimator of a words-documents joint probability distribution, where each entry estimates the probability of finding a given word in a given document, whereas the words are assumed to be independent of each other [1, 2]. By contrast, values in a financial data matrix are a general function of stocks and days and in particular might include negative numerical values. Here, the data cannot be regarded as a sample from a joint probability distribution of stocks and days, even if a normalization is applied. Though it can be argued that each entry of the matrix is a sample from a joint probability distribution of three variables: stock, day, and price change - the degenerate nature of this sample must be taken into account: the rate change of a given stock on a given day occurs only once and no statistics exist. Therefore the joint probability distribution of the three variables cannot be estimated by direct sampling. A similar argument applies to survey data like movie ratings. In this case the sample might be even more degenerate, with many “missing values”, since not all the viewers rate all the movies. Although gene expression data can be considered as a repeatable experiment, very often different experimental conditions correspond to only one single column in the matrix. 
Thus once again a single data point represents the joint statistics of three variables: gene, condition, and expression level. Nevertheless, in most such cases there is a statistical relationship within rows and/or within columns of a matrix. For instance, people with similar interests typically give similar ratings to movies and movies with similar characteristics give rise to similar rating profiles. Such relationships can be exploited by clustering algorithms that group together similar rows and/or columns, and furthermore make it possible to complete the missing entries in the matrix [3]. The existing clustering techniques can be classified into three major categories. (i) Similarity, or distance-based methods, require a pre-defined similarity measure that can be applied to all data points and possibly to new points as well. The nature of the distance measure is crucial to these techniques and inherently requires expert knowledge of the application domain, which is often unavailable. (ii) Generative modeling techniques in which a specific class of statistical models is chosen to describe the data. As before, an a priori choice of an appropriate model is far from obvious for most real world applications. (iii) An alternative line of study, relevant to our work, is the Information Bottleneck (IB) approach [4] and its extensions. Instead of defining the clustering objective through a distortion measure or data generation process, the approach suggests using relevant variables. A tradeoff between compression of the irrelevant and prediction of the relevant variables is then optimized using information theoretic principles. Importantly, the definition of the relevant variable is often natural and obvious for the task at hand, and in turn the method yields the optimal relevant distortion for the problem. 
Since the original work in [4], multiple studies have highlighted the theoretical and practical importance of the IB method, in particular in the context of cluster analysis [2, 5, 6, 7, 8]. However, the original formulation is based on the availability of co-occurrence data. In practice a given co-occurrence table is treated as a finite sample from a joint distribution of the variables, namely the row and column indices. Unfortunately, as mentioned above, this assumption does not fit many realistic datasets, thus preventing a direct application of the IB approach in various domains. To address this issue, in [9] a random walk over the data points is defined, serving to transform non co-occurrence data into a transition probability matrix that can be further analyzed via an IB algorithm. In a more recent work [10] the suggestion is made to use the mutual information between different data points as part of a general information-theoretic treatment of the clustering problem. The resulting algorithm, termed the Iclust algorithm, was demonstrated to be superior or comparable to 18 other commonly used clustering techniques over a wide range of applications where the input data cannot be interpreted as co-occurrences [10]. However, both approaches have limitations: Iclust requires a sufficient number of columns in the matrix for reliable estimation of the information relations between rows, and the Markovian Relaxation algorithm involves various non-trivial data preprocessing steps [9]. Here, we suggest an alternative approach, inspired by the multivariate IB framework [8]. The multivariate IB principle expands the original IB work to handle situations where multiple systems of clusters are constructed simultaneously with respect to different variables and the input data may correspond to more than two variables.
While the multivariate IB was originally proposed for co-occurrence data, we argue here that this framework is rich enough to be rigorously applicable in the new situation. The idea is simple and intuitive: we look for a compact grouping of rows and/or columns such that the product space defined by the resulting clusters is maximally informative about the matrix content, i.e. the matrix entries. We show that this problem can be posed and solved within the original multivariate IB framework. The new choice of relevance variable eliminates the need to know the joint distribution of all the input variables (which is inaccessible in all the applications presented here). Moreover, when missing values are present, the analysis suggests an information-theoretic technique for their completion. We explore the application of this approach to various domains. For gene expression data and financial data we obtain clusters of comparable quality (measured as coherence with manual labeling) to those obtained by state-of-the-art methods [10]. For movie rating matrix completion, performance is superior to the best known alternatives in the collaborative filtering literature [11].

2 Theory

2.1 Problem Setting

We henceforth denote the rows of a matrix by X, the columns by Y, and the matrix entries by Z (and lowercase x, y, and z for specific instances). The number of rows is denoted by n, and the number of columns by m. We regard X and Y as discrete coordinate spaces and Z(X, Y) as a function. Generalization to higher dimensions and continuous coordinates is readily possible, but not discussed here. For a given matrix, a row x, and a column y, the value of z(x, y) is assumed to be deterministic. This assumption can be relaxed as well. The objective is to find "good" partitions of the X-Y space that will be informative with respect to the function values Z(X, Y). The partitions are defined by grouping of rows into clusters of rows C and grouping of columns into clusters of columns D.
The complexity of such partitions is measured by the sum of the weighted mutual information values $nI(X;C) + mI(Y;D)$. For the hard partitions considered in the paper this sum is the number of bits required to describe the partition (see [12]). The informativeness of the partition is measured by another mutual information, $I(C,D;Z)$. In these terms, the goal is to find minimally complex partitions that preserve a given information level about the matrix values Z. This can be expressed via the minimization of the following functional:

$$\min_{q(c|x),\,q(d|y)}\; nI(X;C) + mI(Y;D) - \beta I(C,D;Z), \qquad (1)$$

where q(c|x) is the mapping of rows x to row clusters c, q(d|y) is the mapping of columns y to column clusters d, and $\beta$ is a Lagrange multiplier controlling the tradeoff between compression and accuracy. We first derive the relations between the quantities in the above optimization problem and then describe a sequential algorithm for its minimization. We will stick to the following notation conventions: p is used for distributions that involve only input parameters and hence do not change during the analysis, $\hat{p}$ is used for empirical distributions, q for the sought mapping distributions, and $\hat{q}$ for empirical distributions dependent on the sought mappings. By the definition of the mutual information [12]:

$$I(X;C) = \sum_{x,c} p(x)\,q(c|x)\log\frac{q(c|x)}{q(c)} \approx \sum_{x,c} \hat{p}(x)\,q(c|x)\log\frac{q(c|x)}{\hat{q}(c)}.$$

We define the indicator function

$$\mathbb{1}_{x,y} = \begin{cases}1, & \text{if the entry } (x,y) \text{ is present in the matrix}\\ 0, & \text{if the entry } (x,y) \text{ is absent from the matrix}\end{cases}$$

and denote the total number of populated entries (which is our sample size) by $N = \sum_{x,y}\mathbb{1}_{x,y}$. Then:

$$\hat{p}(x) = \frac{\sum_y \mathbb{1}_{x,y}}{N} = \frac{\text{Number of populated entries in row } x}{\text{Total number of populated entries}}, \qquad \hat{q}(c) = \sum_x \hat{p}(x)\,q(c|x);$$

$I(Y;D)$, $\hat{p}(y)$, and $\hat{q}(d)$ are defined similarly. Finally,

$$I(C,D;Z) = \sum_{c,d,z} q(c,d)\,q(z|c,d)\log\frac{q(z|c,d)}{p(z)} \approx \sum_{c,d,z} \hat{q}(c,d)\,\hat{q}(z|c,d)\log\frac{\hat{q}(z|c,d)}{\hat{p}(z)}.$$
We assume Z is a categorical variable, thus:

$$\hat{p}(z) = \frac{\sum_{x,y:\,z(x,y)=z} 1}{N} = \frac{\text{Number of entries equal to } z}{\text{Total number of populated entries}},$$

$$\hat{q}(c,d) = \frac{\sum_{x,y} q(c|x)\,q(d|y)\,\mathbb{1}_{x,y}}{N} = \frac{\text{Number of populated entries in section } (c,d)}{\text{Total number of populated entries}},$$

$$\hat{q}(z|c,d) = \frac{\sum_{x,y:\,z(x,y)=z} q(c|x)\,q(d|y)}{\sum_{x,y} q(c|x)\,q(d|y)\,\mathbb{1}_{x,y}} = \frac{\text{Number of entries equal to } z \text{ in section } (c,d)}{\text{Number of populated entries in section } (c,d)}.$$

In the special case of complete data matrices $\mathbb{1}_{x,y}$ is identically 1 and $\hat{q}(c,d)$ may be decomposed as $\hat{q}(c,d) = \hat{q}(c)\,\hat{q}(d)$. In addition $\hat{p}(x)$ and $\hat{p}(y)$ take the form $\hat{p}(x) = \frac{1}{n}$ and $\hat{p}(y) = \frac{1}{m}$. In the general case considered in this paper, however, X and Y (and thus C and D) are not independent.

2.2 Sequential Optimization

Given q(c|x) and q(d|y), one can calculate all the quantities defined above, and in particular the minimization functional $L_{min} = nI(X;C) + mI(Y;D) - \beta I(C,D;Z)$ defined in equation (1). To minimize $L_{min}$ (using hard partitions) we can use the sequential (greedy) optimization algorithm suggested in [13]. The algorithm is quite simple:

1. Start with a random (hard) partition q(c|x), q(d|y).
2. Iterate until convergence (i.e. until no changes are made at step (b)): traverse all rows x and columns y of the matrix in random order. For each row/column:
   (a) Draw x (or y) out of its cluster.
   (b) Reassign it to the new cluster c* (or d*) that minimizes $L_{min}$. The new cluster may turn out to be the old one, in which case no change is counted.

Due to the monotonic decrease of $L_{min}$, which is lower bounded by $-\beta H(Z)$, the algorithm is guaranteed to converge to a local minimum of (1). Multiple random initializations may be used to improve the result. This simple algorithm is by far not the only way to optimize (1), but in practice it has been shown to achieve very good results on similar optimization problems [2].
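The two steps above can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions, not the authors' implementation: it uses hard assignments, base-2 logarithms, and naively recomputes the full functional for every candidate move (an efficient implementation would update the sufficient statistics incrementally). All names (`score`, `greedy_coclust`, `entries`, `c_of`, `d_of`) are our own.

```python
import math
import random
from collections import Counter

def score(entries, c_of, d_of, beta, n, m):
    """L_min = n*I(X;C) + m*I(Y;D) - beta*I(C,D;Z) for hard partitions.

    entries maps each populated cell (x, y) -> z; c_of / d_of are hard
    cluster assignments. For a hard partition, I(X;C) reduces to the
    entropy H(C) under the empirical weights p_hat(x) (base-2 logs: bits).
    """
    N = len(entries)
    cx, dy, cd, cdz, pz = Counter(), Counter(), Counter(), Counter(), Counter()
    for (x, y), z in entries.items():
        c, d = c_of[x], d_of[y]
        cx[c] += 1; dy[d] += 1; cd[c, d] += 1; cdz[c, d, z] += 1; pz[z] += 1

    def h(counts):  # entropy of a cluster occupancy table, in bits
        return -sum(v / N * math.log2(v / N) for v in counts.values())

    i_cdz = sum(v / N * math.log2((v / cd[c, d]) / (pz[z] / N))
                for (c, d, z), v in cdz.items())
    return n * h(cx) + m * h(dy) - beta * i_cdz

def greedy_coclust(entries, n, m, k_c, k_d, beta, seed=0, c_init=None, d_init=None):
    """Steps 1-2 above: start from a (random) hard partition, then sweep all
    rows and columns in random order, reassigning each to the cluster that
    minimizes L_min, until a full sweep changes nothing. The current cluster
    is kept on ties, so every accepted move strictly decreases L_min."""
    rng = random.Random(seed)
    c_of = dict(c_init) if c_init else {x: rng.randrange(k_c) for x in range(n)}
    d_of = dict(d_init) if d_init else {y: rng.randrange(k_d) for y in range(m)}
    items = [('row', x) for x in range(n)] + [('col', y) for y in range(m)]
    changed = True
    while changed:
        changed = False
        rng.shuffle(items)
        for kind, i in items:
            assign, k = (c_of, k_c) if kind == 'row' else (d_of, k_d)
            old = assign[i]
            best_val, best = None, old
            for cand in [old] + [c for c in range(k) if c != old]:
                assign[i] = cand
                val = score(entries, c_of, d_of, beta, n, m)
                if best_val is None or val < best_val - 1e-12:
                    best_val, best = val, cand
            assign[i] = best
            changed |= (best != old)
    return c_of, d_of
```

On a toy 4x4 block matrix with two row and two column groups, a single misassigned row is pulled back to its block, since that is the only single-item move that decreases the functional.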
The complexity of the algorithm is analyzed in the supplementary material, where it is shown to be $O(M(n+m)|C||D|)$, where M is the number of iterations required for convergence (usually 10-40) and |C|, |D| are the cardinalities of the corresponding variables.

2.3 Minimum Description Length (MDL) Formulation

The minimization functional $L_{min}$ has three free parameters that have to be externally determined: the tradeoff (or resolution) parameter $\beta$, and the cardinalities |C| and |D|. Whereas in some applications they may be given (e.g. the desired number of clusters), there are cases when they also require optimization (as in the example of matrix completion in the next section). To perform such optimization, the Minimum Description Length (MDL) principle [14] is used. The idea behind MDL is that models achieving better compression of the training data - when the compression includes the model description - also achieve better generalization on test data. The following compression scheme is defined: |C| row and |D| column clusters define $|C||D|$ sections, each getting roughly $\frac{N}{|C||D|}$ samples. The corresponding distributions $\hat{q}(z|c,d)$ over the categorical variable Z may be described by $\frac{|Z||C||D|}{2}\log\frac{N}{|C||D|}$ bits (see [14]). As already mentioned, the matrix partition itself may be described by $nI(X;C)+mI(Y;D)$ bits. And given the partition and the distributions $\hat{q}(z|c,d)$, the number of bits required to code the matrix entries is $NH(Z|C,D)$ [12]. Thus the total description length is

$$nI(X;C) + mI(Y;D) + NH(Z|C,D) + \frac{|Z||C||D|}{2}\log\frac{N}{|C||D|}.$$

Since $H(Z|C,D) = H(Z) - I(C,D;Z)$ and $H(Z)$ is constant, the latter can be omitted from the optimization, which results in the total minimization functional

$$F_{mdl} = nI(X;C) + mI(Y;D) - NI(C,D;Z) + \frac{|Z||C||D|}{2}\log\frac{N}{|C||D|}. \qquad (2)$$

Observe that, constrained on |C| and |D|, the $L_{min}$ corresponding to $F_{mdl}$ takes the form

$$L_{min} = nI(X;C) + mI(Y;D) - NI(C,D;Z), \qquad (3)$$

i.e. the optimal tradeoff $\beta = N$ is uniquely determined.
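As a small worked example of eq. (2), the description length can be computed and scanned over candidate cardinalities. The information values below are made up for illustration (they stand in for what optimizing eq. (3) at each cardinality pair would return); the matrix dimensions and |Z| are likewise hypothetical.

```python
import math

def fmdl(n, m, N, z_card, c_card, d_card, i_xc, i_yd, i_cdz):
    """Description length of eq. (2), in bits:
    n*I(X;C) + m*I(Y;D) - N*I(C,D;Z) + (|Z||C||D|/2) * log2(N/(|C||D|))."""
    penalty = 0.5 * z_card * c_card * d_card * math.log2(N / (c_card * d_card))
    return n * i_xc + m * i_yd - N * i_cdz + penalty

# Hypothetical (|C|, |D|) -> (I(X;C), I(Y;D), I(C,D;Z)) values one might
# obtain by optimizing eq. (3) at each cardinality pair, on an illustrative
# 100 x 80 matrix with N = 2000 populated entries and |Z| = 5 values:
candidates = {(2, 2): (0.9, 0.8, 0.10),
              (4, 3): (1.8, 1.4, 0.16),
              (8, 6): (2.9, 2.4, 0.17)}
best = min(candidates,
           key=lambda cd: fmdl(100, 80, 2000, 5, cd[0], cd[1], *candidates[cd]))
# With these numbers the model-complexity penalty outweighs the small
# information gains of the finer partitions, so the scan selects (2, 2).
```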
Since in practice $F_{mdl}$ is roughly convex in both C and D, the optimal values for these two parameters may be easily determined by scanning.

[Figure 1: Gin and Gout in the Multivariate IB formulation. (a) Gin; (b) Gout. Both graphs are over the nodes X, Y, Z, C, D.]

2.4 Relation with the Multivariate Information Bottleneck

Multivariate Information Bottleneck (IB) [8] is an unsupervised approach for structured data exploration. Its core lies in combining the Bayesian networks formalism [15] with the Information Bottleneck method [4]. Multivariate IB searches for a meaningful structured partition of the data, defined by compression variables (in our case these are C and D). Two graphs, Gin and Gout, are defined. The former specifies the relations between the input (data) variables - in our case, X, Y, and Z - and the compression variables. The latter specifies the information terms that are to be preserved by the partition. A tradeoff between the multi-information preserved in the input structure $I_{G_{in}}$ (which we want to minimize) and the multi-information expressed by the target structure $I_{G_{out}}$ (which we want to maximize) is then optimized. For a set of variables $V = V_1, \dots, V_n$, a directed acyclic graph (Bayesian network) G, and a joint probability distribution p(V) over V, the multi-information $I_G(V)$ is defined as:

$$I_G[p(V)] = \sum_{i=1}^{n} I(V_i; Pa_G(V_i)),$$

where $Pa_G(V_i)$ are the parents of node $V_i$ in G. The graphs Gin and Gout corresponding to our case are given in Figure 1. (The dashed link between X and Y in Gin appears when missing values are present in the matrix and may be chosen in any direction.) The corresponding optimization functional is:

$$\min_{q(c|x),\,q(d|y)} I_{G_{in}}(X;Y;Z;C;D) - \beta I_{G_{out}}(X;Y;Z;C;D) = \min_{q(c|x),\,q(d|y)} I(X;Y) + I(X,Y;Z) + I(X;C) + I(Y;D) - \beta I(C,D;Z).$$
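For concreteness, the multi-information of a Bayesian network can be evaluated directly from a joint distribution by summing each node's mutual information with its parents. The sketch below (our own helper, exact enumeration over discrete values, base-2 logs) is purely illustrative:

```python
import math

def multi_information(joint, parents):
    """I_G[p(V)] = sum_i I(V_i; Pa_G(V_i)) for a discrete joint distribution.

    joint maps a full assignment tuple (v_1, ..., v_n) -> probability;
    parents maps each variable index i to the tuple of its parents in G.
    A parentless node contributes I(V_i; {}) = 0.
    """
    def marginal(idx):
        out = {}
        for assign, p in joint.items():
            key = tuple(assign[j] for j in idx)
            out[key] = out.get(key, 0.0) + p
        return out

    total = 0.0
    for i, pa in parents.items():
        if not pa:
            continue
        pi, ppa = marginal((i,)), marginal(pa)
        for key, p in marginal((i,) + tuple(pa)).items():
            if p > 0:
                total += p * math.log2(p / (pi[key[:1]] * ppa[key[1:]]))
    return total
```

For the chain X -> Y with two perfectly correlated bits this yields 1 bit; for independent bits it yields 0.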
By observing that $I(X,Y;Z)$ and $I(X;Y)$ are independent of q(c|x) and q(d|y) we can eliminate them from the above optimization and obtain exactly the optimization functional defined in (1), only with equal weighting of $I(X;C)$ and $I(Y;D)$. Thus the approach may be seen as a special case of the multivariate IB for the graphs Gin and Gout defined in Figure 1. An important distinction should be made, though: unlike the multivariate IB we do not require the existence of the joint probability distribution p(x, y, z). This is achieved by excluding the term $I(X,Y;Z)$ from the optimization functional.

2.5 Relation with Information Based Clustering

A recent information-theoretic approach for cluster analysis is given in [10], and is known as information based clustering, abbreviated as Iclust. In contrast to the original IB method, Iclust is equally applicable to co-occurrence as well as non co-occurrence data. In the following we highlight the relation of our work to this earlier contribution. By changing the notations used in [10] to those used here, we can write the similarity measure $s(x_1,x_2)$ used in [10] as:

$$s(x_1,x_2) = \sum_{z_1,z_2} \sum_{y} p(y)\,p(z_1,z_2|x_1,x_2,y)\log\frac{p(y)\,p(z_1,z_2|x_1,x_2,y)}{\sum_{y_1} p(y_1)p(z_1|x_1,y_1)\,\sum_{y_2} p(y_2)p(z_2|x_2,y_2)} = I(Z_1;Z_2|x_1,x_2).$$

Table 1: Cluster coherence for the ESR and S&P stock datasets. The table provides the coherence of the achieved solutions for Nc = 5, 10, 15, and 20 row clusters. The results achieved by the Iclust algorithm at the same settings are shown in brackets alongside the results of our algorithm. For the ESR data an average coherence according to the three GOs is shown. Separate results for each GO are provided in the supplementary material.

Dataset    Nc = 5    Nc = 10   Nc = 15   Nc = 20
ESR        69 (79)   53 (49)   50 (52)   42 (42)
S&P        94 (88)   83 (91)   92 (93)   86 (86)
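In the fully observed, deterministic case the similarity above reduces to the plugin mutual information between two rows' values, paired column by column with equiprobable columns. A small sketch of that special case (our own simplified reading, valid only without missing values; the function name is ours):

```python
import math
from collections import Counter

def row_similarity(row1, row2):
    """Plugin estimate of s(x1, x2) = I(Z1; Z2 | x1, x2) for two fully
    observed rows with deterministic entries: pair the rows' values column
    by column, treat every column as equiprobable, and compute the mutual
    information of the paired values (in bits)."""
    m = len(row1)
    joint = Counter(zip(row1, row2))
    p1, p2 = Counter(row1), Counter(row2)
    return sum(c / m * math.log2((c / m) / ((p1[a] / m) * (p2[b] / m)))
               for (a, b), c in joint.items())
```

Two identical binary rows give 1 bit of similarity; two rows whose values are independent across columns give 0.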
Substituting this into the optimization functional of [10], changing maximization to minimization by flipping the sign, and substituting $T = \frac{1}{\beta}$, we obtain:

$$\min_{q(c|x)} I(X;C) - \beta \sum_{c} q(c) \sum_{x_1,x_2} q(x_1|c)\,q(x_2|c)\,s(x_1,x_2) = \min_{q(c|x)} I(X;C) - \beta \sum_{c,x_1,x_2} q(c,x_1,x_2)\,I(Z_1;Z_2|x_1,x_2),$$

which is reminiscent of equation (1) if no column (Y) grouping is done and cluster variance is measured through pairwise distances rather than based on a centroid model. Importantly, in order to be able to evaluate $I(Z_1;Z_2|x_1,x_2)$, Iclust requires a sufficient number of columns to be available, whereas our approach can operate with any number of columns. Moreover, even when the data contain many columns but are relatively sparse, i.e. have many missing values, evaluating $I(Z_1;Z_2|x_1,x_2)$ might be prohibitive, as it requires a large enough intersection of non-missing observations for $z_1$ and $z_2$. For our approach this is not a limitation. On the contrary, the approach is designed to cope with this kind of data and resolves the problem by simultaneously grouping rows and columns of the matrix to amplify the statistics.

3 Applications

We first compare our algorithm to Iclust, as it was shown to be superior or comparable to 18 other commonly used clustering techniques over a wide range of application domains [10]. We then describe an experiment on matrix completion. Another application to a small dataset is provided in the supplementary material. In the last two cases Iclust is not directly applicable, and the multivariate IB is not directly applicable to any of the provided examples.

3.1 One Dimensional Clustering - Comparison to Iclust

We focus on two applications reported in [10]. For purposes of comparison we restrict our algorithm to cluster only the row dimension of the matrix by setting the number of column clusters, |D|, equal to the number of columns, m. This simplifies the objective functional defined in equation (1) to $L_{min} = I(X;C) - \beta I(C,Y;Z)$.
(To obtain a form similar to [10] we incorporate the factor n multiplying $I(X;C)$ into $\beta$.) For both applications we use exactly the same setting as [10], including row-wise quantization of the input data into five equally populated bins and the same values for the $\beta$ parameter. The first dataset consists of expression levels of yeast genes under 173 various forms of environmental stress [16]. Previous analysis identified a group of ≈300 stress-induced and ≈600 stress-repressed genes with "nearly identical but opposite patterns of expression in response to the environmental shifts" [17]. These 900 genes were termed the yeast environmental stress response (ESR) module. Following [10] we cluster the genes into |C| = 5, 10, 15, and 20 clusters. To assess the biological significance of the results we consider the coherence [18] of the obtained clusters with respect to three Gene Ontologies (GOs) [19]. (Supplementary material is available at http://www.cs.huji.ac.il/~seldin.) Specifically, the coherence of a cluster is defined as the percentage of elements within the cluster that are given an annotation found to be significantly enriched in the cluster [18]. The results achieved by our algorithm on this dataset are comparable to those achieved by Iclust in all the verified settings - see Table 1. The second dataset is the day-to-day fractional changes in the price of the stocks in the Standard & Poor's (S&P) 500 list, during 273 trading days of 2003. As with the gene expression data, we use exactly the same setting as [10] and cluster the stocks into |C| = 5, 10, 15, and 20 clusters. To evaluate the coherence of the ensuing clusters we use the Global Industry Classification Standard, which classifies companies at four different levels organized in a hierarchical tree: sector, industry group, industry, and sub-industry. As with the ESR dataset, our results are comparable with the results of Iclust for all configurations - see Table 1.
3.2 Matrix Completion and Collaborative Filtering

Here, we explore the full power of our algorithm in simultaneous grouping of rows and columns of a matrix. A highly relevant application is matrix completion: given a matrix with missing values, we would like to complete it by utilizing similarities between rows and columns. This problem is at the core of collaborative filtering applications, but may also appear in other fields. We test our algorithm on the publicly available MovieLens 100K dataset. The dataset consists of 100,000 ratings on a five-star scale for 1,682 movies by 943 users. We take the five non-overlapping splits of the dataset into 80% train and 20% test provided at the MovieLens web site. We stress that with this division the training data are extremely sparse: only 5% of the training matrix entries are populated, whereas 95% of the values are missing. To find a "good" bi-clustering of the ratings matrix, minimization of $F_{mdl}$ defined in (2) is done by scanning the cluster cardinalities |C| and |D| and optimizing $L_{min}$ as defined in (3) for each fixed pair of |C|, |D|. The minimum of $F_{mdl}$ is obtained at |C| ≈ 13 and |D| ≈ 6, with under 1% sensitivity to small changes in |C| and |D|, both in $F_{mdl}$ values and in prediction accuracy. See the supplementary material for a visualization of the solution at |C| = 4 and |D| = 3. To measure the accuracy of our algorithm we use the mean absolute error (MAE) metric, which is commonly used for evaluation on this dataset [11]. The mean absolute error is defined as

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} |z_i - r_i|,$$

where the $z_i$ are the predicted and the $r_i$ the actual ratings. To convert the distributions $\hat{q}(z|c,d)$ obtained in our clustering procedure into concrete predictions, we take the median of the z values within each section (c, d). Note that our algorithm is general and does not directly optimize the MAE error functional. Nevertheless we obtain 0.72 MAE (with a deviation of less than 0.01 over multiple experiments).
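The median-of-section prediction and the MAE evaluation can be sketched as follows. This is a toy illustration with our own names; in particular, the fallback value for sections with no training data is our own addition, not something specified in the paper.

```python
from statistics import median

def predict_and_score(train, test, c_of, d_of, fallback=3):
    """Collect training ratings per section (c, d), predict the median of
    each section, and report mean absolute error (MAE) on the test triples.

    train/test are lists of (x, y, z) triples; c_of/d_of map rows and
    columns to clusters. `fallback` (our choice) is predicted for any
    section that received no training data."""
    sections = {}
    for x, y, z in train:
        sections.setdefault((c_of[x], d_of[y]), []).append(z)
    med = {cd: median(zs) for cd, zs in sections.items()}
    errors = [abs(med.get((c_of[x], d_of[y]), fallback) - z) for x, y, z in test]
    return sum(errors) / len(errors)
```

With one section whose training ratings are [4, 5, 4], the prediction is the median 4, so a held-out rating of 5 contributes an absolute error of 1.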
This confidently beats the "magic barrier" of 0.73 reported in the collaborative filtering literature [11]. The root mean squared error (RMSE), measured for the same clustering with the mean of the z values within each section (c, d) taken for prediction, yields 0.96 (with a deviation below 0.01). This is much better than the 1.165 RMSE reported for a dataset 20 times larger [20], and quite close to the 0.9525 RMSE reported by Netflix for a dataset of a similar nature 1000 times larger.

4 Discussion

A new model-independent approach to the analysis of data given in the form of samples of a function Z(X, Y), rather than samples of co-occurrence statistics of X and Y, is introduced. From a theoretical viewpoint the approach is a much-needed extension of the Information Bottleneck method that allows for its application to entirely new domains. The approach also provides a natural way for bi-clustering and matrix completion. From a practical viewpoint the major contribution of the paper is the achievement of the best known results for a wide range of applications with a single algorithm. We also improve on the results of prediction of missing values (collaborative filtering). Possible directions for further research include generalization to continuous data values, such as those obtained in gene expression and stock price data, and relaxation of the algorithm to "soft" clustering solutions. Another interesting extension would be to dimensionality reduction, rather than clustering, as occurs in IB when applied to continuous variables [21]. The proposed framework also provides a natural platform for derivation of generalization bounds for missing values prediction, which will be discussed elsewhere.

Data sources: the S&P 500 list is available at http://www.standardpoors.com; the Global Industry Classification Standard at http://wrds.wharton.upenn.edu; the MovieLens dataset at http://www.grouplens.org; the Netflix Prize rules at http://www.netflixprize.com/rules.

References

[1] Noam Slonim and Naftali Tishby. Document clustering using word clusters via the information bottleneck method.
In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2000.
[2] Noam Slonim. The Information Bottleneck: Theory and Applications. PhD thesis, The Hebrew University of Jerusalem, 2002.
[3] Sara C. Madeira and Arlindo L. Oliveira. Biclustering algorithms for biological data analysis: A survey. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 1(1):24–45, January 2004.
[4] Naftali Tishby, Fernando Pereira, and William Bialek. The information bottleneck method. In Allerton Conference on Communication, Control and Computation, volume 37, pages 368–379, 1999.
[5] Janne Sinkkonen and Samuel Kaski. Clustering based on conditional distributions in an auxiliary space. Neural Computation, 14(1):217–239, 2002.
[6] David Gondek and Thomas Hofmann. Non-redundant data clustering. In 4th IEEE International Conference on Data Mining, 2004.
[7] Susanne Still, William Bialek, and Léon Bottou. Geometric clustering using the information bottleneck method. In Advances in Neural Information Processing Systems 16.
[8] Noam Slonim, Nir Friedman, and Naftali Tishby. Multivariate information bottleneck. Neural Computation, 18, 2006.
[9] Naftali Tishby and Noam Slonim. Data clustering by Markovian relaxation and the information bottleneck method. In NIPS, 2000.
[10] Noam Slonim, Gurinder Singh Atwal, Gasper Tkacik, and William Bialek. Information-based clustering. In Proceedings of the National Academy of Sciences (PNAS), volume 102, pages 18297–18302, December 2005.
[11] J. Herlocker, J. Konstan, L. Terveen, and J. Riedl. Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems, 22(1):5–53, January 2004.
[12] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley Series in Telecommunications. John Wiley & Sons, New York, NY, 1991.
[13] Noam Slonim, Nir Friedman, and Naftali Tishby.
Unsupervised document classification using sequential information maximization. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2002.
[14] A. Barron, J. Rissanen, and B. Yu. The minimum description length principle in coding and modeling. IEEE Transactions on Information Theory, 44:2743–2760, 1998.
[15] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann Publishers, 1988.
[16] Gasch A.P., Spellman P.T., Kao C.M., Carmel-Harel O., Eisen M.B., Storz G., Botstein D., and Brown P.O. Genomic expression programs in the response of yeast cells to environmental changes. Molecular Biology of the Cell, 11(12):4241–4257, December 2000.
[17] A.P. Gasch. The environmental stress response: a common yeast response to environmental stresses. Topics in Current Genetics (series editor S. Hohmann), 1:11–70, 2002.
[18] Segal E., Shapira M., Regev A., Pe'er D., Botstein D., Koller D., and Friedman N. Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nature Genetics, 34(2):166–176, 2003.
[19] M. Ashburner, C. A. Ball, J. A. Blake, D. Botstein, H. Butler, J. M. Cherry, A. P. Davis, K. Dolinski, S. S. Dwight, J. T. Eppig, M. A. Harris, D. P. Hill, L. Issel-Tarver, A. Kasarskis, S. Lewis, J. C. Matese, J. E. Richardson, M. Ringwald, G. M. Rubin, and G. Sherlock. Gene ontology: tool for the unification of biology. Nature Genetics, 25:25–29, May 2000.
[20] Thomas Hofmann. Latent semantic models for collaborative filtering. ACM Transactions on Information Systems, volume 22, January 2004.
[21] Gal Chechik, Amir Globerson, Naftali Tishby, and Yair Weiss. Gaussian information bottleneck. Journal of Machine Learning Research, 6:165–188, January 2005.
2006
A Probabilistic Algorithm Integrating Source Localization and Noise Suppression of MEG and EEG Data

Johanna M. Zumer, Biomagnetic Imaging Lab, Department of Radiology, Joint Graduate Group in Bioengineering, University of California, San Francisco, San Francisco, CA 94143-0628, johannaz@mrsc.ucsf.edu. Hagai T. Attias, Golden Metallic, Inc., San Francisco, CA, htattias@goldenmetallic.com. Kensuke Sekihara, Dept. of Systems Design and Engineering, Tokyo Metropolitan University, Tokyo 191-0065, Japan, ksekiha@cc.tmit.ac.jp. Srikantan S. Nagarajan, Biomagnetic Imaging Lab, Department of Radiology, Joint Graduate Group in Bioengineering, University of California, San Francisco, San Francisco, CA 94143-0628, sri@mrsc.ucsf.edu.

Abstract

We have developed a novel algorithm for integrating source localization and noise suppression based on a probabilistic graphical model of stimulus-evoked MEG/EEG data. Our algorithm localizes multiple dipoles while suppressing noise sources with computational complexity equivalent to a single dipole scan, and is therefore more efficient than traditional multidipole fitting procedures. In simulation, the algorithm can accurately localize and estimate the time course of several simultaneously-active dipoles, with rotating or fixed orientation, at noise levels typical for averaged MEG data. Furthermore, the algorithm is superior to beamforming techniques, which we show to be an approximation to our graphical model, in estimation of temporally correlated sources. Success of this algorithm for localizing auditory cortex in a tumor patient and for localizing an epileptic spike source are also demonstrated.

1 Introduction

Mapping functional brain activity is an important problem in basic neuroscience research as well as in clinical use.
Clinically, such brain mapping procedures are useful to guide neurosurgical planning, navigation, and tumor and epileptic spike removal, as well as guiding the surgeon as to which areas of the brain are still relevant for cognitive and motor function in each patient. Many non-invasive techniques have emerged for functional brain mapping, such as functional magnetic resonance imaging (fMRI) and electromagnetic source imaging (ESI). Although fMRI is the most popular method for functional brain imaging with high spatial resolution, it suffers from poor temporal resolution since it measures blood oxygenation level signals with fluctuations in the order of seconds. However, dynamic neuronal activity has fluctuations in the sub-millisecond time-scale that can only be directly measured with electromagnetic source imaging (ESI). ESI refers to imaging of neuronal activity using magnetoencephalography (MEG) and electroencephalography (EEG) data. MEG refers to measurement of tiny magnetic fields surrounding the head and EEG refers to measurement of voltage potentials using an electrode array placed on the scalp. The past decade has shown rapid development of whole-head MEG/EEG sensor arrays and of algorithms for reconstruction of brain source activity from MEG and EEG data. Source localization algorithms, which can be broadly classified as parametric or tomographic, make assumptions to overcome the ill-posed inverse problem. Parametric methods, including equivalent current dipole (ECD) fitting techniques, assume knowledge about the number of sources and their approximate locations. A single dipolar source can be localized well, but ECD techniques poorly describe multiple sources or sources with large spatial extent. Alternatively, tomographic methods reconstruct an estimate of source activity at every grid point across the whole brain. Of many tomographic algorithms, the adaptive beamformer has been shown to have the best spatial resolution and zero localization bias [1, 2]. 
All existing methods for brain source localization are hampered by the many types of noise present in MEG/EEG data. The magnitude of stimulus-evoked neural sources is on the order of the noise in a single trial, so typically 50-200 trials must be averaged to distinguish the sources from the noise. This can be time-consuming, and it is difficult for a subject or patient to hold still or pay attention for the duration of the experiment. Gaussian thermal noise is present at the sensors themselves. Background room interference, such as from powerlines and electronic equipment, can be problematic. Biological noise such as heartbeat, eyeblink, or other muscle artifact can also be present. Ongoing brain activity itself, including the drowsy-state alpha (~10 Hz) rhythm, can drown out evoked brain sources. Finally, most localization algorithms have difficulty in separating neural sources of interest that have temporally overlapping activity. Noise in MEG and EEG data is typically reduced by a variety of preprocessing algorithms before the data are used by source localization algorithms. Simple forms of preprocessing include filtering out frequency bands not containing a brain signal of interest. Additionally and more recently, ICA algorithms have been used to remove artefactual components, such as eyeblinks. More sophisticated techniques have also recently been developed using graphical models for preprocessing prior to source localization [3, 4]. This paper presents a probabilistic modeling framework for MEG/EEG source localization that is robust to interference and noise. The framework uses a probabilistic hidden variable model that describes the observed sensor data in terms of activity from unobserved brain and interference sources. The unobserved source activities and model parameters are inferred from the data by a Variational-Bayes Expectation-Maximization algorithm.
The algorithm then creates a spatiotemporal image of brain activity by scanning the brain: at each grid location, it infers the model parameters and variables from the sensor data and uses them to compute the likelihood of a dipole at that location. We also show that an established source localization method, the minimum variance adaptive beamformer (MVAB), is an approximation of our framework.

2 Probabilistic model integrating source localization and noise suppression

This section describes the generative model for the data. We assume that the MEG/EEG data has been collected such that stimulus onset or some other experimental marker indicates the 'zero' time point. Ongoing brain activity, biological noise, background environmental noise, and sensor noise are present in both pre-stimulus and post-stimulus periods; the evoked neural sources of interest, however, are present only in the post-stimulus period. We therefore assume that the sensor data can be described as coming from four types of sources: (1) an evoked source at a particular voxel (grid point), (2) all other evoked sources not at that voxel, (3) all background noise sources with spatial covariance at the sensors (including brain, biological, or environmental sources), and (4) sensor noise. We first infer the model describing source types (3) and (4) from the pre-stimulus data, then fix certain quantities (described in Section 2.2) and infer the full model describing the remaining source types (1) and (2) from the post-stimulus data (described in Section 2.1). After inference of the model, a map of the source activity is created, as well as a map of the likelihood of activity across voxels.

Let $y_n$ denote the $K \times 1$ vector of sensor data at time point $n$, where $K$ is the number of sensors (typically 200). Time ranges over $n = -N_{pre}, \ldots, N_{post}-1$, where $N_{pre}$ ($N_{post}$) indicates the number of time samples in the pre-(post-)stimulus period. The generative model for the data $y_n$ is

$$y_n = \begin{cases} B u_n + v_n & n = -N_{pre}, \ldots, -1 \\ F^r s^r_n + A^r x^r_n + B u_n + v_n & n = 0, \ldots, N_{post}-1 \end{cases} \quad (1)$$

Figure 1: (Left) Graphical model for the proposed algorithm. Variables are inside the dotted box, parameters outside. Quantities in circles are unknown and learned from the data; quantities in squares are known. (Right) Representation of the factors influencing the data recorded at the sensors. In orange, a post-stimulus source at the voxel of interest, focused on by the lead field F. In red, other post-stimulus sources not at that voxel. In green, all background sources, including ongoing brain activity, eyeblinks, heartbeat, and electrical noise. In blue, thermal noise present in each sensor.

The $K \times 3$ forward lead field matrix $F^r$ represents the physical (and linear) relationship between a dipole source at voxel $r$, for each orientation, and its influence on sensor $k = 1:K$ [5]. The lead field $F^r$ is calculated from the geometry of the source location relative to the sensor locations, and from the conducting medium in which the source lies: the human head is most commonly approximated as a single-shell spherical volume conductor. The source activity $s^r_n$ is a $3 \times 1$ vector of dipole strength in each of the three orientations at time $n$ for voxel $r$. The $K \times L$ matrix $A$ and the $L \times 1$ vector $x_n$ represent the post-stimulus mixing matrix and the evoked non-localized factors, respectively, corresponding to source type (2) discussed above. The $K \times M$ matrix $B$ and the $M \times 1$ vector $u_n$ represent the background mixing matrix and background factors, respectively. The $K \times 1$ vector $v_n$ represents the sensor-level noise. In the post-stimulus period all quantities depend on $r$ except for $B$, $u_n$ and $\lambda$ (the sensor noise precision), which are learned from the pre-stimulus data and held fixed while the other quantities are learned for each voxel. Note, however, that the posterior update for $\bar{u}_n$ does depend on the voxel $r$. The graphical model is shown in Fig. 1. 
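As a concrete illustration, the generative model of Eq. (1) can be sampled directly. The following is a minimal NumPy sketch; all dimensions, mixing matrices, and the noise level are arbitrary toy values rather than quantities from the paper, and the diagonal sensor precision is simplified to a scalar.

```python
import numpy as np

rng = np.random.default_rng(0)
K, L, M = 20, 4, 6          # sensors, interference factors, background factors
Npre, Npost = 100, 100      # samples in pre-/post-stimulus periods (toy values)

F = rng.standard_normal((K, 3))   # lead field of the scanned voxel r (assumed known)
A = rng.standard_normal((K, L))   # post-stimulus interference mixing matrix
B = rng.standard_normal((K, M))   # background mixing matrix
noise_std = 0.1                   # sensor noise; lambda = 1 / noise_std**2

# Pre-stimulus period: background factors plus sensor noise only
u_pre = rng.standard_normal((M, Npre))
y_pre = B @ u_pre + noise_std * rng.standard_normal((K, Npre))

# Post-stimulus period: dipole source s and evoked interference x are added
s = rng.standard_normal((3, Npost))
x = rng.standard_normal((L, Npost))
u_post = rng.standard_normal((M, Npost))
y_post = F @ s + A @ x + B @ u_post + noise_std * rng.standard_normal((K, Npost))
```

The two branches of Eq. (1) correspond to the two sampled blocks: the evoked terms appear only after the stimulus marker.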
This generative model becomes a probabilistic model when we specify prior distributions, as described in the next two sections.

2.1 Localization of evoked sources learned from post-stimulus data

In the stimulus-evoked paradigm, the source strength at each voxel is learned from the post-stimulus data. The background mixing matrix $B$ and sensor noise precision $\lambda$ are fixed, having been learned from the pre-stimulus data as described in Section 2.2. We assume those quantities remain constant through the post-stimulus period and are independent of source location. We assume Gaussian prior distributions on the source factors and interference factors, and we further assume that the signals are independent and identically distributed (i.i.d.) across time. The source factors have prior precision given by the $3 \times 3$ matrix $\Phi$, which relates to the strength of the dipole in each of the three orientations. All Normal distributions in this paper are specified by their mean and precision (inverse covariance).

$$p(s) = \prod_n p(s_n); \qquad p(s_n) = \prod_{j=1}^{3} p(s_{jn}) = \mathcal{N}(0, \Phi) \quad (2)$$

The interference and background factors are assumed to have identity precision. To complete the specification of this model, we need prior distributions on the model parameters. We use a conjugate prior for the interference mixing matrix $A$, where $\alpha_j$ is a hyperparameter over the $j$-th column of $A$ and $\lambda_i$ is the precision of the $i$-th sensor. The hyperparameter $\alpha$ (a diagonal matrix) provides a robust mechanism for automatic model order selection, so that the optimal size of $A$ is inferred from the data through $\alpha$. 
$$p(x) = \prod_n p(x_n); \quad p(x_n) = \prod_{j=1}^{L} p(x_{jn}) = \mathcal{N}(0, I); \qquad p(u) = \prod_n p(u_n); \quad p(u_n) = \prod_{j=1}^{M} p(u_{jn}) = \mathcal{N}(0, I);$$
$$p(A) = \prod_{ij} \mathcal{N}(A_{ij} \mid 0, \lambda_i \alpha_j) \quad (3)$$

We now specify the full model:

$$p(y \mid s, x, u, A, B, \lambda) = \prod_n p(y_n \mid s_n, x_n, u_n, A, B, \lambda); \qquad p(y_n \mid s_n, x_n, u_n, A, B, \lambda) = \mathcal{N}(y_n \mid F s_n + A x_n + B u_n, \lambda) \quad (4)$$

Exact inference on this model is intractable because of the joint posterior over the interference factors and interference mixing matrix; we therefore use the following variational-Bayesian factorization of the posterior:

$$p(s, x, A \mid y) \approx q(s, x, A \mid y) = q(s, x \mid y)\, q(A \mid y) \quad (5)$$

We learn the hidden variables and parameters from the post-stimulus data, iterating through each voxel in the brain, using a variational-Bayesian Expectation-Maximization (EM) algorithm. All variables, parameters and hyperparameters are hidden and are learned from the data. In place of maximizing $\log p(y)$, which is mathematically intractable, we maximize a lower bound $\mathcal{F}$ on $\log p(y)$:

$$\mathcal{F} = \int dx\, ds\, dA\; q(s, x, A \mid y)\, \left[\log p(y, s, x, A) - \log q(s, x, A \mid y)\right] = \log p(y) - KL\left[q(s, x, A \mid y)\,\|\,p(s, x, A \mid y)\right] \quad (6)$$

where $KL(q\|p)$ is the Kullback-Leibler divergence between the distributions $q$ and $p$. $\mathcal{F}$ equals $\log p(y)$ when the approximation in Eq. (5) is exact, making the KL divergence zero. The variational-Bayesian EM algorithm alternately maximizes $\mathcal{F}$ with respect to the posteriors $q(s, x \mid y)$ and $q(A \mid y)$. In the E-step, $\mathcal{F}$ is maximized w.r.t. $q(s, x \mid y)$, keeping $q(A \mid y)$ constant, and the sufficient statistics of the hidden variables are computed. In the M-step, $\mathcal{F}$ is maximized w.r.t. $q(A \mid y)$, keeping $q(s, x \mid y)$ constant, and the MAP estimates of the parameters and hyperparameters are computed. In the E-step, the posterior distribution of the stacked factors given the data is computed:

$$q(x'_n \mid y_n) = \mathcal{N}(\bar{x}'_n, \Gamma); \qquad \bar{x}'_n = \Gamma^{-1} \bar{A}'^T \lambda y_n; \qquad \Gamma = \bar{A}'^T \lambda \bar{A}' + K\Psi + I' \quad (7)$$

where we define

$$\bar{x}'_n = \begin{pmatrix} \bar{s}_n \\ \bar{x}_n \\ \bar{u}_n \end{pmatrix}; \quad \bar{A}' = \begin{pmatrix} F & \bar{A} & \bar{B} \end{pmatrix}; \quad I' = \begin{pmatrix} \Phi & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix}; \quad \Psi = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \Psi_{AA} & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad (8)$$

In the M-step, we maximize $\mathcal{F}$ w.r.t. $q(A \mid y)$, holding $q(s, x \mid y)$ fixed, and update the posterior distribution of the interference mixing matrix $A$, including its precision $\Psi_{AA}$. Note that the lead field $F$ is fixed by the geometry of the sensors relative to the head, and $\bar{B}$ was learned and fixed from the pre-stimulus data. The sensor noise precision $\lambda$ is also kept fixed from the pre-stimulus period. The MAP values of the hyperparameter $\alpha$ and the source factor precision $\Phi$ are learned here from the post-stimulus data:

$$\bar{A} = (R_{yx} - F R_{sx} - \bar{B} R_{ux}) \Psi_{AA}; \quad \Psi_{AA} = (R_{xx} + \alpha)^{-1}; \quad \Phi^{-1} = \frac{1}{N} R_{ss}; \quad \alpha^{-1} = \mathrm{diag}\left(\frac{1}{K} \bar{A}^T \lambda \bar{A} + \Psi_{AA}\right) \quad (9)$$

The matrices such as $R_{yx}$ denote the posterior covariance between the two subscripted quantities; explicit definitions are omitted for space. Each EM iteration increases the marginal likelihood. The variational likelihood (the lower bound on the exact marginal likelihood) is

$$\mathcal{L}^r = \frac{N}{2} \log \frac{|\lambda|\,|\Phi^r|}{|\Gamma^r|} - \frac{1}{2} \sum_{n=1}^{N} \left( y_n^T \lambda y_n - \bar{x}'^{rT}_n \Gamma^r \bar{x}'^r_n \right) + \frac{K}{2} \log |\alpha^r|\,|\Psi^r| \quad (10)$$

This likelihood depends on the source voxel $r$, so a map of the likelihood across the brain can be displayed. We can also plot an image of the source power estimates and the time course of activity at each voxel.

Figure 2: Localization error (mm) of the proposed model (blue) relative to beamforming (green) as a function of SNIR (dB), for MVAB and the proposed algorithm with real brain noise and simulated interference. See text for details.

We note that the computational complexity of the proposed algorithm is of order $O(KLNS)$, roughly equivalent to a single-dipole scan, which is of order $O(N(K^2+S))$. Both are much smaller than the complexity of a multi-dipole scan, which is of order $O(NS^P)$, where $P$ is the number of dipoles and $S$ typically represents several thousand voxels. 
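The E-step of Eq. (7) reduces to building one posterior precision matrix and performing a single linear solve shared by all time points. The sketch below illustrates this, assuming for simplicity a scalar sensor noise precision `lam` in place of the paper's diagonal matrix; all input matrices here are random toy values.

```python
import numpy as np

def estep_posterior(Y, F, A_bar, B_bar, lam, Phi, Psi_AA):
    """VB E-step of Eq. (7): posterior mean and precision of the stacked
    factors x'_n = [s_n; x_n; u_n], with scalar noise precision lam."""
    K = F.shape[0]
    L, M = A_bar.shape[1], B_bar.shape[1]
    A_prime = np.hstack([F, A_bar, B_bar])            # K x (3+L+M), Eq. (8)
    # Block-diagonal prior precision I' (Eq. 8)
    I_prime = np.zeros((3 + L + M, 3 + L + M))
    I_prime[:3, :3] = Phi
    I_prime[3:, 3:] = np.eye(L + M)
    # K * Psi accounts for posterior uncertainty in A (only the x-block)
    K_Psi = np.zeros_like(I_prime)
    K_Psi[3:3 + L, 3:3 + L] = K * Psi_AA
    Gamma = lam * (A_prime.T @ A_prime) + K_Psi + I_prime
    X_bar = np.linalg.solve(Gamma, lam * (A_prime.T @ Y))  # all n in one solve
    return X_bar, Gamma

rng = np.random.default_rng(1)
K, L, M, N = 20, 4, 6, 50
Y = rng.standard_normal((K, N))
X_bar, Gamma = estep_posterior(Y, rng.standard_normal((K, 3)),
                               rng.standard_normal((K, L)),
                               rng.standard_normal((K, M)),
                               lam=100.0, Phi=np.eye(3),
                               Psi_AA=0.01 * np.eye(L))
```

The first three rows of `X_bar` are the estimated dipole time courses $\bar{s}_n$ at the scanned voxel; the remaining rows are the interference and background factors.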
We further note that the number of hidden variables to be estimated is smaller than the number of observed data points, so estimation accuracy is not significantly compromised.

2.2 Separation of background sources learned from pre-stimulus data

We learn the background mixing matrix and the sensor noise precision from the pre-stimulus data using a variational-Bayes factor analysis model. We assume Gaussian prior distributions on the background factors, with zero mean and identity precision, and a flat prior on the sensor noise precision. We again use a conjugate prior for the background mixing matrix $B$, where $\beta_j$ is a hyperparameter, analogous to the prior on the interference mixing matrix. All variables, parameters and hyperparameters are hidden and are learned from the pre-stimulus data. We make the variational-Bayesian approximation for the background mixing matrix and background factors $p(u, B \mid y) \approx q(u, B \mid y) = q(u \mid y)\, q(B \mid y)$. In the E-step, we maximize $\mathcal{F}$ w.r.t. $q(u \mid y)$, holding $q(B \mid y)$ fixed, and update the posterior distribution of the factors:

$$q(u \mid y) = \prod_n q(u_n \mid y_n); \qquad q(u_n \mid y_n) = \mathcal{N}(\bar{u}_n, \gamma); \qquad \bar{u}_n = \gamma^{-1} \bar{B}^T \lambda y_n; \qquad \gamma = \bar{B}^T \lambda \bar{B} + K\psi + I$$

In the M-step, we compute the full posterior distribution of the background mixing matrix $B$, including its precision matrix $\psi$, and the MAP estimates of the noise precision $\lambda$ and the hyperparameter $\beta$. We assume the noise precision is diagonal:

$$\bar{B} = R_{yu} \psi; \qquad \psi = (R_{uu} + \beta)^{-1}; \qquad \beta^{-1} = \mathrm{diag}\left(\frac{1}{K} \bar{B}^T \lambda \bar{B} + \psi\right); \qquad \lambda^{-1} = \frac{1}{N} \mathrm{diag}(R_{yy} - \bar{B} R_{yu}^T) \quad (11)$$

2.3 Relationship to minimum-variance adaptive beamforming

Minimum-variance adaptive beamforming (MVAB) is one of the best-performing source localization techniques. MVAB estimates the dipole source time series as $\hat{s}_n = W_{MVAB}\, y_n$, where

$$W_{MVAB} = (F^T R_{yy}^{-1} F)^{-1} F^T R_{yy}^{-1}$$

and $R_{yy}$ is the measured data covariance matrix. Thus MVAB also has computational complexity equivalent to a single-dipole scan, of order $O(N(K^2 + S))$. 
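The relationship derived in this section (that the model's MAP source estimator $W = \Gamma^{-1} F^T \Upsilon$ reduces to $W_{MVAB}$ in the infinite-data, weak-prior limit) can be checked numerically. In this toy sketch $\Upsilon$ denotes the total-noise precision defined below in the derivation, and all matrices are random illustrative values, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 12
F = rng.standard_normal((K, 3))                  # lead field at voxel r
Z = rng.standard_normal((K, K))
Upsilon = np.linalg.inv(Z @ Z.T + np.eye(K))     # total-noise precision (SPD)
Phi = 1e-5 * np.eye(3)                           # weak source prior -> MVAB limit

# MAP source estimator of the proposed model: W = Gamma^-1 F^T Upsilon
Gamma = F.T @ Upsilon @ F + Phi
W = np.linalg.solve(Gamma, F.T @ Upsilon)

# MVAB weights from the infinite-data covariance Ryy = F Phi^-1 F^T + Upsilon^-1
Ryy = F @ np.linalg.solve(Phi, F.T) + np.linalg.inv(Upsilon)
Ryy_inv = np.linalg.inv(Ryy)
W_mvab = np.linalg.solve(F.T @ Ryy_inv @ F, F.T @ Ryy_inv)
```

With a non-negligible $\Phi$ the two weight matrices differ by the factor $(I - \Gamma^{-1}\Phi)^{-1}$; as $\Phi \to 0$ they coincide, which is the approximation $\Gamma \approx F^T \Upsilon F$ used in the derivation.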
MVAB attempts to suppress interference, but recent studies have shown that it is ineffective at cancelling interference from other brain sources, especially when there are many such sources. In this section, we show that MVAB is an approximation to inference on our model.

Figure 3: Example of the algorithm and MVAB for the correlated-source simulation (likelihood maps over x-z location in mm, and estimated source time courses). See text for details.

We start by rewriting Eq. (1) as $y_n = F s_n + z_n$, where $z_n$ is termed the total noise and is given by $z_n = A x_n + B u_n + v_n$. It has mean zero and precision matrix $\Upsilon = (A A^T + B B^T + \lambda^{-1})^{-1}$. Assuming we have estimated the model parameters $A$, $B$, $\lambda$, $\Phi$, the MAP estimate of the dipole source time series is $\bar{s}_n = W y_n$, where $W = \Gamma^{-1} F^T \Upsilon$ and $\Gamma = F^T \Upsilon F + \Phi$. It can be shown that this expression is equivalent to Eq. (8). In the infinite-data limit, the data covariance satisfies $R_{yy} = F \Phi^{-1} F^T + \Upsilon^{-1}$. Its inverse is found, using the matrix inversion lemma, to be $R_{yy}^{-1} = \Upsilon - \Upsilon F \Gamma^{-1} F^T \Upsilon$. Hence we obtain

$$F^T R_{yy}^{-1} = (I - F^T \Upsilon F \Gamma^{-1}) F^T \Upsilon = \Phi \Gamma^{-1} F^T \Upsilon \quad (12)$$

where the last step used the expression for $\Gamma$. Next, we approximate $\Gamma \approx F^T \Upsilon F$ and use Eq. (12) to obtain

$$W \approx (F^T \Upsilon F)^{-1} F^T \Upsilon = (F^T \Upsilon F)^{-1} \Gamma \Phi^{-1} \Phi \Gamma^{-1} F^T \Upsilon = (F^T R_{yy}^{-1} F)^{-1} F^T R_{yy}^{-1} = W_{MVAB}$$

3 Results

3.1 Simulations

Figure 4: Performance of SAKETINI (the proposed algorithm, blue) relative to beamforming (green) in the ability to estimate the source time course: correlation of the estimated with the true source vs. SNIR (dB), for correlated and uncorrelated sources, in the NA, IN, and RE cases. See text for details.

The proposed method was tested in a variety of realistic source configurations reconstructed on a 5 mm voxel grid. 
A single-shell spherical volume conductor model was used to calculate the forward lead field [5]. Simulated datasets were constructed by placing Gaussian-damped sinusoidal time courses at specific locations inside a voxel grid based on realistic head geometry. Sources were assumed to be present in the post-stimulus period of 437 samples, along with a pre-stimulus period of 263 samples. In the "noise-alone (NA)" cases, Gaussian noise alone was added to all time points at the sensors. In the "interference (IN)" cases, Gaussian noise time courses occurring in both pre- and post-stimulus periods, representing simulated "ongoing" activity, were placed at 50 random locations throughout the brain voxel grid, and their activity was projected onto the sensors and added to both the sensor noise and the source activity. Finally, in the "real (RE)" cases, 700 samples of real MEG sensor data, averaged over 100 trials, were collected while a human subject was alert but not performing tasks or receiving stimuli. This real background data thus includes real sensor noise plus real "ongoing" brain activity that could interfere with evoked sources, and it adds spatial correlation to the sensor data. We varied the Signal-to-Noise Ratio (SNR) and the corresponding Signal-to-Noise-plus-Interference Ratio (SNIR); the SNIR is calculated as the ratio of the sensor data arising from the sources alone to the sensor data arising from noise plus interference. The first performance figure (Fig. 2) shows the localization error of the proposed method relative to the MVAB. For this data, a single dipole was placed randomly within the voxel grid space. The largest peak in the likelihood map was found and the distance from this point to the true source was recorded.

Figure 5: Algorithm applied to an auditory MEG dataset in a patient with a temporal lobe tumor (source time course at the peak voxel, normalized intensity vs. time in ms). 
Each datapoint is an average of 20 realizations of the source configuration, with error bars showing the standard error. This simulation was performed for a variety of SNIRs and for all three noise cases described above. The results for NA were omitted since both the proposed method and MVAB performed perfectly (zero error). The figure clearly shows that the localization error is smaller for the proposed method (blue) than for MVAB (green). The next set of simulations examines the proposed method's ability to estimate the source time course $s_n$. Three sources were placed in the brain voxel grid. The locations of these sources were fixed, but the orientations and time courses were allowed to vary across realizations of the simulations. In half the cases, two of the three sources were forced to be perfectly correlated in time (a scenario in which the MVAB is known to fail), while the time course of the third source was random relative to the other two. An example of the likelihood maps and estimated time courses is shown in Fig. 3. The likelihood map from the proposed method (on the left) has peaks near all three sources, including the two that were perfectly correlated (depicted by squares). However, the MVAB (middle plot) largely misses the source on the left. In the right plot, the estimated time courses from the proposed method (dashes) and MVAB (dots) are plotted against the true time course (solid). The top and middle plots correspond to the (square) correlated sources. While both methods estimate the time courses well, MVAB underestimates the overall strength of the source in the top plot and exhibits extra noise in the pre-stimulus period in the middle plot. The performance of the proposed model on the same set of correlated-source simulations, compared to beamforming, is shown in Fig. 4. 
This figure shows the correlation of the estimated with the true time course, for the three cases NA, IN, and RE, and for both correlated and uncorrelated sources, as a function of SNIR. The proposed method consistently outperforms the MVAB whether the simulated sources are highly correlated with each other (dashed lines) or uncorrelated (solid), and especially in the RE case. Each datapoint represents an average of 10 realizations of the simulation, with standard errors on the order of 0.05 (not shown).

3.2 Real data

Stimulus-evoked data was collected with a 275-channel CTF System MEG device from a patient with a temporal lobe tumor near auditory cortex. The stimulus was a noise burst presented binaurally in 120 trials. A large peak is typically seen around 100 ms after presentation of an auditory stimulus, termed the M100 peak. Figure 5 shows the results of the proposed method applied to this dataset. On the right, the likelihood map shows a spatial peak in auditory cortex near the tumor. At that peak voxel, the time course was extracted and plotted on the left, showing the clear M100 peak. This information can be useful to the neurosurgical team for guiding the location of the surgical lesion and for providing knowledge of the patient's auditory processing abilities. We next tested the proposed method on its ability to localize interictal spikes obtained from a patient with epilepsy. No sensory stimuli were presented to this patient; the dataset was collected with the same MEG device described above. A Registered EEG/Evoked Potential Technologist marked segments of the continuously collected dataset that contained spontaneous spikes, as well as segments that clearly contained no spikes. One segment of data with a spike marked at 400 ms was used here as the "post-stimulus" period, and a separate, spike-free segment of equal length was used as the "pre-stimulus" period. Figure 6 shows the proposed method's performance on this dataset. 
The top left subplot shows the raw sensor data for the segment containing the marked spike. The bottom left shows the location of the equivalent-current dipole (ECD) fit to several spikes from this patient; this ECD location is what would normally be used clinically. The bottom middle panel shows the likelihood map from the proposed model; its peak is in clear agreement with the standard ECD localization. The top middle panel shows the time course estimated at the likelihood spatial peak.

Figure 6: Performance of the algorithm applied to data from an epileptic patient (raw sensor data with RMS = 311.2 fT; estimated source time courses in normalized intensity; likelihood map). See text for details.

The spike at 400 ms is clearly seen; this cleaned waveform could be of use to the clinician in analyzing the peak shape. Finally, the top right plot shows a source time course from a randomly selected location far from the epileptic spike source (shown with cross-hairs on the bottom right plot), demonstrating the low noise level and the lack of cross-talk onto source estimates elsewhere.

4 Extensions

We have described a novel probabilistic algorithm that performs source localization while remaining robust to interference, and we have demonstrated its superior performance over a standard method in a variety of simulations and real datasets. The model takes advantage of knowledge of when the sources of interest are not active (such as the pre-stimulus period of an evoked response paradigm). The model currently assumes averaged data from an evoked response paradigm, but could be extended to examine variations from the average in individual trials, at the cost of only a few extra parameters to estimate. 
Furthermore, the model could be extended to take advantage of temporal smoothness in the data as well as frequency content. Additionally, spatial smoothness or spatial priors from other modalities, such as structural or functional MRI, could be incorporated. Furthermore, one is not limited to $s_n$ at a single voxel; the above formulation holds for any $P$ arbitrarily chosen dipole components, no matter which voxels they belong to, and for any value of $P$. Of course, as $P$ increases the inferred value of $\Phi$ becomes less accurate, and one might choose to restrict it to a diagonal or block-diagonal form.

References

[1] K. Sekihara, M. Sahani, and S.S. Nagarajan, "Localization bias and spatial resolution of adaptive and non-adaptive spatial filters for MEG source reconstruction," NeuroImage, vol. 25, pp. 1056-1067, 2005.
[2] K. Sekihara, S.S. Nagarajan, D. Poeppel, and A. Marantz, "Performance of an MEG adaptive-beamformer technique in the presence of correlated neural activities: Effects on signal intensity and time-course estimates," IEEE Trans Biomed Eng, vol. 49, pp. 1534-1546, 2002.
[3] S.S. Nagarajan, H.T. Attias, K.E. Hild, and K. Sekihara, "A graphical model for estimating stimulus-evoked brain responses from magnetoencephalography data with large background brain activity," NeuroImage, vol. 30, pp. 400-416, 2006.
[4] S.S. Nagarajan, H.T. Attias, K.E. Hild, and K. Sekihara, "Stimulus evoked independent factor analysis of MEG data with large background activity," in Adv. Neur. Info. Proc. Sys., 2005.
[5] J. Sarvas, "Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem," Phys Med Biol, vol. 32, pp. 11-22, 1987.
Attentional Processing on a Spike-Based VLSI Neural Network
Yingxue Wang, Rodney Douglas, and Shih-Chii Liu
Institute of Neuroinformatics, University of Zurich and ETH Zurich
Winterthurerstrasse 190, CH-8057 Zurich, Switzerland
yingxue,rjd,shih@ini.phys.ethz.ch

Abstract

The neurons of the neocortex communicate by asynchronous events called action potentials (or 'spikes'). However, for simplicity of simulation, most models of processing by cortical neural networks have assumed that the activations of their neurons can be approximated by event rates rather than by individual spikes. The obstacle to exploring the more detailed spike processing of these networks has been reduced considerably in recent years by the development of hybrid analog-digital Very-Large-Scale-Integrated (hVLSI) neural networks composed of spiking neurons that are able to operate in real time. In this paper we describe such a hVLSI neural network that performs a selective attentional processing task previously described for a simulated 'pointer-map' rate model by Hahnloser and colleagues. We found that most of the computational features of their rate model can be reproduced in the spiking implementation, but that spike-based processing requires a modification of the original network architecture in order to memorize a previously attended target.

1 Introduction

The network models described in the neuroscience literature have frequently used rate equations, both to avoid the difficulties of formulating mathematical descriptions of spiking behaviors and to avoid the excessive computational resources required for simulating spiking networks. 
Now, the construction of multi-chip hybrid VLSI (hVLSI) systems that implement large-scale networks of real-time spiking neurons and spike-based sensors is rapidly becoming a reality [3–5, 7], making it possible to explore the performance of event-based systems in various processing tasks and the network behavior of populations of spiking, rather than rate, neurons. In this paper we use an hVLSI network to implement a spiking version of the 'pointer-map' architecture previously described for rate networks by Hahnloser and colleagues [2]. In this architecture, a small number of pointer neurons are incorporated in the feedback of a recurrently connected network. The pointers steer the feedback onto the map, and so focus processing on the attended map neurons. This architecture is interesting because it reflects a general computational property of sensorimotor and attentional/intentional processing based on pointing. Directing attention, foveating eyes, and reaching limbs all appeal to a pointer-like interaction with the world, and such pointing is known to modulate the responses of neurons in a number of cortical and subcortical areas. The operation of the pointer-map depends on the steady focusing of feedback on the map neurons during the period of attention. It is easy to see how this steady control can be achieved when the neurons have continuous rate outputs, but it is not obvious whether it can also be achieved with intermittently spiking neural outputs. Our objective was thus to evaluate whether networks of spiking neurons would be able to combine the benefits of event-based processing with the attentional properties of the pointer-map architecture.

2 Pointer-Map Architecture

A pointer-map network consists of two reciprocally connected populations of excitatory neurons. Firstly, there is a large population of map neurons that, for example, provide a place encoding of some variable such as the orientation of a visual bar stimulus. 
A second, small population of pointer neurons exercises attentional control on the map. In addition to the reciprocal connections between the two populations, the map neurons receive feedforward (e.g. sensory) input; and the pointer neurons receive top-down attentional inputs that instruct the pointers to modulate the location and intensity of the processing on the map (see Fig. 1(a)). The important functional difference between conventional recurrent networks (equivalently, ’recurrent maps’) and the pointer-map, is that the pointer neurons are inserted in the feedback loop, and so are able to modulate the effect of the feedback by their top-down inputs. The usual recurrent excitatory connections between neurons are replaced in the pointer-map by recurrent connections between the map neurons and the pointer neurons that have sine and cosine weight profiles. Consequently, the activities of the pointer neurons generate a vectorial pattern of recurrent excitation whose direction points to a particular location on the map (Fig. 1(b)). Global inhibition provides competition between the map neurons, so that overall the pointer-map behaves as an attentionally selective soft winner-take-all network. Figure 1: Pointer-map architecture. (a) Network consists of two layers of excitatory neurons. The map layer receives feedforward sensory inputs and inputs from two pointer neurons. The pointer neurons receive top-down attentional inputs and also inputs from the map layer. The recurrent connections between the map neurons and pointer neurons are set according to sine and cosine profiles. (b) The interaction between pointer neurons and map neurons. Each circle indicates the activity of one neuron. Clear circles indicate silent neurons and the sizes of the gray circles are proportional to the activation of the active neurons. 
The vector formed by the activities of the two pointer neurons on this angular plot points in the direction (the pointer angle γ) of the map neurons where the pointer-to-map input is largest. The map-to-pointer input is proportional to the population vector of the map neuron activities.

3 Spiking Network Chip Architecture

We implemented the pointer-map architecture on a multi-neuron transceiver chip fabricated in a 4-metal, 2-poly 0.35 µm CMOS process. The chip (Fig. 2) has 16 VLSI integrate-and-fire neurons, of which one acts as the global inhibitory neuron. Each neuron has 8 input synapses (excitatory and inhibitory). The circuit details of the soma and synapses are described elsewhere [4]. Input and output spikes are communicated within and between chips using the asynchronous Address Event Representation (AER) protocol [3]. In this protocol, the action potentials that travel along point-to-point axonal connections are replaced by digital addresses on a bus; these addresses are usually the labels of source neurons and/or target synapses. In our chip, five bits of the AER address space are also used to encode the synaptic weight [1]. An on-chip Digital-to-Analog Converter (DAC) transforms the digital weights into the analog signals that set the individual efficacies of the excitatory and inhibitory synapses of each neuron (Fig. 2).

Figure 2: Architecture of the multi-neuron chip. The chip has 15 integrate-and-fire excitatory neurons and one global inhibitory neuron. Each neuron has 8 input synapses. Input and output spikes are communicated using an asynchronous handshaking protocol called Address Event Representation. When an input spike is to be sent to the chip, the handshaking signals, Req and Ack, are used to ensure that only valid addresses on a common digital bus are latched and decoded by the X- and Y-decoders. The arbiter block arbitrates between all outgoing neuron spikes; each neuron spike is sent off as the address of that neuron on a common digital bus through two handshaking signals (Reqout and Ackout). The synaptic weights of 2 of the 8 synapses can be specified uniquely through an on-chip Digital-to-Analog converter that sets the synaptic weight of each synapse before that particular synapse is stimulated. The synaptic weight is specified as part of the digital address that normally codes the synaptic address.

Figure 3: Resulting spatial distribution of activity in the map neurons in response to attentional input to the pointer neurons. The frequencies of the attentional inputs to P1, P2 are (a) [200 Hz, 0 Hz] and (b) [0 Hz, 200 Hz]. The y-axis shows the firing rate (Hz) of the map neurons (1-9) listed on the x-axis. The polar plot beside each figure shows the pointer angle γ described by the pointer neuron activities.

4 Experiments

Our pointer-map was composed of a total of 12 neurons: 2 served as pointer neurons, 9 as map neurons, and 1 as the global inhibitory neuron. The synaptic weights of these neurons have a coefficient of variation in synaptic efficacy of about 0.25 due to silicon process variations. Through the on-chip DAC, we were able to reduce this variance for the excitatory synapses by a factor of 10. We did not compensate for the variance in the inhibitory synapses because that was technically more challenging. The synaptic weights from each pointer neuron to every map neuron j = 1, 2, ..., 9 (Fig. 4(a)) were set according to the profile shown in Fig. 1(a). We checked the match between the programmed spatial connectivity and the desired sine/cosine profile by activating only the top-down connections from the pointer neurons to the map neurons, while the bottom-up connections from the map neurons to the pointer neurons, and the global inhibition, were inactivated. 
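To make the intended weight profile concrete, here is a small NumPy sketch of the ideal sine/cosine connectivity and the pointer angle γ it encodes. The gain `alpha` and the 9-neuron map are illustrative values consistent with the text, not measured chip parameters.

```python
import numpy as np

n_map = 9
theta = np.linspace(0.0, np.pi / 2, n_map)   # preferred angle of each map neuron
alpha = 0.5                                   # recurrent gain (illustrative)

# Pointer-to-map weights: map neuron j receives alpha*cos(theta_j) from P1
# and alpha*sin(theta_j) from P2 (the vectorial profile of Fig. 1).
W_p2m = alpha * np.column_stack([np.cos(theta), np.sin(theta)])  # n_map x 2

def pointer_angle(P1, P2):
    """Direction gamma on the map encoded by the two pointer activities."""
    return np.arctan2(P2, P1)

def map_drive(P1, P2):
    """Recurrent drive to each map neuron; peaks where theta_j equals gamma."""
    return W_p2m @ np.array([P1, P2])

# Activity only in pointer 1 points attention at theta = 0 (first map neuron);
# activity only in pointer 2 points at theta = 90 deg (last map neuron).
drive_p1 = map_drive(1.0, 0.0)
drive_p2 = map_drive(0.0, 1.0)
```

Sweeping the ratio of the two pointer activities moves the peak of the recurrent drive smoothly across the map, which is the steering mechanism tested in this section.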
Because of the limited signal resolution, chip mismatch, and system noise, the measured profiles were only a qualitative match to the sine and cosine (Fig. 3); the worst-case deviation from the ideal value was up to 50% for very small weights. Nevertheless, in spite of the imperfect match, we were able to reproduce most of the observations of [2].

Figure 4: Network architecture used on the chip for attentional modulation. (a) Originally proposed pointer-map architecture. (b) New network architecture with no requirement for strong excitatory recurrent connections. The global inhibition is replaced by neuron-specific inhibition.

4.1 Attentional Input Control

We tested the attentional control of the pointer neurons for this network (Fig. 4(a)) with activated recurrent connections and global inhibition. In addition, a common small constant input was applied to all map neurons. The location and level of activity on the map layer can be steered via the inputs to the pointer neurons, as seen in the three examples of Fig. 5. These results are similar to those observed by Hahnloser and colleagues in their rate model.

4.2 Attentional Target Selection

One computational feature of the rate pointer-map is its multistability: if two or more sensory stimuli are presented to the network, strong attentional inputs to the pointer neurons can select one of these stimuli even if it is not the strongest one. The preferred stimulus depends on the initial activities of the map and pointer neurons. Moreover, attentional inputs can steer the attention to a different location on the map, even after a stronger stimulus has initially been selected. We repeated these experiments (Fig. 4 of [2]) for the spiking network. In our experiments, only two map neurons received feedforward sensory inputs, consisting of two regular spike trains of different frequencies. As shown in Fig. 6(a), the map neuron with the stronger feedforward input was selected.
Attention could be steered to a different part of the map array by providing the necessary attentional inputs, and the map neuron receiving the weaker stimulus could suppress the activity of another map neuron. Furthermore, the original rate model can produce attentional memorization effects; that is, the location of the map layer activity is retained even after the inputs to the pointer neurons are withdrawn. However, we were unsuccessful in duplicating these results (see Fig. 6(a)) because the recurrent connection strength parameter α had to be greater than 1. To explain why this strong recurrent connection strength is necessary, we first write the steady-state rate activities M1 and M2 of two arbitrary map neurons that are active:

M1 = ⌊m1 − β(M1 + M2) + α(cos θ1 · P1 + sin θ1 · P2)⌋+    (1)
M2 = ⌊m2 − β(M1 + M2) + α(cos θ2 · P1 + sin θ2 · P2)⌋+    (2)

where P1, P2 are the steady-state rate activities of the pointer neurons; m1, m2 are the activities of the map neurons due to the sensory inputs; β and α determine the strengths of the inhibitory and excitatory connections respectively; and α cos θi and α sin θi are the connection strengths between the pointer neurons and map neuron i, with θi ∈ [0°, 90°]. The activities of the pointer neurons are given by

P1 = ⌊p1 + α(cos θ1 · M1 + cos θ2 · M2)⌋+    (3)
P2 = ⌊p2 + α(sin θ1 · M1 + sin θ2 · M2)⌋+    (4)

where p1 and p2 are the activities induced by inputs to the two pointer neurons. Substituting Eqns. (3)–(4) into Eqns. (1)–(2) respectively, and assuming p1 = p2 = 0, shows that in order to satisfy the condition that M1 > M2 for m1 < m2, we need

α > 1/√(1 − cos(θ1 − θ2)) > 1.    (5)

Several factors make it difficult for us to reproduce the attentional memorization experiments. Firstly, since we are only using a small number of neurons, each input spike has to create more than one output spike from a neuron in order to satisfy the above condition.
On the one hand, this is very hard to implement: because the neurons have a refractory period, input currents arriving during this time do not influence the neuron, which means that we cannot use self-excitation to obtain an effective α > 1. On the other hand, even α = 1 (one input spike causes one output spike) can easily lead to instability in the network, because the relative arrival times of the inhibitory and excitatory inputs then become a critical factor for the system stability. Secondly, the network has to operate in a hard winner-take-all mode because of the variance in the inhibitory synaptic efficacies. This means that a neuron is reset to its resting potential whenever it receives an inhibitory spike, thus removing all memory.

4.3 Attentional Memorization

By modifying the network architecture (see Fig. 4(b)), we were able to avoid the strong excitatory connections required in the original network. In the modified architecture, the inhibition is no longer global; instead, each neuron inhibits all other neurons in the map population but itself. The steady-state rate activities M1 and M2 are now given by

M1 = ⌊m1 − β M2 + α(cos θ1 · P1 + sin θ1 · P2)⌋+    (6)
M2 = ⌊m2 − β M1 + α(cos θ2 · P1 + sin θ2 · P2)⌋+    (7)

The equations for the steady-state pointer neuron activities P1 and P2 remain as before. The new condition on α is

α > (1 − β)/√(1 − cos(θ1 − θ2))    (8)

which means that α can be smaller than one. The intuitive explanation for the smaller α is that, in the original architecture, the global inhibition inhibits all the map neurons, including the winner. Therefore, in order to memorize the attended stimulus, the excitatory connections need to be strengthened to compensate for the self-inhibition. In the new architecture the self-inhibition is removed, which relaxes the requirement for strong excitation.
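The effect of the two inhibition schemes on the α requirement can be checked by iterating the rate equations directly. The sketch below uses illustrative parameter values of our own choosing (θ1 = 0°, θ2 = 90°, β = 0.5, α = 0.9, attentional inputs withdrawn), selected so that α violates condition (5) but satisfies condition (8).

```python
import math

def relu(x):
    return max(x, 0.0)

def steady_state(m1, m2, th1, th2, alpha, beta, global_inh,
                 M1=0.0, M2=0.0, iters=500):
    """Iterate the two-map-neuron rate equations to a steady state.
    global_inh=True follows Eqns (1)-(2): each map neuron is inhibited
    by the total map activity, itself included.  global_inh=False
    follows Eqns (6)-(7): each neuron spares itself.  Attentional
    inputs are withdrawn (p1 = p2 = 0), so this probes memorization."""
    c1, s1 = math.cos(th1), math.sin(th1)
    c2, s2 = math.cos(th2), math.sin(th2)
    for _ in range(iters):
        P1 = relu(alpha * (c1 * M1 + c2 * M2))  # Eqn (3) with p1 = 0
        P2 = relu(alpha * (s1 * M1 + s2 * M2))  # Eqn (4) with p2 = 0
        i1 = beta * (M1 + M2) if global_inh else beta * M2
        i2 = beta * (M1 + M2) if global_inh else beta * M1
        M1, M2 = (relu(m1 - i1 + alpha * (c1 * P1 + s1 * P2)),
                  relu(m2 - i2 + alpha * (c2 * P1 + s2 * P2)))
    return M1, M2

# Map neuron 1 starts active (it was attended) but has the weaker
# sensory input (m1 < m2).
kw = dict(m1=0.2, m2=0.3, th1=0.0, th2=math.pi / 2,
          alpha=0.9, beta=0.5, M1=1.0, M2=0.0)
M1g, M2g = steady_state(global_inh=True, **kw)   # original architecture
M1l, M2l = steady_state(global_inh=False, **kw)  # self-sparing inhibition
```

With global inhibition the previously attended neuron decays and the stronger stimulus takes over, whereas with self-sparing inhibition the attended neuron is retained, consistent with the memorization result above.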
Using this new architecture, we performed the same experiments as described in Section 4.2 and were now able to demonstrate attentional memorization: the attended neuron with the weaker sensory input stimulus survived even after the attentional inputs were withdrawn. The same qualitative results were obtained even when all the remaining map neurons had a low background firing rate mimicking the effect of weak sensory inputs at different locations.

Figure 5: Results of experiments showing responses of map neurons for 3 settings of input strengths to the pointer neurons. Each map neuron has a background firing rate of 30Hz measured in the absence of activated recurrent connections and global inhibition. The attentional inputs to pointer neurons P1 and P2 are (a) [700Hz, 50Hz], (b) [700Hz, 700Hz], (c) [50Hz, 700Hz]. The y-axis shows the firing rate (Hz) of the map neurons (1–9) listed on the x-axis.

5 Conclusion

In this paper, we have described a hardware 'pointer-map' neural network composed of spiking neurons that performs the selective attentional processing previously described in a simulated 'pointer-map' rate model by Hahnloser and colleagues. Neural network behaviors observed in computer simulations that use rate equations would likely also be observed in spiking networks if many input spikes could be integrated before the post-synaptic neuron's threshold is reached. However, such extensive integration is not possible in practical electronic networks, which have relatively small numbers of neurons and synapses. We found that most of the computational features of the simulated rate model could nevertheless be reproduced in our hardware spiking implementation, despite the imprecision of the synaptic weights and the inevitable fabrication-related variability in the performance of individual neurons. One significant difference between our spiking implementation and the rate model is the mechanism required to memorize a previously attended target.
In our spike-based implementation, it was necessary to modify the original pointer-map architecture so that the inhibition no longer depends on a single global inhibitory neuron; instead, each excitatory neuron inhibits all other neurons in the map population but itself. Unfortunately, this approximate equivalence between excitatory and inhibitory neurons is inconsistent with the anatomical observation that only about 15% of cortical neurons are inhibitory. However, the original architecture could probably work if we had larger populations of map neurons, more synapses, and/or NMDA-like synapses with longer time constants. This is a scenario that we will explore in the future, along with a better characterization of the switching time dynamics of the attentional memorization experiments.

Figure 6: Results of attentional memorization experiments using the two different architectures in Fig. 4. (a) Results from the original architecture. The sensory inputs to the two map neurons M3 and M7 were set to [200Hz, 230Hz]. The experiment was divided into 5 phases. In phase 1, the bottom-up connections and inhibitory connections were inactivated. In phase 2, the inhibitory connections were activated, so map neuron M3, which received the weaker input, was suppressed. In phase 3, the bottom-up connections were activated; map neuron M3 was now active because of the steering activity from the pointer neurons. In phase 4, the pointer neurons P1 and P2 were stimulated by attentional inputs of frequencies [700Hz, 0Hz], which amplified the activity of M3, but the map activity returned to that of phase 3 once the attentional inputs were withdrawn in phase 5. (b) Results from the modified architecture. The sensory inputs to M3 and M7 were of frequencies [200Hz, 230Hz] for the red curve and [40Hz, 50Hz] for the blue curve. The 5 phases in the experiment were as described in (a).
However, in phase 5, map neuron M3 retained its activity even after the attentional inputs were withdrawn (the attentional inputs to P1 and P2 were [700Hz, 0Hz] for the red curve and [300Hz, 0Hz] for the blue curve).

Acknowledgments

The authors would like to thank M. Oster for help with setting up the AER infrastructure, S. Zahnd for the PCB design, and T. Delbrück for discussions on the digital-to-analog converter circuits. This work is partially supported by ETH Research Grant TH-20/04-2 and EU grant "DAISY" FP6-2005-015803.

References

[1] Y. X. Wang and S. C. Liu, "Programmable synaptic weights for an aVLSI network of spiking neurons," in Proceedings of the 2006 IEEE International Symposium on Circuits and Systems, pp. 4531–4534, 2006.
[2] R. Hahnloser, R. J. Douglas, M. A. Mahowald, and K. Hepp, "Feedback interactions between neuronal pointers and maps for attentional processing," Nature Neuroscience, vol. 2, pp. 746–752, 1999.
[3] K. A. Boahen, "Point-to-point connectivity between neuromorphic chips using address events," IEEE Transactions on Circuits and Systems II, vol. 47, pp. 416–434, 2000.
[4] S.-C. Liu and R. Douglas, "Temporal coding in a network of silicon integrate-and-fire neurons," IEEE Transactions on Neural Networks: Special Issue on Temporal Coding for Neural Information Processing, vol. 15, no. 5, pp. 1305–1314, Sep. 2004.
[5] S. R. Deiss, R. J. Douglas, and A. M. Whatley, "A pulse-coded communications infrastructure for neuromorphic systems," in Pulsed Neural Networks, W. Maass and C. M. Bishop, Eds. Boston, MA: MIT Press, 1999, ch. 6, pp. 157–178.
[6] L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, 1998.
[7] G. Indiveri, T. Horiuchi, E. Niebur, and R. Douglas, "A competitive network of spiking VLSI neurons," in World Congress on Neuroinformatics, F. Rattay, Ed. Vienna, Austria: ARGESIM/ASIM Verlag, Sept. 24–29, 2001, ARGESIM Reports.
[8] M. Oster and S.-C. Liu, "Spiking inputs to a winner-take-all network," in Advances in Neural Information Processing Systems 18. Cambridge, MA: MIT Press, 2006.
A Bayesian Approach to Diffusion Models of Decision-Making and Response Time

Michael D. Lee∗, Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697-5100. mdlee@uci.edu
Ian G. Fuss, Defence Science and Technology Organisation, PO Box 1500, Edinburgh, SA 5111, Australia. ian.fuss@dsto.defence.gov.au
Daniel J. Navarro, School of Psychology, University of Adelaide, SA 5005, Australia. daniel.navarro@adelaide.edu.au

Abstract

We present a computational Bayesian approach for Wiener diffusion models, which are prominent accounts of response time distributions in decision-making. We first develop a general closed-form analytic approximation to the response time distributions for one-dimensional diffusion processes, and derive the required Wiener diffusion as a special case. We use this result to undertake Bayesian modeling of benchmark data, using posterior sampling to draw inferences about the interesting psychological parameters. With the aid of the benchmark data, we show the Bayesian account has several advantages, including dealing naturally with the parameter variation needed to account for some key features of the data, and providing quantitative measures to guide decisions about model construction.

1 Introduction

In the past decade, modern computational Bayesian methods have been productively applied to the modeling of many core psychological phenomena. These areas include similarity modeling and structure learning [1], concept and category learning [2, 3], inductive inference and decision-making [4], language processes [5], and individual differences [6]. One central area that has been less affected is the modeling of response times in decision-making. Nevertheless, the time people take to produce behavior is a basic and ubiquitous measure that can constrain models and theories of human cognitive processes [7].
There is a large and well-developed set of competing models that aim to account for accuracy, response time distributions, and (sometimes) confidence in decision-making. However, besides the effective application of hierarchical Bayesian methods to models that assume response times follow a Weibull distribution [8], most of the inference remains frequentist. In particular, sequential sampling models of response time, which are the dominant class in the field, have not adopted modern Bayesian methods for inference. The prominent recent review paper by Ratcliff and Smith, for example, relies entirely on frequentist methods for parameter estimation, and does not go beyond the application of the Bayesian Information Criterion for model selection [9].

∗Address correspondence to: Michael D. Lee, Department of Cognitive Sciences, 3151 Social Sciences Plaza, University of California, Irvine, CA 92697-5100. Telephone: (949) 824 5074. Facsimile: (949) 824 2307. URL: www.socsci.uci.edu/∼mdlee.

Figure 1: A diffusion model for response time distributions for both decisions in a two-choice decision-making task. See text for details.

Much of the utility, however, in using sequential sampling models to understand decision-making, and in their application to practical problems [10], requires making inferences about variations in parameters across subjects, stimuli, or experimental conditions. These inferences would benefit from the principled representation of uncertainty inherent in the Bayesian approach. In addition, many of the competing models have many parameters, are non-linear, and are non-nested. This means their comparison would benefit from Bayesian methods for model selection that do not approximate model complexity by counting the number of free parameters, as the Bayesian Information Criterion does.
In this paper, we present a computational Bayesian approach for Wiener diffusion models [11], which are the most widely used special case of the sequential sampling approach. We apply our Bayesian method to the benchmark data of Ratcliff and Rouder [12], using posterior sampling to draw inferences about the interesting psychological parameters. With the aid of this application, we show that adopting the Bayesian perspective has several advantages, including dealing naturally with the parameter variation needed to account for some key features of the data, and providing quantitative measures to guide decisions about model construction.

2 The Diffusion Model and its Application to Benchmark Data

2.1 The Basic Model

The basic one-dimensional diffusion model for accuracy and response time distributions in a two-choice decision-making task is shown in Figure 1. Time, t, progresses from left to right, and includes a fixed offset δ that parameterizes the time taken by the non-decision component of response time, such as the time taken to encode the stimulus and complete a motor response. The decision-making component itself is driven by independent samples from a stationary distribution that represents the evidence the stimulus provides in favor of the two alternative decisions. In the Wiener diffusion, this distribution is assumed to be Gaussian, with mean ξ. Evidence sampled from this distribution is accrued over time, leading to a diffusion process that is finally absorbed by boundaries above and below at distances α and β from the origin. The response time distribution is then given by the first-passage distribution

p(t | α, β, δ, ξ) = fα(t | α, β, δ, ξ) + fβ(t | α, β, δ, ξ),

with the areas under fα and fβ giving the proportion of decisions at each boundary.
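The generative process just described can be simulated directly with an Euler discretization; the sketch below draws first-passage samples (the parameter values are illustrative choices of ours, not values fitted to any data):

```python
import random

def simulate_trial(xi, alpha, beta, delta, dt=0.001, max_t=20.0):
    """Draw one (choice, response time) pair from a Wiener diffusion:
    Gaussian evidence with mean xi (unit variance per unit time) is
    accrued from the origin until the process crosses +alpha or -beta;
    the non-decision offset delta is added to the decision time."""
    x, t = 0.0, 0.0
    while t < max_t:
        x += xi * dt + (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
        if x >= alpha:
            return "upper", delta + t
        if x <= -beta:
            return "lower", delta + t
    return "none", delta + t  # failed to absorb (rare for these values)

random.seed(1)
trials = [simulate_trial(xi=1.0, alpha=1.0, beta=1.0, delta=0.3)
          for _ in range(2000)]
p_upper = sum(c == "upper" for c, _ in trials) / len(trials)
```

With this positive drift and symmetric boundaries, the analytic first-passage probability at the upper boundary is (1 − e⁻²)/(1 − e⁻⁴) ≈ 0.88, and the simulated p_upper should land close to that value.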
A natural reparameterization is to consider the starting point of evidence accrual z = (β − α)/(α + β), which is considered a measure of bias, and the boundary separation a = α + β, which is considered a measure of caution. In either case, this basic form of the model has four free parameters: ξ, δ, and either α and β or z and a.

2.2 Previous Application to Benchmark Data

The evolution of Wiener diffusion models of decision-making has involved a series of additional assumptions to address shortcomings in their ability to capture basic empirical regularities. This evolution is well described by Ratcliff and Rouder [12], who, in their Experiment 1, present a diffusion model analysis of a benchmark data set [8, 13]. In this experiment, three observers completed ten 35-minute sessions, each consisting of ten blocks of 102 trials. The task of the observers was to decide between ‘bright’ and ‘dark’ responses for simple visual stimuli with different proportions of white dots, given noisy feedback about the accuracy of the responses. There were 33 different types of stimuli, ranging from 0% to 100% white in equal increments. In addition, the subjects were required to switch between adherence to ‘speed’ instructions and ‘accuracy’ instructions every two blocks. In accord with the experimental design, Ratcliff and Rouder fitted separate drift rates ξi for each of the i = 1, ..., 33 stimuli; fitted separate boundaries for speed and accuracy instructions, but assumed the boundaries were symmetric (i.e., there was no bias), so that αj = βj for j = 1, 2; and fitted one offset δ for all stimuli and instructions. The data from this experiment are considered benchmark because they show a cross-over effect, whereby errors are faster than correct decisions for easy stimulus conditions under speed instructions, but errors are as slow as or slower than correct decisions for hard stimulus conditions under accuracy instructions.
As Ratcliff and Rouder point out, these trends are not accommodated by the basic model without allowing for variation in the parameters. Accordingly, to predict fast errors, the basic model is extended by assuming that the starting point is subject to between-trial variation, and so is convolved with a Gaussian or uniform distribution. Similarly, to predict slow errors, it is assumed that the mean drift rate is also subject to between-trial variation, and so is convolved with a Gaussian distribution. Both of these noise processes are parameterized with the standard sufficient statistics, which become additional parameters of the model.

3 Closed-form Response Time Distributions for Diffusion Models

One practical reason diffusion models have resisted a Bayesian treatment is that the evaluation of their likelihood function through standard methods is computationally intensive, typically requiring the estimation of an oscillating but convergent infinite sum for each datum [14]. Instead, we use a new closed-form approximation to the required response time distribution. We give a very brief presentation of the approximation here; a more detailed technical note is available from the first author's web page.

The key assumption in our approximation is that the evolving diffusion distributions always assume a limiting form f. Given this form, we define the required limit for a sampling distribution with respect to an arbitrary time-dependent mean µ(t, θ) and variance σ²(t, θ), both of which depend on parameters θ of the sampling distribution, in terms of the accumulated evidence x, as

f(x; µ(t, θ), σ²(t, θ)),    (1)

from which the cumulative function at an upper boundary of one unit is obtained as

F1(µ(t, θ), σ²(t, θ)) = ∫₁^∞ f(x; µ(t, θ), σ²(t, θ)) dx.    (2)

Differentiation, followed by algebraic manipulation, gives the general result

f1(µ(t, θ), σ²(t, θ)) = d/dt F1(µ(t, θ), σ²(t, θ)) = { [∂σ(t, θ)/∂t][1 − µ(t, θ)] + σ(t, θ)[∂µ(t, θ)/∂t] } / σ²(t, θ) · f((1 − µ(t, θ))/σ(t, θ)).    (3)

Wiener diffusion from the origin to boundaries α and β, with mean drift rate ξ and variance σ²(t) = t, can be represented in this model by defining f(y) = exp(−y²/2), rescaling to a variance σ²(t) = t/a², and setting µ(t, ξ) = ξt/a + z. Thus the response time distributions for Wiener diffusion in this approximation are

fα(t | α, β, δ, ξ) = [2α + ξ(t − δ)] / [2(t − δ)^(3/2)] · exp(−[2α − ξ(t − δ)]² / [2(t − δ)]),
fβ(t | α, β, δ, ξ) = [2β − ξ(t − δ)] / [2(t − δ)^(3/2)] · exp(−[2β + ξ(t − δ)]² / [2(t − δ)]).    (4)

Figure 2: Comparison between the closed-form approximation (dark broken lines) and the infinite-sum distributions (light solid lines) for nine realistic combinations of drift rate (ξ = 0.01, 0.05, 0.1) and boundaries (α = β = 10; α = β = 15; α = 15, β = 10).

3.1 Adequacy of the Wiener Approximation

Figure 2 shows the relationship between the response time distributions found by the previous infinite-sum method and those generated by our closed-form approximation. For every combination of drift rates ξ = 0.01, 0.05, and 0.10, and boundary combinations α = β = 10, α = β = 15, and α = 15, β = 10, we found the best (least-squares) match between the infinite-sum distribution and the distributions indexed by our approximation. These generating parameter combinations were chosen because they cover the range of the posterior distributions we infer from data later. Figure 2 shows that the approximation provides close matches across these parameterizations, although we note the approximation distributions do seem to use slightly (and apparently systematically) different parameter combinations to generate the best-matching distribution.
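Equation (4) is cheap to evaluate, which is what makes likelihood-based (and hence Bayesian) computation practical. A direct transcription, evaluated here at parameter values from the range covered in Fig. 2:

```python
import math

def f_alpha(t, alpha, beta, delta, xi):
    """Approximate density (Eqn 4) of response times at the upper
    boundary; defined for t > delta."""
    tau = t - delta
    if tau <= 0:
        return 0.0
    return ((2 * alpha + xi * tau) / (2 * tau ** 1.5)
            * math.exp(-((2 * alpha - xi * tau) ** 2) / (2 * tau)))

def f_beta(t, alpha, beta, delta, xi):
    """Approximate density of response times at the lower boundary.
    Note that the leading factor can go negative for large t, the
    degeneracy that the likelihood is later thresholded against."""
    tau = t - delta
    if tau <= 0:
        return 0.0
    return ((2 * beta - xi * tau) / (2 * tau ** 1.5)
            * math.exp(-((2 * beta + xi * tau) ** 2) / (2 * tau)))
```

With a positive drift (e.g. ξ = 0.05, α = β = 10, δ = 0) the upper-boundary density dominates the lower-boundary density, as expected.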
While additional work is required to understand the exact relationship between the infinite-sum method and our approximation, the approximation is sufficiently accurate over the range of parameterizations of interest to be used as the basis for beginning to apply Bayesian methods to diffusion models.

4 Bayesian Modeling of Benchmark Data

4.1 General Model

Our log-likelihood function evaluates the density of each response time at the boundary corresponding to its associated decision, and assumes independence, so that

ln L(T | α, β, δ, ξ) = Σ_{t∈Dα} ln fα(t | α, β, δ, ξ) + Σ_{t∈Dβ} ln fβ(t | α, β, δ, ξ),    (5)

where Dα and Dβ are the sets of all response times at the upper and lower boundaries respectively. We threshold the densities at 10⁻³⁰ to guard against the degeneracy to negative values inherent in the approximation.

Figure 3: Graphical model for the benchmark data analysis.

The graphical model representation of the benchmark data is shown in Figure 3, where the observed response time data are now denoted T = {tijk}, with i = 1, ..., 33 indexing the presented stimulus, j = 1, 2 indexing speed or accuracy instructions, and k = 1, ..., n indexing all of the trials with this stimulus and instruction combination. We place proper approximations to non-informative distributions on all the parameters, so that they are all essentially flat over the values of interest. Specifically, we assume the 33 drift rates are independent and each have a zero-mean Gaussian prior with very small precision: ξi ∼ Gaussian(0, τ), with τ = 10⁻⁶. The boundary parameters α and β are given the same priors, but because they are constrained to be positive, their sampling is censored accordingly: αj, βj ∼ Gaussian(0, τ); αj, βj > 0. Since δ is bounded above by the minimum observed time, we use the uniform prior δ ∼ Uniform(0, min T).
This is a data-dependent prior, but the same results could be achieved with a fixed prior and a scaling of the time data, which are arbitrary up to scalar multiplication.

4.2 Formalizing Model Construction

The Bayesian approach allows us to test the intuitively plausible model construction decisions made previously by Ratcliff and Rouder. Using the data from one of the observers (N.H.), we considered the marginal likelihoods, denoted simply L, based on the harmonic mean approximation [15], calculated from three chains of 10⁵ samples from the posterior obtained using WinBUGS:

• The full model described by Figure 3, with asymmetric boundaries varying across speed and accuracy instructions: ln L = −48,416.
• The restricted model with symmetric boundaries, still varying across instructions, as assumed by Ratcliff and Rouder: ln L = −48,264.
• The restricted model with asymmetric boundaries not varying across instructions: ln L = −48,964.
• The restricted model with symmetric boundaries not varying across instructions: ln L = −48,907.

These marginal log-likelihoods make it clear that different boundaries are needed for the speed and accuracy instructions, but that it is overly complicated to allow them to be asymmetric (i.e., there is no need to parameterize bias). We tested the robustness of these values by halving and doubling the prior variances, and by using an adapted form of the ‘informative’ priors collated in [16], all of which led to similar quantitative and identical qualitative conclusions. These results formally justify the model construction decisions made by Ratcliff and Rouder, and the remainder of our analysis applies to this restricted model.

4.3 Posterior Distributions

Figure 4 shows the posterior distributions for the symmetric boundaries under both speed and accuracy instructions.
These distributions are consistent with traditional analyses, and the speed boundary is clearly significantly smaller than the accuracy boundary. We note that, for historical reasons only, Wiener diffusion models have assumed σ²(t) = (0.1)²t, and so our scale is 100 times larger.

Figure 4: Posterior distributions for the boundaries under speed and accuracy instructions.

The main panel of Figure 5 shows the posterior distributions for all 33 drift rate parameters. The posteriors are shown against the vertical axes, with wider bars corresponding to greater density, and are located according to their proportion of white dots on the horizontal axis. The approximately monotonic relationship between drift rate and proportion shows that the model allows stimulus properties to be inferred from the behavioral decision time data, as found by previous analyses. The right-hand panel of Figure 5 shows the projection of three of the posterior distributions, labelled 4, 17, and 22. It is interesting to note that the uncertainties about the drift rates of stimuli 4 and 17 both take a Gaussian form, but with very different variances. More dramatically, the uncertainty about the drift rate for stimulus 22 is clearly bi-modal.

Figure 5: Posterior distributions for the 33 drift rates, in terms of the proportion of white dots in their associated stimuli.

4.4 Accuracy and Fast and Slow Errors

Figure 6 follows previous analyses of these data, and shows the relationship between the empirical decision proportions and the model's predicted decision proportions for each stimulus type. For both the speed and accuracy instructions, there is close agreement between the model and data. Figure 7 shows the posterior predictive distribution of the model for two cases, analogous to those highlighted previously [13, Figure 6].
The left panel involves a relatively easy decision, corresponding to stimulus number 22 under speed instructions, and shows the model's predictions for the response time of both correct (upper) and error (lower) decisions, together with the data, indicated by short vertical lines. For this easy decision, it can be seen that the model predicts relatively fast errors. The right panel of Figure 7 involves a harder decision, corresponding to stimulus number 18 under accuracy instructions. Here the model predicts much slower errors, with a heavier tail than for the easy decision.

Figure 6: Relationship between modeled and empirical accuracy, for the speed instructions (left panel) and accuracy instructions (right panel). Each marker corresponds to one of the 33 stimuli.

These are the basic qualitative properties of prediction that motivated the introduction of between-trial variability through noise processes in the traditional account. In the present Bayesian treatment, the required predictions are achieved because the posterior predictive automatically samples from a range of values for the drift and boundary parameters. By representing this variation in parameters as uncertainty about fixed values, we are making different basic assumptions from the traditional Wiener diffusion model. It is interesting to speculate that, if Bayesian results like those in Figure 7 had always been available, the introduction of the additional variability processes described in [12] might never have eventuated. These processes seem solely designed to account for empirical effects like the cross-over effect; in particular, we are not aware of the parameters of the additional variability processes being used to draw substantive psychological inferences from data.
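Two numerical devices from the analysis of Section 4 can be sketched generically: the 10⁻³⁰ floor on the approximate density (Section 4.1), and the harmonic-mean estimate of the marginal likelihood [15], computed stably in log space. This is an illustrative sketch of the estimator, not the WinBUGS computation itself.

```python
import math

def log_density_floored(density, floor=1e-30):
    """Log of the approximate density, floored to guard against the
    non-positive values the closed-form approximation can produce."""
    return math.log(max(density, floor))

def log_marginal_harmonic(loglikes):
    """Harmonic-mean estimate of ln L from per-draw log likelihoods
    ln p(T | theta_s) over S posterior samples:
    ln L ~= ln S - ln sum_s exp(-loglik_s), via a log-sum-exp shift."""
    neg = [-ll for ll in loglikes]
    m = max(neg)
    lse = m + math.log(sum(math.exp(v - m) for v in neg))
    return math.log(len(loglikes)) - lse
```

If every posterior draw had the same log likelihood c, the estimate reduces to c; in general the estimate is dominated by the smallest log likelihoods among the draws.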
Figure 7: Posterior predictive distributions for both correct (solid line) and error (broken line) responses, for two stimuli corresponding to easy (left panel) and hard (right panel) decisions. The density for error decisions in the easy responses has been scaled to allow its shape to be visible.

5 Conclusions

Our analyses of the benchmark data confirm many of the central conclusions of previous analyses, but also make several new contributions. The posterior distributions shown in Figure 5 suggest that current parametric assumptions about drift rate variability may not be entirely appropriate. In particular, there is the intriguing possibility of multi-modality, evident in the drift rate of stimulus 22 and the associated raw data in Figure 7. Figure 5 also suggests a hierarchical account of the benchmark data, modeling the 33 drift rates ξi in terms of, for example, a low-dimensional psychometric function. This would be easily achieved in the current Bayesian framework. It should also be possible to introduce contaminant distributions in a mixture model, following previous suggestions [8, 14], using latent variable assignments for each response time. If it were desirable to replicate the current assumptions of starting-point and drift-rate variability, that would also easily be done in an extended hierarchical account. Finally, the availability of marginal likelihood measures, accounting for both model fit and complexity, offers the possibility of rigorous quantitative comparisons of alternative sequential sampling accounts of response times in decision-making, such as the Ornstein-Uhlenbeck and accumulator models [9].

Acknowledgments

We thank Jeff Rouder for supplying the benchmark data, and Scott Brown, E.-J. Wagenmakers, and the reviewers for helpful comments.

References

[1] C. Kemp, A. Bernstein, and J. B. Tenenbaum. A generative theory of similarity. In B. G. Bara, L. W.
Barsalou, and M. Bucciarelli, editors, Proceedings of the 27th Annual Conference of the Cognitive Science Society. Erlbaum, Mahwah, NJ, 2005.
[2] J. R. Anderson. The adaptive nature of human categorization. Psychological Review, 98(3):409–429, 1991.
[3] J. B. Tenenbaum and T. L. Griffiths. Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24(4):629–640, 2001.
[4] T. L. Griffiths and J. B. Tenenbaum. Structure and strength in causal induction. Cognitive Psychology, 51:354–384, 2005.
[5] T. L. Griffiths, M. Steyvers, D. Blei, and J. B. Tenenbaum. Integrating topics and syntax. Advances in Neural Information Processing Systems, 17, 2005.
[6] D. J. Navarro, T. L. Griffiths, M. Steyvers, and M. D. Lee. Modeling individual differences using Dirichlet processes. Journal of Mathematical Psychology, 50:101–122, 2006.
[7] R. D. Luce. Response Times: Their Role in Inferring Elementary Mental Organization. Oxford University Press, New York, 1986.
[8] J. N. Rouder, J. Lu, P. L. Speckman, D. Sun, and Y. Jiang. A hierarchical model for estimating response time distributions. Psychonomic Bulletin & Review, 12:195–223, 2005.
[9] R. Ratcliff and P. L. Smith. A comparison of sequential sampling models for two-choice reaction time. Psychological Review, 111:333–367, 2004.
[10] R. Ratcliff, A. Thapar, and G. McKoon. The effects of aging on reaction time in a signal detection task. Psychology and Aging, 16:323–341, 2001.
[11] R. Ratcliff. A theory of memory retrieval. Psychological Review, 85:59–108, 1978.
[12] R. Ratcliff and J. N. Rouder. Modeling response times for two-choice decisions. Psychological Science, 9:347–356, 1998.
[13] S. Brown and A. Heathcote. A ballistic model of choice response time. Psychological Review, 112:117–128, 2005.
[14] R. Ratcliff and F. Tuerlinckx. Estimating parameters of the diffusion model: Approaches to dealing with contaminant reaction times and parameter variability.
Psychonomic Bulletin & Review, 9:438–481, 2002.
[15] A. E. Raftery. Hypothesis testing and model selection. In W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, editors, Markov Chain Monte Carlo in Practice, pages 163–187. Chapman & Hall/CRC, Boca Raton, FL, 1996.
[16] E.-J. Wagenmakers, H. J. L. van der Maas, and R. P. P. P. Grasman. An EZ-diffusion model for response time and accuracy. Psychonomic Bulletin & Review, in press.
Graph-Based Visual Saliency

Jonathan Harel, Christof Koch, Pietro Perona
California Institute of Technology, Pasadena, CA 91125
{harel,koch}@klab.caltech.edu, perona@vision.caltech.edu

Abstract

A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed. It consists of two steps: first forming activation maps on certain feature channels, and then normalizing them in a way which highlights conspicuity and admits combination with other maps. The model is simple, and biologically plausible insofar as it is naturally parallelized. This model powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98% of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch ([2], [3], [4]) achieve only 84%.

1 Introduction

Most vertebrates, including humans, can move their eyes. They use this ability to sample in detail the most relevant features of a scene, while spending only limited processing resources elsewhere. The ability to predict, given an image (or video), where a human might fixate in a fixed-time free-viewing scenario has long been of interest in the vision community. Besides the purely scientific goal of understanding this remarkable behavior of humans, and animals in general, to consistently fixate on "important" information, there is tremendous engineering application, e.g. in compression and recognition [13]. The standard approaches (e.g., [2], [9]) are based on biologically motivated feature selection, followed by center-surround operations which highlight local gradients, and finally a combination step leading to a "master map". Recently, Bruce [5] and others [4] have hypothesized that fundamental quantities such as "self-information" and "surprise" are at the heart of saliency/attention.
However, ultimately, Bruce computes a function which is additive in feature maps, with the main contribution materializing as a method of operating on a feature map in such a way as to get an activation, or saliency, map. Itti and Baldi define "surprise" in general, but ultimately compute a saliency map in the classical [2] sense for each of a number of feature channels, then operate on these maps using another function aimed at highlighting local variation. By organizing the topology of these varied approaches, we can compare them more rigorously: i.e., not just end-to-end, but also piecewise, removing some uncertainty about the origin of observed performance differences. Thus, the leading models of visual saliency may be organized into these three stages:

(s1) extraction: extract feature vectors at locations over the image plane
(s2) activation: form an "activation map" (or maps) using the feature vectors
(s3) normalization/combination: normalize the activation map (or maps, followed by a combination of the maps into a single map)

In this light, [5] is a contribution to step (s2), whereas [4] is a contribution to step (s3). In the classic algorithms, step (s1) is done using biologically inspired filters, step (s2) is accomplished by subtracting feature maps at different scales (henceforth, "c-s" for "center-surround"), and step (s3) is accomplished in one of three ways: 1. a normalization scheme based on local maxima [2] ("max-ave"), 2. an iterative scheme based on convolution with a difference-of-Gaussians filter ("DoG"), and 3. a nonlinear interactions ("NL") approach which divides local feature values by weighted averages of surrounding values in a way that is modelled to fit psychophysics data [11]. We take a different approach, exploiting the computational power, topographical structure, and parallel nature of graph algorithms to achieve natural and efficient saliency computations.
We define Markov chains over various image maps, and treat the equilibrium distribution over map locations as activation and saliency values. This idea is not completely new: Brockmann and Geisel [8] suggest that scanpaths might be predicted by properly defined Lévy flights over saliency fields, and more recently Boccignone and Ferraro [7] do the same. Importantly, they assume that a saliency map is already available, and offer an alternative to the winner-takes-all approach of mapping this object to a set of fixation locations. In an unpublished pre-print, L.F. Costa [6] notes similar ideas, but offers only sketchy details on how to apply this to real images, and in fact includes no experiments involving fixations. Here, we take a unified approach to steps (s2) and (s3) of saliency computation, by using dissimilarity and saliency to define edge weights on graphs which are interpreted as Markov chains. Unlike previous authors, we do not attempt to connect features only to those which are somehow similar. We also directly compare our method to others, using power to predict human fixations as a performance metric. The contributions of this paper are as follows: (1) A complete bottom-up saliency model based on graph computations, GBVS, including a framework for "activation" and "normalization/combination". (2) A comparison of GBVS against existing benchmarks on a data set of grayscale images of natural environments (viz., foliage) with the eye-movement fixation data of seven human subjects, from a recent study by Einhäuser et al. [1].

2 The Proposed Method: Graph-Based Saliency (GBVS)

Given an image I, we wish to ultimately highlight a handful of 'significant' locations where the image is 'informative' according to some criterion, e.g. human fixation. As previously explained, this process is conditioned on first computing feature maps (s1), e.g. by linear filtering followed by some elementary nonlinearity [15].
"Activation" (s2) and "normalization and combination" (s3) steps follow as described below.

2.1 Forming an Activation Map (s2)

Suppose we are given a feature map¹ M : [n]² → ℝ. Our goal is to compute an activation map A : [n]² → ℝ, such that, intuitively, locations (i,j) ∈ [n]² where I, or as a proxy M(i,j), is somehow unusual in its neighborhood will correspond to high values of activation A.

2.1.1 Existing Schemes

Of course "unusual" does not constrain us sufficiently, and so one can choose among several operating definitions. "Improbable" would lead one to the formulation of Bruce [5], where a histogram of M values is computed in some region around (i,j), subsequently normalized and treated as a probability distribution, so that A(i,j) = −log p(i,j) is clearly defined with p(i,j) = Pr{M(i,j) | neighborhood}. Another approach compares local "center" distributions to broader "surround" distributions and calls the Kullback-Leibler tension between the two "surprise" [4].

¹In the context of a mathematical formulation, let [n] ≜ {1, 2, ..., n}. Also, the maps M, and later A, are presented as square (n × n) only for expository simplicity. Nothing in this paper depends critically on the square assumption, and, in practice, rectangular maps are used instead.

2.1.2 A Markovian Approach

We propose a more organic (see below) approach. Let us define the dissimilarity of M(i,j) and M(p,q) as

d((i,j)‖(p,q)) ≜ |log(M(i,j) / M(p,q))|.

This is a natural definition of dissimilarity: simply the distance between one and the ratio of the two quantities, measured on a logarithmic scale. For some of our experiments, we use |M(i,j) − M(p,q)| instead, and we have found that both work well. Consider now the fully-connected directed graph G_A, obtained by connecting every node of the lattice M, labelled with two indices (i,j) ∈ [n]², with all other n² − 1 nodes.
The directed edge from node (i,j) to node (p,q) is assigned the weight

w₁((i,j),(p,q)) ≜ d((i,j)‖(p,q)) · F(i−p, j−q), where F(a,b) ≜ exp(−(a² + b²)/(2σ²)).

σ is a free parameter of our algorithm². Thus, the weight of the edge from node (i,j) to node (p,q) is proportional to their dissimilarity and to their closeness in the domain of M. Note that the edge in the opposite direction has exactly the same weight. We may now define a Markov chain on G_A by normalizing the weights of the outbound edges of each node to 1, and drawing an equivalence between nodes and states, and between edge weights and transition probabilities. The equilibrium distribution of this chain, reflecting the fraction of time a random walker would spend at each node/state if he were to walk forever, naturally accumulates mass at nodes that have high dissimilarity with their surrounding nodes, since transitions into such subgraphs are likely, and unlikely if the nodes have similar M values. The result is an activation measure which is derived from pairwise contrast. We call this approach "organic" because, biologically, individual "nodes" (neurons) exist in a connected, retinotopically organized network (the visual cortex), and communicate with each other (synaptic firing) in a way which gives rise to emergent behavior, including fast decisions about which areas of a scene require additional processing. Similarly, our approach exposes connected (via F) regions of dissimilarity (via w), in a way which can in principle be computed in a completely parallel fashion. Computations can be carried out independently at each node: in a synchronous environment, at each time step, each node simply sums incoming mass, then passes along measured partitions of this mass to its neighbors according to outbound edge weights. The same simple process happening at all nodes simultaneously gives rise to an equilibrium distribution of mass.
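A minimal dense implementation of this activation step might look as follows. This is a sketch, not the authors' code: it uses the log-ratio dissimilarity, a full transition matrix, and plain power iteration, which is practical only for small maps.

```python
import numpy as np

def gbvs_activation(M, sigma, n_iter=500):
    """Activation via the equilibrium distribution of a Markov chain whose
    edge weight from (i,j) to (p,q) is d((i,j)||(p,q)) * F(i-p, j-q)."""
    h, w = M.shape
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pos = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)
    feat = M.ravel()
    # dissimilarity d = |log(M(i,j) / M(p,q))| (feature map assumed positive)
    d = np.abs(np.log(feat[:, None] / feat[None, :]))
    # proximity F(a,b) = exp(-(a^2 + b^2) / (2 sigma^2))
    sq = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    W = d * np.exp(-sq / (2.0 * sigma ** 2))
    P = W / W.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    v = np.full(h * w, 1.0 / (h * w))
    for _ in range(n_iter):                   # power iteration to equilibrium
        v = v @ P
    return v.reshape(h, w)

rng = np.random.default_rng(0)
M = 0.2 + 0.05 * rng.random((8, 8))   # near-uniform feature map ...
M[3:5, 3:5] += 0.7                    # ... with one dissimilar patch
A = gbvs_activation(M, sigma=2.0)
print(A[3:5, 3:5].mean() > A.mean())  # mass accumulates at the unusual patch
```

Because the weights are symmetric, the equilibrium mass at a node is proportional to its total edge weight, so dissimilar, well-connected regions dominate, exactly the behavior described above.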
Technical Notes. The equilibrium distribution of this chain exists and is unique because the chain is ergodic, a property which emerges from the fact that our underlying graph G_A is by construction strongly connected. In practice, the equilibrium distribution is computed by repeated multiplication of the Markov matrix with an initially uniform vector. The process yields the principal eigenvector of the matrix. The computational complexity is thus O(n⁴K), where K ≪ n² is some small number of iterations required to reach equilibrium³.

2.2 "Normalizing" an Activation Map (s3)

The aim of the "normalization" step of the algorithm is much less clear than that of the activation step. It is, however, critical, and a rich area of study. Earlier, three separate approaches were mentioned as existing benchmarks, and the recent work of Itti on surprise [4] also enters the saliency computation at this stage of the process (although it can also be applied to (s2), as mentioned above). We state the goal of this step as: concentrating mass on activation maps. If mass is not concentrated on individual activation maps prior to additive combination, then the resulting master map may be too nearly uniform and hence uninformative. Although this may seem trivial, it is on some level the very soul of any saliency algorithm: concentrating activation into a few key locations.

²In our experiments, this parameter was set to approximately one tenth to one fifth of the map width. Results were not very sensitive to perturbations around these values.

³Our implementation, not optimized for speed, converges on a single map of size 25 × 37 in fractions of a second on a 2.4 GHz Pentium.

Armed with the mass-concentration definition, we propose another Markovian algorithm as follows. This time, we begin with an activation map⁴ A : [n]² → ℝ, which we wish to "normalize". We construct a graph G_N with n² nodes labelled with indices from [n]².
For each node (i,j) and every node (p,q) (including (i,j)) to which it is connected, we introduce an edge from (i,j) to (p,q) with weight

w₂((i,j),(p,q)) ≜ A(p,q) · F(i−p, j−q).

Again, normalizing the weights of the outbound edges of each node to unity and treating the resulting graph as a Markov chain gives us the opportunity to compute the equilibrium distribution over the nodes⁵. Mass will flow preferentially to those nodes with high activation. It is a mass-concentration algorithm by construction, and also one which is parallelizable, as before, with the same natural advantages. Experimentally, it behaves very favorably compared to standard approaches such as "DoG" and "NL".

3 Experimental Results

3.1 Preliminaries and paradigm

We perform saliency computations on real images of the natural world, and compare the power of the resulting maps to predict human fixations. The experimental paradigm we pursue is the following: for each of a set of images, we compute a set of feature maps using standard techniques. Then, we process each of these feature maps using some activation algorithm, then some normalization algorithm, and then simply sum over the feature channels. The resulting master saliency map is scored (using an ROC area metric described below) relative to fixation data collected for the corresponding image, and labelled according to the activation and normalization algorithms used to obtain it. We then pool over a corpus of images, and the resulting set of scored and labelled master saliency maps is analyzed in various ways presented below. Some notes follow.

Algorithm Labels: Hereafter, "graph (i)" and "graph (ii)" refer to the activation algorithm described in section 2.1.2. The difference is that in graph (i) the parameter σ = 2.5, whereas in graph (ii) σ = 5. "graph (iii)" and "graph (iv)" refer to an iterated repetition of the normalization algorithm described in section 2.2.
The difference is the termination rule associated with the iterative process: for graph (iii), a complicated termination rule is used which looks for a local maximum in the number of matrix multiplications required to achieve a stable equilibrium distribution⁶, and for graph (iv), the termination rule is simply "stop after 4 iterations". The normalization algorithm referred to as "I" corresponds to "Identity", the most naive normalization rule: it does nothing, leaving activations unchanged prior to subsequent combination. The algorithms "max-ave" and "DoG" were run using the publicly available "saliency toolbox"⁷. The parameters of this toolbox were checked against the literature [2] and [3], and were found to be almost identical, with a few slight alterations that actually improved performance relative to the published parameters. The parameters of "NL" were set according to the better of the two sets of parameters provided in [11].

Performance metric: We wish to assign a reward quantity to a saliency map, given some target locations, e.g., in the case of natural images, a set of locations at which human observers fixated. For any one threshold saliency value, one can treat the saliency map as a classifier, with all points above threshold labelled "target" and all points below threshold labelled "background". For any particular value of the threshold, there is some fraction of the actual target points which are labelled as such (true positive rate), and some fraction of points which were not targets but were labelled as such anyway (false positive rate). Varying over all such thresholds yields an ROC curve [14], and the area beneath it is generally regarded as an indication of the classifying power of the detector. This is the performance metric we use to measure how well a saliency map predicts fixation locations on a given image.
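The threshold sweep described above can be computed in closed form via the equivalent rank-based (Mann-Whitney) formulation of the ROC area. The sketch below assumes fixations are given as pixel coordinates and ignores ties between saliency values.

```python
import numpy as np

def roc_area(saliency, fixations):
    """Area under the ROC curve obtained by sweeping a threshold over the
    saliency map, treating fixated pixels as targets and all remaining
    pixels as background (rank-based formulation; ties not handled)."""
    s = saliency.ravel()
    targets = np.zeros(s.size, dtype=bool)
    for (i, j) in fixations:
        targets[i * saliency.shape[1] + j] = True
    # ranks of the saliency values (1 = lowest saliency)
    order = np.argsort(s)
    ranks = np.empty(s.size)
    ranks[order] = np.arange(1, s.size + 1)
    n_pos = targets.sum()
    n_neg = s.size - n_pos
    return (ranks[targets].sum() - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

rng = np.random.default_rng(1)
sal = rng.random((10, 10))
sal[2:4, 2:4] += 1.0                 # a clearly salient patch
fix = [(2, 2), (3, 3)]               # hypothetical fixations on the patch
print(roc_area(sal, fix) > 0.9)
```

A map that ranks every fixated pixel above every non-fixated pixel scores 1.0, and a map unrelated to the fixations scores about 0.5, matching the chance lower bound used in the figures.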
⁴To be clear, if A is the result of the eigenvector computation described in 2.1, i.e., if the graph-based activation step is concatenated with the graph-based normalization step, we will call the resulting algorithm GBVS. However, A may be computed using other techniques.

⁵We note that this normalization step of GBVS can be iterated to improve performance. In practice, we use 2 to 4 iterations. Performance does not vary significantly in this regime with respect to the number of iterations.

⁶The intuition being that competition among competing saliency regions can settle, at which point it is wise to terminate.

⁷http://www.saliencytoolbox.net

3.2 Human Eye-Movement Data on Images of Nature

In a study by Einhäuser et al. [1], human and primate fixation data was collected on 108 images, each modified⁸ in nine ways. Figure 2 shows an example image from this collection, together with "x"s marking the fixation points of three human subjects on this particular picture. In the present study, 749 unique modifications of the 108 original images, and 24149 human fixations from [1], were used. Only pictures for which fixation data from three human subjects were available were used. Each image was cropped to 600 × 400 pixels and was presented to subjects so that it took up 76 × 55 of their visual field. In order to facilitate a fair comparison of algorithms, the first step of the saliency algorithm, feature extraction (s1), was the same for every experiment. Two spatial scales, 1/2 and 1/4, were used, and for each of these, four orientation maps corresponding to orientations θ = {0, 45, 90, 135} were computed using Gabor filters, one contrast map was computed using luminance variance in a local neighborhood of size 80 × 80, and the last map was simply a luminance map (the grayscale values). Each of these 12 maps was finally downsampled to a 25 × 37 raw feature map.
"c-s" (center-surround) activation maps were computed by subtracting, from each raw feature map, a feature map on the same channel originally computed at a scale 4 binary orders of magnitude smaller in overall resolution and then resized smoothly to size 25 × 37. In [2], this overall scheme would be labelled c = {2, 3}, for 1/2 and 1/4, and δ = {4}, corresponding to a scale change of 4 orders. The other activation procedures are described in sections 2.1.2 and 2.1.1. The normalization procedures are all described and named earlier. Figure 2 shows an actual image with the resulting saliency maps from two different (activation, normalization) schemes.

Figure 2: (a) An image from the data set with fixations indicated using x's. (b) The saliency map formed when using (activation, normalization) = (graph (i), graph (iii)); ROC area = 0.74. (c) Saliency map for (activation, normalization) = (c-s, DoG); ROC area = 0.57.

Finally, we show the performance of this algorithm on the corpus of images. For each image, a mean inter-subject ROC area was computed as follows: for each of the three subjects who viewed an image, the fixation points of the remaining two subjects were convolved with a circular, decaying kernel with decay constant matched to the decaying cone density in the retina. This was treated as a saliency map derived directly from human fixations, and, with the target points being set to the fixations of the first subject, an ROC area was computed for a single subject. The mean over the three is termed the "inter-subject ROC value" in the following figures. For each range of this quantity, a mean performance metric was computed for various activation and normalization schemes.

⁸Modifications were made to change the luminance contrast either up or down in selected circular regions. Both modified and unmodified stimuli were used in these experiments. Please refer to [1], [12].
For any particular scheme, an ROC area was computed using the resulting saliency map together with the fixations from all 3 human subjects as target points to detect. The results are shown below.

Figure 3: (a) Comparison of activation algorithms (c-s, graph (i), graph (ii), self-info): a mean ROC metric is computed for each range of inter-subject ROC values. Each curve represents a different activation scheme, while averaging over individual image numbers and normalization schemes. (b) Comparison of normalization algorithms (graph (iii), graph (iv), ave-max, NL, DoG): a mean ROC metric is similarly computed, with each curve representing a different normalization scheme while averaging over activation schemes.

In both Figures 3 and 4, the boundary lines above and below show a rough upper⁹ and strict lower bound on performance (based on a human control and chance performance). Figure 3(a) and Figure 3(b) clearly demonstrate the tremendous predictive power of the graph-based algorithms over standard approaches. Figure 4 demonstrates the especially effective performance of combining the best graph-based activation and normalization schemes, contrasted against the standard Itti & Koch approaches, and also the "self-information" approach, which includes no mention of a normalization step (hence, set here to "I").

Figure 4: We compare the predictive power of five saliency algorithms end-to-end (graph, self-info, ave-max, NL, DoG). The best performer is the method which combines a graph-based activation algorithm with a graph-based normalization algorithm.
The combinations of a few possible pairs of activation schemes with normalization schemes are summarized in Table 1, with notes indicating where certain combinations correspond to established benchmarks. Performance is shown as a fraction of the inter-subject ROC area¹⁰. Overall, we find a median ROC area of 0.55 for the Itti & Koch saliency algorithms [2] on these images. In [1] the mean is reported as 0.57, which is remarkably close, and plausible if one assumes slightly more sophisticated feature maps (for instance, at more scales).

⁹To form a true upper bound, one would need the fixation data of many more than three humans on each image.

Table 1: Performance of end-to-end algorithms

activation    normalization   ROC area (fraction)   published
graph (ii)    graph (iv)      0.981148
graph (i)     graph (iv)      0.975313
graph (ii)    I               0.974592
graph (ii)    ave-max         0.974578
graph (ii)    graph (iii)     0.974227
graph (i)     graph (iii)     0.968414
self-info     I               0.841054              *Bruce & Tsotsos [5]
c-s           DoG             0.840968              *Itti & Koch [3]
c-s           ave-max         0.840725              *Itti, Koch, & Niebur [2]
c-s           NL              0.831852              *Lee, Itti, Koch, & Braun [10]

4 Discussion and Conclusion

Although a novel, simple approach to an old problem is always welcome, we must also seek to answer the scientific question of how it is possible that, given access to the same feature information, GBVS predicts human fixations more reliably than the standard algorithms. We find experimentally that there are at least two reasons for this observed difference. The first observation is that, because nodes are on average closer to a few center nodes than to any particular point along the image periphery, it is an emergent property that GBVS promotes higher saliency values in the center of the image plane.
We hypothesize that this "center bias" is favorable with respect to predicting fixations due to human experience both with photographs, which are typically taken with a central subject, and with everyday life, in which head motion often results in gazing straight ahead. Notably, the images of foliage used in the present study had no central subject. One can quantify the GBVS-induced center bias by activating, then normalizing, a uniform image using our algorithms. However, if we introduce this center bias into the output of the standard algorithms' master maps (via pointwise multiplication), we find that the standard algorithms predict fixations better, but still worse than GBVS. In some cases (e.g., "DoG"), introducing this center bias explains only 20% of the performance gap to GBVS; in the best case (viz., "max-ave"), it explains 90% of the difference. We conjecture that the other reason for the performance difference stems from the robustness of our algorithm with respect to differences in the sizes of salient regions. Experimentally, we find that the "c-s" algorithm has trouble activating salient regions distant from object borders, even if one varies over many choices of scale differences and combinations thereof. Since most of the standard algorithms have "c-s" as a first step, they are weakened ab initio. Similarly, the "self-info" algorithm suffers the same weakness, even if one varies over the neighborhood size parameter. On the other hand, GBVS robustly highlights salient regions, even far away from object borders. We note here that what GBVS as described above lacks is any notion of a multiresolution representation of map data. Because multiresolution representations are so basic, one may extend both the graph-based activation and normalization steps to a multiresolution version as follows: we begin with, instead of a single map A : [n]² → ℝ, a collection of maps {Aᵢ}, with each Aᵢ : [nᵢ]² →
R, representing the same underlying information but at different resolutions. Proceeding as before, we instantiate a node for every point on every map, introducing edges again between every pair of nodes, with weights computed as before with one caveat: the distance penalty function F(a, b) accepts two arguments, each of which is a distance between two nodes along a particular dimension. In order to compute F in this case, one must define a distance over points taken from different underlying domains. The authors suggest a definition whereby: (1) each point in each map is assigned a set of locations, (2) this set corresponds to the spatial support of this point in the highest-resolution map, and (3) the distance between two sets of locations is given as the mean of the set of pairwise distances. The equilibrium distribution can be computed as before. We find that this extension (say, GBVS Multiresolution, or GBVSM) improves performance with little added computation. In summary, we have presented a method of computing bottom-up saliency maps which shows a remarkable consistency with the attentional deployment of human subjects. The method uses a novel application of ideas from graph theory to concentrate mass on activation maps, and to form activation maps from raw features. We compared our method with established models and found that ours performed favorably, for both of the key steps in our organization of saliency computations. Our model is extensible to multiple resolutions for better performance, and it is biologically plausible to the extent that a parallel implementation of the power method for Markov chains is trivially accomplished in hardware.

¹⁰Performance here is measured by the ratio of (ROC area using the given algorithm for fixation detection) to (ROC area using a saliency map formed from the fixations of other subjects on a single picture).
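The weakness of "c-s" activation inside large salient regions, noted above, can be illustrated with a crude center-surround sketch. This is a simplification for illustration only: block means and nearest-neighbour resizing stand in for the smooth multi-scale resizing used in the actual benchmarks.

```python
import numpy as np

def center_surround(M, factor=4):
    """Crude "c-s" sketch: block-mean downsample by `factor`, upsample back
    with nearest-neighbour, and take the absolute center-surround difference."""
    h, w = M.shape
    coarse = M.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    surround = np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)
    return np.abs(M - surround)

M = np.zeros((16, 16))
M[3:11, 3:11] = 1.0                  # a large, uniformly bright salient region
CS = center_surround(M, factor=4)
print(CS[7, 7] < CS[3, 3])           # interior responds less than the border
```

Deep inside the bright region, center and surround agree and the response vanishes, whereas the graph-based activation keeps accumulating mass over the whole dissimilar region.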
Acknowledgments

The authors express sincere gratitude to Wolfgang Einhäuser for offering the natural images, and the fixation data associated with them, from a study with seven human subjects. We also acknowledge NSF, NIH, DARPA, and ONR for their generous support of our research.

References

[1] W. Einhäuser, W. Kruse, K.P. Hoffmann, & P. König, "Differences of Monkey and Human Overt Attention under Natural Conditions", Vision Research, 2006.
[2] L. Itti, C. Koch, & E. Niebur, "A model of saliency-based visual attention for rapid scene analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998.
[3] L. Itti & C. Koch, "A saliency-based search mechanism for overt and covert shifts of visual attention", Vision Research, 2000.
[4] L. Itti & P. Baldi, "Bayesian Surprise Attracts Human Attention", NIPS*2005.
[5] N. Bruce & J. Tsotsos, "Saliency Based on Information Maximization", NIPS*2005.
[6] L.F. Costa, "Visual Saliency and Attention as Random Walks on Complex Networks", arXiv preprint, 2006.
[7] G. Boccignone & M. Ferraro, "Modelling gaze shift as a constrained random walk", Physica A, 331:207, 2004.
[8] D. Brockmann & T. Geisel, "Are human scanpaths Lévy flights?", ICANN 1999.
[9] D. Parkhurst, K. Law, & E. Niebur, "Modeling the role of salience in the allocation of overt visual attention", Vision Research, 2002.
[10] D.K. Lee, L. Itti, C. Koch, & J. Braun, "Attention activates winner-take-all competition among visual features", Nature Neuroscience, 1999.
[11] L. Itti, J. Braun, D.K. Lee, & C. Koch, "Attention Modulation of Human Pattern Discrimination Psychophysics Reproduced by a Quantitative Model", NIPS*1998.
[12] W. Einhäuser & P. König, "Does luminance-contrast contribute to a saliency map for overt visual attention?", Eur. J. Neurosci., 2003.
[13] U. Rutishauser, D. Walther, C. Koch, & P. Perona, "Is bottom-up attention useful for object recognition?", CVPR 2004.
[14] B.W. Tatler, R.J. Baddeley, & I.D. Gilchrist, "Visual correlates of fixation selection: Effects of scale and time."
Vision Research, 2005.
[15] J. Malik & P. Perona, "Preattentive texture discrimination with early vision mechanisms", Journal of the Optical Society of America A, 1990.
Doubly Stochastic Normalization for Spectral Clustering

Ron Zass and Amnon Shashua∗

Abstract

In this paper we focus on the issue of normalization of the affinity matrix in spectral clustering. We show that the difference between N-cuts and Ratio-cuts lies in the error measure being used (relative entropy versus L1 norm) in finding the closest doubly stochastic matrix to the input affinity matrix. We then develop a scheme for finding the optimal doubly stochastic approximation under the Frobenius norm, using von Neumann's successive projections lemma. The new normalization scheme is simple and efficient and provides superior clustering performance over many of the standardized tests.

1 Introduction

The problem of partitioning data points into a number of distinct sets, known as the clustering problem, is central in data analysis and machine learning. Typically, a graph-theoretic approach to clustering starts with a measure of pairwise affinity Kij measuring the degree of similarity between points xi, xj, followed by a normalization step, followed by the extraction of the leading eigenvectors, which form an embedded coordinate system from which the partitioning is readily available. In this domain there are three principal dimensions which make a successful clustering: (i) the affinity measure, (ii) the normalization of the affinity matrix, and (iii) the particular clustering algorithm. Common practice indicates that the former two are largely responsible for the performance, whereas the particulars of the clustering process itself have a relatively smaller impact. In this paper we focus on the normalization of the affinity matrix. We first show that the existing popular methods Ratio-cut (cf. [1]) and Normalized-cut [7] employ an implicit normalization which corresponds to L1 and relative-entropy based approximations of the affinity matrix K by a doubly stochastic matrix.
∗ School of Engineering and Computer Science, Hebrew University of Jerusalem, Jerusalem 91904, Israel.
We then introduce a Frobenius norm (L2) normalization algorithm based on a simple successive-projections scheme (built on Von-Neumann’s [5] successive projection lemma for finding the closest point in the intersection of subspaces), which finds the closest doubly stochastic matrix under the least-squares error norm. We demonstrate the impact of the various normalization schemes on a large variety of data sets and show that the new normalization algorithm often induces a significant performance boost in standardized tests. Taken together, we introduce a new tuning dimension to clustering algorithms allowing better control of the clustering performance. 2 The Role of Doubly Stochastic Normalization It has been shown in the past [11, 4] that K-means and spectral clustering are intimately related; in particular, [11] shows that the popular affinity-matrix normalization employed by Normalized-cuts is related to a doubly-stochastic constraint induced by K-means. Since this background is key to our work we briefly introduce the relevant arguments and derivations. Let $x_i \in \mathbb{R}^N$, $i = 1, \ldots, n$, be points arranged in $k$ (mutually exclusive) clusters $\psi_1, \ldots, \psi_k$ with $n_j$ points in cluster $\psi_j$ and $\sum_j n_j = n$. Let $K_{ij} = \kappa(x_i, x_j)$ be a symmetric positive-semidefinite affinity function, e.g. $K_{ij} = e^{-\|x_i - x_j\|^2/\sigma^2}$. Then the problem of finding the cluster assignments by maximizing
$$\max_{\psi_1, \ldots, \psi_k} \sum_{j=1}^{k} \frac{1}{n_j} \sum_{(r,s) \in \psi_j} K_{r,s}, \qquad (1)$$
is equivalent to minimizing the “kernel K-means” problem
$$\min_{c_1, \ldots, c_k,\, \psi_1, \ldots, \psi_k} \sum_{j=1}^{k} \sum_{i \in \psi_j} \|\phi(x_i) - c_j\|^2,$$
where $\phi(x_i)$ is a mapping associated with the kernel $\kappa(x_i, x_j) = \phi(x_i)^\top \phi(x_j)$ and $c_j = (1/n_j)\sum_{i \in \psi_j} \phi(x_i)$ are the class centers. After some algebraic manipulation it can be shown that the optimization setup of eqn.
1 is equivalent to the matrix form:
$$\max_{G} \operatorname{tr}(G^\top K G) \quad \text{s.t.} \quad G \ge 0,\;\; GG^\top \mathbf{1} = \mathbf{1},\;\; G^\top G = I \qquad (2)$$
where $G$ is the desired assignment matrix with $G_{ij} = 1/\sqrt{n_j}$ if $i \in \psi_j$ and zero otherwise, and $\mathbf{1}$ is a column vector of ones. Note that the feasible matrices satisfying the constraints $G \ge 0$, $GG^\top \mathbf{1} = \mathbf{1}$, $G^\top G = I$ are exactly of this form for some partitioning $\psi_1, \ldots, \psi_k$. Note also that the matrix $F = GG^\top$ must be doubly stochastic ($F$ is non-negative, symmetric, and $F\mathbf{1} = \mathbf{1}$). Taken together, we see that the desire is to find a doubly stochastic matrix $F$ as close as possible to the input matrix $K$ (in the sense that $\sum_{ij} F_{ij}K_{ij}$ is maximized over all feasible $F$), such that the symmetric decomposition $F = GG^\top$ satisfies the non-negativity ($G \ge 0$) and orthonormality ($G^\top G = I$) constraints. To see the connection with spectral clustering, and N-cuts in particular, relax the non-negativity condition of eqn. 2 and define a two-stage approach: first find the closest doubly stochastic matrix $K'$ to $K$; we are then left with a spectral decomposition problem:
$$\max_{G} \operatorname{tr}(G^\top K' G) \quad \text{s.t.} \quad G^\top G = I \qquad (3)$$
where $G$ contains the leading $k$ eigenvectors of $K'$. We will refer to the process of transforming $K$ into $K'$ as a normalization step. In N-cuts, the normalization takes the form $K' = D^{-1/2} K D^{-1/2}$ where $D = \operatorname{diag}(K\mathbf{1})$ (a diagonal matrix containing the row sums of $K$) [9]. In [11] it was shown that repeating the N-cuts normalization, i.e., iterating $K^{(t+1)} = D^{-1/2} K^{(t)} D^{-1/2}$ with $D = \operatorname{diag}(K^{(t)}\mathbf{1})$ and $K^{(0)} = K$, converges to a doubly stochastic matrix (a symmetric version of the well-known “iterative proportional fitting procedure” [8]). The conclusion of this brief background is to highlight the motivation for seeking a doubly-stochastic approximation to the input affinity matrix as part of the clustering process. The open issue is: under what error measure should the approximation take place?
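The equivalence between eqn. (1) and kernel K-means stated above can be checked numerically. The sketch below (illustrative, not from the paper) uses a linear kernel so that $\phi(x) = x$ is explicit, and verifies the algebraic identity: kernel-K-means cost $= \operatorname{tr}(K)$ minus the objective of eqn. (1).

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two well-separated clusters; with a linear kernel, phi(x) = x.
X = np.vstack([rng.normal(0.0, 0.3, (5, 2)), rng.normal(3.0, 0.3, (7, 2))])
clusters = [list(range(5)), list(range(5, 12))]
K = X @ X.T                      # K_ij = phi(x_i)^T phi(x_j)

# Objective of eqn (1): sum_j (1/n_j) sum_{r,s in psi_j} K_rs
obj = sum(K[np.ix_(c, c)].sum() / len(c) for c in clusters)

# Kernel K-means cost: sum_j sum_{i in psi_j} ||phi(x_i) - c_j||^2
cost = sum(((X[c] - X[c].mean(axis=0)) ** 2).sum() for c in clusters)
```

Expanding the squared distances and cancelling the cross terms gives the identity checked below, which is what makes eqn. (1) and the kernel K-means problem interchangeable.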
It is not difficult to show that repeating the N-cuts normalization converges to the global optimum under the relative entropy measure (see Appendix). Noting that spectral clustering optimizes the Frobenius norm, it seems less natural to have the normalization step optimize a relative entropy error measure. We derive in this paper the normalization under the L1 norm and under the Frobenius norm. The purpose of the L1 norm is to show that the resulting scheme is equivalent to Ratio-cut clustering, thereby not introducing a new clustering scheme but contributing to the unification and better understanding of the differences between the N-cuts and Ratio-cuts schemes. The Frobenius norm normalization is a new formulation and is based on a simple iterative scheme. The resulting normalization proves quite practical and boosts the clustering performance in many of the standardized tests we conducted. 3 Ratio-cut and the L1 Normalization Given that our desire is to find a doubly stochastic approximation $K'$ to the input affinity matrix $K$, we begin with the L1 norm approximation: Proposition 1 (ratio-cut) The closest doubly stochastic matrix $K'$ under the L1 error norm is $K' = K - D + I$, which leads to the ratio-cut clustering algorithm, i.e., the partitioning of the data set into two clusters is determined by the second smallest eigenvector of the Laplacian $D - K$, where $D = \operatorname{diag}(K\mathbf{1})$. Proof: Let $r = \min_F \|K - F\|_1$ s.t. $F\mathbf{1} = \mathbf{1}$, $F = F^\top$, where $\|A\|_1 = \sum_{ij} |A_{ij}|$ is the L1 norm. Since $\|K - F\|_1 \ge \|(K - F)\mathbf{1}\|_1$ for any matrix $F$, we must have $r \ge \|(K - F)\mathbf{1}\|_1 = \|D\mathbf{1} - \mathbf{1}\|_1 = \|D - I\|_1$. Let $F = K - D + I$; then $\|K - (K - D + I)\|_1 = \|D - I\|_1$, so the bound is attained.
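The construction in Proposition 1 is easy to check numerically. A small sketch (illustrative, not from the paper) verifies that $K' = K - D + I$ has unit row sums, is symmetric, and maps eigenvectors of the Laplacian $D - K$ with eigenvalue $\lambda$ to eigenvectors of $K'$ with eigenvalue $1 - \lambda$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((7, 7))
K = (A + A.T) / 2                # symmetric affinity
np.fill_diagonal(K, 0.0)
D = np.diag(K.sum(axis=1))

Kp = K - D + np.eye(7)           # closest doubly stochastic under L1

lam, V = np.linalg.eigh(D - K)   # Laplacian spectrum, ascending order
v2 = V[:, 1]                     # second smallest eigenvector: the ratio-cut vector
```

Note that $K'$ may have negative diagonal entries, so it is "doubly stochastic" only in the row-sum sense; the eigenvector relationship is what makes it equivalent to ratio-cut.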
If $v$ is an eigenvector of the Laplacian $D - K$ with eigenvalue $\lambda$, then $v$ is also an eigenvector of $K' = K - D + I$ with eigenvalue $1 - \lambda$. Since $(D - K)\mathbf{1} = 0$, the smallest eigenvector $v = \mathbf{1}$ of the Laplacian is the largest of $K'$, and the second smallest eigenvector of the Laplacian (the ratio-cut result) corresponds to the second largest eigenvector of $K'$. What we have so far is that the difference between N-cuts and Ratio-cuts, as two popular spectral clustering schemes, is that the former uses the relative entropy error measure in finding a doubly stochastic approximation to $K$, while the latter uses the L1 norm error measure (which turns out to yield the negative Laplacian with an added identity matrix). 4 Normalizing under Frobenius Norm Given that spectral clustering optimizes the Frobenius norm, there is a strong argument in favor of finding a Frobenius-norm-optimal doubly stochastic approximation to $K$. The optimization setup is that of a quadratic linear program (QLP). However, the special circumstances of our problem render the solution to the QLP a very simple iterative computation, as described next. The closest doubly stochastic matrix $K'$ under the Frobenius norm is the solution to the following QLP:
$$K' = \operatorname{argmin}_F \|K - F\|_F^2 \quad \text{s.t.} \quad F \ge 0,\;\; F\mathbf{1} = \mathbf{1},\;\; F = F^\top, \qquad (4)$$
where $\|A\|_F^2 = \sum_{ij} A_{ij}^2$ is the Frobenius norm. We next define two sub-problems, each with a closed-form solution, and derive our QLP solution by alternating successively between the two until convergence. Consider the affine sub-problem:
$$P_1(X) = \operatorname{argmin}_F \|X - F\|_F^2 \quad \text{s.t.} \quad F\mathbf{1} = \mathbf{1},\;\; F = F^\top \qquad (5)$$
and the convex sub-problem:
$$P_2(X) = \operatorname{argmin}_F \|X - F\|_F^2 \quad \text{s.t.} \quad F \ge 0 \qquad (6)$$
We will use the Von-Neumann [5] successive projection lemma, which states that $P_1 P_2 P_1 P_2 \cdots P_1(K)$ converges onto the projection of $K$ onto the intersection of the affine and conic sets described above¹. Therefore, what remains is to show that the projections $P_1$ and $P_2$ can be solved efficiently (and in closed form).
We begin with the solution for $P_1$. The Lagrangian corresponding to eqn. 5 takes the form:
$$\mathcal{L}(F, \mu_1, \mu_2) = \operatorname{trace}(F^\top F - 2X^\top F) - \mu_1^\top(F\mathbf{1} - \mathbf{1}) - \mu_2^\top(F^\top\mathbf{1} - \mathbf{1}),$$
where from the condition $F = F^\top$ we have $\mu_1 = \mu_2 = \mu$. Setting the derivative with respect to $F$ to zero yields $F = X + \mu\mathbf{1}^\top + \mathbf{1}\mu^\top$. Isolate $\mu$ by multiplying by $\mathbf{1}$ on both sides: $\mu = (nI + \mathbf{1}\mathbf{1}^\top)^{-1}(I - X)\mathbf{1}$. Noting that $(nI + \mathbf{1}\mathbf{1}^\top)^{-1} = (1/n)\left(I - \frac{1}{2n}\mathbf{1}\mathbf{1}^\top\right)$, we obtain a closed-form solution:
$$P_1(X) = X + \left(\frac{1}{n}I + \frac{\mathbf{1}^\top X \mathbf{1}}{n^2}I - \frac{1}{n}X\right)\mathbf{1}\mathbf{1}^\top - \frac{1}{n}\mathbf{1}\mathbf{1}^\top X. \qquad (7)$$
The projection $P_2(X)$ can also be described in a simple closed-form manner. Let $I_+$ be the set of indices corresponding to non-negative entries of $X$ and $I_-$ the set of indices of negative entries of $X$. The criterion function becomes:
$$\|X - F\|_F^2 = \sum_{(i,j)\in I_+}(X_{ij} - F_{ij})^2 + \sum_{(i,j)\in I_-}(X_{ij} - F_{ij})^2.$$
Clearly, the minimum over $F \ge 0$ is obtained when $F_{ij} = X_{ij}$ for all $(i,j) \in I_+$ and zero otherwise. Let $th_{\ge 0}(X)$ stand for the operator that zeroes out all negative entries of $X$. Then $P_2(X) = th_{\ge 0}(X)$.
¹ Strictly, the Von-Neumann lemma applies only to linear subspaces. The extension to convex sets involves a “deflection” component described by Dykstra [3]. However, it is possible to show that for this specific problem the deflection component is redundant and the Von-Neumann lemma still applies.
[Figure 1: Running times of the normalization algorithms. (a) The Frobenius scheme compared to a general Matlab QLP solver; (b) running times of the three normalization schemes (L1, Frobenius, relative entropy).]
To conclude, the global optimum of eqn.
4, which returns the closest doubly stochastic matrix $K'$ in Frobenius error norm to the input affinity matrix $K$, is obtained by repeating the following steps: Algorithm 1 (Frobenius-optimal Doubly Stochastic Normalization) finds the closest doubly stochastic approximation in Frobenius error norm to a given matrix $K$ (global optimum of eqn. 4).
1. Let $X^{(0)} = K$.
2. Repeat for $t = 0, 1, 2, \ldots$:
(a) $X^{(t+1)} = P_1(X^{(t)})$
(b) If $X^{(t+1)} \ge 0$ then stop and set $K' = X^{(t+1)}$; otherwise set $X^{(t+1)} = th_{\ge 0}(X^{(t+1)})$.
This algorithm is simple and very efficient. Fig. 1a shows the running time of the algorithm compared to an off-the-shelf QLP Matlab solver over random matrices of increasing size; the run-time of our algorithm is a fraction of that of the standard QLP solver and scales very well with dimension. In fact, the standard QLP solver can handle only small problem sizes. In Fig. 1b we plot the running times of all three normalization schemes: the L1 norm (computing the Laplacian), the relative entropy (the iterated $D^{-1/2}KD^{-1/2}$), and the Frobenius scheme presented in this section. The Frobenius scheme is more efficient than the relative-entropy normalization (which is the least efficient of the three). 5 Experiments For clustering into $k \ge 2$ clusters we experimented with the spectral algorithms described in [10] and [6]. The latter uses the N-cuts normalization $D^{-1/2}KD^{-1/2}$ followed by K-means on the embedded coordinates (the leading $k$ eigenvectors of the normalized affinity), while the former uses a discretization scheme to turn the $k$ leading eigenvectors into an indicator matrix. Both algorithms produced similar results, so we focused on [10] while replacing the normalization with the three schemes presented above.
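Algorithm 1 above can be transcribed directly into NumPy. The sketch below is illustrative only; the tolerance and iteration cap are our own choices, not from the paper.

```python
import numpy as np

def P1(X):
    """Projection onto {F : F1 = 1, F = F^T}, the closed form of eqn (7)."""
    n = X.shape[0]
    E = np.ones((n, n))
    return X + (np.eye(n) / n + (X.sum() / n**2) * np.eye(n) - X / n) @ E - E @ X / n

def frobenius_normalize(K, tol=1e-7, max_iter=20000):
    """Algorithm 1: alternate P1 with thresholding P2 until X >= 0 after P1."""
    X = K.copy()
    for _ in range(max_iter):
        X = P1(X)
        if X.min() >= -tol:              # stopping rule of step 2(b)
            return np.clip(X, 0.0, None)
        X = np.clip(X, 0.0, None)        # P2 = th_{>=0}: zero out negatives
    return X

rng = np.random.default_rng(2)
A = rng.random((5, 5))
F = frobenius_normalize((A + A.T) / 2)   # symmetric input affinity
```

Both projections preserve symmetry, so the iterate stays symmetric throughout; the early-stop branch returns a matrix that is (numerically) nonnegative with unit row sums.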
We refer by “Ncuts” to the original normalization $D^{-1/2}KD^{-1/2}$, by “RE” to the iterated application of the original normalization (which is proven to converge to a doubly stochastic matrix [11]), by “L1” to the L1 doubly-stochastic normalization (which we have shown is equivalent to Ratio-cuts), and by “Frobenius” to the iterative Frobenius scheme based on Von-Neumann’s lemma described in Section 4. We also include a “None” field corresponding to no normalization being applied.

Table 1: UCI datasets used, together with some characteristics and the lowest error rate (%) achieved using the different methods (best marked with *).

Dataset       Kernel  k  Size  Dim.   L1     Frobenius  RE     NCuts  None
SPECTF heart  RBF     2  267   44     27.5   19.2*      27.5   27.5   29.5
Pima          RBF     2  768   8      36.2   35.2       34.9*  35.2   35.4
Wine          RBF     3  178   13     38.8   27.0*      34.3   29.2   27.5
SpamBase      RBF     2  4601  57     36.1   30.3*      37.7   31.8   30.4
BUPA          Poly    2  345   6      37.4*  37.4*      41.7   41.7   37.4*
WDBC          Poly    2  569   30     18.8   11.1*      37.4   37.4   18.8

Table 2: Cancer datasets used, together with some characteristics and the lowest error rate (%) achieved using the different methods (best marked with *).

Dataset           Kernel  k  Size  #PC   L1    Frobenius  RE    NCuts  None
Leukemia          Poly    2  72    5     27.8  16.7*      36.1  38.9   30.6
Lung              Poly    2  181   5     15.5  9.9*       16.6  15.5   15.5
Prostate          RBF     2  136   5     40.4  19.9*      43.4  40.4   40.4
Prostate Outcome  RBF     2  21    5     28.6  4.8*       23.8  28.6   28.6

We begin by evaluating the clustering quality obtained under the different normalization methods over a number of well-studied datasets from the UCI repository². The datasets are listed in Table 1 together with some of their characteristics, with the best performance (lowest error rate) marked. With the first four datasets we used an RBF kernel $e^{-\|x_i - x_j\|^2/\sigma^2}$ for the affinity matrix, while for the latter two a polynomial kernel $(x_i^\top x_j + 1)^d$ was used. The kernel parameters were calibrated independently for each method and for each dataset.
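The two affinity constructions used above are standard; a minimal sketch (function and parameter names are ours):

```python
import numpy as np

def rbf_affinity(X, sigma):
    """K_ij = exp(-||x_i - x_j||^2 / sigma^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / sigma**2)

def poly_affinity(X, d):
    """K_ij = (x_i^T x_j + 1)^d."""
    return (X @ X.T + 1.0) ** d

X = np.random.default_rng(3).random((10, 4))
K = rbf_affinity(X, sigma=1.0)
```

Both constructions yield symmetric positive-semidefinite affinities, matching the assumptions on $K$ made in Section 2.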
In most cases the best performance was obtained with the Frobenius norm approximation, but as a general rule the most suitable normalization depends on the data. Also worth noting are instances, such as Wine and SpamBase, where RE or Ncuts actually worsen the performance; in those cases the RE performance is worse than that of Ncuts, as the entire normalization direction is counter-productive. When RE outperforms None it also outperforms Ncuts (as can be expected, since Ncuts is the first step in the iterative scheme of RE). With regard to tuning the affinity measure, we show in Fig. 2 the clustering performance on each dataset under each normalization scheme for varying kernel settings ($\sigma$ and $d$ values). Generally, the performance of the Frobenius normalization behaves more smoothly and is more stable under varying kernel settings than the other normalization schemes. Our next set of experiments was over some well-studied cancer datasets³. The datasets are listed in Table 2 together with some of their characteristics. The column “#PC” refers to the number of principal components used in a PCA pre-processing step for dimensionality reduction prior to clustering. Note that better results can be achieved with more sophisticated pre-processing, but since the focus is on the performance of the clustering algorithms and not on the datasets, we prefer not to use optimal pre-processing and leave the data noisy.
² http://www.ics.uci.edu/~mlearn/MLRepository.html
³ All cancer datasets can be found at http://sdmc.i2r.a-star.edu.sg/rp/
Figure 2: Error rate vs.
similarity measure, for the UCI datasets listed in Table 1 (panels: SPECTF, Pima, Wine, SpamBase, BUPA, WDBC; each plots % errors against $\sigma$ or polynomial degree $d$). L1 in magenta +; Frobenius in blue o; Relative Entropy in black ×; Normalized-Cuts in red.
Figure 3: Error rate vs. similarity measure, for the cancer datasets listed in Table 2 (panels: AML/ALL Leukemia, Lung Cancer, Prostate, Prostate Outcome). L1 in magenta +; Frobenius in blue o; Relative Entropy in black ×; Normalized-Cuts in red.
The AML/ALL Leukemia dataset is a challenging benchmark common in the cancer community, where the task is to distinguish between two types of Leukemia. The original dataset consists of 7129 coordinates probed from 6817 human genes; we perform PCA to obtain the 5 leading principal components prior to clustering with a polynomial kernel. The Lung Cancer (Brigham and Women’s Hospital, Harvard Medical School) dataset is another common benchmark that describes 12533 genes sampled from 181 tissues; the task is to distinguish between malignant pleural mesothelioma (MPM) and adenocarcinoma (ADCA) of the lung. The Prostate dataset consists of 12,600 coordinates representing different genes, where the task is to identify prostate samples as tumor or non-tumor. We use the first five principal components as input for clustering with an RBF kernel. The Prostate Outcome dataset uses the same genes on another set of prostate samples, where the task is to predict the clinical outcome (relapse or non-relapse for at least four years). Finally, Fig. 3 shows the clustering performance on each dataset under each normalization scheme for varying kernel settings ($\sigma$ and $d$ values). 6 Summary Normalization of the affinity matrix is a crucial element in the success of spectral clustering. The type of normalization performed by N-cuts is a step towards a doubly-stochastic approximation of the affinity matrix under relative entropy [11].
In this paper we have extended the normalization via doubly-stochasticity in three ways: (i) we have shown that the difference between N-cuts and Ratio-cuts lies in the error measure used to find the closest doubly stochastic approximation to the input affinity matrix; (ii) we have introduced a new normalization scheme based on Frobenius norm approximation, involving a succession of simple computations that is easy to implement and computationally efficient; and (iii) through extensive experimentation on standard datasets we have shown the importance of normalization to the performance of spectral clustering. In the experiments we conducted, the Frobenius normalization had the upper hand in most cases. We have also shown that the relative-entropy normalization is not always the right approach: on some datasets the performance worsened after relative-entropy normalization, but it never worsened when the Frobenius normalization was applied. References [1] P. K. Chan, M. D. F. Schlag, and J. Y. Zien. Spectral k-way ratio-cut partitioning and clustering. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 13(9):1088–1096, 1994. [2] I. Csiszar. I-divergence geometry of probability distributions and minimization problems. The Annals of Probability, 3(1):146–158, 1975. [3] R. L. Dykstra. An algorithm for restricted least squares regression. Journal of the American Statistical Association, 78:837–842, 1983. [4] I. S. Dhillon, Y. Guan, and B. Kulis. Kernel k-means, spectral clustering and normalized cuts. In International Conference on Knowledge Discovery and Data Mining (KDD), pages 551–556, Aug. 2004. [5] J. Von Neumann. Functional Operators, Vol. II. Princeton University Press, 1950. [6] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Proceedings of the conference on Neural Information Processing Systems (NIPS), 2001. [7] J. Shi and J. Malik. Normalized cuts and image segmentation.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 2000. [8] R. Sinkhorn and P. Knopp. Concerning nonnegative matrices and doubly stochastic matrices. Pacific Journal of Mathematics, 21:343–348, 1967. [9] Y. Weiss. Segmentation using eigenvectors: a unifying view. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1999. [10] S. X. Yu and J. Shi. Multiclass spectral clustering. In Proceedings of the International Conference on Computer Vision, 2003. [11] R. Zass and A. Shashua. A unifying approach to hard and probabilistic clustering. In Proceedings of the International Conference on Computer Vision, Beijing, China, Oct. 2005. A Normalized Cuts and Relative Entropy Normalization The following proposition is an extension (symmetric version) of the claim about the iterative proportional fitting procedure converging in relative entropy error measure [2]: Proposition 2 The closest doubly stochastic matrix $F$ under the relative-entropy error measure to a given symmetric matrix $K$, i.e., the minimizer of
$$\min_F \; RE(F \| K) \quad \text{s.t.} \quad F \ge 0,\;\; F = F^\top,\;\; F\mathbf{1} = \mathbf{1},\;\; F^\top\mathbf{1} = \mathbf{1},$$
has the form $F = DKD$ for some (unique) diagonal matrix $D$. Proof: The Lagrangian of the problem is:
$$L(F, \lambda, \mu) = \sum_{ij} f_{ij} \ln \frac{f_{ij}}{k_{ij}} + \sum_{ij} k_{ij} - \sum_{ij} f_{ij} - \sum_i \lambda_i \Big(\sum_j f_{ij} - 1\Big) - \sum_j \mu_j \Big(\sum_i f_{ij} - 1\Big).$$
The derivative with respect to $f_{ij}$ is:
$$\frac{\partial L}{\partial f_{ij}} = \ln f_{ij} + 1 - \ln k_{ij} - 1 - \lambda_i - \mu_j = 0,$$
from which we obtain $f_{ij} = e^{\lambda_i} e^{\mu_j} k_{ij}$. Let $D_1 = \operatorname{diag}(e^{\lambda_1}, \ldots, e^{\lambda_n})$ and $D_2 = \operatorname{diag}(e^{\mu_1}, \ldots, e^{\mu_n})$; then $F = D_1 K D_2$. Since $F = F^\top$ and $K$ is symmetric, we must have $D_1 = D_2$.
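Proposition 2 pairs naturally with the iterated N-cuts normalization from Section 2: repeating $K \leftarrow D^{-1/2} K D^{-1/2}$ drives a positive symmetric $K$ to a doubly stochastic matrix of the form $DKD$. A quick numerical check (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((6, 6)) + 0.1
K = (A + A.T) / 2                       # symmetric, strictly positive

Kt = K.copy()
for _ in range(500):                    # K(t+1) = D^{-1/2} K(t) D^{-1/2}
    d = Kt.sum(axis=1)                  # D = diag(K(t) 1)
    Kt = Kt / np.sqrt(np.outer(d, d))
```

Each step rescales rows and columns symmetrically, so the limit is both symmetric and has unit row sums, as the proposition predicts for the relative-entropy optimum.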
A Scalable Machine Learning Approach to Go Lin Wu and Pierre Baldi School of Information and Computer Sciences University of California, Irvine Irvine, CA 92697-3435 lwu,pfbaldi@ics.uci.edu Abstract Go is an ancient board game that poses unique opportunities and challenges for AI and machine learning. Here we develop a machine learning approach to Go, and related board games, focusing primarily on the problem of learning a good evaluation function in a scalable way. Scalability is essential at multiple levels, from the library of local tactical patterns, to the integration of patterns across the board, to the size of the board itself. The system we propose is capable of automatically learning the propensity of local patterns from a library of games. Propensity and other local tactical information are fed into a recursive neural network, derived from a Bayesian network architecture. The network integrates local information across the board and produces local outputs that represent local territory ownership probabilities. The aggregation of these probabilities provides an effective strategic evaluation function that is an estimate of the expected area at the end (or at other stages) of the game. Local area targets for training can be derived from datasets of human games. A system trained using only 9 × 9 amateur game data performs surprisingly well on a test set derived from 19 × 19 professional game data. Possible directions for further improvements are briefly discussed. 1 Introduction Go is an ancient board game, over 3,000 years old [6, 5], that poses unique opportunities and challenges for artificial intelligence and machine learning. The rules of Go are deceptively simple: two opponents alternately place black and white stones on the empty intersections of an odd-sized square board, traditionally of size 19 × 19.
The goal of the game, in simple terms, is for each player to capture as much territory as possible across the board by encircling the opponent’s stones. This disarming simplicity, however, conceals a formidable combinatorial complexity [2]. On a 19 × 19 board, there are approximately $3^{19 \times 19} \approx 10^{172.24}$ possible board configurations and, on average, on the order of 200-300 possible moves at each step of the game, preventing any form of semi-exhaustive search. For comparison purposes, the game of chess has a much smaller branching factor, on the order of 35-40 [10, 7]. Today, computer chess programs, built essentially on search techniques and running on a simple PC, can rival or even surpass the best human players. In contrast, and in spite of several decades of significant research efforts and of progress in hardware speed, the best Go programs of today are easily defeated by an average human amateur. Besides the intrinsic challenge of the game, and the non-trivial market created by over 100 million players worldwide, Go raises other important questions for our understanding of natural or artificial intelligence in the distilled setting created by the simple rules of a game, uncluttered by the endless complexities of the “real world”. For example, to many observers, current computer solutions to chess appear “brute force”, hence “unintelligent”. But is this perception correct, or an illusion? Is there something like true intelligence beyond “brute force” and computational power? Where is Go situated in the apparent tug-of-war between intelligence and sheer computational power? Another fundamental question that is particularly salient in the Go setting is that of knowledge transfer. Humans learn to play Go on boards of smaller size, typically 9 × 9, and then “transfer” their knowledge to the larger 19 × 19 standard size. How can we develop algorithms that are capable of knowledge transfer?
Here we take modest steps towards addressing these challenges by developing a scalable machine learning approach to Go. Clearly, good evaluation functions and search algorithms are essential ingredients of computer board-game systems. Here we focus primarily on the problem of learning a good evaluation function for Go in a scalable way. We do include simple search algorithms in our system, as many other programs do, but this is not the primary focus. By scalability we imply that a main goal is to develop a system more or less automatically, using machine learning approaches, with minimal human intervention and handcrafting. The system ought to be able to transfer information from one board size (e.g. 9 × 9) to another (e.g. 19 × 19). We take inspiration from three ingredients that seem essential to the human evaluation process in Go: the understanding of local patterns, the ability to combine patterns, and the ability to relate tactical and strategic goals. Our system is built to learn these three capabilities automatically and attempts to combine the strengths of existing systems while avoiding some of their weaknesses. The system is capable of automatically learning the propensity of local patterns from a library of games. Propensity and other local tactical information are fed into a recursive neural network, derived from a Bayesian network architecture. The network integrates local information across the board and produces local outputs that represent local territory ownership probabilities. The aggregation of these probabilities provides an effective strategic evaluation function that is an estimate of the expected area at the end (or at other stages) of the game. Local area targets for training can be derived from datasets of human games. The main results we present here are derived on a 19 × 19 board using a player trained only on 9 × 9 game data.
2 Data Because the approach to be described emphasizes scalability and learning, we are able to train our systems at a given board size and use them to play at different sizes, both larger and smaller. Pure bootstrap approaches to Go, where computer players are initialized randomly and play large numbers of games, such as evolutionary approaches or reinforcement learning, have been tried [11]. We have implemented these approaches and used them for the small board sizes 5 × 5 and 7 × 7. However, in our experience, these approaches do not scale up well to larger board sizes. For larger board sizes, better results are obtained using training data derived from records of games played by humans. We used available data at board sizes 9 × 9, 13 × 13, and 19 × 19. Data for 9 × 9 Boards: This data consists of 3,495 games. We randomly selected 3,166 games (90.6%) for training, and the remaining 328 games (9.4%) for validation. Most of the games in this data set are played by amateurs. A subset of 424 games (12.13%) has at least one player with an olf ranking of 29, corresponding to a very good amateur player. Data for 13 × 13 Boards: This data consists of 4,175 games. Most of the games, however, are played by rather weak players and therefore cannot be used for training. For validation purposes, however, we retained a subset of 91 games where both players have an olf ranking greater than or equal to 25, the equivalent of a good amateur player. Data for 19 × 19 Boards: This high-quality data set consists of 1,835 games played by professional players (at least 1 dan). A subset of 1,131 games (61.6%) are played by 9 dan players (the highest possible ranking). This is the dataset used in [12]. 3 System Architecture 3.1 Evaluation Function, Outputs, and Targets Because Go is a game about territory, it is sensible to have “expected territory” be the evaluation function, and to decompose this expectation as a sum of local probabilities.
More specifically, let $A_{ij}(t)$ denote the ownership of intersection $ij$ on the board at time $t$ during the game. At the end of a game, each intersection can be black, white, or both¹. Black is represented as 1, white as 0, and both as 0.5. The same scheme with 0.5 for empty intersections, or more complicated schemes, can be used to represent ownership at various intermediate stages of the game. Let $O_{ij}(t)$ be the output of the learning system at intersection $ij$ at time $t$ in the game. Likewise, let $T_{ij}(t)$ be the corresponding training target. In the most simple case, we can use $T_{ij}(t) = A_{ij}(T)$, where $T$ denotes the end of the game. In this case, the output $O_{ij}(t)$ can be interpreted as the probability $P_{ij}(t)$, estimated at time $t$, of owning the $ij$ intersection at the end of the game. Likewise, $\sum_{ij} O_{ij}(t)$ is the estimate, computed at time $t$, of the total expected area at the end of the game. Propagation of information provided by targets/rewards computed only at the end of the game, however, can be problematic. With a dataset of training examples, this problem can be addressed because intermediate area values $A_{ij}(t)$ are available for training at any $t$. In the simulations presented here, we use the simple scheme
$$T_{ij}(t) = (1 - w)A_{ij}(T) + wA_{ij}(t + k) \qquad (1)$$
where $w \ge 0$ is a parameter that controls the convex combination between the area at the end of the game and the area at some step $t + k$ in the nearer future. $w = 0$ corresponds to the simple case described above where only the area at the end of the game is used in the target function. Other ways of incorporating target information from intermediate game positions are discussed briefly at the end. To learn the evaluation function and the targets, we propose to use a graphical model (Bayesian network), which in turn leads to a directed acyclic graph recursive neural network (DAG-RNN) architecture.
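The target scheme of eqn. (1) above is straightforward to implement; a small sketch (the array layout, with `A[t]` holding the ownership map at move `t`, is our own assumption):

```python
import numpy as np

def target(A, t, k, w):
    """T_ij(t) = (1 - w) * A_ij(T) + w * A_ij(t + k), with T the final move."""
    T = len(A) - 1
    return (1.0 - w) * A[T] + w * A[min(t + k, T)]

# Tiny 3x3 board over 4 time steps: 1 = black, 0 = white, 0.5 = shared/empty.
A = np.stack([np.full((3, 3), 0.5),
              np.eye(3),
              np.ones((3, 3)),
              np.zeros((3, 3))])
```

With `w = 0` the target reduces to the end-of-game area; with `w = 1` it uses only the near-future area at step `t + k`, matching the convex-combination reading of the parameter.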
3.2 DAG-RNN Architectures The architecture is closely related to an architecture originally proposed for a problem in a completely different area: the prediction of protein contact maps [8, 1]. As a Bayesian network, the architecture can be described in terms of the DAG in Figure 1, where the nodes are arranged in 6 lattice planes reflecting the spatial organization of the Go board. Each plane contains $N \times N$ nodes arranged on the vertices of a square lattice. In addition to the input and output planes, there are four hidden planes for the lateral propagation and integration of information across the Go board. Within each hidden plane, the edges of the quadratic lattice are oriented towards one of the four cardinal corners (NE, NW, SE, and SW). Directed edges within a column of this architecture are given in Figure 1b. Thus each intersection $ij$ in an $N \times N$ board is associated with six units: an input unit $I_{ij}$, four hidden units $H^{NE}_{ij}$, $H^{NW}_{ij}$, $H^{SW}_{ij}$, $H^{SE}_{ij}$, and an output unit $O_{ij}$. In a DAG-RNN the relationships between the variables are deterministic, rather than probabilistic, and implemented in terms of neural networks with weight sharing. Thus the previous architecture leads to a DAG-RNN architecture consisting of 5 neural networks of the form
$$O_{i,j} = N_O(I_{i,j}, H^{NW}_{i,j}, H^{NE}_{i,j}, H^{SW}_{i,j}, H^{SE}_{i,j})$$
$$H^{NE}_{i,j} = N_{NE}(I_{i,j}, H^{NE}_{i-1,j}, H^{NE}_{i,j-1})$$
$$H^{NW}_{i,j} = N_{NW}(I_{i,j}, H^{NW}_{i+1,j}, H^{NW}_{i,j-1})$$
$$H^{SW}_{i,j} = N_{SW}(I_{i,j}, H^{SW}_{i+1,j}, H^{SW}_{i,j+1})$$
$$H^{SE}_{i,j} = N_{SE}(I_{i,j}, H^{SE}_{i-1,j}, H^{SE}_{i,j+1}) \qquad (2)$$
where, for instance, $N_O$ is a single neural network that is shared across all spatial locations. In addition, since Go is “isotropic”, we use a single network shared across the four hidden planes. Go, however, involves strong boundary effects, and therefore we add one neural network $N_C$ for the corners, shared across all four corners, and one neural network $N_S$ for side positions, shared across all four sides.
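Because each hidden plane is acyclic, the lateral recursions in eqn. (2) can be evaluated in a single board sweep. Below is a minimal NumPy sketch of one plane (the NE plane) with random weights; the sizes, names, and single-layer form are our own simplifications of the shared networks described in the paper:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def ne_plane(I, Wi, Wd, Wl, b):
    """H_ij = logistic(Wi I_ij + Wd H_{i-1,j} + Wl H_{i,j-1} + b), swept so
    that both parents of every node are computed before the node itself."""
    N = I.shape[0]
    h = b.shape[0]
    H = np.zeros((N, N, h))
    for i in range(N):
        for j in range(N):
            down = H[i - 1, j] if i > 0 else np.zeros(h)  # boundary: zero state
            left = H[i, j - 1] if j > 0 else np.zeros(h)
            H[i, j] = logistic(Wi @ I[i, j] + Wd @ down + Wl @ left + b)
    return H

rng = np.random.default_rng(5)
N, d_in, h = 9, 3, 8
I = rng.random((N, N, d_in))
H = ne_plane(I, rng.normal(size=(h, d_in)), rng.normal(size=(h, h)),
             rng.normal(size=(h, h)), rng.normal(size=h))
```

The other three planes are the same computation with the sweep direction reversed along one or both axes, and the output network then reads all four hidden states at each intersection.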
In short, the entire Go DAG-RNN architecture is described by four feedforward NNs (corner, side, lateral, output) that are shared at all corresponding locations. For each one of these feedforward neural networks we have experimented with several architectures, but we typically use a single hidden layer. The DAG-RNN in the main simulation results uses 16 hidden nodes and 8 output nodes for the lateral propagation networks, and 16 hidden nodes and one output node for the output network. All transfer functions are logistic. The total number of free parameters is close to 6000. Because the underlying graph is acyclic, these networks can be unfolded in space and training can proceed by simple gradient descent (back-propagation), taking into account the relevant symmetries and weight sharing. Networks trained at one board size can be reused at any other board size, providing a simple mechanism for reusing and extending acquired knowledge. For a board of size $N \times N$, the training procedure scales as $O(WMN^4)$, where $W$ is the number of adjustable weights and $M$ is the number of training games. There are roughly $N^2$ board positions in a game and, for each position, $N^2$ outputs $O_{ij}$ to be trained, hence the $O(N^4)$ scaling. Both game records and the positions within each selected game record are randomly selected during training. Weights are updated essentially online, once every 10 game positions. Training a single player on our 9 × 9 data takes on the order of a week on a current desktop computer, corresponding roughly to 50 training epochs at 3 hours per epoch.
¹ This is called “seki”. Seki is a situation where two live groups share liberties and where neither of them can fill them without dying.
Figure 1: (a) The nodes of a DAG-RNN are regularly arranged in one input plane, one output plane, and four hidden planes. In each plane, nodes are arranged on a square lattice. The hidden planes contain directed edges associated with the square lattices. All the edges of the square lattice in each hidden plane are oriented in the direction of one of the four possible cardinal corners: NE, NW, SW, and SE. Additional directed edges run vertically in column from the input plane to each hidden plane and from each hidden plane to the output plane. (b) Connection details within one column of Figure 1a. The input node is connected to four corresponding hidden nodes, one for each hidden plane. The input node and the hidden nodes are connected to the output node. Iij is the vector of inputs at intersection ij. Oij is the corresponding output. Connections of each hidden node to its lattice neighbors within the same plane are also shown. 3.3 Inputs At a given board intersection, the input vector Iij has multiple components–listed in Table 1. The first three components–stone type, influence, and propensity–are associated with the corresponding intersection and a fixed number of surrounding locations. Influence and propensity are described below in more detail. The remaining features correspond to group properties involving variable numbers of neighboring stones and are self explanatory for those who are familiar with Go. The group Gij associated with a given intersection is the maximal set of stones of the same color that are connected to it. Neighboring (or connected) opponent groups of Gij are groups of the opposite color that are directly connected (adjacent) to Gij. The idea of using higher order liberties is from Werf [13]. O1st and O2nd provide the number of true eyes and the number of liberties of the weakest and the second weakest neighboring opponent groups. 
Weakness here is defined lexicographically, by the number of eyes first, followed by the number of liberties.

Table 1: Typical input features. The first three features (stone type, influence, and propensity) are properties associated with the corresponding intersection and a fixed number of surrounding locations. The other properties are group properties involving variable numbers of neighboring stones.

Feature    | Description
b,w,e      | the stone type: black, white, or empty
influence  | the influence from the stones of the same color and the opposing color
propensity | a local statistic computed from 3 × 3 patterns in the training data (section 3.3)
Neye       | the number of true eyes
N1st       | the number of liberties, i.e. the number of empty intersections connected to a group of stones (also called 1st-order liberties)
N2nd       | the number of 2nd-order liberties, defined as the liberties of the 1st-order liberties
N3rd       | the number of 3rd-order liberties, defined as the liberties of the 2nd-order liberties
N4th       | the number of 4th-order liberties, defined as the liberties of the 3rd-order liberties
O1st       | features of the weakest connected opponent group (stone type, number of liberties, number of eyes)
O2nd       | features of the second weakest connected opponent group (stone type, number of liberties, number of eyes)

Influence: We use two types of influence calculation, both based on Chen's method [4]. One is an exact implementation of Chen's method; the other uses a stringent influence propagation rule. In Chen's exact method, any opponent stone can block the propagation of influence. With the stringent propagation rule, an opponent stone can block the propagation of influence if and only if it is stronger than the stone emitting the influence. Strength is again defined lexicographically, by the number of eyes first, followed by the number of liberties.
Propensity (automated learning and scoring of a pattern library): We develop a method to learn local patterns and their value automatically from a database of games. The basic method is illustrated in the case of 3 × 3 patterns, which are used in the simulations. Considering rotation and mirror symmetries, there are 10 unique locations for a 3 × 3 window on a 9 × 9 board (see also [9]). Given any 3 × 3 pattern of stones on the board and a set of games, we then compute nine numbers, one for each intersection. These numbers are local indicators of strength or propensity. The propensity S^w_ij(p) of each intersection ij associated with stone pattern p and a 3 × 3 window w is defined as:

  S^w_ij(p) = ( NB_ij(p) − NW_ij(p) ) / ( NB_ij(p) + NW_ij(p) + C )        (3)

where NB_ij(p) is the number of times that pattern p ends with a black stone at intersection ij at the end of the games in the data, and NW_ij(p) is the same for a white stone. Both NB_ij(p) and NW_ij(p) are computed taking into account the location and the symmetries of the corresponding window w. C plays a regularizing role in the case of rare patterns and is set to 1 in the simulations. Thus S^w_ij(p) is an empirical normalized estimate of the local differential propensity towards conquering the corresponding intersection in the local context provided by the corresponding pattern and window. In general, a given intersection ij on the board is covered by several 3 × 3 windows. Thus, for a given intersection ij on a given board, we can compute a value S^w_ij(p) for each different window that contains the intersection. In the following simulations, a single final value S_ij(p) is computed by averaging over the different w's.
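Eq. (3) and the averaging over windows are simple enough to state directly in code. The following is an illustrative sketch (the helper name and the counts are made up, not from the paper):

```python
import numpy as np

def propensity(NB, NW, C=1.0):
    """S^w_ij(p) of Eq. (3): normalized differential count of games in which
    pattern p ends with a black (NB) vs. white (NW) stone at intersection ij.
    C regularizes rare patterns; the paper sets C = 1."""
    return (NB - NW) / (NB + NW + C)

# A single final value S_ij(p) is obtained by averaging over the windows w
# that cover intersection ij (the three counts below are illustrative).
S_ij = float(np.mean([propensity(12, 3), propensity(5, 5), propensity(0, 0)]))
```

Note how C = 1 sends the propensity of a never-observed pattern (NB = NW = 0) smoothly to zero instead of producing a division by zero.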
However, more complex schemes that retain more information can easily be envisioned by, for instance: (1) computing also the standard deviation of the S^w_ij(p) as a function of w; (2) using a weighted average, weighted by the importance of the window w; and (3) using the entire set of S^w_ij(p) values, as w varies around ij, to augment the input vector.

3.4 Move Selection and Search

For a given position, the next move can be selected using one-level search by considering all possible legal moves and computing the estimate at time t of the total expected area E = Σ_ij O_ij(t) at the end of the game, or some intermediate position, or a combination of both, where O_ij(t) are the outputs (predicted probabilities) of the DAG-RNNs. The next move can be chosen by maximizing this evaluation function (1-ply search). Alternatively, Gibbs sampling can be used to choose the next move among all the legal moves with a probability proportional to e^{E/Temp}, where Temp is a temperature parameter [3, 11, 12]. We have also experimented with a few other simple search schemes, such as 2-ply search (MinMax).

4 Results

We trained a large number of players using the methods described above. In the absence of training data, we used pure bootstrap approaches (e.g. reinforcement learning) at sizes 5 × 5 and 7 × 7 with results that were encouraging but clearly insufficient. Not surprisingly, when used to play at larger board sizes, the RNNs trained at these small board sizes yield rather weak players. The quality of most 13 × 13 games available to us is too poor for proper training, although a small subset can be used for validation purposes. We do not have any data for sizes N = 11, 15, and 17. And because of the O(N^4) scaling, training systems directly at 19 × 19 takes many months and is currently in progress. Thus the most interesting results we report are derived by training the RNNs using the 9 × 9 game data, and using them to play at 9 × 9 and, more importantly, at larger board sizes.
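The Gibbs move-selection rule described above is a generic softmax sampler over the evaluations E; a minimal sketch (our own helper, not the authors' code), where `scores` holds the evaluation E for each legal move:

```python
import numpy as np

def gibbs_move(scores, temp, rng):
    """Sample a move index with probability proportional to exp(E / temp).

    scores: evaluation E (e.g. predicted total area) for each legal move.
    temp -> 0 recovers greedy 1-ply search; large temp approaches uniform.
    """
    z = np.asarray(scores, dtype=float) / temp
    z -= z.max()                  # subtract the max for numerical stability
    p = np.exp(z)
    p /= p.sum()
    return rng.choice(len(p), p=p)
```

At very low temperature this reduces to the 1-ply argmax search, which is why the two schemes can share one implementation.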
Several 9 × 9 players achieve comparable top performance. For conciseness, here we report the results obtained with one of them, trained with target parameters w = 0.25 and k = 2 in Equation 1.

[Figure 2 here: a. Validation error vs. game phase. b. Percentage vs. game phase.]

Figure 2: (a) Validation error vs. game phase. Phase is defined by the total number of stones on the board. The four curves respectively represent the validation errors of the neural network after 1, 2, 33, and 38 epochs of training. (b) Percentage of moves made by professional human players on boards of size 19 × 19 that are contained in the m top-ranked moves according to the DAG-RNN trained on 9 × 9 amateur data, for various values of m. The baseline associated with the red curve corresponds to a random uniform player.

Figure 2a shows how the validation error changes as training progresses. Validation error here is defined as the relative entropy between the output probabilities produced by the RNN and the target probabilities, computed on the validation data. The validation error decreases quickly during the first epochs. In this case, no substantial decrease in validation error is observed after epoch 30. Note also how the error is smaller towards the end of the game, due both to the reduction in the number of possible moves and the strong end-of-game training signal. An area, and hence a probability, can be assigned by the DAG-RNN to each move and used to rank the moves, as described in section 3.4. Thus we can compute the average probability of moves played by good human players according to the DAG-RNN or other probabilistic systems such as [12]. In Table 2, we report such probabilities for several systems and at different board sizes. For size 19 × 19, we use the same test set used in [12].
Boltzmann5 and BoltzmannLiberties are their results as reported in the pre-published version of their NIPS paper. At this size, the probabilities in the table are computed using the 80th–83rd moves of each game.

Table 2: Probabilities assigned by different systems to moves played by human players in test data.

Board Size | System             | Log Probability | Probability
9 × 9      | Random player      | -4.13           | 1/62
9 × 9      | RNN (1-ply search) | -1.86           | 1/7
13 × 13    | Random player      | -4.88           | 1/132
13 × 13    | RNN (1-ply search) | -2.27           | 1/10
19 × 19    | Random player      | -5.64           | 1/281
19 × 19    | Boltzmann5         | -5.55           | 1/254
19 × 19    | BoltzmannLiberties | -5.27           | 1/194
19 × 19    | RNN (1-ply search) | -2.70           | 1/15

For boards of size 19 × 19, a random player that selects moves uniformly at random among legal moves assigns a probability of 1/281 to the moves played by professional players in the data set. BoltzmannLiberties was able to improve this probability to 1/194. Our best DAG-RNNs trained using amateur data at 9 × 9 are capable of raising this probability further, to 1/15 (also a considerable improvement over our previous 1/42 performance presented in April 2006 at the Snowbird Learning Conference). A remarkable example where the top-ranked move according to the DAG-RNN coincides with the move actually played in a game between two very highly-ranked players is given in Figure 3, illustrating also the underlying probabilistic territory calculations.

Figure 3: Example of an outstanding move based on territory predictions made by the DAG-RNN. For each intersection, the height of the green bar represents the estimated probability that the intersection will be owned by black at the end of the game.
The figure on the left shows the predicted probabilities if black passes. The figure on the right shows the predicted probabilities if black makes the move at N12. N12 causes the greatest increase in green area and is the top-ranked move for the DAG-RNN. Indeed, this is the move selected in the game played by Zhou, Heyang (black, 8 dan) and Chang, Hao (white, 9 dan) on 10/22/2000. Figure 2b provides a kind of ROC curve by displaying the percentage of moves made by professional human players on boards of size 19 × 19 that are contained in the m top-ranked moves according to the DAG-RNN trained on 9 × 9 amateur data, for various values of m across all phases of the game. For instance, when there are 80 stones on the board, and hence on the order of 300 legal moves available, there is a 50% chance that a move selected by a very highly ranked human player (9 dan) is found among the top 30 choices produced by the DAG-RNN.

5 Conclusion

We have designed a DAG-RNN for the game of Go and demonstrated that it can learn territory predictions fairly well. Systems trained using only a set of 9 × 9 amateur games achieve surprisingly good performance on a 19 × 19 test set that contains 1835 professionally played games. The methods and results presented also point clearly to several possible directions of improvement that are currently under active investigation. These include: (1) obtaining larger data sets and training systems of size greater than 9 × 9; (2) exploiting patterns that are larger than 3 × 3, especially at the beginning of the game when the board is sparsely occupied and matching of large patterns is possible using, for instance, Zobrist hashing techniques [14]; (3) combining different players, such as players trained at different board sizes, or players trained on different phases of the game; and (4) developing better, non-exhaustive but deeper, search methods.
Acknowledgments

The work of PB and LW has been supported by a Laurel Wilkening Faculty Innovation award and awards from NSF, BREP, and Sun Microsystems to PB. We would like to thank Jianlin Chen for developing a web-based Go graphical user interface, Nicol Schraudolph for providing the 9 × 9 and 13 × 13 data, and David Stern for providing the 19 × 19 data.

References

[1] P. Baldi and G. Pollastri. The principled design of large-scale recursive neural network architectures: DAG-RNNs and the protein structure prediction problem. Journal of Machine Learning Research, 4:575–602, 2003.
[2] E. Berlekamp and D. Wolfe. Mathematical Go: Chilling Gets the Last Point. A K Peters, Wellesley, MA, 1994.
[3] B. Brugmann. Monte Carlo Go. 1993. URL: ftp://www.joy.ne.jp/welcome/igs/Go/computer/mcgo.tex.Z.
[4] Zhixing Chen. Semi-empirical quantitative theory of Go part 1: Estimation of the influence of a wall. ICGA Journal, 25(4):211–218, 2002.
[5] W. S. Cobb. The Book of GO. Sterling Publishing Co., New York, NY, 2002.
[6] K. Iwamoto. GO for Beginners. Pantheon Books, New York, NY, 1972.
[7] Aske Plaat, Jonathan Schaeffer, Wim Pijls, and Arie de Bruin. Exploiting graph properties of game trees. In 13th National Conference on Artificial Intelligence (AAAI'96), pages 234–239, 1996.
[8] G. Pollastri and P. Baldi. Prediction of contact maps by GIOHMMs and recurrent neural networks using lateral propagation from all four cardinal corners. Bioinformatics, 18:S62–S70, 2002.
[9] Liva Ralaivola, Lin Wu, and Pierre Baldi. SVM and pattern-enriched common fate graphs for the game of Go. In ESANN 2005, pages 485–490, 2005.
[10] Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 2nd edition, 2002.
[11] N. N. Schraudolph, P. Dayan, and T. J. Sejnowski. Temporal difference learning of position evaluation in the game of Go. In Advances in Neural Information Processing Systems 6, pages 817–824, 1994.
[12] David H. Stern, Thore Graepel, and David J. C. MacKay.
Modelling uncertainty in the game of Go. In Advances in Neural Information Processing Systems 17, pages 1353–1360, 2005.
[13] E. van der Werf, H. J. van den Herik, and J. Uiterwijk. Learning to score final positions in the game of Go. In Advances in Computer Games: Many Games, Many Challenges, pages 143–158, 2003.
[14] Albert L. Zobrist. A new hashing method with application for game playing. Technical Report 88, University of Wisconsin, April 1970. Reprinted in ICCA Journal, 13(2):69–73, 1990.
2006
Graph Laplacian Regularization for Large-Scale Semidefinite Programming

Kilian Q. Weinberger, Dept of Computer and Information Science, U of Pennsylvania, Philadelphia, PA 19104, kilianw@seas.upenn.edu
Fei Sha, Computer Science Division, UC Berkeley, CA 94720, feisha@cs.berkeley.edu
Qihui Zhu, Dept of Computer and Information Science, U of Pennsylvania, Philadelphia, PA 19104, qihuizhu@seas.upenn.edu
Lawrence K. Saul, Dept of Computer Science and Engineering, UC San Diego, La Jolla, CA 92093, saul@cs.ucsd.edu

Abstract

In many areas of science and engineering, the problem arises of how to discover low dimensional representations of high dimensional data. Recently, a number of researchers have converged on common solutions to this problem using methods from convex optimization. In particular, many results have been obtained by constructing semidefinite programs (SDPs) with low rank solutions. While the rank of matrix variables in SDPs cannot be directly constrained, it has been observed that low rank solutions emerge naturally by computing high variance or maximal trace solutions that respect local distance constraints. In this paper, we show how to solve very large problems of this type by a matrix factorization that leads to much smaller SDPs than those previously studied. The matrix factorization is derived by expanding the solution of the original problem in terms of the bottom eigenvectors of a graph Laplacian. The smaller SDPs obtained from this matrix factorization yield very good approximations to solutions of the original problem. Moreover, these approximations can be further refined by conjugate gradient descent. We illustrate the approach on localization in large scale sensor networks, where optimizations involving tens of thousands of nodes can be solved in just a few minutes.

1 Introduction

In many areas of science and engineering, the problem arises of how to discover low dimensional representations of high dimensional data.
Typically, this high dimensional data is represented in the form of large graphs or matrices. Such data arises in many applications, including manifold learning [12], robot navigation [3], protein clustering [6], and sensor localization [1]. In all these applications, the challenge is to compute low dimensional representations that are consistent with observed measurements of local proximity. For example, in robot path mapping, the robot’s locations must be inferred from the high dimensional description of its state in terms of sensorimotor input. In this setting, we expect similar state descriptions to map to similar locations. Likewise, in sensor networks, the locations of individual nodes must be inferred from the estimated distances between nearby sensors. Again, the challenge is to find a planar representation of the sensors that preserves local distances. In general, it is possible to formulate these problems as simple optimizations over the low dimensional representations ⃗xi of individual instances (e.g., robot states, sensor nodes). The most straightforward formulations, however, lead to non-convex optimizations that are plagued by local minima. For this reason, large-scale problems cannot be reliably solved in this manner. A more promising approach reformulates these problems as convex optimizations, whose global minima can be efficiently computed. Convexity is obtained by recasting the problems as optimizations over the inner product matrices Xij = ⃗xi · ⃗xj. The required optimizations can then be relaxed as instances of semidefinite programming [10], or SDPs. Two difficulties arise, however, from this approach. First, only low rank solutions for the inner product matrices X yield low dimensional representations for the vectors ⃗xi. Rank constraints, however, are non-convex; thus SDPs and other convex relaxations are not guaranteed to yield the desired low dimensional solutions. Second, the resulting SDPs do not scale very well to large problems. 
Despite the theoretical guarantees that follow from convexity, it remains prohibitively expensive to solve SDPs over matrices with (say) tens of thousands of rows and similarly large numbers of constraints. For the first problem of “rank regularization”, an apparent solution has emerged from recent work in manifold learning [12] and nonlinear dimensionality reduction [14]. This work has shown that while the rank of solutions from SDPs cannot be directly constrained, low rank solutions often emerge naturally by computing maximal trace solutions that respect local distance constraints. Maximizing the trace of the inner product matrix X has the effect of maximizing the variance of the low dimensional representation {⃗xi}. This idea was originally introduced as “semidefinite embedding” [12, 14], then later described as “maximum variance unfolding” [9] (and yet later as “kernel regularization” [6, 7]). Here, we adopt the name maximum variance unfolding (MVU) which seems to be currently accepted [13, 15] as best capturing the underlying intuition. This paper addresses the second problem mentioned above: how to solve very large problems in MVU. We show how to solve such problems by approximately factorizing the large n×n matrix X as X ≈QYQ⊤where Q is a pre-computed n×m rectangular matrix with m≪n. The factorization leaves only the much smaller m × m matrix Y to be optimized with respect to local distance constraints. With this factorization, and by collecting constraints using the Schur complement lemma, we show how to rewrite the original optimization over the large matrix X as a simple SDP involving the smaller matrix Y. This SDP can be solved very quickly, yielding an accurate approximation to the solution of the original problem. Moreover, if desirable, this solution can be further refined [1] by (non-convex) conjugate gradient descent in the vectors {⃗xi}. 
The main contribution of this paper is the matrix factorization that makes it possible to solve large problems in MVU. Where does the factorization come from? Either implicitly or explicitly, all problems of this sort specify a graph whose nodes represent the vectors {⃗x_i} and whose edges represent local distance constraints. The matrix factorization is obtained by expanding the low dimensional representation of these nodes (e.g., sensor locations) in terms of the m ≪ n bottom (smoothest) eigenvectors of the graph Laplacian. Due to the local distance constraints, one expects the low dimensional representation of these nodes to vary smoothly as one traverses edges in the graph. The presumption of smoothness justifies the partial orthogonal expansion in terms of the bottom eigenvectors of the graph Laplacian [5]. Similar ideas have been widely applied in graph-based approaches to semi-supervised learning [4]. Matrix factorizations of this type have also been previously studied for manifold learning; in [11, 15], though, the local distance constraints were not properly formulated to permit the large-scale applications considered here, while in [8], the approximation was not considered in conjunction with a variance-maximizing term to favor low dimensional representations. The approach in this paper applies generally to any setting in which low dimensional representations are derived from an SDP that maximizes variance subject to local distance constraints. For concreteness, we illustrate the approach on the problem of localization in large scale sensor networks, as recently described by [1]. Here, we are able to solve optimizations involving tens of thousands of nodes in just a few minutes. Similar applications to the SDPs that arise in manifold learning [12], robot path mapping [3], and protein clustering [6, 7] present no conceptual difficulty. This paper is organized as follows.
Section 2 reviews the problem of localization in large scale sensor networks and its formulation by [1] as an SDP that maximizes variance subject to local distance constraints. Section 3 shows how we solve large problems of this form—by approximating the inner product matrix of sensor locations as the product of smaller matrices, by solving the smaller SDP that results from this approximation, and by refining the solution from this smaller SDP using local search. Section 4 presents our experimental results on several simulated networks. Finally, section 5 concludes by discussing further opportunities for research. 2 Sensor localization via maximum variance unfolding Figure 1: Sensors distributed over US cities. Distances are estimated between nearby cities within a fixed radius. The problem of sensor localization is best illustrated by example; see Fig. 1. Imagine that sensors are located in major cities throughout the continental US, and that nearby sensors can estimate their distances to one another (e.g., via radio transmitters). From only this local information, the problem of sensor localization is to compute the individual sensor locations and to identify the whole network topology. In purely mathematical terms, the problem can be viewed as computing a low rank embedding in two or three dimensional Euclidean space subject to local distance constraints. We assume there are n sensors distributed in the plane and formulate the problem as an optimization over their planar coordinates ⃗x1, . . . , ⃗xn ∈ℜ2. (Sensor localization in three dimensional space can be solved in a similar way.) We define a neighbor relation i ∼j if the ith and jth sensors are sufficiently close to estimate their pairwise distance via limited-range radio transmission. From such (noisy) estimates of local pairwise distances {dij}, the problem of sensor localization is to infer the planar coordinates {⃗xi}. 
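The neighbor relation i ∼ j described above can be materialized from simulated sensor positions with a brute-force sweep. This is an illustrative sketch under our own simplifications (the function name is hypothetical, and the distances are noiseless; noise could be added to mimic radio-based estimates):

```python
import numpy as np

def neighbor_graph(X, radius):
    """Edges (i, j, d_ij) for all sensor pairs within communication range.

    X: (n, 2) sensor positions; two sensors are neighbors (i ~ j) if their
    Euclidean distance is below `radius`.
    """
    n = len(X)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = float(np.linalg.norm(X[i] - X[j]))
            if d < radius:
                edges.append((i, j, d))
    return edges
```

The O(n^2) sweep is fine for illustration; for tens of thousands of sensors a spatial index (e.g. a k-d tree) would be used instead.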
Work on this problem has typically focused on minimizing the sum-of-squares loss function [1] that penalizes large deviations from the estimated distances:

  min_{⃗x_1,…,⃗x_n}  Σ_{i∼j} ( ‖⃗x_i − ⃗x_j‖² − d_ij² )²        (1)

In some applications, the locations of a few sensors are also known in advance. For simplicity, in this work we consider the scenario where no such "anchor points" are available as prior knowledge, and the goal is simply to position the sensors up to a global rotation, reflection, and translation. Thus, to the above optimization, without loss of generality we can add the centering constraint:

  ‖ Σ_i ⃗x_i ‖² = 0.        (2)

It is straightforward to extend our approach to incorporate anchor points, which generally leads to even better solutions. In this case, the centering constraint is not needed. The optimization in eq. (1) is not convex; hence, it is likely to be trapped by local minima. By relaxing the constraint that the sensor locations ⃗x_i lie in the ℜ² plane, we obtain a convex optimization that is much more tractable [1]. This is done by rewriting the optimization in eqs. (1–2) in terms of the elements of the inner product matrix X_ij = ⃗x_i · ⃗x_j. In this way, we obtain:

  Minimize:  Σ_{i∼j} ( X_ii − 2X_ij + X_jj − d_ij² )²
  subject to: (i) Σ_ij X_ij = 0 and (ii) X ⪰ 0.        (3)

The first constraint centers the sensors on the origin, as in eq. (2), while the second constraint specifies that X is positive semidefinite, which is necessary to interpret it as an inner product matrix in Euclidean space. In this case, the vectors {⃗x_i} are determined (up to rotation) by singular value decomposition. The convex relaxation of the optimization in eqs. (1–2) drops the constraint that the vectors ⃗x_i lie in the ℜ² plane. Instead, the vectors will more generally lie in a subspace of dimensionality equal to the rank of the solution X.
To obtain planar coordinates, one can project these vectors into their two dimensional subspace of maximum variance, obtained from the top two eigenvectors of X. Unfortunately, if the rank of X is high, this projection loses information. As the error of the projection grows with the rank of X, we would like to enforce that X has low rank. However, the rank of a matrix is not a convex function of its elements; thus it cannot be directly constrained as part of a convex optimization. Mindful of this problem, the approach to sensor localization in [1] borrows an idea from recent work in unsupervised learning [12, 14]. Very simply, an extra term is added to the loss function that favors solutions with high variance, or equivalently, solutions with high trace. (The trace is proportional to the variance assuming that the sensors are centered on the origin, since tr(X) = Σ_i ‖⃗x_i‖².) The extra variance term in the loss function favors low rank solutions; intuitively, it is based on the observation that a flat piece of paper has greater diameter than a crumpled one. Following this intuition, we consider the following optimization:

  Maximize:  tr(X) − ν Σ_{i∼j} ( X_ii − 2X_ij + X_jj − d_ij² )²
  subject to: (i) Σ_ij X_ij = 0 and (ii) X ⪰ 0.        (4)

The parameter ν > 0 balances the trade-off between maximizing variance and preserving local distances. This general framework for trading off global variance versus local rigidity has come to be known as maximum variance unfolding (MVU) [9, 15, 13]. As demonstrated in [1, 9, 6, 14], these types of optimizations can be written as semidefinite programs (SDPs) [10]. Many general-purpose solvers for SDPs exist in the public domain (e.g., [2]), but even for systems with sparse constraints, they do not scale very well to large problems. Thus, for small networks, this approach to sensor localization is viable, but for large networks (n ∼ 10^4), exact solutions are prohibitively expensive. This leads us to consider the methods in the next section.
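For reference, the MVU objective of eq. (4) is cheap to evaluate given X and the edge list. The sketch below (a hypothetical helper, not the paper's solver) makes the variance/distance trade-off explicit; note that X_ii − 2X_ij + X_jj is exactly ‖⃗x_i − ⃗x_j‖² when X is the Gram matrix of the positions:

```python
import numpy as np

def mvu_objective(X, edges, nu):
    """Objective of eq. (4): trace (variance) term minus distance penalty.

    X: (n, n) inner product matrix; edges: iterable of (i, j, d_ij) triples
    for neighboring sensors; nu: trade-off parameter (nu > 0).
    """
    penalty = sum((X[i, i] - 2.0 * X[i, j] + X[j, j] - d ** 2) ** 2
                  for i, j, d in edges)
    return float(np.trace(X)) - nu * penalty
```

When X is built from exact positions with exact distances, the penalty vanishes and the objective reduces to the trace, i.e. the total variance of the centered configuration.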
3 Large-scale maximum variance unfolding

Most SDP solvers are based on interior-point methods whose time-complexity scales cubically in the matrix size and number of constraints [2]. To solve large problems in MVU, even approximately, we must therefore reduce them to SDPs over small matrices with small numbers of constraints.

3.1 Matrix factorization

To obtain an optimization involving smaller matrices, we appeal to ideas in spectral graph theory [5]. The sensor network defines a connected graph whose edges represent local pairwise connectivity. Whenever two nodes share an edge in this graph, we expect the locations of these nodes to be relatively similar. We can view the location of the sensors as a function that is defined over the nodes of this graph. Because the edges represent local distance constraints, we expect this function to vary smoothly as we traverse edges in the graph. The idea of graph regularization in this context is best understood by analogy. If a smooth function is defined on a bounded interval of ℜ¹, then from real analysis, we know that it can be well approximated by a low order Fourier series. A similar type of low order approximation exists if a smooth function is defined over the nodes of a graph. This low-order approximation on graphs will enable us to simplify the SDPs for MVU, just as low-order Fourier expansions have been used to regularize many problems in statistical estimation. Function approximations on graphs are most naturally derived from the eigenvectors of the graph Laplacian [5]. For unweighted graphs, the graph Laplacian L computes the quadratic form

  f^T L f = Σ_{i∼j} ( f_i − f_j )²        (5)

on functions f ∈ ℜ^n defined over the nodes of the graph. The eigenvectors of L provide a set of basis functions over the nodes of the graph, ordered by smoothness. Thus, smooth functions f can be well approximated by linear combinations of the bottom eigenvectors of L.
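The basis matrix used in the next subsection can be computed directly from the unweighted connectivity graph. This is a dense-NumPy sketch with a hypothetical helper name; for networks with n ∼ 10^4 nodes a sparse eigensolver would be used instead:

```python
import numpy as np

def laplacian_basis(n, edges, m):
    """Bottom m eigenvectors of the unweighted graph Laplacian.

    Returns the n x m matrix Q of the m smoothest basis functions,
    excluding the uniform eigenvector with zero eigenvalue.
    """
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    return vecs[:, 1:m + 1]              # skip the constant eigenvector
```

The columns of `Q` are mutually orthonormal, which is what later allows tr(QYQ^T) to collapse to tr(Y) and the centering constraint to be dropped.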
Expanding the sensor locations ⃗x_i in terms of these eigenvectors yields a compact factorization for the inner product matrix X. Suppose that ⃗x_i ≈ Σ_{α=1}^{m} Q_iα ⃗y_α, where the columns of the n × m rectangular matrix Q store the m bottom eigenvectors of the graph Laplacian (excluding the uniform eigenvector with zero eigenvalue). Note that in this approximation, the matrix Q can be cheaply precomputed from the unweighted connectivity graph of the sensor network, while the vectors ⃗y_α play the role of unknowns that depend in a complicated way on the local distance estimates d_ij. Let Y denote the m × m inner product matrix of these vectors, with elements Y_αβ = ⃗y_α · ⃗y_β. From the low-order approximation to the sensor locations, we obtain the matrix factorization:

  X ≈ Q Y Q^T.        (6)

Eq. (6) approximates the inner product matrix X as the product of much smaller matrices. Using this approximation for localization in large scale networks, we can solve an optimization for the much smaller m × m matrix Y, as opposed to the original n × n matrix X. The optimization for the matrix Y is obtained by substituting eq. (6) wherever the matrix X appears in eq. (4). Some simplifications occur due to the structure of the matrix Q. Because the columns of Q store mutually orthogonal eigenvectors, it follows that tr(QYQ^T) = tr(Y). Because we do not include the uniform eigenvector in Q, it follows that QYQ^T automatically satisfies the centering constraint, which can therefore be dropped. Finally, it is sufficient to constrain Y ⪰ 0, which implies that QYQ^T ⪰ 0. With these simplifications, we obtain the following optimization:

  Maximize:  tr(Y) − ν Σ_{i∼j} ( (QYQ^T)_ii − 2(QYQ^T)_ij + (QYQ^T)_jj − d_ij² )²
  subject to: Y ⪰ 0        (7)

Eq. (6) can alternately be viewed as a form of regularization, as it constrains neighboring sensors to have nearby locations even when the estimated local distances d_ij suggest otherwise (e.g., due to noise). Similar forms of graph regularization have been widely used in semi-supervised learning [4].
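Substituting the factorization into eq. (4) gives the objective of eq. (7), which only ever needs the edge entries of QYQ^T, never the full n × n matrix. A sketch with a hypothetical helper name:

```python
import numpy as np

def factored_objective(Y, Q, edges, nu):
    """Objective of eq. (7), evaluated without forming the full Q Y Q^T.

    Uses tr(Q Y Q^T) = tr(Y), valid because the columns of Q are orthonormal,
    and computes (Q Y Q^T)_ij = q_i^T Y q_j only along the edges.
    """
    penalty = 0.0
    for i, j, d in edges:
        qi, qj = Q[i], Q[j]
        xii = qi @ Y @ qi
        xij = qi @ Y @ qj
        xjj = qj @ Y @ qj
        penalty += (xii - 2.0 * xij + xjj - d ** 2) ** 2
    return float(np.trace(Y)) - nu * penalty
```

Per edge this costs O(m^2) instead of the O(n^2) needed to materialize X, which is the point of the factorization.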
3.2 Formulation as SDP

As noted earlier, our strategy for solving large problems in MVU depends on casting the required optimizations as SDPs over small matrices with few constraints. The matrix factorization in eq. (6) leads to an optimization over the $m \times m$ matrix Y, as opposed to the $n \times n$ matrix X. In this section, we show how to cast this optimization as a correspondingly small SDP. This requires us to reformulate the quadratic optimization over $Y \succeq 0$ in eq. (4) in terms of a linear objective function with linear or positive semidefinite constraints. We start by noting that the objective function in eq. (7) is a quadratic function of the elements of the matrix Y. Let $\mathcal{Y} \in \Re^{m^2}$ denote the vector obtained by concatenating all the columns of Y. With this notation, the objective function (up to an additive constant) takes the form

$$b^\top \mathcal{Y} - \mathcal{Y}^\top A \mathcal{Y}, \qquad (8)$$

where $A \in \Re^{m^2 \times m^2}$ is the positive semidefinite matrix that collects all the quadratic coefficients in the objective function and $b \in \Re^{m^2}$ is the vector that collects all the linear coefficients. Note that the trace term in the objective function, tr(Y), is absorbed by the vector b. With the above notation, we can write the optimization in eq. (7) as an SDP in standard form. As in [8], this is done in two steps. First, we introduce a dummy variable $\ell$ that serves as a lower bound on the quadratic piece of the objective function in eq. (8). Next, we express this bound as a linear matrix inequality via the Schur complement lemma. Combining these steps, we obtain the SDP:

$$\text{Maximize: } b^\top \mathcal{Y} - \ell \quad \text{subject to: (i) } Y \succeq 0 \text{ and (ii) } \begin{pmatrix} I & A^{1/2}\mathcal{Y} \\ (A^{1/2}\mathcal{Y})^\top & \ell \end{pmatrix} \succeq 0. \qquad (9)$$

In the second constraint of this SDP, we have used I to denote the $m^2 \times m^2$ identity matrix and $A^{1/2}$ to denote the matrix square root. Thus, via the Schur lemma, this constraint expresses the lower bound $\ell \geq \mathcal{Y}^\top A \mathcal{Y}$, and the SDP is seen to be equivalent to the optimization in eqs. (7-8). The SDP in eq. (9) represents a drastic reduction in complexity from the optimization in eq. (7).
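The Schur complement step can likewise be checked numerically. In the sketch below (the dimension and names are illustrative, not taken from the paper), the block matrix in constraint (ii) is positive semidefinite exactly when the dummy variable satisfies $\ell \geq \mathcal{Y}^\top A \mathcal{Y}$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5                                    # stand-in for the dimension m^2
C = rng.standard_normal((d, d))
A = C @ C.T + np.eye(d)                  # positive definite quadratic coefficients
w, V = np.linalg.eigh(A)
A_sqrt = V @ np.diag(np.sqrt(w)) @ V.T   # symmetric matrix square root of A

def schur_block(y, ell):
    """The block matrix [[I, A^(1/2) y], [(A^(1/2) y)^T, ell]] from eq. (9)."""
    v = A_sqrt @ y
    top = np.hstack([np.eye(d), v[:, None]])
    bottom = np.hstack([v[None, :], np.array([[ell]])])
    return np.vstack([top, bottom])

y = rng.standard_normal(d)
q = y @ A @ y                            # the quadratic piece y^T A y
```

Setting $\ell$ just above q makes the block matrix PSD; setting it below q produces a negative eigenvalue, as the test verifies.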
The only variables of the SDP are the $m(m+1)/2$ elements of Y and the unknown scalar $\ell$. The only constraints are the positive semidefinite constraint on Y and the linear matrix inequality of size $m^2 \times m^2$. Note that the complexity of this SDP does not depend on the number of nodes or edges in the network. As a result, this approach scales very well to large problems in sensor localization. In the above formulation, it is worth noting the important role played by quadratic penalties. The use of the Schur lemma in eq. (9) was conditioned on the quadratic form of the objective function in eq. (7). Previous work on MVU has enforced the distance constraints as strict equalities [12], as one-sided inequalities [9, 11], and as soft constraints with linear penalties [14]. Expressed as SDPs, these earlier formulations of MVU involved as many constraints as edges in the underlying graph, even with the matrix factorization in eq. (6). Thus, the speed-ups obtained here over previous approaches are not merely due to graph regularization, but more precisely to its use in conjunction with quadratic penalties, all of which can be collected in a single linear matrix inequality via the Schur lemma.

3.3 Gradient-based improvement

While the matrix factorization in eq. (6) leads to much more tractable optimizations, it only provides an approximation to the global minimum of the original loss function in eq. (1). As suggested in [1], we can refine the approximation from eq. (9) by using it as an initial starting point for gradient descent in eq. (1). In general, gradient descent on non-convex functions can converge to undesirable local minima. In this setting, however, the solution of the SDP in eq. (9) provides a highly accurate initialization. Though no theoretical guarantees can be made, in practice we have observed that this initialization often lies in the basin of attraction of the true global minimum. Our most robust results were obtained by a two-step process.
First, starting from the m-dimensional solution of eq. (9), we used conjugate gradient methods to maximize the objective function in eq. (4). Though this objective function is written in terms of the inner product matrix X, the hill-climbing in this step was performed in terms of the vectors $\vec{x}_i \in \Re^m$. While not always necessary, this first step was mainly helpful for localization in sensor networks with irregular (and particularly non-convex) boundaries. It seems generally difficult to represent such boundaries in terms of the bottom eigenvectors of the graph Laplacian. Next, we projected the results of this first step into the $\Re^2$ plane and used conjugate gradient methods to minimize the loss function in eq. (1). This second step helps to correct patches of the network where the graph regularization leads to oversmoothing and/or the rank constraint is not well modeled by MVU.

4 Results

We evaluated our algorithm on two simulated sensor networks of different size and topology. We did not assume any prior knowledge of sensor locations (e.g., from anchor points). We added white noise to each local distance measurement with a standard deviation of 10% of the true local distance.

Figure 2: Sensor locations inferred for the n = 1055 largest cities in the continental US. On average, each sensor estimated local distances to 18 neighbors, with measurements corrupted by 10% Gaussian noise; see text. Left: sensor locations obtained by solving the SDP in eq. (9) using the m = 10 bottom eigenvectors of the graph Laplacian (computation time 4s). Despite the obvious distortion, the solution provides a good initial starting point for gradient-based improvement. Right: sensor locations after post-processing by conjugate gradient descent (additional computation time 3s).
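The post-processing step can be sketched as follows. This is a hedged illustration rather than the paper's code: we use SciPy's conjugate gradient routine, a small synthetic network (`truth`, `pairs`, and the noise level are our own stand-ins), and we assume the loss takes the soft quadratic-penalty form of the term in eq. (7); the actual eq. (1) is defined earlier in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 40
truth = rng.uniform(-1, 1, size=(n, 2))      # hypothetical ground-truth layout

# Noisy local distance estimates between nearby pairs of sensors.
pairs, dists = [], []
for i in range(n):
    for j in range(i + 1, n):
        d = np.linalg.norm(truth[i] - truth[j])
        if d < 0.6:
            pairs.append((i, j))
            dists.append(d * (1 + 0.02 * rng.standard_normal()))
pairs = np.array(pairs)
dists = np.array(dists)

def loss(flat):
    """Soft quadratic penalty on mismatches between squared distances."""
    X = flat.reshape(n, 2)
    diff = X[pairs[:, 0]] - X[pairs[:, 1]]
    return np.sum((np.sum(diff ** 2, axis=1) - dists ** 2) ** 2)

# In the paper, the SDP solution is the starting point; here a small
# perturbation of the truth stands in for that good initialization.
x0 = (truth + 0.05 * rng.standard_normal(truth.shape)).ravel()
res = minimize(loss, x0, method="CG")
```

As in the paper, a good initialization lets the local search drive the loss down rapidly; from a random start, the same routine can get trapped in poor local minima.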
Figure 3: Results on a simulated network with n = 20000 uniformly distributed nodes inside a centered unit square. See text for details.

The first simulated network, shown in Fig. 1, placed nodes at scaled locations of the n = 1055 largest cities in the continental US. Each node estimated the local distance to up to 18 other nodes within a radius of size r = 0.09. The SDP in eq. (9) was solved using the m = 10 bottom eigenvectors of the graph Laplacian. Fig. 2 shows the solution from this SDP (on the left), as well as the final result after gradient-based improvement (on the right), as described in section 3.3. From the figure, it can be seen that the solution of the SDP recovers the general topology of the network but tends to clump nodes together, especially near the boundaries. After gradient-based improvement, however, the inferred locations differ very little from the true locations. The construction and solution of the SDP required 4s of total computation time on a 2.4 GHz Pentium 4 desktop computer, while the post-processing by conjugate gradient descent took an additional 3s.

Figure 4: Left: the value of the loss function in eq. (1) from the solution of the SDP in eq. (9). Right: the computation time to solve the SDP. Both are plotted versus the number of eigenvectors, m, in the matrix factorization.

The second simulated network, shown in Fig. 3, placed nodes at n = 20000 uniformly sampled points inside the unit square. The nodes were then centered on the origin. Each node estimated the local distance to up to 20 other nodes within a radius of size r = 0.06. The SDP in eq. (9) was solved using the m = 10 bottom eigenvectors of the graph Laplacian. The computation time to construct and solve the SDP was 19s. The follow-up conjugate gradient optimization required 52s for 100 line searches. Fig.
3 illustrates the absolute positional errors of the sensor locations computed in three different ways: the solution from the SDP in eq. (9), the refined solution obtained by conjugate gradient descent, and the "baseline" solution obtained by conjugate gradient descent from a random initialization. For these plots, the sensors were colored so that the ground-truth positioning reveals the word CONVEX in the foreground with a radial color gradient in the background. The refined solution in the third panel is seen to yield highly accurate results. (Note: the representations in the second and fourth panels were scaled by factors of 0.50 and 1028, respectively, to have the same size as the others.) We also evaluated the effect of the number of eigenvectors, m, used in the SDP. (We focused on the role of m, noting that previous studies [1, 7] have thoroughly investigated the role of parameters such as the weight constant ν, the sensor radius r, and the noise level.) For the simulated network with nodes at US cities, Fig. 4 plots the value of the loss function in eq. (1) obtained from the solution of eq. (9) as a function of m. It also plots the computation time required to create and solve the SDP. The figure shows that more eigenvectors lead to better solutions, but at the expense of increased computation time. In our experience, there is a "sweet spot" around m ≈ 10 that best manages this tradeoff. Here, the SDP can typically be solved in seconds while still providing a sufficiently accurate initialization for rapid convergence of subsequent gradient-based methods. Finally, though not reported here due to space constraints, we also tested our approach on various data sets in manifold learning from [12]. Our approach generally reduced previous computation times of minutes or hours to seconds with no noticeable loss of accuracy.

5 Discussion

In this paper, we have proposed an approach for solving large-scale problems in MVU.
The approach makes use of a matrix factorization computed from the bottom eigenvectors of the graph Laplacian. The factorization yields accurate approximate solutions which can be further refined by local search. The power of the approach was illustrated by simulated results on sensor localization. The networks in section 4 have far more nodes and edges than could be analyzed by previously formulated SDPs for these types of problems [1, 3, 6, 14]. Beyond the problem of sensor localization, our approach applies quite generally to other settings where low dimensional representations are inferred from local distance constraints. Thus we are hopeful that the ideas in this paper will find further use in areas such as robotic path mapping [3], protein clustering [6, 7], and manifold learning [12]. Acknowledgments This work was supported by NSF Award 0238323. References [1] P. Biswas, T.-C. Liang, K.-C. Toh, T.-C. Wang, and Y. Ye. Semidefinite programming approaches for sensor network localization with noisy distance measurements. IEEE Transactions on Automation Science and Engineering, 3(4):360-371, 2006. [2] B. Borchers. CSDP, a C library for semidefinite programming. Optimization Methods and Software, 11(1):613-623, 1999. [3] M. Bowling, A. Ghodsi, and D. Wilkinson. Action respecting embedding. In Proceedings of the Twenty Second International Conference on Machine Learning (ICML-05), pages 65-72, Bonn, Germany, 2005. [4] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006. [5] F. R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997. [6] F. Lu, S. Keles, S. Wright, and G. Wahba. Framework for kernel regularization with application to protein clustering. Proceedings of the National Academy of Sciences, 102:12332-12337, 2005. [7] F. Lu, Y. Lin, and G. Wahba. Robust manifold unfolding with kernel regularization. Technical Report 1108, Department of Statistics, University of Wisconsin-Madison, 2005.
[8] F. Sha and L. K. Saul. Analysis and extension of spectral methods for nonlinear dimensionality reduction. In Proceedings of the Twenty Second International Conference on Machine Learning (ICML-05), pages 785-792, Bonn, Germany, 2005. [9] J. Sun, S. Boyd, L. Xiao, and P. Diaconis. The fastest mixing Markov process on a graph and a connection to a maximum variance unfolding problem. SIAM Review, 48(4):681-699, 2006. [10] L. Vandenberghe and S. P. Boyd. Semidefinite programming. SIAM Review, 38(1):49-95, March 1996. [11] K. Q. Weinberger, B. D. Packer, and L. K. Saul. Nonlinear dimensionality reduction by semidefinite programming and kernel matrix factorization. In Z. Ghahramani and R. Cowell, editors, Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics (AISTATS-05), pages 381-388, Barbados, West Indies, 2005. [12] K. Q. Weinberger and L. K. Saul. Unsupervised learning of image manifolds by semidefinite programming. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR-04), volume 2, pages 988-995, Washington D.C., 2004. Extended version in International Journal of Computer Vision, 70(1):77-90, 2006. [13] K. Q. Weinberger and L. K. Saul. An introduction to nonlinear dimensionality reduction by maximum variance unfolding. In Proceedings of the Twenty First National Conference on Artificial Intelligence (AAAI-06), Cambridge, MA, 2006. [14] K. Q. Weinberger, F. Sha, and L. K. Saul. Learning a kernel matrix for nonlinear dimensionality reduction. In Proceedings of the Twenty First International Conference on Machine Learning (ICML-04), pages 839-846, Banff, Canada, 2004. [15] L. Xiao, J. Sun, and S. Boyd. A duality view of spectral methods for dimensionality reduction. In Proceedings of the Twenty Third International Conference on Machine Learning (ICML-06), pages 1041-1048, Pittsburgh, PA, 2006.
Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

Gloria Haro, Gregory Randall, and Guillermo Sapiro
IMA and Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455
haro@ima.umn.edu, randall@fing.edu.uy, guille@umn.edu

Abstract

The study of point cloud data sampled from a stratification, a collection of manifolds with possibly different dimensions, is pursued in this paper. We present a technique for simultaneously soft clustering and estimating the mixed dimensionality and density of such structures. The framework is based on a maximum likelihood estimation of a Poisson mixture model. The presentation of the approach is completed with artificial and real examples demonstrating the importance of extending manifold learning to stratification learning.

1 Introduction

Data in high dimensions is becoming ubiquitous, from image analysis and finance to computational biology and neuroscience. This data is often given or represented as samples embedded in a high dimensional Euclidean space (point cloud data), though it is assumed to belong to lower dimensional manifolds. Thus, in recent years, there have been significant efforts in the development of methods to analyze these point clouds and their underlying manifolds. These include numerous techniques for the estimation of the intrinsic dimension of the data and also its projection onto lower dimensional representations. These disciplines are often called manifold learning and dimensionality reduction. A few examples include [2, 3, 4, 9, 10, 11, 12, 16]. The vast majority of the manifold learning and dimensionality reduction techniques developed in the literature assume, either explicitly or implicitly, that the given point cloud consists of samples from a single manifold. It is easy to see, however, that a significant part of the interesting data has mixed dimensionality and complexity.
The work presented here deals with this more general case, where different dimensionalities/complexities are present in the point cloud data. That is, we have samples not of a manifold but of a stratification. The main aim is to cluster the data according to the complexity (dimensionality) of the underlying, possibly multiple, manifolds. Such clustering can be used both to better understand the varying dimensionality and complexity of the data, e.g., states in neural recordings or different human activities in video analysis, and as a pre-processing step for the above-mentioned manifold learning and dimensionality reduction techniques. This clustering-by-dimensionality task has recently been explored in a handful of works. Barbará and Chen, [1], proposed a hard clustering technique based on the fractal dimension (box-counting). Starting from an initial clustering, they incrementally add points to the cluster for which the change in the fractal dimension after adding the point is lowest. They also find the number of clusters and the intrinsic dimension of the underlying manifolds. Gionis et al., [7], use local growth curves to estimate the local correlation dimension and density for each point. The new two-dimensional representation of the data is clustered using standard techniques. Souvenir and Pless, [14], use an Expectation Maximization (EM) type of technique, combined with weighted geodesic multidimensional scaling. The weights measure how well each point fits the underlying manifold defined by the current set of points in the cluster. After clustering, each cluster's dimensionality is estimated following [10]. Huang et al., [8], cluster linear subspaces with an algebraic geometric method based on polynomial differentiation and a Generalized PCA. They search for the best combination of linear subspaces that explains the data, and find the number of linear subspaces and their intrinsic dimension.
The work of Mordohai and Medioni, [11], estimates the local dimension using tensor voting. These recent works have clearly shown the necessity to go beyond manifold learning, into "stratification learning." In our work, we do not assume linear subspaces, and we simultaneously estimate the soft clustering and the intrinsic dimension and density of the clusters. This collection of attributes is not shared by any of the pioneering works just described. Our approach is an extension of Levina and Bickel's local dimension estimator [10]. They proposed to compute the intrinsic dimension at each point using a Maximum Likelihood (ML) estimator based on a Poisson distribution. The local estimators are then averaged, under the assumption of a single uniform manifold. We propose instead to compute the ML on the whole point cloud data at the same time (and not one for each point independently), using a Poisson mixture model, which permits different classes, each with its own dimension and sampling density. This technique automatically gives a soft clustering according to dimensionality and density, with an estimation of both quantities for each class. Our approach assumes that the number of classes is given, but we can still discover the actual number of underlying manifolds: if we search for a larger than needed number of classes, we obtain some classes with the same dimensionality and density, or some classes with very few representatives, as shown in the examples presented later. The remainder of this paper is organized as follows: In Section 2 we review the method proposed by Levina and Bickel, [10], which gives a local estimation of the intrinsic dimension and has inspired our work. In Section 3 we present our core contribution of simultaneous soft clustering and dimensionality and density estimation. We present experiments with synthetic and real data in Section 4, and finally, some conclusions are presented in Section 5.
2 Local intrinsic dimension estimation

Levina and Bickel (LB), [10], proposed a geometric and probabilistic method which estimates the local dimension (and density) of point cloud data¹. This is the approach we extend here. It is based on the idea that if we sample an m-dimensional manifold with T points, the proportion of points that fall into a ball around a point $x_t$ is

$$\frac{k}{T} \approx f(x_t)\, V(m)\, R_k(x_t)^m,$$

where the given point cloud, embedded in high dimension D, is $X = \{x_t \in \mathbb{R}^D;\ t = 1, \ldots, T\}$, k is the number of points inside the ball, $f(x_t)$ is the local sampling density at point $x_t$, $V(m)$ is the volume of the unit sphere in $\mathbb{R}^m$, and $R_k(x_t)$ is the Euclidean distance from $x_t$ to its k-th nearest neighbor (kNN). Then, they consider the inhomogeneous process $N(R, x_t)$, which counts the number of points falling into a small D-dimensional sphere $B(R, x_t)$ of radius R centered at $x_t$. This is a binomial process, and some assumptions need to be made to proceed. First, if $T \to \infty$, $k \to \infty$, and $k/T \to 0$, then we can approximate the binomial process by a Poisson process. Second, the density $f(x_t)$ is assumed constant inside the sphere, a valid assumption for small R. With these assumptions, the rate λ of the counting process $N(R, x_t)$ can be written as $\lambda(R, x_t) = f(x_t) V(m)\, m R^{m-1}$. The log-likelihood of the process $N(R, x_t)$ is then given by

$$L(m(x_t), \theta(x_t)) = \int_0^R \log \lambda(r, x_t)\, dN(r, x_t) - \int_0^R \lambda(r, x_t)\, dr,$$

where $\theta(x_t) := \log f(x_t)$ is the density parameter and the first integral is a Riemann-Stieltjes integral [13]. The maximum likelihood estimators satisfy $\partial L/\partial \theta = 0$ and $\partial L/\partial m = 0$, leading to a computation for the local dimension at point $x_t$, $m(x_t)$, depending on all the neighbors within a distance R from $x_t$ [10]. In practice, it is more convenient to compute a fixed number k of nearest neighbors. Thus, the local dimension at point $x_t$ is

$$m(x_t) = \left[ \frac{1}{k-2} \sum_{j=1}^{k-1} \log \frac{R_k(x_t)}{R_j(x_t)} \right]^{-1}.$$

This estimator is asymptotically unbiased (see [10] for more details).
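A minimal sketch of the LB estimator just described (brute-force distance computation; the function and variable names are ours, not the paper's). On points sampled from a plane embedded in 3-D, the averaged estimate should be close to 2:

```python
import numpy as np

def lb_local_dimension(X, k):
    """Levina-Bickel local dimension at each point:
    m(x_t) = [ 1/(k-2) * sum_{j=1}^{k-1} log(R_k / R_j) ]^{-1},
    with R_j the distance from x_t to its j-th nearest neighbor."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D.sort(axis=1)                 # column 0 is the point itself (distance 0)
    R = D[:, 1:k + 1]              # R_1 ... R_k
    logs = np.log(R[:, -1:] / R[:, :-1])   # log(R_k / R_j), j = 1..k-1
    return 1.0 / (logs.sum(axis=1) / (k - 2))

# A 2-D plane embedded in 3-D: estimates should concentrate near 2.
rng = np.random.default_rng(3)
pts = np.column_stack([rng.uniform(0, 1, 1500),
                       rng.uniform(0, 1, 1500),
                       np.zeros(1500)])
m_hat = lb_local_dimension(pts, k=15)
```

Averaging the local estimates, as the paper discusses next, only makes sense when all points come from a single manifold.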
If the data points belong to the same manifold, we can average over all $m(x_t)$ in order to obtain a more robust estimator. However, if there are two or more manifolds with different dimensions, the average does not make sense unless we first cluster according to dimensionality and then estimate the dimensionality for each cluster. We briefly toy with this idea now, as a warm-up to our simultaneous soft clustering and estimation technique described in Section 3.

¹M. Hein pointed out to us at NIPS that this dimension estimator is equivalent to the one proposed in [15].

2.1 A two-step clustering approach

As a first simple approach to detect and cluster mixed dimensionality (and/or densities), we can combine a local dimensionality estimator such as the one just described with a clustering technique. For the second step we use the Information Bottleneck (IB) [17], an elegant framework that can eventually combine several local dimension estimators and other possible features such as density [6]. The IB is a technique for clustering (compressing) a variable according to another related variable. Let X be the set of variables to be clustered and S the relevance variable that gives some information about X. An example is the information that different words provide about documents of different topics. We call $\tilde{X}$ the clustered version of X. The optimal $\tilde{X}$ is the one that minimizes the functional

$$\mathcal{L}(p(\tilde{x}_t | x_t)) = I(\tilde{X}; X) - \beta I(\tilde{X}; S),$$

where $I(\cdot\,;\cdot)$ denotes mutual information and $p(\cdot)$ the probability density function. There is a trade-off, controlled by β, between compressing the representation and preserving the meaningful information. In our context, we want to cluster the data according to the intrinsic dimensionality (and/or density). Then, our relevance variable S will be the set of (quantized) estimated local intrinsic dimensions.
For the joint distribution $p(x_t, s_i)$, $s_i \in S$, we use the histogram of local dimensions inside a ball of radius R′ around $x_t$,² computed by the LB technique. Examples of this technique will be presented in the experimental section. Instead of a two-step algorithm, with local dimensionality and/or density estimation followed by clustering, we now propose a maximum likelihood technique that combines these steps.

3 Poisson mixture model

The core approach that we propose to study stratifications (mixed manifolds) is based on extending the LB technique [10]. Instead of modelling each point and its local ball of radius R as a Poisson process and computing the ML for each ball separately, we consider all the possible balls at the same time in the same ML function. As the probability density function for the whole point cloud we consider a mixture of Poisson distributions with different parameters (dimension and density). Thus, we allow the presence of different intrinsic dimensions and densities in the dataset. These are automatically computed while being used for soft clustering. Let us denote by J the number of different Poisson distributions considered in the mixture, each one with a (possibly) different dimension m and density parameter θ. We consider the vector set of parameters $\psi = \{\psi^j = (\pi^j, \theta^j, m^j);\ j = 1, \ldots, J\}$, where $\pi^j$ is the mixture coefficient for class j (the proportion of distribution j in the dataset), $\theta^j$ is its density parameter ($f^j = e^{\theta^j}$), and $m^j$ is its dimension. We denote by $p(\cdot)$ the probability density function and by $P(\cdot)$ the probability. As in the LB approach, the observable event will be $y_t = N(R, x_t)$, the number of points inside the ball $B(R, x_t)$ of radius R centered at point $x_t$. The total number of observations is T′ and $Y = \{y_t;\ t = 1, \ldots, T'\}$ is the observation sequence. If we consider every possible ball in the dataset, then T′ coincides with the total number of points T in the point cloud.
From now on, we consider this case and T′ ≡ T. The density function of the Poisson mixture model is given by

$$p(y_t | \psi) = \sum_{j=1}^{J} \pi^j p(y_t | \theta^j, m^j) = \sum_{j=1}^{J} \pi^j \exp\!\left( \int_0^R \log \lambda^j(r)\, dN(r, x_t) \right) \exp\!\left( -\int_0^R \lambda^j(r)\, dr \right),$$

where $\lambda^j(r) = e^{\theta^j} V(m^j)\, m^j r^{m^j - 1}$. Usually, problems involving a mixture of experts are solved by the Expectation Maximization (EM) algorithm [5]. In our context, there are two kinds of unknown parameters: the membership function of an expert (class), $\pi^j$, and the parameters of each expert, $m^j$ and $\theta^j$. The membership information is originally unknown, thereby making the parameter estimation for each class difficult. The EM algorithm computes its expected value (E-step) and then this value is used in the parameter estimation procedure (M-step). These two steps are iterated.

²The value of R′ determines the amount of regularity in the classification.

If Y contains T statistically independent variables, then the incomplete-data log-likelihood is:

$$L(Y | \psi) = \log p(Y | \psi) = \log \prod_{t=1}^{T} p(y_t | \psi) = Q(\psi) + R(\psi),$$

$$Q(\psi) := \sum_{Z} P(Z | Y, \psi) \log p(Z, Y | \psi), \qquad R(\psi) := -\sum_{Z} P(Z | Y, \psi) \log P(Z | Y, \psi),$$

where $Z = \{z_t \in C;\ t = 1, \ldots, T\}$ is the missing data (hidden-state information), and the set of class labels is $C = \{C_1, C_2, \ldots, C_J\}$. Here, $z_t = C_j$ means that the j-th mixture generates $y_t$. We call Q the expectation of $\log p(Z, Y | \psi)$ with respect to Z. The EM algorithm is based on maximizing Q, since by improving (maximizing) the function Q at each iteration, the likelihood function L is also improved. The probability density that appears in the function Q can be written as $p(Z, Y | \psi) = \prod_{t=1}^{T} p(z_t, y_t | \psi)$, and the complete-data log-likelihood becomes

$$\log p(Z, Y | \psi) = \sum_{t=1}^{T} \sum_{j=1}^{J} \delta_t^j \log\!\left[ p(y_t | z_t = C_j, \psi^j)\, \pi^j \right], \qquad (1)$$

where a set of indicator variables $\delta_t^j$ is used to indicate the status of the hidden variables: $\delta_t^j \equiv \delta(z_t, C_j) = 1$ if $y_t$ is generated by mixture $C_j$, and 0 otherwise.
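For a single ball, the two exponents in this density have closed forms once the neighbor radii are observed: the counting integral becomes a sum of log-rates over the observed radii, and the compensator integral evaluates to $e^\theta V(m) R^m$. A small sketch (our own function names, not from the paper):

```python
import math

def unit_ball_volume(m):
    # V(m) = pi^(m/2) / Gamma(m/2 + 1), the volume of the unit ball in R^m
    return math.pi ** (m / 2) / math.gamma(m / 2 + 1)

def poisson_ball_loglik(radii, R, m, theta):
    """log-likelihood of observing neighbors at the given radii inside
    B(R, x_t) under the rate lambda(r) = e^theta V(m) m r^(m-1):
    sum_i log lambda(r_i)  -  e^theta V(m) R^m."""
    lam = lambda r: math.exp(theta) * unit_ball_volume(m) * m * r ** (m - 1)
    return (sum(math.log(lam(r)) for r in radii)
            - math.exp(theta) * unit_ball_volume(m) * R ** m)
```

Setting $\partial L / \partial \theta = 0$ gives $e^\theta = k / (V(m) R^m)$; the test below checks numerically that this θ indeed maximizes the log-likelihood for a fixed m.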
Taking the expectation of (1) with respect to Z, $E_Z(\cdot)$, and setting ψ to a fixed known value $\psi_n$ (the value at step n of the algorithm) everywhere except in the log function, we get a function Q of ψ. We denote it by $Q(\psi | \psi_n)$, and it has the following form:

$$Q(\psi | \psi_n) = \sum_{t=1}^{T} \sum_{j=1}^{J} h_n^j(y_t) \log\!\left[ p(y_t | \delta_t^j = 1, \psi^j)\, \pi^j \right],$$

where

$$h_n^j(y_t) = E_Z[\delta_t^j | y_t, \psi_n] = P(\delta_t^j = 1 | y_t, \psi_n) = \frac{p(y_t | \delta_t^j = 1, \psi_n^j)\, \pi_n^j}{\sum_{l=1}^{J} p(y_t | \delta_t^l = 1, \psi_n^l)\, \pi_n^l} \qquad (2)$$

is the probability that observation t belongs to mixture j. Finally, the probability density in (2) is

$$p(y_t | \delta_t^j = 1, \psi_n^j) = \exp\!\left( \int_0^R \log \lambda_n^j(r)\, dN(r, x_t) \right) \exp\!\left( -\int_0^R \lambda_n^j(r)\, dr \right), \qquad (3)$$

where $\lambda_n^j(r) = e^{\theta_n^j} V(m_n^j)\, m_n^j r^{m_n^j - 1}$. As mentioned above, the EM algorithm consists of two main steps. In the E-step, the function $Q(\psi | \psi_n)$ is computed; for that, we determine the best guess of the membership function, i.e., the probabilities $h_n^j(y_t)$. Once we know these probabilities, $Q(\psi | \psi_n)$ can be considered as a function of the only unknown, ψ, and it is maximized in order to compute the values of $\psi_{n+1}$, i.e., the maximum likelihood parameters ψ at step n + 1; this is called the M-step. The EM algorithm suffers from local maxima; hitting a local maximum can be mitigated by running the algorithm several times with different initializations. Different random subsets of points from the point cloud may be used in each run. We have experimented with both approaches and the results are always similar if we initialize all the probabilities equally. The Algorithm PMM box describes the main components of this proposed approach. The estimators $\pi_{n+1}^j$, $m_{n+1}^j$, and $\theta_{n+1}^j$ are obtained by computing $\psi_{n+1}^j = \arg\max_{\psi^j} Q(\psi | \psi_n) + \lambda\left( \sum_{l=1}^{J} \pi^l - 1 \right)$ in the M-step, where λ is the Lagrange multiplier that introduces the constraint $\sum_{l=1}^{J} \pi^l = 1$. This gives equations (4)-(5), where $V(m) = \frac{2\pi^{m/2}}{m\, \Gamma(m/2)}$ and $\Gamma(m/2) = \int_0^\infty t^{m/2 - 1} e^{-t}\, dt$. In order to compute $m_{n+1}^j$ we have used the same approach as in [10], by means of a k-nearest-neighbor graph.
4 Experimental results

We now present a number of experimental results for the technique proposed in Section 3. We often compare it with the two-step algorithm described in Section 2, and denote this algorithm by LD+IB.

Algorithm PMM (Poisson Mixture Model)
Require: The point cloud data, J (number of desired classes), and k (scale of observation).
Ensure: Soft clustering according to dimensionality and density.
1: Initialize $\psi_0 = \{\pi_0^j, m_0^j, \theta_0^j\}$ to any set of values which ensures that $\sum_{j=1}^{J} \pi_0^j = 1$.
2: EM iterations on n. For all j = 1, ..., J, compute:
   E-step: Compute $h_n^j(y_t)$ by (2).
   M-step: Compute

$$\pi_{n+1}^j = \frac{1}{T} \sum_{t=1}^{T} h_n^j(y_t); \qquad m_{n+1}^j = \left[ \frac{\sum_{t=1}^{T} h_n^j(y_t) \sum_{l=1}^{k-1} \log \frac{R_k(y_t)}{R_l(y_t)}}{\sum_{t=1}^{T} h_n^j(y_t)(k-1)} \right]^{-1} \qquad (4)$$

$$\theta_{n+1}^j = \log\!\left( \sum_{t=1}^{T} h_n^j(y_t)(k-1) \right) - \log\!\left( V(m_n^j) \sum_{t=1}^{T} h_n^j(y_t)\, R_k(y_t)^{m_n^j} \right) \qquad (5)$$

Iterate until convergence of $\psi_n$, that is, until $\|\psi_{n+1} - \psi_n\|_2 < \epsilon$ for a certain small value ε.

In all the experiments we use the initialization $\pi_0^j = 1/J$, $\theta_0^j = 0$, and $m_0^j = j$, for all j = 1, ..., J. The distances are normalized so that the maximum distance is 1. The embedding dimension in all the experiments on synthetic data is 3, although the results were found to be consistent when we increased the embedding dimension. The first experiment consists of a mixture of a Swiss roll manifold (700 points) and a line (700 points) embedded in a three dimensional space. The algorithm (with J = 2 and k = 10) is able to separate both manifolds. The estimated parameters are collected in Table 1. For each table, we display the estimated dimension m, density θ, and mixture coefficient π for each one of the classes. We also show the percentage of points of each manifold that are classified in each class (after thresholding the soft assignment). Figure 1(a) displays both manifolds; each point is colored according to the probability of belonging to each one of the two possible classes. Tables 1(a) and 1(c) contain the results for both PMM and LD+IB using J = 2.
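The M-step updates (4)-(5) can be sketched as below. This is a hedged sketch with our own names; as a simplification, we evaluate V(·) and $R_k^m$ in (5) at the freshly updated m rather than at the previous iterate $m_n$, which the paper's update uses.

```python
import numpy as np
from math import pi, gamma, log

def unit_ball_volume(m):
    # V(m) = 2 pi^(m/2) / (m Gamma(m/2)) = pi^(m/2) / Gamma(m/2 + 1)
    return pi ** (m / 2) / gamma(m / 2 + 1)

def pmm_m_step(h, R, k):
    """M-step updates (4)-(5). h: (T, J) responsibilities from the E-step;
    R: (T, k) sorted kNN distances, R[:, -1] = R_k. Returns per-class
    (pi, m, theta)."""
    T, J = h.shape
    pis = h.mean(axis=0)                                    # pi_j = (1/T) sum_t h
    log_ratios = np.log(R[:, -1:] / R[:, :-1]).sum(axis=1)  # sum_l log(R_k/R_l)
    ms = (h * (k - 1)).sum(axis=0) / (h.T @ log_ratios)     # eq. (4)
    thetas = np.array([
        log((h[:, j] * (k - 1)).sum())
        - log(unit_ball_volume(ms[j]) * (h[:, j] * R[:, -1] ** ms[j]).sum())
        for j in range(J)
    ])                                                      # eq. (5), simplified
    return pis, ms, thetas
```

With a single class and uniform responsibilities, the updates reduce to pooled LB-style estimates: on 2-D data, m should come out near 2 and θ near the log of the sampling density, which the test checks on synthetic data of our own.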
Table 1(b) shows the results for the PMM algorithm with k = 10 and J = 3. Note how the parameters of the first two classes are quite similar to the ones obtained with J = 2, and the third class is marginal (very small π). Figure 1(b) shows the PMM classification when J = 3. Note that all the points of the line belong to the class of dimension 1. The points of the Swiss roll are mainly concentrated in the other class with dimension 2. A small number of Swiss roll points belong to a third class with roughly the same dimension as the second class. These points are located at the point cloud boundaries, where the underlying assumptions are not always valid. If we estimate the dimension of the mixture using the LB technique with k = 10, we obtain 1.70 with a standard deviation of 5.31. If we use the method proposed by Costa and Hero [4], the estimated dimension is 2. In both cases, the estimated intrinsic dimension is the largest one present in the mixture, ignoring that the data actually lives on two manifolds of different intrinsic dimension. The same table and figure, second rows, show results for noisy data. We add Gaussian noise with σ = 0.6 to the point coordinates. The results obtained with k = 10 are displayed in Tables 1(d), 1(e) and 1(f), and in Figures 1(d), 1(e) and 1(f). Note how the classification still separates the two different manifolds, although the line is much more affected by the noise and no longer looks like a one-dimensional manifold. This is also reflected in the estimated dimension, which is now bigger. This phenomenon is related to the scale of observation and to the level of noise. If the level of noise is large, e.g., compared to the mean distance to the k nearest neighbors for a small k, the estimated intrinsic dimension will intuitively be closer to the embedding dimension (this behavior was experimentally verified).
We can again compare the results with the ones obtained with the LB estimator alone: estimated dimension 2.71 and standard deviation 1.12. Using Costa and Hero [4], the estimated dimension varies between 2 and 3 (depending on the number of bootstrap loops). Neither technique considers the possibility of mixed dimensionality.

The experiment in Figure 2 illustrates how the soft clustering is done according to both dimensionality and density. The data consists of 2500 points on the Swiss roll, 100 on a line with high density, and 50 on another, less dense line. We have set J = 4 and the algorithm gives an "empty class," thus discovering that three classes, with correct dimensionality and density, are enough for a good representation. The only errors are in the borders, as expected.

Table 1: Clustering results for the Swiss roll (SR) and a line (k = 10), without noise ((a)-(c)) and with noise ((d)-(f)).
(a) PMM (J = 2):
    estimated parameters: m = 1.00, 2.01; θ = 5.70, 2.48; π = 0.5000, 0.5000
    % points in each class: Line 100 / 0; SR 0 / 100
(b) PMM (J = 3):
    estimated parameters: m = 1.00, 2.01, 2.16; θ = 5.70, 2.55, 1.52; π = 0.5000, 0.4792, 0.0208
    % points in each class: Line 100 / 0 / 0; SR 0 / 96.57 / 3.43
(c) LD+IB (J = 2):
    estimated dimension: m = 1.67, 2.00
    % points in each class: Line 100 / 0; SR 3.45 / 96.55
(d) PMM (J = 2), with noise:
    estimated parameters: m = 3.02, 2.38; θ = 7.69, 2.73; π = 0.4951, 0.5049
    % points in each class: Line 98.14 / 1.86; SR 0.86 / 99.14
(e) PMM (J = 3), with noise:
    estimated parameters: m = 3.01, 2.40, 2.26; θ = 7.70, 2.88, 1.72; π = 0.4910, 0.4766, 0.0325
    % points in each class: Line 97.71 / 2.29 / 0; SR 0.71 / 93.00 / 6.29
(f) LD+IB (J = 2), with noise:
    estimated dimension: m = 3.09, 2.30
    % points in each class: Line 79.71 / 20.29; SR 24.71 / 75.29

Figure 1: Clustering of a line and a Swiss roll (k = 10). First row without noise, second row with Gaussian noise (σ = 0.6). Points colored according to the probability of belonging to each class. Panels: (a) PMM (J = 2), (b) PMM (J = 3), (c) LD+IB (J = 2); (d)-(f) the same with noise.
Figure 2: Clustering with mixed dimensions and density (k = 20, J = 4).
    Estimated parameters: m = 1.94, 1.04, 0.98, 1.93; θ = 7.12, 3.82, 2.66, 2.57; π = 0.9330, 0.0498, 0.0167, 0.0004
    % points in each class: Line 0.0 / 15.69 / 84.31 / 0.0; Line (dense) 0.0 / 99.00 / 1.00 / 0.0; Swiss roll 98.92 / 1.08 / 0.0 / 0.0

In order to test the algorithm with real data, we first work with the MNIST database of handwritten digits (http://yann.lecun.com/exdb/mnist/), which has a test set of 10,000 examples. Each digit is an image of 28 × 28 pixels and we treat the data as 784-dimensional vectors. We study the mixture of digits one and two and apply PMM and LD+IB with J = 2 and k = 10. The results are shown in Figure 3. Note how the digits are well separated (see footnote 4). The LB estimator alone gives dimensions 9.13 for the digits one, 13.02 for the digits two, and 11.26 for the mixture of both digits. The method of Costa and Hero [4] gives 8, 11 and 9 respectively. Both methods assume a single intrinsic dimension and give an average of the dimensions of the underlying manifolds.

Figure 3: Results for digits 1 and 2 (k = 10, J = 2).
    (a) PMM: m = 8.50, 12.82; θ = 11.20, 6.80; π = 0.4901, 0.5099; % points in each class: Ones 93.48 / 6.52, Twos 0 / 100
    (b) LD+IB: estimated dimension m = 9.17, 13.74; % points in each class: Ones 94.71 / 5.29, Twos 9.08 / 90.02
    (c) Some image examples.

Next, we experiment with 9-dimensional vectors formed of image patches of 3 × 3 pixels. If we impose J = 3 and use PMM, we obtain the results in Figure 4. Notice how, roughly, one class corresponds to patches in homogeneous zones (approximately constant gray value), a second class corresponds to textured zones, and a third class to patches containing edges. The estimated dimensions in each region are in accordance with the estimated dimensions obtained using Isomap or Costa and Hero's technique in each region after separation. This experiment is just a proof of concept; in the future we will study how to adapt this clustering approach to image segmentation.
Figure 4: Clustering of image patches of 3 × 3 pixels with PMM, colors indicating the different classes (complexity) (J = 3, k = 30). Left: original and segmented images of a house. Right: original and segmented images of a portion of biological tissue. Adding spatial regularization is the subject of current research.

Finally, as an additional proof of the validity of our approach and its potential applications, we use the PMM framework to separate activities in video, Figure 5 (see also [14]). Each original frame is 480 × 640 pixels, sub-sampled to 48 × 64 pixels, with 1673 frames in total. Four classes are present: standing, walking, jumping, and arms waving. The whole run took 361 seconds in Matlab; the classification time (PMM) is negligible compared to the kNN computation.

Samples in each cluster:
               C1    C2    C3    C4
    Standing  416     0    95     0
    Walking     0   429    69    25
    Waving      0     5   423     4
    Jumping     0    18     0   189

Figure 5: Classifying human activities in video (k = 10, J = 4). Four sample frames are shown followed by the classification results (confusion matrix). Visual analysis of the wrongly classified frames shows that they are indeed very similar to the members of the class they were assigned to. Adding features, e.g., optical flow, will improve the results.

Footnote 4: Since the clustering is done according to dimensionality and density, digits which share these characteristics won't be separated into different classes.

5 Conclusions

In this paper we discussed the concept of "stratification learning," where the point cloud data is not assumed to belong to a single manifold, as is commonly done in manifold learning and dimensionality reduction. We extended the work in [10] in the sense that the maximum likelihood is computed once for the whole dataset, and the probability density function is a mixture of Poisson laws, each one modeling a different intrinsic dimension and density. The soft clustering and the estimation are computed simultaneously.
This framework has been contrasted with a more standard two-step approach, a combination of the local estimator introduced in [10] with the Information Bottleneck clustering technique [17]. Both methods need to compute a kNN graph, which is precisely the most computationally expensive part. The mixture-of-Poisson estimator is faster than the two-step approach: it uses an EM algorithm, linear in the number of classes and observations, which converges in a few iterations. The mixture-of-Poisson model clusters not only according to dimensionality, but according to density as well. The introduction of additional observations and estimates could also help to separate points that have the same dimensionality and density but belong to different manifolds. We would also like to study the use of ellipsoids instead of balls in the counting process, in order to better follow the geometry of the intrinsic manifolds. Another aspect to study is the use of metrics more adapted to the nature of the data instead of the Euclidean distance. At the theoretical level, the bias of the PMM model needs to be studied. Results in these directions will be reported elsewhere.

Acknowledgments: This work has been supported by ONR, DARPA, NSF, NGA, and the McKnight Foundation. We thank Prof. Persi Diaconis and Prof. René Vidal for important feedback and comments. We also thank Pablo Arias and Jérémie Jakubowicz for their help. GR was on sabbatical from the Universidad de la República, Uruguay, while performing this work.

References
[1] D. Barbara and P. Chen. Using the fractal dimension to cluster datasets. In Proceedings of the Sixth ACM SIGKDD, pages 260–264, 2000.
[2] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in NIPS 14, 2002.
[3] M. Brand. Charting a manifold. In Advances in NIPS 16, 2002.
[4] J. A. Costa and A. O. Hero. Geodesic entropic graphs for dimension and entropy estimation in manifold learning.
IEEE Trans. on Signal Processing, 52(8):2210–2221, 2004.
[5] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data. Journal of the Royal Statistical Society Ser. B, 39:1–38, 1977.
[6] N. Friedman, O. Mosenzon, N. Slonim, and N. Tishby. Multivariate information bottleneck. In Seventeenth Conference UAI, pages 152–161, 2001.
[7] A. Gionis, A. Hinneburg, S. Papadimitriou, and P. Tsaparas. Dimension induced clustering. In Proceedings of the Eleventh ACM SIGKDD, pages 51–60, 2005.
[8] K. Huang, Y. Ma, and R. Vidal. Minimum effective dimension for mixtures of subspaces: A robust GPCA algorithm and its applications. In Proceedings of CVPR, pages 631–638, 2004.
[9] B. Kegl. Intrinsic dimension estimation using packing numbers. In Advances in NIPS 14, 2002.
[10] E. Levina and P. J. Bickel. Maximum likelihood estimation of intrinsic dimension. In Advances in NIPS 17, 2005.
[11] P. Mordohai and G. Medioni. Unsupervised dimensionality estimation and manifold learning in high-dimensional spaces by tensor voting. In IJCAI, page 798, 2005.
[12] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[13] D. L. Snyder. Random Point Processes. Wiley, New York, 1975.
[14] R. Souvenir and R. Pless. Manifold clustering. In ICCV, pages 648–653, 2005.
[15] F. Takens. On the numerical determination of the dimension of an attractor. Lecture notes in mathematics. Dynamical systems and bifurcations, 1125:99–106, 1985.
[16] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[17] N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368–377, 1999.
A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation Yee Whye Teh Gatsby Computational Neuroscience Unit University College London 17 Queen Square, London WC1N 3AR, UK ywteh@gatsby.ucl.ac.uk David Newman and Max Welling Bren School of Information and Computer Science University of California, Irvine CA 92697-3425 USA {newman,welling}@ics.uci.edu Abstract Latent Dirichlet allocation (LDA) is a Bayesian network that has recently gained much popularity in applications ranging from document modeling to computer vision. Due to the large scale nature of these applications, current inference procedures like variational Bayes and Gibbs sampling have been found lacking. In this paper we propose the collapsed variational Bayesian inference algorithm for LDA, and show that it is computationally efficient, easy to implement and significantly more accurate than standard variational Bayesian inference for LDA. 1 Introduction Bayesian networks with discrete random variables form a very general and useful class of probabilistic models. In a Bayesian setting it is convenient to endow these models with Dirichlet priors over the parameters as they are conjugate to the multinomial distributions over the discrete random variables [1]. This choice has important computational advantages and allows for easy inference in such models. A class of Bayesian networks that has gained significant momentum recently is latent Dirichlet allocation (LDA) [2], otherwise known as multinomial PCA [3]. It has found important applications in both text modeling [4, 5] and computer vision [6]. Training LDA on a large corpus of several million documents can be a challenge and crucially depends on an efficient and accurate inference procedure. A host of inference algorithms have been proposed, ranging from variational Bayesian (VB) inference [2], expectation propagation (EP) [7] to collapsed Gibbs sampling [5]. 
Perhaps surprisingly, the collapsed Gibbs sampler proposed in [5] seems to be the preferred choice in many of these large scale applications. In [8] it is observed that EP is not efficient enough to be practical while VB suffers from a large bias. However, collapsed Gibbs sampling also has its own problems: one needs to assess convergence of the Markov chain, to have some idea of mixing times in order to estimate the number of samples to collect, and to identify coherent topics across multiple samples. In practice one often ignores these issues and collects as many samples as is computationally feasible, while the question of topic identification is often sidestepped by using just one sample. Hence there still seems to be a need for more efficient, accurate and deterministic inference procedures. In this paper we will leverage the important insight that a Gibbs sampler that operates in a collapsed space—where the parameters are marginalized out—mixes much better than a Gibbs sampler that samples parameters and latent topic variables simultaneously. This suggests that the parameters and latent variables are intimately coupled. As we shall see in the following, marginalizing out the parameters induces new dependencies between the latent variables (which are conditionally independent given the parameters), but these dependencies are spread out over many latent variables. This implies that the dependency between any two latent variables is expected to be small. This is precisely the right setting for a mean field (i.e. fully factorized variational) approximation: a particular variable interacts with the remaining variables only through summary statistics called the field, and the impact of any single variable on the field is very small [9]. Note that this is not true in the joint space of parameters and latent variables because fluctuations in parameters can have a significant impact on latent variables.
We thus conjecture that the mean field assumptions are much better satisfied in the collapsed space of latent variables than in the joint space of latent variables and parameters. In this paper we leverage this insight and propose a collapsed variational Bayesian (CVB) inference algorithm. In theory, the CVB algorithm requires the calculation of very expensive averages. However, the averages only depend on sums of independent Bernoulli variables, and thus are very closely approximated with Gaussian distributions (even for relatively small sums). Making use of this approximation, the final algorithm is computationally efficient, easy to implement and significantly more accurate than standard VB.

2 Approximate Inference in Latent Dirichlet Allocation

LDA models each document as a mixture over topics. We assume there are K latent topics, each being a multinomial distribution over a vocabulary of size W. For document j, we first draw a mixing proportion θ_j = {θ_jk} over K topics from a symmetric Dirichlet with parameter α. For the ith word in the document, a topic z_ij is drawn with topic k chosen with probability θ_jk, then word x_ij is drawn from the z_ij-th topic, with x_ij taking on value w with probability φ_kw. Finally, a symmetric Dirichlet prior with parameter β is placed on the topic parameters φ_k = {φ_kw}. The full joint distribution over all parameters and variables is:

p(x, z, θ, φ | α, β) = ∏_{j=1}^{D} [ Γ(Kα)/Γ(α)^K ∏_{k=1}^{K} θ_jk^{α−1+n_{jk·}} ] · ∏_{k=1}^{K} [ Γ(Wβ)/Γ(β)^W ∏_{w=1}^{W} φ_kw^{β−1+n_{·kw}} ]    (1)

where n_jkw = #{i : x_ij = w, z_ij = k}, and a dot means the corresponding index is summed out: n_{·kw} = Σ_j n_jkw, and n_{jk·} = Σ_w n_jkw. Given the observed words x = {x_ij}, the task of Bayesian inference is to compute the posterior distribution over the latent topic indices z = {z_ij}, the mixing proportions θ = {θ_j} and the topic parameters φ = {φ_k}. There are three current approaches: variational Bayes (VB) [2], expectation propagation [7] and collapsed Gibbs sampling [5].
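For concreteness, the generative process just described can be sampled in a few lines of Python. This is an illustrative sketch with toy sizes; `sample_lda_corpus` is our own helper name, not code from the paper:

```python
import numpy as np

def sample_lda_corpus(D=50, K=8, W=200, n_j=40, alpha=0.1, beta=0.1, seed=0):
    """Draw a toy corpus from the LDA generative process:
    phi_k ~ Dir(beta), theta_j ~ Dir(alpha), z_ij ~ Mult(theta_j),
    x_ij ~ Mult(phi_{z_ij})."""
    rng = np.random.default_rng(seed)
    phi = rng.dirichlet(np.full(W, beta), size=K)      # K topic-word multinomials
    docs, topics = [], []
    for _ in range(D):
        theta = rng.dirichlet(np.full(K, alpha))       # per-document proportions
        z = rng.choice(K, size=n_j, p=theta)           # topic for each word slot
        x = np.array([rng.choice(W, p=phi[k]) for k in z])
        docs.append(x)
        topics.append(z)
    return docs, topics, phi
```

A corpus drawn this way is a convenient testbed for comparing the inference schemes discussed next, since the true topics are known.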
We review the VB and collapsed Gibbs sampling methods here as they are the most popular methods and to motivate our new algorithm which combines advantages of both.

2.1 Variational Bayes

Standard VB inference upper bounds the negative log marginal likelihood −log p(x|α, β) using the variational free energy:

−log p(x|α, β) ≤ F̃(q̃(z, θ, φ)) = E_q̃[−log p(x, z, φ, θ | α, β)] − H(q̃(z, θ, φ))    (2)

with q̃(z, θ, φ) an approximate posterior and H(q̃(z, θ, φ)) = E_q̃[−log q̃(z, θ, φ)] the variational entropy, where q̃(z, θ, φ) is assumed to be fully factorized:

q̃(z, θ, φ) = ∏_{ij} q̃(z_ij | γ̃_ij) ∏_j q̃(θ_j | α̃_j) ∏_k q̃(φ_k | β̃_k)    (3)

Here q̃(z_ij | γ̃_ij) is multinomial with parameters γ̃_ij, and q̃(θ_j | α̃_j), q̃(φ_k | β̃_k) are Dirichlet with parameters α̃_j and β̃_k respectively. Optimizing F̃(q̃) with respect to the variational parameters gives us a set of updates guaranteed to improve F̃(q̃) at each iteration and converge to a local minimum:

α̃_jk = α + Σ_i γ̃_ijk    (4)
β̃_kw = β + Σ_ij 1(x_ij = w) γ̃_ijk    (5)
γ̃_ijk ∝ exp( Ψ(α̃_jk) + Ψ(β̃_{k,x_ij}) − Ψ(Σ_w β̃_kw) )    (6)

where Ψ(y) = ∂ log Γ(y)/∂y is the digamma function and 1(·) is the indicator function. Although efficient and easily implemented, VB can potentially lead to very inaccurate results. Notice that the latent variables z and the parameters θ, φ can be strongly dependent in the true posterior p(z, θ, φ|x) through the cross terms in (1). This dependence is ignored in VB, which instead assumes that latent variables and parameters are independent. As a result, the VB upper bound on the negative log marginal likelihood can be very loose, leading to inaccurate estimates of the posterior.

2.2 Collapsed Gibbs Sampling

Standard Gibbs sampling, which iteratively samples latent variables z and parameters θ, φ, can potentially have slow convergence due again to strong dependencies between the parameters and latent variables. Collapsed Gibbs sampling improves upon Gibbs sampling by marginalizing out θ and φ instead, therefore dealing with them exactly.
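Before turning to the collapsed conditionals, the VB updates (4)-(6) of Section 2.1 can be sketched as one sweep over a toy corpus. This is an illustrative implementation (our own function name and data layout: each document is an integer array of word ids, each `gamma[j]` an (n_j × K) array of responsibilities), not the authors' code:

```python
import numpy as np
from scipy.special import digamma

def vb_sweep(docs, gamma, alpha, beta, K, W):
    """One pass of the VB updates (4)-(6)."""
    # (4): tilde-alpha_jk = alpha + sum_i gamma_ijk
    a = np.stack([alpha + g.sum(axis=0) for g in gamma])          # D x K
    # (5): tilde-beta_kw = beta + sum_ij 1(x_ij = w) gamma_ijk
    b = np.full((K, W), beta)
    for x, g in zip(docs, gamma):
        np.add.at(b.T, x, g)        # scatter-add responsibilities by word id
    # (6): gamma_ijk ∝ exp(Psi(a_jk) + Psi(b_{k,x_ij}) - Psi(sum_w b_kw))
    new_gamma = []
    for j, x in enumerate(docs):
        log_g = (digamma(a[j])[None, :] + digamma(b[:, x]).T
                 - digamma(b.sum(axis=1))[None, :])
        g = np.exp(log_g - log_g.max(axis=1, keepdims=True))
        new_gamma.append(g / g.sum(axis=1, keepdims=True))
    return new_gamma, a, b
```

Each sweep costs O(NK) in the number of corpus words N, and the normalization in the last step keeps every q̃(z_ij) a proper distribution over the K topics.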
The marginal distribution over x and z is

p(z, x | α, β) = ∏_j [ Γ(Kα)/Γ(Kα + n_{j··}) ∏_k Γ(α + n_{jk·})/Γ(α) ] · ∏_k [ Γ(Wβ)/Γ(Wβ + n_{·k·}) ∏_w Γ(β + n_{·kw})/Γ(β) ]    (7)

Given the current state of all but one variable z_ij, the conditional probability of z_ij is:

p(z_ij = k | z^{¬ij}, x, α, β) = (α + n_{jk·}^{¬ij})(β + n_{·k,x_ij}^{¬ij})(Wβ + n_{·k·}^{¬ij})^{−1} / Σ_{k′=1}^{K} (α + n_{jk′·}^{¬ij})(β + n_{·k′,x_ij}^{¬ij})(Wβ + n_{·k′·}^{¬ij})^{−1}    (8)

where the superscript ¬ij means the corresponding variables or counts with x_ij and z_ij excluded, and the denominator is just a normalization. The conditional distribution of z_ij is multinomial with simple-to-calculate probabilities, so the programming and computational overhead is minimal. Collapsed Gibbs sampling has been observed to converge quickly [5]. Notice from (8) that z_ij depends on z^{¬ij} only through the counts n_{jk·}^{¬ij}, n_{·k,x_ij}^{¬ij}, n_{·k·}^{¬ij}. In particular, the dependence of z_ij on any particular other variable z_{i′j′} is very weak, especially for large datasets. As a result we expect the convergence of collapsed Gibbs sampling to be fast [10]. However, as with other MCMC samplers, and unlike variational inference, it is often hard to diagnose convergence, and a sufficiently large number of samples may be required to reduce sampling noise. The argument of rapid convergence of collapsed Gibbs sampling is reminiscent of the argument for when mean field algorithms can be expected to be accurate [9]. The counts n_{jk·}^{¬ij}, n_{·k,x_ij}^{¬ij}, n_{·k·}^{¬ij} act as fields through which z_ij interacts with other variables.
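One sweep of the collapsed Gibbs sampler defined by (8) can be sketched as follows. Helper names and the incremental count bookkeeping are ours; the conditional itself is exactly (8):

```python
import numpy as np

def gibbs_sweep(docs, z, njk, nkw, nk, alpha, beta, rng):
    """One sweep of collapsed Gibbs sampling. njk: D x K doc-topic counts,
    nkw: K x W topic-word counts, nk: length-K topic totals."""
    K, W = nkw.shape
    for j, x in enumerate(docs):
        for i, w in enumerate(x):
            k = z[j][i]
            njk[j, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1     # counts with ij removed
            p = (alpha + njk[j]) * (beta + nkw[:, w]) / (W * beta + nk)
            k = rng.choice(K, p=p / p.sum())               # draw z_ij from (8)
            z[j][i] = k
            njk[j, k] += 1; nkw[k, w] += 1; nk[k] += 1     # add the token back
    return z

def init_counts(docs, K, W, rng):
    """Random topic initialization and the corresponding count tables."""
    z = [rng.integers(0, K, len(x)) for x in docs]
    njk = np.zeros((len(docs), K), int)
    nkw = np.zeros((K, W), int)
    for j, x in enumerate(docs):
        for i, w in enumerate(x):
            njk[j, z[j][i]] += 1; nkw[z[j][i], w] += 1
    return z, njk, nkw, nkw.sum(axis=1)
```

The remove-sample-restore pattern keeps all three count tables consistent after every resampled token, which is the invariant the conditional (8) relies on.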
In particular, averaging both sides of (8) over p(z^{¬ij} | x, α, β) gives us the Callen equations, a set of equations that the true posterior must satisfy:

p(z_ij = k | x, α, β) = E_{p(z^{¬ij}|x,α,β)}[ (α + n_{jk·}^{¬ij})(β + n_{·k,x_ij}^{¬ij})(Wβ + n_{·k·}^{¬ij})^{−1} / Σ_{k′=1}^{K} (α + n_{jk′·}^{¬ij})(β + n_{·k′,x_ij}^{¬ij})(Wβ + n_{·k′·}^{¬ij})^{−1} ]    (9)

Since the latent variables are already weakly dependent on each other, it is possible to replace (9) by a set of mean field equations where latent variables are assumed independent and still expect these equations to be accurate. This is the idea behind the collapsed variational Bayesian inference algorithm of the next section.

3 Collapsed Variational Bayesian Inference for LDA

We derive a new inference algorithm for LDA combining the advantages of both standard VB and collapsed Gibbs sampling. It is a variational algorithm which, instead of assuming independence, models the dependence of the parameters on the latent variables in an exact fashion. On the other hand we still assume that latent variables are mutually independent. This is not an unreasonable assumption to make since, as we saw, they are only weakly dependent on each other. We call this algorithm collapsed variational Bayesian (CVB) inference. There are two ways to deal with the parameters in an exact fashion: the first is to marginalize them out of the joint distribution and to start from (7); the second is to explicitly model the posterior of θ, φ given z and x without any assumptions on its form. We will show that these two methods are equivalent. The only assumption we make in CVB is that the latent variables z are mutually independent, thus we approximate the posterior as:

q̂(z, θ, φ) = q̂(θ, φ | z) ∏_{ij} q̂(z_ij | γ̂_ij)    (10)

where q̂(z_ij | γ̂_ij) is multinomial with parameters γ̂_ij.
The variational free energy becomes:

F̂(q̂(z) q̂(θ, φ|z)) = E_{q̂(z)q̂(θ,φ|z)}[−log p(x, z, θ, φ | α, β)] − H(q̂(z) q̂(θ, φ|z))
                  = E_{q̂(z)}[ E_{q̂(θ,φ|z)}[−log p(x, z, θ, φ | α, β)] − H(q̂(θ, φ|z)) ] − H(q̂(z))    (11)

We minimize the variational free energy with respect to q̂(θ, φ|z) first, followed by q̂(z). Since we do not restrict the form of q̂(θ, φ|z), the minimum is achieved at the true posterior q̂(θ, φ|z) = p(θ, φ | x, z, α, β), and the variational free energy simplifies to:

F̂(q̂(z)) ≜ min_{q̂(θ,φ|z)} F̂(q̂(z) q̂(θ, φ|z)) = E_{q̂(z)}[−log p(x, z | α, β)] − H(q̂(z))    (12)

We see that CVB is equivalent to marginalizing out θ, φ before approximating the posterior over z. As CVB makes a strictly weaker assumption on the variational posterior than standard VB, we have

F̂(q̂(z)) ≤ F̃(q̃(z)) ≜ min_{q̃(θ)q̃(φ)} F̃(q̃(z) q̃(θ) q̃(φ))    (13)

and thus CVB is a better approximation than standard VB. Finally, we derive the updates for the variational parameters γ̂_ij. Minimizing (12) with respect to γ̂_ijk, we get

γ̂_ijk = q̂(z_ij = k) = exp( E_{q̂(z^{¬ij})}[log p(x, z^{¬ij}, z_ij = k | α, β)] ) / Σ_{k′=1}^{K} exp( E_{q̂(z^{¬ij})}[log p(x, z^{¬ij}, z_ij = k′ | α, β)] )    (14)

Plugging in (7), expanding log( Γ(η+n)/Γ(η) ) = Σ_{l=0}^{n−1} log(η + l) for positive reals η and positive integers n, and cancelling terms appearing in both the numerator and denominator, we get

γ̂_ijk = exp( E_{q̂(z^{¬ij})}[log(α + n_{jk·}^{¬ij}) + log(β + n_{·k,x_ij}^{¬ij}) − log(Wβ + n_{·k·}^{¬ij})] ) / Σ_{k′=1}^{K} exp( E_{q̂(z^{¬ij})}[log(α + n_{jk′·}^{¬ij}) + log(β + n_{·k′,x_ij}^{¬ij}) − log(Wβ + n_{·k′·}^{¬ij})] )    (15)

3.1 Gaussian approximation for CVB Inference

For completeness, we describe in the appendix how to compute each expectation term in (15) exactly. That exact implementation of CVB is computationally too expensive to be practical, and we propose instead to use a simple Gaussian approximation which works very accurately and requires minimal computational cost. In this section we describe the Gaussian approximation applied to E_q̂[log(α + n_{jk·}^{¬ij})]; the other two expectation terms are computed similarly. Assume that n_{j··} ≫ 0.
Notice that n_{jk·}^{¬ij} = Σ_{i′≠i} 1(z_{i′j} = k) is a sum of a large number of independent Bernoulli variables 1(z_{i′j} = k), each with mean parameter γ̂_{i′jk}; thus it can be accurately approximated by a Gaussian. The mean and variance are given by the sums of the means and variances of the individual Bernoulli variables:

E_q̂[n_{jk·}^{¬ij}] = Σ_{i′≠i} γ̂_{i′jk}        Var_q̂[n_{jk·}^{¬ij}] = Σ_{i′≠i} γ̂_{i′jk}(1 − γ̂_{i′jk})    (16)

We further approximate the function log(α + n_{jk·}^{¬ij}) using a second-order Taylor expansion about E_q̂[n_{jk·}^{¬ij}], and evaluate its expectation under the Gaussian approximation:

E_q̂[log(α + n_{jk·}^{¬ij})] ≈ log(α + E_q̂[n_{jk·}^{¬ij}]) − Var_q̂[n_{jk·}^{¬ij}] / ( 2(α + E_q̂[n_{jk·}^{¬ij}])² )    (17)

Because E_q̂[n_{jk·}^{¬ij}] ≫ 0, the third derivative is small and the Taylor series approximation is very accurate. In fact, we have found experimentally that the Gaussian approximation works very well even when n_{j··} is small. The reason is that we often have γ̂_{i′jk} close to either 0 or 1, so the variance of n_{jk·}^{¬ij} is small relative to its mean and the Gaussian approximation will be accurate. Finally, plugging (17) into (15), we have our CVB updates:

γ̂_ijk ∝ (α + E_q̂[n_{jk·}^{¬ij}]) (β + E_q̂[n_{·k,x_ij}^{¬ij}]) (Wβ + E_q̂[n_{·k·}^{¬ij}])^{−1} × exp( − Var_q̂(n_{jk·}^{¬ij}) / (2(α + E_q̂[n_{jk·}^{¬ij}])²) − Var_q̂(n_{·k,x_ij}^{¬ij}) / (2(β + E_q̂[n_{·k,x_ij}^{¬ij}])²) + Var_q̂(n_{·k·}^{¬ij}) / (2(Wβ + E_q̂[n_{·k·}^{¬ij}])²) )    (18)

Notice the striking correspondence between (18), (8) and (9), showing that CVB is indeed the mean field version of collapsed Gibbs sampling. In particular, the first factor in (18) is obtained from (8) by replacing the fields n_{jk·}^{¬ij}, n_{·k,x_ij}^{¬ij} and n_{·k·}^{¬ij} by their means (thus the term mean field), while the exponentiated terms are correction factors accounting for the variance in the fields. CVB with the Gaussian approximation is easily implemented and has minimal computational costs.
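Per token, the Gaussian-approximated update (18) is only a few lines. A sketch under the assumption that the running means and variances of the three count fields are maintained as arrays over the K topics (function and argument names are ours, not the authors'):

```python
import numpy as np

def cvb_update_token(g_ij, Ejk, Vjk, Ekw, Vkw, Ek, Vk, alpha, beta, W):
    """CVB update (18) for one token.  Ejk/Vjk are the K-vectors of the mean
    and variance of n_jk. for this document; Ekw/Vkw the same for n_.k,x_ij;
    Ek/Vk for n_.k. .  All six arrays are updated in place."""
    m, v = g_ij, g_ij * (1.0 - g_ij)
    for E, V in ((Ejk, Vjk), (Ekw, Vkw), (Ek, Vk)):   # leave-one-out fields
        E -= m; V -= v
    g = (alpha + Ejk) * (beta + Ekw) / (W * beta + Ek)
    g = g * np.exp(-Vjk / (2 * (alpha + Ejk) ** 2)    # variance corrections
                   - Vkw / (2 * (beta + Ekw) ** 2)
                   + Vk / (2 * (W * beta + Ek) ** 2))
    g /= g.sum()
    for E, V in ((Ejk, Vjk), (Ekw, Vkw), (Ek, Vk)):   # add the token back
        E += g; V += g * (1.0 - g)
    return g
```

The subtract-update-add pattern mirrors the collapsed Gibbs sweep, except that a full distribution g over topics (rather than a single sample) is folded back into the field statistics, which is what makes the per-token cost O(K).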
By keeping track of the mean and variance of n_{jk·}, n_{·kw} and n_{·k·}, and subtracting the mean and variance of the corresponding Bernoulli variables whenever we require the terms with x_ij, z_ij removed, the computational cost scales only as O(K) for each update to q̂(z_ij). Further, we only need to maintain one copy of the variational posterior over the latent variable for each unique document/word pair, thus the overall computational cost per iteration of CVB scales as O(MK), where M is the total number of unique document/word pairs, and the memory requirement is O(MK). This is the same as for VB. In comparison, collapsed Gibbs sampling needs to keep track of the current sample of z_ij for every word in the corpus, thus the memory requirement is O(N) while the computational cost scales as O(NK), where N is the total number of words in the corpus, both higher than for VB and CVB. Note however that the constant factor involved in the O(NK) time cost of collapsed Gibbs sampling is significantly smaller than those for VB and CVB.

4 Experiments

We compared the three algorithms described in the paper: standard VB, CVB and collapsed Gibbs sampling. We used two datasets. The first is "KOS" (www.dailykos.com), which has J = 3430 documents, a vocabulary size of W = 6909, a total of N = 467,714 words in all the documents, and on average 136 words per document. The second is "NIPS" (books.nips.cc), with J = 1675 documents, a vocabulary size of W = 12419, N = 2,166,029 words in the corpus, and on average 1293 words per document. In both datasets stop words and infrequent words were removed. We split both datasets into a training set and a test set by assigning 10% of the words in each document to the test set. In all our experiments we used α = 0.1, β = 0.1, and K = 8 topics for KOS and K = 40 topics for NIPS. We ran each algorithm on each dataset 50 times with different random initializations. Performance was measured in two ways.
First, using variational bounds of the log marginal probabilities on the training set; second, using log probabilities on the test set. Expressions for the variational bounds are given in (2) for VB and (12) for CVB. For both VB and CVB, test set log probabilities are computed as:

p(x^test) = ∏_{ij} Σ_k θ̄_jk φ̄_{k,x^test_ij}        θ̄_jk = (α + E_q[n_{jk·}]) / (Kα + E_q[n_{j··}])        φ̄_kw = (β + E_q[n_{·kw}]) / (Wβ + E_q[n_{·k·}])    (19)

Note that we used estimated mean values of θ_jk and φ_kw [11]. For collapsed Gibbs sampling, given S samples from the posterior, we used:

p(x^test) = ∏_{ij} Σ_k (1/S) Σ_{s=1}^{S} θ^s_jk φ^s_{k,x^test_ij}        θ^s_jk = (α + n^s_{jk·}) / (Kα + n^s_{j··})        φ^s_kw = (β + n^s_{·kw}) / (Wβ + n^s_{·k·})    (20)

Figure 1 summarizes our results. We show both quantities as functions of iterations and as histograms of final values for all algorithms and datasets. CVB converged faster and to significantly better solutions than standard VB; this confirms our intuition that CVB provides much better approximations than VB. CVB also converged faster than collapsed Gibbs sampling, but Gibbs sampling attains a better solution in the end; this is reasonable since Gibbs sampling should be exact with

Figure 1: Left: results for KOS. Right: results for NIPS. First row: per-word variational bounds as functions of the number of iterations of VB and CVB.
Second row: histograms of converged per-word variational bounds across random initializations for VB and CVB. Third row: test set per-word log probabilities as functions of the number of iterations for VB, CVB and Gibbs. Fourth row: histograms of final test set per-word log probabilities across 50 random initializations.

Figure 2: Left: test set per-word log probabilities. Right: per-word variational bounds. Both as functions of the number of documents for KOS.

enough samples. We have also applied the exact but much slower version of CVB without the Gaussian approximation, and found that it gave identical results to the one proposed here (not shown). We have also studied the dependence of approximation accuracy on the number of documents in the corpus. To conduct this experiment we train on 90% of the words in a (growing) subset of the corpus and test on the corresponding 10% left-out words. In Figure 2 we show both variational bounds and test set log probabilities as functions of the number of documents J. We observe that, as expected, the variational methods improve as J increases. However, perhaps surprisingly, CVB does not suffer as much as VB for small values of J, even though one might expect the Gaussian approximation to become dubious in that regime.

5 Discussion

We have described a collapsed variational Bayesian (CVB) inference algorithm for LDA. The algorithm is easy to implement, computationally efficient and more accurate than standard VB. The central insight of CVB is that instead of assuming parameters to be independent from latent variables, we treat their dependence on the topic variables in an exact fashion. Because the factorization assumptions made by CVB are weaker than those made by VB, the resulting approximation is more accurate.
Computational efficiency is achieved in CVB with a Gaussian approximation, which was found to be so accurate that there is never a need for exact summation. The idea of integrating out parameters before applying variational inference has been independently proposed by [12]. Unfortunately, because they worked in the context of general conjugate-exponential families, the approach cannot be made generally computationally useful. Nevertheless, we believe the insights of CVB can be applied to a wider class of discrete graphical models beyond LDA. Specific examples include various extensions of LDA [4, 13], hidden Markov models with discrete outputs, and mixed-membership models with Dirichlet distributed mixture coefficients [14]. These models all have the property that they consist of discrete random variables with Dirichlet priors on the parameters, which is the property allowing us to use the Gaussian approximation. We are also exploring CVB on an even more general class of models, including mixtures of Gaussians, Dirichlet processes, and hierarchical Dirichlet processes. Over the years a variety of inference algorithms have been proposed based on a combination of {maximize, sample, assume independent, marginalize out} applied to both parameters and latent variables. We conclude by summarizing these algorithms in Table 1, and note that CVB is located in the "marginalize out parameters" and "assume latent variables are independent" cell.

Table 1: A variety of inference algorithms for graphical models. Rows index the treatment of the latent variables; columns index the treatment of the parameters. Not every cell is filled in (marked by ?) while some are simply intractable. "ME" is the maximization-expectation algorithm of [15] and "any MCMC" means that we can use any MCMC sampler for the parameters once latent variables have been marginalized out.

  Latent variables \ Parameters | maximize       | sample         | assume independent | marginalize out
  maximize                      | Viterbi EM     | ?              | ME                 | ME
  sample                        | stochastic EM  | Gibbs sampling | ?                  | collapsed Gibbs
  assume independent            | variational EM | ?              | VB                 | CVB
  marginalize out               | EM             | any MCMC       | EP for LDA         | intractable

A Exact Computation of Expectation Terms in (15)

We can compute the expectation terms in (15) exactly as follows. Consider E_q̂[log(α + n_{jk·}^{¬ij})], which requires computing q̂(n_{jk·}^{¬ij}) (the other expectation terms are computed similarly). Note that n_{jk·}^{¬ij} = Σ_{i′≠i} 1(z_{i′j} = k) is a sum of independent Bernoulli variables 1(z_{i′j} = k), each with mean parameter γ̂_{i′jk}. Define the vectors v_{i′jk} = [(1 − γ̂_{i′jk}), γ̂_{i′jk}]^T, and let v_jk = v_{1jk} ⊗ · · · ⊗ v_{n_{j··},jk} be the convolution of all the v_{i′jk}. Finally, let v_{jk}^{¬ij} be v_jk deconvolved by v_{ijk}. Then q̂(n_{jk·}^{¬ij} = m) is the (m+1)st entry in v_{jk}^{¬ij}. The expectation E_q̂[log(α + n_{jk·}^{¬ij})] can now be computed explicitly. This exact implementation requires an impractical O(n_{j··}²) time to compute E_q̂[log(α + n_{jk·}^{¬ij})]. At the expense of complicating the implementation, this can be improved by sparsifying the vectors v_jk (setting small entries to zero) as well as by other computational tricks. We propose instead the Gaussian approximation of Section 3.1, which we have found to give extremely accurate results but with minimal implementation complexity and computational cost.

Acknowledgement: YWT was previously at NUS SoC and supported by the Lee Kuan Yew Endowment Fund. MW was supported by ONR under grant no. N00014-06-1-0734 and by NSF under grant no. 0535278.

References
[1] D. Heckerman. A tutorial on learning with Bayesian networks. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer Academic Publishers, 1999.
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3, 2003.
[3] W. Buntine. Variational extensions to EM and multinomial PCA. In ECML, 2002.
[4] M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smyth. The author-topic model for authors and documents. In UAI, 2004.
[5] T. L. Griffiths and M. Steyvers. Finding scientific topics. In PNAS, 2004.
[6] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, 2005. [7] T. P. Minka and J. Lafferty. Expectation propagation for the generative aspect model. In UAI, 2002. [8] W. Buntine and A. Jakulin. Applying discrete PCA in data analysis. In UAI, 2004. [9] M. Opper and O. Winther. From naive mean field theory to the TAP equations. In D. Saad and M. Opper, editors, Advanced Mean Field Methods : Theory and Practice. The MIT Press, 2001. [10] G. Casella and C. P. Robert. Rao-Blackwellisation of sampling schemes. Biometrika, 83(1):81–94, 1996. [11] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2003. [12] J. Sung, Z. Ghahramani, and S. Choi. Variational Bayesian EM: A second-order approach. Unpublished manuscript, 2005. [13] W. Li and A. McCallum. Pachinko allocation: DAG-structured mixture models of topic correlations. In ICML, 2006. [14] E. M. Airoldi, D. M. Blei, E. P. Xing, and S. E. Fienberg. Mixed membership stochastic block models for relational data with application to protein-protein interactions. In Proceedings of the International Biometrics Society Annual Meeting, 2006. [15] M. Welling and K. Kurihara. Bayesian K-means as a “maximization-expectation” algorithm. In SIAM Conference on Data Mining, 2006.
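The exact convolution computation of Appendix A above can be sketched numerically. This is our own minimal illustration (function and variable names are assumptions, and it builds the full distribution of the Bernoulli sum rather than the deconvolved $v_{jk}^{\neg ij}$):

```python
import numpy as np

def expected_log_alpha_plus_count(gammas, alpha):
    """E_q[log(alpha + n)] for n = sum of independent Bernoulli(gamma_i)
    variables: build q(n = m) by repeated convolution of [1-g, g] vectors,
    then take the expectation of log(alpha + m) under that distribution."""
    dist = np.array([1.0])                       # q(n = 0) = 1 for the empty sum
    for g in gammas:
        dist = np.convolve(dist, [1.0 - g, g])   # add one Bernoulli variable
    m = np.arange(len(dist))
    return float(dist @ np.log(alpha + m))
```

Each convolution costs O(n), so building the full distribution is O(n^2), matching the impractical cost cited in the appendix and motivating the Gaussian approximation.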
Dynamic Foreground/Background Extraction from Images and Videos using Random Patches

Le Lu∗
Integrated Data Systems Department, Siemens Corporate Research, Princeton, NJ 08540
le-lu@siemens.com

Gregory Hager
Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218
hager@cs.jhu.edu

Abstract

In this paper, we propose a novel exemplar-based approach to extract dynamic foreground regions from a changing background within a collection of images or a video sequence. By using image segmentation as a pre-processing step, we convert this traditional pixel-wise labeling problem into a lower-dimensional supervised, binary labeling procedure on image segments. Our approach consists of three steps. First, a set of random image patches is spatially and adaptively sampled within each segment. Second, these sets of extracted samples are formed into two "bags of patches" to model the foreground and background appearance, respectively. We perform a novel bidirectional consistency check between new patches from incoming frames and the current "bags of patches" to reject outliers, control model rigidity and make the model adaptive to new observations. Within each bag, image patches are further partitioned and resampled to create an evolving appearance model. Finally, the foreground/background decision over segments in an image is formulated using an aggregation function defined on the similarity measurements of sampled patches relative to the foreground and background models. The essence of the algorithm is conceptually simple and can be easily implemented within a few hundred lines of Matlab code. We evaluate and validate the proposed approach on extensive real examples of object-level image mapping and tracking in a variety of challenging environments. We also show that it is straightforward to apply our problem formulation to non-rigid object tracking in difficult surveillance videos.
1 Introduction

In this paper, we study the problem of object-level figure/ground segmentation in images and video sequences. The core problem can be defined as follows: given an image X with known figure/ground labels L, infer the figure/ground labels L′ of a new image X′ closely related to X. For example, we may want to extract a walking person in an image using the figure/ground mask of the same person in another image of the same sequence. Our approach is based on training a classifier on the appearance of a pixel and its surrounding context (i.e., an image patch centered at the pixel) to recognize other similar pixels across images. To apply this process to a video sequence, we also evolve the appearance model over time. A key element of our approach is the use of a prior segmentation to reduce the complexity of the segmentation process. As argued in [22], image segments are a more natural primitive for image modeling than pixels. More specifically, an image segmentation provides a natural dimensional reduction from the spatial resolution of the image to a much smaller set of spatially compact and relatively homogeneous regions. We can then focus on representing the appearance characteristics of these regions. Borrowing a term from [22], we can think of each region as a "superpixel" which represents a complex connected spatial region of the image using a rich set of derived image features. We can then consider how to classify each superpixel (i.e., image segment) as foreground or background, and then project this classification back into the original image to create the pixel-level foreground/background segmentation we are interested in. The original superpixel representation in [22, 19, 18] is a feature vector created from the image segment's color histogram [19], filter bank responses [22], oriented energy [18] and contourness [18].

∗ This work was done while the first author was a graduate student at Johns Hopkins University.
These features are effective for image segmentation [18], or for finding perceptually important boundaries from a segmentation by supervised training [22]. However, as shown in [17], they do not work well for matching different classes of image regions across different images. Instead, we propose using a set of spatially randomly sampled image patches as a non-parametric, statistical superpixel representation. This non-parametric "bag of patches" model1 can be easily and robustly evolved with the spatial-temporal appearance information from video, while maintaining the model size (the number of image patches per bag) using adaptive sampling. Foreground/background classification is then posed as the problem of matching sets of random patches from the image with these models. Our major contributions are demonstrating the effectiveness and computational simplicity of a nonparametric random patch representation for semantically labelling superpixels, and a novel bidirectional consistency check and resampling strategy for robust foreground/background appearance adaptation over time.

Figure 1: (a) An example indoor image, (b) the segmentation result using [6] coded in random colors, (c) the boundary pixels between segments shown in red, with the image segments associated with the foreground (a walking person here) shown in blue, (d) the associated foreground/background mask. Notice that the color in (a) is not very saturated; this is common in our indoor experiments, which use no specific lighting controls.

We organize the paper as follows. We first describe several image-patch-based representations and their associated matching methods. In section 3, the algorithm used in our approach is presented in detail.
We demonstrate the validity of the proposed approach using experiments on real examples of object-level figure/ground image mapping and non-rigid object tracking under dynamic conditions, from videos of different resolutions, in section 4. Finally, we summarize the contributions of the paper and discuss possible extensions and improvements.

2 Image Patch Representation and Matching

Building stable appearance representations of image patches is fundamental to our approach. There are many derived features that can be used to represent the appearance of an image patch. In this paper, we evaluate our algorithm based on: 1) an image patch's raw RGB intensity vector, 2) its mean color vector, 3) color + texture descriptor (filter bank response or Haralick feature [17]), and 4) PCA, LDA and NDA (Nonparametric Discriminant Analysis) features [7, 3] on the raw RGB vectors. For completeness, we give a brief description of each of these techniques.

Texture descriptors: To compute texture descriptions, we first apply the Leung-Malik (LM) filter bank [13], which consists of 48 isotropic and anisotropic filters with 6 directions, 3 scales and 2 phases; each image patch is thus represented by a 48-component feature vector. The Haralick texture descriptor [10] was used for image classification in [17]. Haralick features are derived from the Gray Level Co-occurrence Matrix (GLCM), a tabulation of how often different combinations of pixel brightness values (grey levels) occur in an image region. We selected 5 of the 14 texture descriptors [10]: dissimilarity, Angular Second Moment (ASM), mean, standard deviation (STD) and correlation. For details, refer to [10, 17].

1 Highly distinctive local features [16] are not adequate substitutes for image patches. Their spatial sparseness limits their representativeness within each individual image segment, especially for the non-rigid, non-structural and flexible foreground/background appearance.
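To make the GLCM construction concrete, here is a minimal sketch of the five selected descriptors. The helper names, the quantization scheme and the normalizations are our own simplifications, not the exact formulas of [10]:

```python
import numpy as np

def glcm(gray, levels=8, dr=0, dc=1):
    """Normalized gray-level co-occurrence matrix of a [0, 1] gray patch
    for a single non-negative pixel offset (dr, dc)."""
    q = np.minimum((np.asarray(gray, float) * levels).astype(int), levels - 1)
    h, w = q.shape
    src, dst = q[:h - dr, :w - dc], q[dr:, dc:]   # co-occurring pixel pairs
    m = np.zeros((levels, levels))
    np.add.at(m, (src.ravel(), dst.ravel()), 1.0)
    return m / m.sum()

def haralick_subset(gray):
    """Dissimilarity, ASM, GLCM mean, GLCM standard deviation and correlation,
    the five descriptors named in the text (simplified normalizations)."""
    p = glcm(gray)
    i, j = np.indices(p.shape)
    mu = (p * i).sum()                         # GLCM mean
    sd = np.sqrt((p * (i - mu) ** 2).sum())    # GLCM standard deviation
    dissimilarity = (p * np.abs(i - j)).sum()
    asm = (p ** 2).sum()                       # Angular Second Moment
    corr = (p * (i - mu) * (j - mu)).sum() / (sd ** 2 + 1e-12)
    return np.array([dissimilarity, asm, mu, sd, corr])
```

Concatenating such descriptors over a few offsets with the patch's mean color would give a CHA-style feature vector.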
Dimension reduction representations: The Principal Component Analysis (PCA) algorithm is used to reduce the dimensionality of the raw color intensity vectors of image patches. PCA makes no prior assumptions about the labels of the data. However, recall that we construct the "bag of patches" appearance model from sets of labelled image patches. This supervised information can be used to project the bags of patches into a subspace where they are best separated, using the Linear Discriminant Analysis (LDA) or Nonparametric Discriminant Analysis (NDA) algorithms [7, 3], which assume Gaussian or non-Gaussian class-specific distributions respectively.

Patch matching: After image patches are represented using one of the above methods, we must match them against the foreground/background models. Two methods are investigated in this paper: nearest-neighbor matching using Euclidean distance, and KDE (Kernel Density Estimation) [12] in PCA/NDA subspaces. For nearest-neighbor matching, we find, for each patch $p$, its nearest neighbors $p^F_n, p^B_n$ in the foreground/background bags, and then compute $d^F_p = \|p - p^F_n\|$ and $d^B_p = \|p - p^B_n\|$. Alternatively, an image patch's matching scores $m^F_p$ and $m^B_p$ are evaluated as probability density values of the KDE functions $KDE(p, \Omega^F)$ and $KDE(p, \Omega^B)$, where $\Omega^{F|B}$ are the bag-of-patches models. The segment-level classification is then performed as in Section 3.2.

3 Algorithms

We briefly summarize our labeling algorithm as follows. We assume that each image of interest has been segmented into spatial regions. A set of random image patches is spatially and adaptively sampled within each segment. These sets of extracted samples are formed into two "bags of patches" to model the foreground/background appearance respectively. The foreground/background decision for any segment in a new image is computed using one of two aggregation functions on the appearance similarities between its interior image patches and the foreground and background models.
Finally, for videos, within each bag, new patches from new frames are integrated through a robust bidirectional consistency check, and all image patches are then partitioned and resampled to create an evolving appearance model. As described below, this process prunes classification inaccuracies in the nonparametric image patch representations and adapts them towards current changes in foreground/background appearance. We describe each of these steps for video tracking of foreground/background segments in more detail below; image matching is treated as a special case by simply omitting steps 3 and 4 in Figure 2.

Non-parametric Patch Appearance Modelling-Matching Algorithm

Inputs: pre-segmented images $X_t$, $t = 1, 2, ..., T$; label $L_1$
Outputs: labels $L_t$, $t = 2, ..., T$; two "bags of patches" appearance models for foreground/background, $\Omega^{F|B}_T$

1. Sample segmentation-adaptive random image patches $\{P_1\}$ from image $X_1$.
2. Construct two new bags of patches $\Omega^{F|B}_1$ for foreground/background using the patches $\{P_1\}$ and label $L_1$; set $t = 1$.
3. Set $t = t + 1$; sample segmentation-adaptive random image patches $\{P_t\}$ from image $X_t$; match $\{P_t\}$ with $\Omega^{F|B}_{t-1}$ and classify the segments of $X_t$ to generate label $L_t$ by aggregation.
4. Classify and reject ambiguous patch samples, probable outliers and redundant appearance samples among the newly extracted image patches $\{P_t\}$ against $\Omega^{F|B}_{t-1}$; then integrate the filtered $\{P_t\}$ into $\Omega^{F|B}_{t-1}$ and evaluate the probability of survival $p_s$ for each patch inside $\Omega^{F|B}_{t-1}$ against the original unprocessed $\{P_t\}$ (bidirectional consistency check).
5. Perform the random partition and resampling process according to the normalized product of the probability of survival $p_s$ and the partition-wise sampling rate $\gamma'$ inside $\Omega^{F|B}_{t-1}$ to generate $\Omega^{F|B}_t$.
6. If $t = T$, output $L_t$, $t = 2, ..., T$, and $\Omega^{F|B}_T$, then exit; if $t < T$, go to step 3.
Figure 2: Non-parametric Patch Appearance Modelling-Matching Algorithm.

Figure 3: Left: segment-adaptive random patch sampling from an image with known figure/ground labels; green dots are samples for the background, dark brown dots are samples for the foreground. Right: segment-adaptive random patch sampling from a new image for figure/ground classification, shown as blue dots.

3.1 Sample Random Image Patches

We first employ an image segmentation algorithm2 [6] to pre-segment all the images or video frames in our experiments. A typical segmentation result is shown in Figure 1. We use $X_t$, $t = 1, 2, ..., T$ to represent a sequence of video frames. Given an image segment, we formulate its representation as a distribution over the appearance variation of all possible image patches extracted inside the segment. To keep this representation to a manageable size, we approximate this distribution by sampling a random subset of patches. We denote an image segment as $S_i$, with $S^F_i$ for a foreground segment and $S^B_i$ for a background segment, where $i$ is the index of the (foreground/background) image segment within an image. Accordingly, $P_i$, $P^F_i$ and $P^B_i$ represent sets of random image patches sampled from $S_i$, $S^F_i$ and $S^B_i$ respectively. The cardinality $N_i$ of an image segment $S_i$ generated by [6] typically ranges from 50 pixels to several thousand. However, small and large superpixels are expected to be roughly equally uniform, so the sampling rate $\gamma_i$ of $S_i$, defined as $\gamma_i = \mathrm{size}(P_i)/N_i$, should decrease with increasing $N_i$. For simplicity, we keep $\gamma_i$ constant for all superpixels unless $N_i$ is above a predefined threshold $\tau$ (typically 2500 ∼ 3000), above which $\mathrm{size}(P_i)$ is held fixed. This sampling adaptivity is illustrated in Figure 3. Notice that large image segments have much more sparsely sampled patches than small image segments. In our experiments, this adaptive spatial sampling strategy is sufficient to represent image segments of different sizes.
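A minimal sketch of this segment-adaptive sampling rule. The helper name is our own; the constant rate $\gamma$, capped at $\gamma\tau$ patches for segments larger than $\tau$, follows the description above:

```python
import numpy as np

def sample_patch_centers(segment_pixels, gamma=0.06, tau=2500, rng=np.random):
    """Draw patch centers from a segment: a fraction gamma of its pixels,
    capped at gamma * tau centers for segments larger than tau pixels."""
    n = len(segment_pixels)
    k = max(1, min(int(round(gamma * n)), int(round(gamma * tau))))
    idx = rng.choice(n, size=k, replace=False)   # sample without replacement
    return [segment_pixels[i] for i in idx]
```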
3.2 Label Segments by Aggregating Over Random Patches

For an image segment $S_i$ from a new frame to be classified, we again first sample a set of random patches $P_i$ as its representative set of appearance samples. For each patch $p \in P_i$, we calculate its distances $d^F_p, d^B_p$ or matching scores $m^F_p, m^B_p$ with respect to the foreground and background appearance models, as described in Section 2. The decision of assigning $S_i$ to foreground or background is an aggregation over all $\{d^F_p, d^B_p\}$ or $\{m^F_p, m^B_p\}$, $p \in P_i$. Since $P_i$ is considered a set of i.i.d. samples of the appearance distribution of $S_i$, we use the average of $\{d^F_p, d^B_p\}$ or $\{m^F_p, m^B_p\}$ (i.e., first-order statistics) as its distances $D^F_{P_i}, D^B_{P_i}$ or fitness values $M^F_{P_i}, M^B_{P_i}$ with respect to the foreground/background models. In terms of distances, $D^F_{P_i} = \mathrm{mean}_{p \in P_i}(d^F_p)$ and $D^B_{P_i} = \mathrm{mean}_{p \in P_i}(d^B_p)$; the segment's foreground/background fitness is then set as the inverse of the distances, $M^F_{P_i} = 1/D^F_{P_i}$ and $M^B_{P_i} = 1/D^B_{P_i}$. In terms of KDE matching scores, $M^F_{P_i} = \mathrm{mean}_{p \in P_i}(m^F_p)$ and $M^B_{P_i} = \mathrm{mean}_{p \in P_i}(m^B_p)$. Finally, $S_i$ is classified as foreground if $M^F_{P_i} > M^B_{P_i}$, and vice versa. The median operator, which is more robust, can also be employed, with no noticeable difference in performance. Another choice is to classify each $p \in P_i$ from $m^F_p$ and $m^B_p$ and take a majority vote for the foreground/background decision of $S_i$; the performance is similar to that of the mean and median.

2 Because we are not focused on image segmentation algorithms, we choose Felzenszwalb's segmentation code, which generates good results and is publicly available at http://people.cs.uchicago.edu/~pff/segment/.

3.3 Construct a Robust Online Nonparametric Foreground/Background Appearance Model with Temporal Adaptation

From sets of random image patches extracted from superpixels with known figure/ground labels, two foreground/background "bags of patches" are composed.
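The nearest-neighbor matching of Section 2 and the mean-distance aggregation of Section 3.2 can be sketched as follows (hypothetical helper names; patches and bag entries are rows of feature vectors):

```python
import numpy as np

def nn_distances(patches, bag):
    """Distance from each patch (row) to its nearest neighbor in a bag of patches."""
    d2 = ((patches[:, None, :] - bag[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1))

def classify_segment(patches, fg_bag, bg_bag):
    """Aggregate per-patch distances: fitness = 1 / mean distance; pick the larger."""
    m_fg = 1.0 / (nn_distances(patches, fg_bag).mean() + 1e-12)
    m_bg = 1.0 / (nn_distances(patches, bg_bag).mean() + 1e-12)
    return "foreground" if m_fg > m_bg else "background"
```

Replacing `mean` with `np.median` gives the median variant mentioned in the text.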
The bags are a non-parametric form of the foreground/background appearance distributions. When we intend to "track" the figure/ground model sequentially through a sequence, these models need to be updated by integrating new image patches extracted from new video frames. However, the size (the number of patches) of each bag will grow unacceptably large if we do not also remove some redundant information over time. More importantly, imperfect segmentation results from [6] can cause inaccurate segmentation-level figure/ground labels. For robust image-patch-level appearance modeling of $\Omega_t$, we propose a novel bidirectional consistency check and resampling strategy to tackle these various noise and labelling uncertainties. More precisely, we classify newly extracted image patches $\{P_t\}$ as $\{P^F_t\}$ or $\{P^B_t\}$ according to $\Omega^{F|B}_{t-1}$, and reject ambiguous patch samples whose distances $d^F_p, d^B_p$ towards the respective $\Omega^{F|B}_{t-1}$ have no good contrast (simply, the ratio between $d^F_p$ and $d^B_p$ falls into the range 0.8 to 1/0.8). We further sort the list of distances of the newly classified foreground patches $\{P^F_t\}$ to $\Omega^F_{t-1}$, filtering out image patches at the top of the list, whose distances are too large and which are probably outliers, and patches at the bottom of the list, whose distances are too small and which probably contain appearances redundant with $\Omega^F_{t-1}$3. We perform the same process on $\{P^B_t\}$ according to $\Omega^B_{t-1}$. The filtered $\{P_t\}$ are then integrated into $\Omega^{F|B}_{t-1}$ to form $\Omega^{F'|B'}_{t-1}$, and we evaluate the probability of survival $p_s$ for each patch inside $\Omega^{F'|B'}_{t-1}$ against the original unprocessed $\{P_t\}$ with their labels4 (the bidirectional consistency check). Next, we cluster all image patches of $\Omega^{F'|B'}_{t-1}$ into $k$ partitions [8] and randomly resample image patches within each partition. This is roughly equivalent to finding the modes of an arbitrary distribution and sampling from each mode.
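The outlier/redundancy filter described above can be sketched as follows. The helper name is our own; the ambiguity ratio 0.8 and the mean ± λ·std acceptance band follow the text and footnote 3, but the exact bookkeeping is a simplification:

```python
import numpy as np

def filter_new_patches(d_fg, d_bg, ratio=0.8, lam=1.5):
    """Indices of new patches accepted into the foreground bag: reject
    ambiguous patches (distance ratio with no contrast), then keep the
    foreground-classified ones whose distance lies within mean +/- lam*std.
    Assumes at least one unambiguous foreground patch exists."""
    d_fg, d_bg = np.asarray(d_fg, float), np.asarray(d_bg, float)
    r = d_fg / (d_bg + 1e-12)
    unambiguous = (r < ratio) | (r > 1.0 / ratio)
    fg = unambiguous & (d_fg < d_bg)              # classified as foreground
    mu, sd = d_fg[fg].mean(), d_fg[fg].std()      # band rejects outliers (large d)
    keep = fg & (np.abs(d_fg - mu) < lam * sd)    # and redundant patches (small d)
    return np.where(keep)[0]
```

The same filter would be applied symmetrically to the background-classified patches.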
Ideally, the resampling rate $\gamma'$ should decrease with increasing partition size, similarly to the segment-wise sampling rate $\gamma$. For simplicity, we define $\gamma'$ as a constant for all partitions, except that we set a threshold $\tau'$ on the minimal required size5 of partitions after resampling. If we performed resampling directly over patches without partitioning, some modes of the appearance distribution might be mistakenly removed; this strategy instead represents all partitions with a sufficient number of image patches, regardless of their different sizes. In all, we resample the image patches of $\Omega^{F|B}_{t-1}$, according to the normalized product of the probability of survival $p_s$ and the partition-wise sampling rate $\gamma'$, to generate $\Omega^{F|B}_t$. Because the expected bag model size is approximately fixed, the number of image patches from a given frame $X_t$ remaining in the bag decays exponentially in time.

The problem of partitioning the image patches in a bag can be formulated as the NP-hard k-center problem, defined as follows: given a data set of $n$ points and a predefined cluster number $k$, find a partition of the points into $k$ subgroups $P_1, P_2, ..., P_k$ with centers $c_1, c_2, ..., c_k$ that minimizes the maximum cluster radius $\max_i \max_{p \in P_i} \|p - c_i\|$, where $i$ indexes the clusters. Gonzalez [8] proposed an efficient greedy algorithm, farthest-point clustering, which is proved to give a factor-2 approximation of the optimum. The algorithm operates as follows: pick a random point $p_1$ as the first cluster center and add it to the center set $C$; for iterations $i = 2, ..., k$, find the point $p_i$ farthest from the current center set $C$, i.e. maximizing $d_i(p_i, C) = \min_{c \in C} \|p_i - c\|$, and add $p_i$ to $C$; finally, assign each data point to its nearest center and recompute the means of the clusters in $C$. Compared with the popular k-means algorithm, this algorithm is computationally efficient and theoretically bounded6.
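Gonzalez's farthest-point clustering can be sketched in a few lines (our own helper name; the sketch returns center indices and assignments, omitting the final recomputation of cluster means):

```python
import numpy as np

def farthest_point_clustering(points, k, rng=np.random):
    """Gonzalez's greedy 2-approximation for k-center: start from a random
    point, repeatedly add the point farthest from the current center set,
    then assign every point to its nearest center."""
    points = np.asarray(points, float)
    centers = [int(rng.randint(len(points)))]
    d = np.linalg.norm(points - points[centers[0]], axis=1)  # dist to center set
    for _ in range(1, k):
        nxt = int(np.argmax(d))                              # farthest point
        centers.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    all_d = np.stack([np.linalg.norm(points - points[c], axis=1) for c in centers])
    return np.array(centers), np.argmin(all_d, axis=0)
```

Each of the k rounds is a single pass over the data, so the cost is O(nk), versus the repeated full iterations of k-means.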
In this paper, we employ the Euclidean distance between an image patch and a cluster center, using the raw RGB intensity vector or the feature representations discussed in Section 2.

3 Simply, we reject patches whose distances $d^F_p$ are larger than $\mathrm{mean}(d^F_p) + \lambda \cdot \mathrm{std}(d^F_p)$ or smaller than $\mathrm{mean}(d^F_p) - \lambda \cdot \mathrm{std}(d^F_p)$, where $\lambda$ controls the range of accepted patch samples of $\Omega^{F|B}_{t-1}$, called the model rigidity.
4 For example, we compute the distance of each patch in $\Omega^{F'}_{t-1}$ to $\{P^F_t\}$, and convert these into survival probabilities using an exponential function of the negative covariance-normalized distances. Patches with smaller distances have higher chances of surviving the resampling, and vice versa. We perform the same process on $\Omega^{B'}_{t-1}$ according to $\{P^B_t\}$.
5 All image patches from partitions that are already smaller than $\tau'$ are kept during resampling.
6 This avoids the random initialization of all $k$ centers and the local iterative smoothing process of k-means, which is time-consuming in high-dimensional spaces and can converge to an undesirable local minimum.

4 Experiments

We have evaluated the image patch representations described in Section 2 for figure/ground mapping between pairs of images, on video sequences taken with both static and moving cameras. Here we summarize our results.

4.1 Evaluation on Object-level Figure/Ground Image Mapping

We first evaluate our algorithm on object-level figure/ground mapping between pairs of images, under eight configurations of image patch representations and matching criteria: nearest-neighbor distance matching on the image patch's mean color vector (MCV); on the raw color intensity vector with regular patch scanning (RCV) or with segment-adaptive patch sampling over the image (SCV); on color + filter bank response (CFB); on color + Haralick texture descriptor (CHA); on the PCA feature vector (PCA); on the NDA feature vector (NDA); and kernel density evaluation on PCA features (KDE).
In general, 8000 ∼ 12000 random patches are sampled per image. There is no apparent difference in classification accuracy for patch sizes ranging from 9 to 15 pixels and sampling rates from 0.02 to 0.10. The PCA/NDA feature vector has 20 dimensions, and KDE is evaluated on the first 3 ∼ 6 PCA features. Because the foreground figure has fewer pixels than the background, we conservatively measure classification accuracy by the foreground's pixel-level detection precision and recall. Precision is the ratio of the number of correctly detected foreground pixels to the total number of detected foreground pixels; recall is the ratio of the number of correctly detected foreground pixels to the total number of foreground pixels in the image. The patch size is 11 by 11 pixels, and the segment-wise patch sampling rate $\gamma$ is fixed at 0.06, unless stated otherwise. Using 40 pairs of (720 × 480) images with labelled figure/ground segmentations, we compare the average classification accuracies in Table 1.

           MCV    RCV    SCV    CFB    CHA    PCA    NDA    KDE
Precision  0.46   0.81   0.97   0.92   0.89   0.93   0.96   0.69
Recall     0.28   0.89   0.95   0.85   0.81   0.85   0.87   0.98

Table 1: Evaluation of classification accuracy (ratio). The first row is precision; the second row is recall.

For figure/ground extraction accuracy, SCV, using the raw color intensity vector without any dimension reduction, has the best classification ratio. MCV has the worst accuracy, which shows that pixel color alone gives poor separability between figure and ground in our data set. The four feature-based representations with reduced dimensions, CFB, CHA, PCA and NDA, have similar performance, with NDA slightly better than the others. KDE tends to be biased towards the foreground class because the background usually has a wider, flatter density distribution. The superiority of SCV over RCV shows that our segment-wise random patch sampling strategy is more effective at classifying image segments than regularly scanning the image, even when the latter uses more samples.
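The pixel-level precision and recall defined above can be computed from boolean masks as follows (a small sketch with our own function name):

```python
import numpy as np

def precision_recall(pred_fg, true_fg):
    """Foreground detection precision and recall from boolean pixel masks."""
    pred_fg, true_fg = np.asarray(pred_fg, bool), np.asarray(true_fg, bool)
    tp = np.logical_and(pred_fg, true_fg).sum()   # correctly detected fg pixels
    precision = tp / max(pred_fg.sum(), 1)        # over all detected fg pixels
    recall = tp / max(true_fg.sum(), 1)           # over all true fg pixels
    return float(precision), float(recall)
```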
As shown in Figure 4 (b), some small or irregularly-shaped image segments do not have enough patch samples to produce stable classifications.

Figure 4: An example of the evaluation on object-level figure/ground image mapping for the eight configurations (a) MCV, (b) RCV, (c) SCV, (d) CFB, (e) CHA, (f) PCA, (g) NDA and (h) KDE. The labeled figure image segments are coded in blue.

4.2 Figure/Ground Segmentation Tracking with a Moving Camera

From Figure 4 (h), we see that KDE tends to produce some false positives for the foreground. This problem can be effectively tackled by multiplying the appearance KDE by a spatial prior, which is also formulated as a KDE function of image patch coordinates. For videos with complex, appearance-changing figure/ground, imperfect segmentation results [6] cannot be completely avoided, and these can cause superpixel-based figure/ground labelling errors. However, our robust bidirectional consistency check and resampling strategy, as shown below, enables us to successfully track dynamic figure/ground segmentations in challenging scenarios, with outlier rejection, model rigidity control and temporal adaptation (as described in section 3.3). Karsten.avi shows a person walking in an uncontrolled indoor environment while tracked with a handheld camera. After we manually label frame 1, the foreground/background appearance model starts to develop, classifying new frames and being updated online. Eight example tracking frames are shown in Figure 5. Notice the significant non-rigid deformations and large scale changes of the walking person, and that the original background is completely replaced after the subject changes direction. In frame 258, we manually eliminate some false positives of the figure. The reason for this failure is that some image regions that were behind the subject begin to appear as the person walks from the left to the center of the image (starting from frame 220).
Compared to the online foreground/background appearance models at that point, these newly appearing image regions have quite different appearance from both the foreground and the background, so the foreground's spatial prior dominates the classification. We leave this issue for future work.

Figure 5: Eight example frames (720 by 480 pixels), numbered (a) 12, (b) 91, (c) 155, (d) 180, (e) 221, (f) 257, (g) 308 and (h) 329, from the 330-frame video sequence Karsten.avi. The video is captured using a handheld Panasonic PV-GS120 in standard NTSC format. Notice the significant non-rigid deformations and large scale changes of the walking person, and that the original background is completely replaced after the subject changes direction. The red pixels are on the boundaries of segments; the tracked image segments associated with the foreground walking person are coded in blue.

4.3 Non-rigid Object Tracking from Surveillance Videos

We can also apply our nonparametric treatment of dynamic random patches in Figure 2 to tracking non-rigid objects of interest in surveillance videos. The difficulty is that surveillance cameras normally capture small non-rigid figures, such as a walking person or a moving car, in a low-contrast and low-resolution format. To adapt our method to this problem, we make the following modifications. Because the task changes to localizing the figure object automatically over time, we can simply model the figure/ground regions using rectangles, and therefore no pre-segmentation [6] is needed. Random figure/ground patches are then extracted from the image regions within these two rectangles. Using the two sets of random image patches, we train an online classifier for the figure/ground classes at each time step, generate a figure appearance confidence map for the next frame and, similarly to [1], apply mean shift [4] to find the next object location by mode seeking.
In our solution, the temporal evolution of the dynamic image patch appearance models is carried out by the bidirectional consistency check and resampling described in section 3.3. Whereas [1] uses boosting for both temporal appearance model updating and classification, our online binary classification training can employ any off-the-shelf classifier, such as k-Nearest Neighbors (KNN) or a support vector machine (SVM). Our results compare favorably with the state-of-the-art algorithms [1, 9], even under more challenging scenarios.

5 Conclusion and Discussion

Although quite simple both conceptually and computationally, our algorithm for dynamic foreground/background extraction in images and videos using non-parametric appearance models produces very promising and reliable results in a wide variety of circumstances. For tracking figure/ground segments, to the best of our knowledge, this is the first attempt to solve the difficult "video matting" problem [15, 25] by robust and automatic learning. For surveillance video tracking, our results are very competitive with the state-of-the-art [1, 9] under even more challenging conditions. Our approach does not depend on an image segmentation algorithm that fully respects the boundaries of the foreground object. Our novel bidirectional consistency check and resampling process has been demonstrated to be robust and adaptive. We leave the exploration of supervised dimension reduction and density modeling techniques for image patch sets, of optimal random patch sampling strategies, and of self-tuned searching for the optimal image patch size as future work. In this paper, we extract the foreground/background by classifying individual image segments; figure/ground segmentation accuracy might be further improved by also modeling their spatial pairwise relationships, for example using a generative or discriminative random field (MRF/DRF) model or the boosting method on logistic classifiers [11].
In this paper, we focus on learning binary dynamic appearance models under the assumption that figure and ground are, to some degree, separable as distributions. Other cues, such as object shape regularization and motion dynamics for tracking, can be combined to improve performance. References [1] S. Avidan, Ensemble Tracking, CVPR, 2005. [2] Y. Boykov and M. Jolly, Interactive Graph Cuts for Optimal Boundary and Region Segmentation of Objects in N-D Images, ICCV, 2001. [3] M. Bressan and J. Vitrià, Nonparametric discriminant analysis and nearest neighbor classification, Pattern Recognition Letters, 2003. [4] D. Comaniciu and P. Meer, Mean shift: A robust approach toward feature space analysis, IEEE Trans. PAMI, 2002. [5] A. Efros and T. Leung, Texture Synthesis by Non-parametric Sampling, ICCV, 1999. [6] P. Felzenszwalb and D. Huttenlocher, Efficient Graph-Based Image Segmentation, IJCV, 2004. [7] K. Fukunaga and J. Mantock, Nonparametric discriminant analysis, IEEE Trans. PAMI, Nov. 1983. [8] T. Gonzalez, Clustering to minimize the maximum intercluster distance, Theoretical Computer Science, 38:293-306, 1985. [9] B. Han and L. Davis, On-Line Density-Based Appearance Modeling for Object Tracking, ICCV, 2005. [10] R. Haralick, K. Shanmugam and I. Dinstein, Textural features for image classification, IEEE Trans. SMC, 1973. [11] D. Hoiem, A. Efros and M. Hebert, Automatic Photo Pop-up, SIGGRAPH, 2005. [12] A. Ihler, Kernel Density Estimation Matlab Toolbox, http://ssg.mit.edu/~ihler/code/kde.shtml. [13] T. Leung and J. Malik, Representing and Recognizing the Visual Appearance of Materials using Three-Dimensional Textons, IJCV, 2001. [14] Y. Li, J. Sun, C.-K. Tang and H.-Y. Shum, Lazy Snapping, SIGGRAPH, 2004. [15] Y. Li, J. Sun and H.-Y. Shum, Video Object Cut and Paste, SIGGRAPH, 2005. [16] D. Lowe, Distinctive image features from scale-invariant keypoints, IJCV, 2004. [17] L. Lu, K. Toyama and G. Hager, A Two Level Approach for Scene Recognition, CVPR, 2005. [18] J. Malik, S. Belongie, T. Leung and J. Shi, Contour and Texture Analysis for Image Segmentation, IJCV, 2001. [19] D. Martin, C. Fowlkes and J. Malik, Learning to Detect Natural Image Boundaries Using Local Brightness, Color, and Texture Cues, IEEE Trans. PAMI, 26(5):530-549, May 2004. [20] A. Mittal and N. Paragios, Motion-based Background Subtraction using Adaptive Kernel Density Estimation, CVPR, 2004. [21] E. Nowak, F. Jurie and B. Triggs, Sampling Strategies for Bag-of-Features Image Classification, ECCV, 2006. [22] X. Ren and J. Malik, Learning a classification model for segmentation, ICCV, 2003. [23] C. Rother, V. Kolmogorov and A. Blake, Interactive Foreground Extraction using Iterated Graph Cuts, SIGGRAPH, 2004. [24] Y. Sheikh and M. Shah, Bayesian Object Detection in Dynamic Scenes, CVPR, 2005. [25] J. Wang, P. Bhat, A. Colburn, M. Agrawala and M. Cohen, Interactive Video Cutout, SIGGRAPH, 2005.
Subordinate class recognition using relational object models Aharon Bar Hillel Department of Computer Science The Hebrew University of Jerusalem aharonbh@cs.huji.ac.il Daphna Weinshall Department of Computer Science The Hebrew University of Jerusalem daphna@cs.huji.ac.il Abstract We address the problem of sub-ordinate class recognition, like the distinction between different types of motorcycles. Our approach is motivated by observations from cognitive psychology, which identify parts as the defining component of basic level categories (like motorcycles), while sub-ordinate categories are more often defined by part properties (like 'jagged wheels'). Accordingly, we suggest a two-stage algorithm: First, a relational part-based object model is learnt using unsegmented object images from the inclusive class (e.g., motorcycles in general). The model is then used to build a class-specific vector representation for images, where each entry corresponds to a model part. In the second stage we train a standard discriminative classifier to classify subclass instances (e.g., cross motorcycles) based on the class-specific vector representation. We describe extensive experimental results with several subclasses. The proposed algorithm typically gives better results than a competing one-step algorithm, or a two-stage algorithm where classification is based on a model of the sub-ordinate class. 1 Introduction Human categorization is fundamentally hierarchical, with categories organized in tree-like hierarchies. In this organization, higher nodes close to the root describe inclusive classes (like vehicles), intermediate nodes describe more specific categories (like motorcycles), and lower nodes close to the leaves capture fine distinctions between objects (e.g., cross vs. sport motorcycles). Intuitively one could expect such a hierarchy to be learnt either bottom-up or top-down (or both), but surprisingly, this is not the case.
In fact, there is a well defined intermediate level in the hierarchy, called the basic level, which is learnt first [11]. Beyond learning order, this level is primary relative to both more specific and more inclusive levels in terms of many other psychological, anthropological and linguistic measures. The primary role of basic level categories seems related to the structure of objects in the world. In [13], Tversky & Hemenway promote the hypothesis that the explanation lies in the notion of parts. Their experiments show that basic level categories (like cars and flowers) are often described as a combination of distinctive parts (e.g., stem and petals), which are mostly unique. Higher (superordinate and more inclusive) levels are more often described by their function (e.g., 'used for transportation'), while lower (sub-ordinate and more specific) levels are often described by part properties (e.g., red petals) and other fine details. These points are illustrated in Fig. 1. This computational characterization of human categorization finds parallels in computer vision and machine learning. Specifically, traditional work in pattern recognition focused on discriminating vectors of features, where the features are shared by all objects but take different values. If we make the analogy between features and parts, this level of analysis is appropriate for sub-ordinate categories. At this level, different objects share parts but differ in the parts' values (e.g., red petals vs. yellow petals); this is called 'modified parts' in [13]. Figure 1: Left: examples of sub-ordinate and basic level classification. Top row: two motorcycle subordinate classes, sport (right) and cross (left). As members of the same basic level category, they share the same part structure. Bottom row: objects from different basic level categories, like a chair and a face, lack such natural part correspondence. Right: several parts from a learnt motorcycle model as detected in cross and sport motorcycle images.
Based on the part correspondence we can build ordered vectors of part descriptions and conduct the classification in this shared feature space. (Better seen in color.) This discrimination paradigm does not easily generalize to the classification of basic level objects, mostly because such objects do not share common informative parts, and therefore cannot be efficiently compared using an ordered vector of fixed parts. This problem is partially addressed in a more recent line of work (e.g., [5, 6, 2, 7, 9]), where part-based generative models of objects are learned directly from images. In this paradigm objects are modeled as a set of parts with spatial relations between them. The models are learnt from, and applied to, images represented as unordered feature sets (usually image patches). Learning algorithms developed within this paradigm are typically more complex and less efficient than traditional classifiers learnt in a fixed vector space. However, given the characteristics of human categorization discussed above, this seems to be the right paradigm for classifying basic level categories. These considerations suggest that sub-ordinate classification should be solved using a two-stage method: first, learn a generative model of the basic category; using this model, identify the object parts in each image and concatenate their descriptions into an ordered vector. In the second stage, the distinction between subordinate classes can be made by applying standard machine learning tools, like SVM, to the resulting ordered vectors. In this framework, the model learnt in stage 1 solves the correspondence problem: features occupying the same entry in two different image vectors correspond, since they implement the same part. With this relatively high-level representation, the distinction between subordinate categories may be expected to become easier.
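The two-stage idea can be sketched as follows: the basic-level model fixes the order of parts, each image becomes a fixed-length vector of per-part descriptors, and a linear classifier operates on that vector. A minimal illustration, where the per-part descriptors are placeholders (the actual descriptor contents are specified later in Section 2.2):

```python
import numpy as np

def image_vector(part_descriptors):
    """Concatenate the descriptors of model parts 1..P, in part order,
    into one fixed-length vector; entries at the same position in two
    image vectors then describe the same part."""
    return np.concatenate([np.asarray(d, float).ravel() for d in part_descriptors])

def linear_score(w, b, vec):
    """Second-stage linear classifier (a linear SVM decision function
    has exactly this form): sign of the score gives the subclass label."""
    return float(w @ vec + b)
```

The key property is that the basic-level model, not the subclass data, decides which feature lands in which entry.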
Similar notions, of constructing discriminative classifiers on top of generative models, have recently been proposed in the context of object localization [10] and class recognition [7]. The main motivation in these papers was to provide discriminative power to a generative model optimized by maximum likelihood. Thus the discriminative classifier for a class in [7, 10] uses a generative model of the same class as a representation scheme.1 In contrast, in this work we use a recent learning algorithm which already learns a generative relational model of basic categories using a discriminative boosting technique [2]. The new element in our approach is the learning of a model of one class (the more general basic level category) to allow the efficient discrimination of another class (the more specific sub-ordinates). Thus our main contribution lies in the use of object hierarchy, where we represent sub-ordinate classes using models of the more general, basic level class. The approach relies on a specific form of knowledge transfer between classes, and as such it is an instance of the 'learning-to-learn' paradigm. There are several potential benefits to this approach. First and most important is improved accuracy, especially when training data is scarce. For an under-sampled sub-ordinate class, the basic level model can be learnt from a larger sample, leading to a more stable representation for the second-stage SVM and a lower error rate. A second advantage becomes apparent when scalability is considered: a system which needs to discriminate between many subordinate classes will have to learn and keep considerably fewer models (only one per basic level class) if built according to our proposed 1An exception to this rule is the Caltech 101 experiment of [7], but there the discriminative classifiers for all 101 classes rely on the same two arbitrary class models.
Figure 2: Left: a Bayesian network specifying the dependencies between the hidden variables C_l, C_s and the part scale and location variables X_l^k, X_s^k for k = 1, ..., P. The part appearance variables X_a^k are independent, and so do not appear in this network. Middle: the spatial relations between 5 parts of a learnt chair model. The cyan cross indicates the position of the hidden object center C_l. Right: the implementations of the 5 parts in a chair image. (Better seen in color.) approach. Such a system can also better cope with new subordinate classes, since learning to identify a new class may rely on existing basic class models. Typically, learning generative models from unsegmented images is exponential in the number of parts and features [5, 6]. This significantly limits the richness of the generative model, to the point where it may not contain enough detail to distinguish between subclass instances. Alternatively, rich models can be learnt from images with part segmentations [4, 9], but obtaining such training data requires substantial human labor. The algorithm we use in this work, presented in [2], learns from unsegmented images, and its complexity is linear in the number of model parts and image features. We can hence learn models with many parts, providing a rich object description. In Section 3 we discuss the importance of this property. We briefly describe the model learning algorithm in Section 2.1. The details of the two-stage method are then described in Section 2.2. In Section 3 we describe experiments with subclasses from six basic level categories. We compare our proposed approach, called BLP (Basic Level Primacy), to a one-stage approach. We also compare to another two-stage approach, called SLP (Subordinate Level Primacy), in which discrimination is based on a model of the subordinate class. In most cases, the results support our claim and demonstrate the superiority of the BLP method.
2 Algorithms To learn class models, we use an efficient learning method, briefly reviewed in Section 2.1. Section 2.2 describes the techniques we use for subclass recognition. 2.1 Efficient learning of object class models The learning method from [2] learns a generative relational object model whose parameters are discriminatively optimized using an extended boosting process. The class model is learnt from a set of object images and a set of background images. An image I is represented by an unordered feature set F(I) with N_f features extracted by the Kadir & Brady feature detector [8]. The feature set usually contains several hundred features at various scales, with considerable overlap. Features are normalized to uniform size, zero mean and unit variance. They are then represented by their first 15 DCT coefficients, augmented by the image location of the feature and its scale. The object model is a generative part-based model with P parts (see the example in Fig. 2b), where each part is implemented by a single image feature. For each part, its appearance, location and scale are modeled. The appearance of parts is assumed independent, while their location and scale are relative to the unknown object location and scale. This dependence is captured by the Bayesian network model shown in Fig. 2a. It is a star-like model whose center node is a 3-dimensional hidden node C = (C_l, C_s), with the vector C_l denoting the unknown object location and the scalar C_s denoting its unknown scale. All components of the part model, including appearance, relative location and relative log-scale, are modeled by Gaussian distributions with a (scaled) identity covariance matrix. Based on this model and some simplifying assumptions, the likelihood ratio test classifier is approximated by

f(I) = max_C Σ_{k=1}^{P} max_{x ∈ F(I)} log p(x | C, θ_k) − ν    (1)

This classifier compares the first term, which represents the approximated image likelihood, to a threshold ν.
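Eq. (1) is straightforward to evaluate once the per-part log-likelihoods log p(x | C, θ_k) have been tabulated; a sketch follows, where the (N_c, P, N_f) array layout is an assumption of this illustration, not prescribed by the paper:

```python
import numpy as np

def classifier_score(log_p, nu):
    """Evaluate Eq. (1): f(I) = max_C sum_k max_{x in F(I)} log p(x|C,theta_k) - nu.

    log_p: array of shape (Nc, P, Nf) holding log p(x | C, theta_k) for every
    candidate object center/scale C, model part k, and image feature x.
    """
    per_part = log_p.max(axis=2)            # best image feature for each (C, part)
    return per_part.sum(axis=1).max() - nu  # sum over parts, best C, minus threshold
```

The efficient version in the paper does not tabulate the full array; it finds the same MAP solution by message passing in the star-shaped network, in time linear in P and N_f.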
The likelihood term approximates the image likelihood by the MAP interpretation of the model in the image, i.e., it is determined by the single best implementation of model parts by image features. This MAP solution can be found efficiently using standard message passing, in time linear in the number of parts P and the number of image features N_f. However, Maximum Likelihood (ML) parameter optimization cannot be used, since the approximation permits part repetition, and as a result the ML solution is vulnerable to repeated choices of the same part. Instead, the model is optimized to minimize a discriminative loss function. Specifically, labeling object images by +1 and background images by −1, the learning algorithm tries to minimize the exp loss of the margin, L(f) = Σ_{i=1}^{N} exp(−y_i f(I_i)), which is the loss minimized by the AdaBoost algorithm [12]. The optimization is done using an extended 'relational' boosting scheme, which generalizes the boosting technique to classifiers of the form (1). In the relational boosting algorithm, the weak hypotheses (the summands in Eq. (1)) are not merely functions of the image I, but depend also on the hidden variable C, which captures the unknown location and scale of the object. In order to find good part hypotheses, the weak learner is given the best current estimate of C, and uses it to guide the search for a discriminative part hypothesis. After the new part hypothesis is added to the model, C is re-inferred and the new estimate is used in the next boosting round. Additional tweaks improve class recognition results, including a gradient descent weak learner and a feedback loop between the optimization of a weak hypothesis and its weight. 2.2 Subclass recognition As stated in the introduction, we approach subclass recognition with a two-stage algorithm. In the first stage a model of the basic level class is applied to the image, and descriptors of the identified parts are concatenated into an ordered vector.
In the second stage the subclass label is determined by feeding this vector into a classifier trained to identify the subclass. We next present the implementation details of these two stages. Class model learning: Subclass recognition in the proposed framework depends on part consistency across images, and it is more sensitive to part identification failures than the original class recognition task. Producing an informative feature vector is only possible with a rich model with many stable parts. We therefore use a large number of features (N_f = 400) per image and a relatively fine grid of C values, with 10 × 10 locations over the entire image and 3 scales (a total of N_c = 300 possible values for the hidden variable C). We also learn large models with P = 60 parts.² Note that such large values for N_f and P are not possible in a purely generative framework such as [5, 6], due to the prohibitive computational learning complexity of O(N_f^P). In [2], model parts are learnt using a gradient-based weak learner, which tends to produce exaggerated part location models to enhance its discriminative power. In such cases parts are modeled as being unrealistically far from the object center. Here we restrict the dynamics of the location model in order to produce more realistic and stable parts. In addition, we found experimentally that when the data contain object images with rich backgrounds, subclass recognition and localization performance improves when models with increased relative location weight are used. Specifically, a part hypothesis in the model includes appearance, location and scale components with relative weights λ_i/(λ_1 + λ_2 + λ_3), i = 1, 2, 3, learnt automatically by the algorithm. We multiply λ_2 of all parts in the learnt model by a constant factor of 10 when learning from images with rich backgrounds.
Probabilistically, such an increase of λ_2 amounts to smaller location covariance, and hence to stricter demands on the accuracy of the relative locations of parts. ²In comparison, class recognition in [2] was done with N_f = 200, N_c = 108 and P = 50. Subclass discrimination: Given a learnt object model and a new image, we match to each model part the image feature which implements it in the MAP solution. We then build the feature vector representing the new image by concatenating the descriptors of the features implementing parts 1, ..., P. Each feature is described by a 21-dimensional descriptor including:
• The 15 DCT coefficients describing the feature.
• The relative (x, y) location and log-scale of the feature (relative to the computed MAP value of C).
• A normalized mean of the feature, (m − m̂)/std(m), where m is the feature's mean (over feature pixels), and m̂, std(m) are the empirical mean and standard deviation of m over the P parts in the image.
• A normalized logarithm of the feature variance, (v − v̂)/std(v), with v the logarithm of the feature's variance (over feature pixels) and v̂, std(v) the empirical mean and standard deviation of v over image parts.
• The log-likelihood of the feature (according to the part's model).
In the end, each image is represented by a vector of length 21 × P. The training set is then normalized to unit variance in all dimensions, and the standard deviations are stored in order to allow identical scaling of the test data. Vector representations are prepared in this manner for a training sample including objects from the sub-ordinate class, objects from other sub-ordinate classes of the same basic category, and background images. Finally, a linear SVM [3] is trained to discriminate the target subordinate class images from all other images. 3 Experimental results Methods: In our experiments, we regard subclass recognition as a binary classification problem in a retrieval scenario.
Specifically, the learning algorithm is given a sample of background images and a sample of unsegmented class images. Images are labeled by the subclass they represent, or as background if they contain no object from the inclusive class. The algorithm is trained to identify a specific subclass. In the test phase, the algorithm is given another sample from the same distribution of images and is asked to identify images from that subclass. Several methodological problems arise in this scenario. First, subclasses are often not mutually exclusive [13], and in many cases there are borderline instances which are inherently ambiguous. This may lead to an ill-defined classification problem. We avoid this problem in the current study by filtering the data sets, leaving only instances with clear-cut subclass affiliation. The second problem concerns performance measurement. The common measure used in related work is the equal error rate of the ROC curve (denoted here EER), i.e., the error obtained when the rates of false positives and false negatives are equal. However, as discussed in [1], this measure is not well suited to a detection scenario, where the number of positive examples is much smaller than the number of negative examples. A better measure is the equal error rate of the recall-precision curve (denoted here RPC). Subclass recognition has the same characteristics, and we therefore prefer the RPC measure; for completeness, and since the measures do not give qualitatively different results, the EER score is also provided. The algorithms compared: We compare the performance of the following three algorithms:
• Basic Level Primacy (BLP) - the two-stage method for subclass recognition described above, in which a model of the basic level category is used to form the vector representation.
• Subordinate Level Primacy (SLP) - a two-stage method for subclass recognition in which a model of the sub-ordinate level category is used to form the vector representation.
• One-stage method - classification is based on the likelihood obtained by a model of the sub-ordinate class.
The three algorithms use the same training sample in all the experiments. The class models in all the methods were implemented using the algorithm described in Section 2.1, with exactly the same parameters (reported in Section 2.2). This algorithm is competitive with current state-of-the-art methods in object class recognition [2]. Figure 3: Object images from the subclasses learnt in our experiments. We used 12 subclasses of 6 basic classes; the number of images in each subclass is given in parentheses: Motorcycles: Cross (106), Sport (156); Faces: Male (272), Female (173); Guitars: Classical (60), Electric (60); Tables: Dining (60), Coffee (60); Chairs: Dining (60), Living Room (60); Pianos: Grand (60), Upright (60). Individual faces were also considered as subclasses, and the male and female subclasses above include a single example from 4 such individuals. The second and third methods each learn a different model for each subordinate category, and use images from the other sub-ordinate classes as part of the background class during model learning. The difference is that in the third method classification is based on the model score (as in [2]), while in the second the model is only used to build a representation, and classification is done with an SVM (as in [7]). The first and second methods both employ the distinction between representation and classification, but the first uses a model of the basic category, and so tries to take advantage of the structural similarity between different subordinate classes of the same basic category. Datasets: We considered 12 subordinate classes from 6 basic categories. The images were obtained from several sources.
Specifically, we re-labeled subsets of the Caltech Motorcycle and Faces databases³ to obtain the sport and cross motorcycle subordinates and the male and female face subordinates. For these data sets we increased the weight of the location model, as mentioned in Section 2.2. We took the grand piano and electric guitar subordinate classes from the Caltech 101 dataset⁴ and supplemented them with classes of upright piano and classical guitar collected using Google Images. Finally, we used subsets of the chairs and furniture background data used in [2]⁵ to define the classes of dining and living room chairs, and dining and coffee tables. Example images from the data sets can be seen in Fig. 3. In all the experiments, the Caltech office background data was used as the background class. In each experiment half of the data was used for training and the other half for testing. In addition, we experimented with individual faces from the Caltech Faces data set. In this experiment each individual is treated as a sub-ordinate class of the Faces basic class. We filtered the faces data to include only people who have at least 20 images. There were 19 such individuals, and we report the mean error over these experiments. ³Available at http://www.robots.ox.ac.uk/~vgg/data.html. ⁴Available at http://www.vision.caltech.edu/feifeili/Datasets.htm. ⁵Available at http://www.cs.huji.ac.il/~aharonbh/#Data. Figure 4: Left: RPC error rates as a function of the number of model parts P in the two-stage BLP method, for 5 ≤ P ≤ 60. The curves are presented for 6 representative subclasses, one from each basic level category shown in Fig. 3. Right: classification error of the first-stage classifier as a function of P.
This graph reports errors for the 6 basic level models used in the experiments reported in the left graph. In general, while adding parts beyond 30 yields only a minor improvement in inclusive class recognition, it significantly improves subclass recognition performance. Classification results: Table 1 summarizes the classification results. We can see that both two-stage methods perform better than the one-stage method. This shows the advantage of the distinction between representation and classification, which allows the two-stage methods to use the more powerful SVM classifier. When comparing the two two-stage methods, BLP is a clear winner in 7 of the 13 experiments, while SLP has a clear advantage only in a single case. The representation based on the basic level model is hence usually preferable for the fine discriminations required. Overall, the BLP method is clearly superior to the other two methods in most of the experiments, achieving results comparable or superior to the others in 11 of the 13 problems. It is interesting to note that SLP and BLP show comparable performance on the individual face subclasses. Note, however, that in this case BLP is far more economical, learning and storing a single face model instead of the 19 individual models used by SLP.

Table 1: Error rates (in percent) when separating subclass images from non-subclass and background images. The main numbers are the equal error rate of the recall-precision curve (RPC); the equal error rate of the ROC (EER) is given in parentheses. For the individuals subclasses, the mean over 19 people is reported (marked by *). Overall, the BLP method shows a clear advantage.

Subclass            One-stage method   Subordinate level primacy   Basic level primacy
Cross motor.        14.5 (12.7)        9.9 (3.5)                   5.5 (1.7)
Sport motor.        10.5 (5.7)         6.6 (5.0)                   4.6 (2.6)
Males               20.6 (12.4)        24.7 (19.4)                 21.9 (16.7)
Females             10.6 (7.1)         10.6 (7.9)                  8.2 (5.9)
Dining chair        6.7 (3.6)          0 (0)                       0 (0)
Living room chair   6.7 (6.7)          0 (0)                       0 (0)
Coffee table        13.3 (6.2)         8.4 (6.7)                   3.3 (3.6)
Dining table        6.7 (3.6)          4.9 (3.6)                   0 (0)
Classic guitar      4.9 (3.1)          3.3 (0.5)                   6.7 (3.1)
Electric guitar     6.7 (3.6)          3.3 (3.6)                   3.3 (2.6)
Grand piano         10.0 (3.6)         10.0 (3.6)                  6.7 (4.0)
Upright piano       3.3 (3.6)          10.0 (6.7)                  3.3 (0.5)
Individuals         27.5* (24.8*)      17.9* (7.3*)                19.2* (6.5*)

Performance as a function of the number of parts: Fig. 4 presents errors as a function of P, the number of class model parts. The left graph plots RPC errors of the two-stage BLP method on 6 representative data sets. The right graph plots the errors of the first-stage class models in the task of discriminating the basic level classes from background images. While the performance of inclusive class recognition stabilizes after ~30 parts, the error rates in subclass recognition continue to drop significantly for most subclasses well beyond 30 parts. It seems that while later boosting rounds contribute only marginally to class recognition in the first stage of the algorithm, the added parts enrich the class representation and allow better subclass recognition in the second stage. 4 Summary and Discussion We have addressed in this paper the challenging problem of distinguishing between subordinate classes of the same basic level category.
We showed that two augmentations contribute to performance on such problems: first, using a two-stage method in which representation and classification are handled separately; second, using a larger sample from the more general basic level category to build a richer representation. We described a specific two-stage method and experimentally showed its advantage over two alternative variants. The idea of separating representation from classification in this way was already discussed in [7]. However, our method differs both in motivation and in some important technical details. Technically, we use an efficient algorithm to learn the generative model, and are therefore able to use a rich representation with dozens of parts (in [7] the representation typically includes 3 parts). Our experiments show that the large number of model parts is critical for the success of the two-stage method. The more important difference is that we use the hierarchy of natural objects and learn the representation model for a more general class of objects, the basic level class (BLP). We show experimentally that this is preferable to using a model of the target subordinate (SLP). This distinction and its experimental support are our main contribution. Compared with the more traditional SLP method, the BLP method suggested here enjoys two significant advantages. First and most importantly, its accuracy is usually superior, as demonstrated by our experiments. Second, its computational cost of learning is much lower, since multiple SVM training sessions are typically much shorter than multiple applications of relational model learning. In our experiments, learning a generative relational model per class (or subclass) required 12-24 hours, while SVM training was typically done in a few seconds. This advantage becomes more pronounced as the number of subclasses of the same class increases; as scalability becomes an issue, it grows in importance.
References [1] S. Agarwal and D. Roth. Learning a sparse representation for object detection. In ECCV, 2002. [2] A. Bar-Hillel, T. Hertz, and D. Weinshall. Efficient learning of relational object class models. In ICCV, 2005. [3] G.C. Cawley. MATLAB SVM Toolbox [http://theoval.sys.uea.ac.uk/~gcc/svm/toolbox]. [4] P. Felzenszwalb and D. Huttenlocher. Pictorial structures for object recognition. IJCV, 61:55-79, 2005. [5] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale invariant learning. In CVPR, 2003. [6] R. Fergus, P. Perona, and A. Zisserman. A sparse object category model for efficient learning and exhaustive recognition. In CVPR, 2005. [7] A.D. Holub, M. Welling, and P. Perona. Combining generative models and Fisher kernels for object class recognition. In ICCV, 2005. [8] T. Kadir and M. Brady. Scale, saliency and image description. IJCV, 45(2):83-105, November 2001. [9] B. Leibe, A. Leonardis, and B. Schiele. Combined object categorization and segmentation with an implicit shape model. In ECCV Workshop on Statistical Learning in Computer Vision, 2004. [10] M. Fritz, B. Leibe, B. Caputo, and B. Schiele. Integrating representative and discriminative models for object category detection. In ICCV, pages 1363-1370, 2005. [11] E. Rosch, C.B. Mervis, W.D. Gray, D.M. Johnson, and P. Boyes-Braem. Basic objects in natural categories. Cognitive Psychology, 8:382-439, 1976. [12] R.E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297-336, 1999. [13] B. Tversky and K. Hemenway. Objects, parts, and categories. Journal of Experimental Psychology: General, 113(2):169-197, 1984.
Dirichlet-Enhanced Spam Filtering based on Biased Samples

Steffen Bickel and Tobias Scheffer
Max-Planck-Institut für Informatik, Saarbrücken, Germany
{bickel, scheffer}@mpi-inf.mpg.de

Abstract

We study a setting that is motivated by the problem of filtering spam messages for many users. Each user receives messages according to an individual, unknown distribution, reflected only in the unlabeled inbox. The spam filter for a user is required to perform well with respect to this distribution. Labeled messages from publicly available sources can be utilized, but they are governed by a distinct distribution, not adequately representing most inboxes. We devise a method that minimizes a loss function with respect to a user's personal distribution based on the available biased sample. A nonparametric hierarchical Bayesian model furthermore generalizes across users by learning a common prior which is imposed on new email accounts. Empirically, we observe that bias-corrected learning outperforms naive reliance on the assumption of independent and identically distributed data; Dirichlet-enhanced generalization across users outperforms a single ("one size fits all") filter as well as independent filters for all users.

1 Introduction

Design and analysis of most machine learning algorithms are based on the assumption that the training data be drawn independently and from the same stationary distribution that the resulting model will be exposed to. In many application scenarios, however, control over the data generation process is less perfect, and so this iid assumption is often a naive over-simplification. In econometrics, learning from biased samples is a common phenomenon, where the willingness to respond to surveys is known to depend on several characteristics of the person queried; work that led to a method for correcting sample selection bias for a class of regression problems has been distinguished by a Nobel Prize [6].
In machine learning, the case of training data that is only biased with respect to the ratio of class labels has been studied [4, 7]. Zadrozny [14] has derived a bias correction theorem that applies when the bias is conditionally independent of the class label given the instance, and when every instance has a nonzero probability of being drawn into the sample. Sample bias correction for maximum entropy density estimation [3] and the analysis of the generalization error under covariate shift [12] follow the same intuition. In our email spam filtering setting, a server handles many email accounts (in case of our industrial partner, several millions), and delivers millions of emails per day. A multitude of spam and "ham" (i.e., non-spam) sources are publicly available. They include collections of emails caught in "spam traps" – email addresses that are published on the web in an invisible font and are harvested by spammers [11] – the Enron corpus that was disclosed in the course of the Enron trial [8], and SpamAssassin data. These collections have diverse properties and none of them represents the global distribution of all emails, let alone the distribution received by some particular user. The resulting bias does not only hinder learning, but also leads to skewed accuracy estimates, since individuals may receive a larger proportion of emails that a filter classifies less confidently. The following data generation model is paramount to our problem setting. An unknown process, characterized by a distribution p(θi|β), generates parameters θi. The θi parameterize distributions p(x, y|θi) over instances x (emails) and class labels y. Each p(x, y|θi) corresponds to the i-th user's distribution of incoming spam (y = +1) or ham (y = −1) messages x. The goal is to obtain a classifier fi : x → y for each θi that minimizes the expectation of some loss function E_(x,y)∼θi[ℓ(f(x), y)], defined with respect to the (unknown) distribution θi.
Labeled training data L are drawn from a blend of data sources (public email archives), resulting in a density p(x, y|λ) = p(x|λ)p(y|x, λ) with parameter λ that governs L. The relation between the θi and λ is such that (a) any x that has nonzero probability density p(x|λ) of being drawn into the sample L also has a nonzero probability p(x|θi) under the target distributions θi; and (b) the concept of spam is consensual for all users and the labeled data; i.e., p(y|x, λ) = p(y|x, θi) for all users i. In addition to the (nonempty) labeled sample, zero or more unlabeled data Ui are available for each θi and are drawn according to θi. The unlabeled sample Ui is the inbox of user i. The inbox is empty for a newly established account and grows from there on. Our problem setting corresponds to an application scenario in which users are not prepared to manually tag spam messages in their inbox. Due to privacy and legal constraints, we are not allowed to personally read (or label) any single personal email; but the unlabeled messages may be used as input to an automated procedure. The individual distributions θi are neither independent (identical spam messages are sent to many users), nor are they likely to be identical: distributions of inbound messages vary greatly between (professional, recreational, American, Chinese, ...) email users. We develop a nonparametric hierarchical Bayesian model that allows us to impose a common prior on new θi. Such generalization may be particularly helpful for users with little or no available data Ui. The desired outcome of the learning process is an array of personalized spam filters for all users. The rest of this paper is structured as follows. We devise our solution in Section 2. In Section 3, we study the effectiveness of correcting sample bias for spam, and of using a Dirichlet process to generalize across users, experimentally. Section 4 concludes. 
2 Learning from Biased Data

The available labeled data L are governed by p(x|λ); directly training a classifier on L would therefore minimize the expected loss E_(x,y)∼λ[ℓ(f(x), y)] with respect to p(x|λ). By contrast, the task is to find classifiers fi that minimize, for user i, the expected loss E_(x,y)∼θi[ℓ(f(x), y)] with respect to p(x|θi). We can minimize the loss with respect to θi from a sample L whose instances are governed by λ when each instance is re-weighted. The weights have to be chosen such that minimizing the loss on the weighted sample L amounts to minimizing the loss with respect to θi. In order to derive weighting factors with this property, consider the following model of the process that selects the labeled sample L. After drawing an instance x according to p(x|θi), a coin s is tossed with probability p(s|x, θi, λ). We move x into the labeled sample (and add the proper class label) if s = 1; otherwise, x is discarded. Our previous assumption that any x with positive p(x|λ) also has a positive p(x|θi) implies that there exists a p(s|x, θi, λ) such that

p(x|λ) ∝ p(x|θi) p(s = 1|x, θi, λ).   (1)

That is, repeatedly executing the above process with an appropriate p(s|x, θi, λ) will create a sample of instances governed by p(x|λ). Equation 1 defines p(s|x, θi, λ); the succeeding subsections will be dedicated to estimating it from the available data. Since p(s|x, θi, λ) describes the discrepancy between the sample distribution p(x|λ) and the target p(x|θi), we refer to it as the sample bias. But let us first show that minimizing the loss on L with instances re-weighted by p(s = 1|x, θi, λ)^(−1) in fact minimizes the expected loss with respect to θi. The rationale behind this claim deviates only in minor points from the proof of the bias correction theorem of [14]. Proposition 1 introduces a normalizing constant p(s = 1|θi, λ). Its value can be easily obtained as it normalizes Equation 1.
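The re-weighting argument can be checked numerically on a toy discrete example (all numbers below are illustrative, not from the paper): weighting instances drawn from the biased distribution by the inverse selection probability, normalized by the constant p(s = 1|θi, λ), recovers the expected loss under the target distribution.

```python
import numpy as np

# Toy discrete domain of 3 instances with a fixed loss value per instance.
loss = np.array([1.0, 2.0, 4.0])
p_target = np.array([0.5, 0.3, 0.2])   # user's distribution p(x|theta_i)
p_sample = np.array([0.2, 0.3, 0.5])   # biased labeled-data distribution p(x|lambda)

# Equation 1: p(x|lambda) ∝ p(x|theta_i) * p(s=1|x, ...); solve for the bias.
bias = p_sample / p_target             # p(s=1|x) up to a constant factor

# Expected loss under the target distribution, computed directly ...
direct = float(np.sum(p_target * loss))

# ... and from the biased distribution, re-weighted by the inverse bias.
weights = 1.0 / bias
weights /= np.sum(p_sample * weights)  # normalizer p(s=1|theta_i, lambda)
reweighted = float(np.sum(p_sample * weights * loss))
```

Both computations give the same expected loss, which is the content of Proposition 1 below in this discrete special case.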
Proposition 1 The expected loss with respect to p(x, y|θ̄i) = p(x, y|λ) p(s = 1|θi, λ) / p(s = 1|x, θi, λ) equals the expected loss with respect to p(x, y|θi), when p(s|x, θi, λ) satisfies Equation 1.

Proof. Equation 2 expands the expected value and the definition of p(x, y|θ̄i) in Proposition 1. Equation 3 splits p(x, y|λ). We apply the definition of p(s|x, θi, λ) (Equation 1) and obtain Equation 4. Equation 4 is rewritten as an expected value.

E_(x,y)∼θ̄i[ℓ(f(x), y)] = ∫ ℓ(f(x), y) p(x, y|λ) p(s = 1|θi, λ) / p(s = 1|x, θi, λ) d(x, y)   (2)
= ∫ ℓ(f(x), y) p(y|x, λ) p(x|λ) p(s = 1|θi, λ) / p(s = 1|x, θi, λ) d(x, y)   (3)
= ∫ ℓ(f(x), y) p(y|x, θi) p(x|θi) d(x, y) = E_(x,y)∼θi[ℓ(f(x), y)]   (4)

2.1 Individualized Bias Estimation

Equation 1 says that there is an unknown p(s|x, θi, λ) with p(x|λ) ∝ p(x|θi) p(s = 1|x, θi, λ), which we call the sample bias. We will now discuss how to obtain an estimate p̂I(s|x, θi, λ). The individualized empirical sample bias is an estimate of the unknown true bias, conditioned on a user's unlabeled inbox Ui and labeled data L; hence, p̂I(s|x, θi, λ) = p(s|x, Ui, L). Equation 1 immediately implies

p(s = 1|x, θi, λ) ∝ p(x|λ) / p(x|θi),   (5)

but neither p(x|λ) nor p(x|θi) is known. However, distribution p(x|λ) is reflected in the labeled sample L, and distribution p(x|θi) in the unlabeled inbox Ui. Instances in L are examples that have been selected into the labeled sample; i.e., s = 1 for x ∈ L. Instances in Ui have not been selected into the labeled sample; i.e., s = 0 for x ∈ Ui. We define s_{Ui,L} to be the vector of selection decisions for all instances in Ui and L. That is, s_{Ui,L} contains |Ui| elements that are 0, and |L| elements that are 1. A density estimator p̂(s|x, λ, θi) can be trained on the instances in L and Ui, using the vector s_{Ui,L} as target variable. We use a regularized logistic regression density estimator parameterized with wi:

p̂I(s = 1|x, λ, θi) = p(s = 1|x; wi) = 1 / (1 + e^⟨wi,x⟩).
(6)

The likelihood of the density estimator is

P(s_{Ui,L}|w, Ui, L) = ∏_{xu ∈ Ui} p(s = 0|xu, w) ∏_{xℓ ∈ L} p(s = 1|xℓ, w).   (7)

We train parameters wi = argmax_w log P(s_{Ui,L}|w, Ui, L) + log η(w) (we write η(w) for the regularizer) [15] using the fast implementation of regularized logistic regression of [9].

2.2 Dirichlet-Enhanced Bias Estimation

This section addresses estimation of the sample bias p(s|x, θn+1, λ) for a new user n+1 by generalizing across existing users U1, . . . , Un. The resulting estimate p̂D(s|x, θn+1, λ) will be conditioned on the new user's inbox Un+1 and the labeled data L, but also on all other users' inboxes. We write p̂D(s|x, θn+1, λ) = p(s|x, Un+1; L, U1, . . . , Un) for the Dirichlet-enhanced empirical sample bias. Equation 1 says that there is a p(s = 1|x, θn+1, λ) for user n+1 that satisfies Equation 5. Let us assume a parametric form (we employ a logistic model), and let wn+1 be the parameters that satisfy p(s = 1|x, θn+1, λ) = p(s = 1|x; wn+1) ∝ p(x|λ)/p(x|θn+1). We resort to a Dirichlet process (DP) [5] G(wi) as a model for the prior belief on wn+1 given w1, . . . , wn. A Dirichlet process G|{α, G0} ∼ DP(α, G0) with concentration parameter α and base distribution G0 generates parameters wi: the first element w1 is drawn according to G0, in our case the uninformed prior. It generates wn+1 according to Equation 8, where δ(wi) is a point distribution centered at wi.

wn+1|w1, . . . , wn ∼ (αG0 + Σ_{i=1}^n δ(wi)) / (α + n)   (8)

Equation 9 integrates over the parameter of the bias for new user n+1. Equation 10 splits the posterior into the likelihood of the sample selection coin tosses and the common prior, which is modeled as a Dirichlet process.

p(s|x, Un+1; L, U1, . . . , Un) = ∫ p(s|x; w) p(w|Un+1; L, U1, . . . , Un) dw   (9)

p(w|Un+1, L, U1, . . . , Un) ∝ P(s_{Un+1,L}|w, Un+1, L) Ĝ(w|L, U1, . . . , Un)   (10)

Likelihood P(s_{Un+1,L}|w, Un+1, L) is resolved in Equation 7 for a logistic model of the bias.
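A minimal sketch of the individualized bias estimator on synthetic data: a logistic model is trained to discriminate the labeled sample (s = 1) from an unlabeled inbox (s = 0), and its predicted selection probabilities yield importance weights for the labeled data. Plain gradient ascent stands in for the regularized solver of [9], and the common sign convention 1/(1 + e^(−⟨w,x⟩)) is used rather than the form in Equation 6; all data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_bias_model(L, U, steps=2000, lr=0.5, reg=1e-3):
    """Logistic p(s=1|x; w): discriminate labeled data (s=1) from inbox (s=0)."""
    X = np.vstack([L, U])
    s = np.concatenate([np.ones(len(L)), np.zeros(len(U))])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))                # sigmoid, sign convention flipped vs. Eq. 6
        w += lr * (X.T @ (s - p) / len(s) - reg * w)    # ascend the regularized log-likelihood
    return w

# Toy "labeled source" and "user inbox" drawn from shifted Gaussians.
L = rng.normal(loc=0.0, size=(300, 2))
U = rng.normal(loc=1.0, size=(300, 2))

w = fit_bias_model(L, U)
p_sel = 1.0 / (1.0 + np.exp(-L @ w))        # estimated p(s=1|x) on the labeled data
weights = 1.0 / np.clip(p_sel, 1e-6, None)  # inverse-bias importance weights
weights *= len(weights) / weights.sum()     # normalize so the weights sum to |L|
```

Labeled instances that resemble the inbox receive a small estimated selection probability and hence a large weight, which emphasizes them when the final classifier is trained.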
2.3 Estimation of the Dirichlet Process

The parameters of the previous users' bias w1, . . . , wn constitute the prior wn+1|{wi}_{i=1}^n ∼ G for user n+1. Since the parameters wi are not observable, an estimate wn+1|L, {Ui}_{i=1}^n ∼ Ĝ has to be based on the available data. Exact calculation of this prior requires integrating over the w1, . . . , wn; since this is not feasible, MCMC sampling [10] or variational approximation [1] can be used. In our application, the model of p(s|x, θi, λ) involves a regularized logistic regression in a space of more than 800,000 dimensions. In each iteration of the MCMC process or the variational inference of [1], logistic density estimators for all users would need to be trained, which is prohibitive. We therefore follow [13] and approximate the Dirichlet process as

Ĝ(w) ≈ (αG0 + Σ_{i=1}^n φi δ(w*i)) / (α + n).   (11)

Compared to the original Equation 8, the sum of point distributions at the true parameters wi is replaced by a weighted sum over point distributions at pivotal w*i. Parameter estimation is divided into two steps. First, pivotal models of the sample bias are trained for each user i, solely based on the user's inbox and the labeled data. Second, parameters φi are estimated using variational EM; they express correlations between, and allow for generalization across, multiple users. Tresp and Yu [13] suggest using a maximum likelihood estimate w*i; we implement w*i by training logistic regression models p(s = 1|x; w*i) = 1 / (1 + e^⟨w*i,x⟩) with

w*i = argmax_w log P(s_{Ui,L}|w, Ui, L) + log η(w).   (12)

Algorithmically, the pivotal models are obtained analogously to the individualized estimation of the selection bias for each user described in Section 2.1. After the pivotal models have been identified, an EM algorithm maximizes the likelihood over the parameters φi. For the E step we rely on the assumption that the posterior is a weighted sum over point distributions at the pivotal density estimates (Equation 13).
With this assumption, the posterior is no longer a continuous distribution and the E step resolves to the computation of a discrete number of variational parameters φij (Equation 14).

p̂(w|Uj, L) = Σ_{i=1}^n φij δ(w*i)   (13)

φij ∝ P(s_{Uj,L}|w*i, Uj, L) Ĝ(w*i)   (14)

Equation 11 yields the M step with φi = Σ_{j=1}^n φij. Likelihood P(s_{Uj,L}|w*i, Uj, L) is calculated as in Equation 7. The entire estimation procedure is detailed in Table 1, steps 1 through 3.

2.4 Inference

Having obtained pivotal models p(s|x; w*i) and parameters φi, we need to infer the Dirichlet-enhanced empirical sample bias p(s|x, Ui; L, U1, . . . , Un). During the training procedure, i is one of the known users from U1, . . . , Un. At application time, we may furthermore encounter a message bound for a new user n+1. Without loss of generality, we discuss the inference problem for a new user n+1. Inserting Ĝ(w) into Equations 9 and 10 leads to Equation 15. Expanding Ĝ(w) according to Equation 11 yields Equation 16.

p(s|x, Un+1; L, U1, . . . , Un) ∝ ∫ p(s|x; w) P(s_{Un+1,L}|w, Un+1, L) Ĝ(w) dw   (15)

∝ α ∫ p(s|x; w) P(s_{Un+1,L}|w, Un+1, L) G0(w) dw + Σ_{i=1}^n p(s|x; w*i) P(s_{Un+1,L}|w*i, Un+1, L) φi   (16)

The second summand in Equation 16 is determined by summing over the pivotal models p(s|x; w*i). The first summand can be determined by applying Bayes' rule in Equation 17; G0 is the uninformed prior; the resulting term p(s|x, Un+1, L) = p(s|x; w*_{n+1}) is the outcome of a new pivotal density estimator, trained to discriminate L against Un+1. It is determined as in Equation 12.

∫ p(s|x; w) P(s_{Un+1,L}|w, Un+1, L) G0(w) dw ∝ ∫ p(s|x; w) p(w|Un+1, L) dw   (17)
= p(s|x, Un+1, L)   (18)

The Dirichlet-enhanced empirical sample bias p(s|x, Un+1; L, U1, . . . , Un) for user n+1 is a weighted sum of the pivotal density estimate p(s|x; w*_{n+1}) for user n+1 and of the models p(s|x; w*i) of all users i; the latter are weighted according to their likelihood P(s_{Un+1,L}|w*i, Un+1, L) of observing the messages of user n+1.
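The E and M steps of Equations 13 and 14 reduce to simple array updates once the likelihoods are available. The sketch below uses an illustrative likelihood matrix in place of Equation 7 and folds Ĝ(w*i) into the current weights φi (the αG0 term is omitted for brevity), so it is a simplified stand-in for steps 2-3 of Table 1, not the full procedure.

```python
import numpy as np

# lik[i, j] stands in for P(s_{Uj,L} | w*_i, Uj, L): how well user i's pivotal
# bias model explains user j's selection decisions (illustrative values only).
lik = np.array([[0.9, 0.7, 0.1],
                [0.6, 0.8, 0.2],
                [0.1, 0.2, 0.9]])
n = lik.shape[0]
phi = np.ones(n)                 # initialize with phi_i = 1 (Table 1, step 2)

for _ in range(50):
    # E step (Eq. 14): phi_ij ∝ likelihood * current prior weight of w*_i
    phi_ij = lik * phi[:, None]
    phi_ij /= phi_ij.sum(axis=0, keepdims=True)   # normalize per user j
    # M step (from Eq. 11): phi_i = sum_j phi_ij
    phi = phi_ij.sum(axis=1)
```

After convergence the φi express how strongly each pivotal model contributes to the common prior; the mutually similar users 1 and 2 end up sharing weight, while the distinct user 3 retains its own.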
Inference for the users that are available at training time is carried out in step 4(a) of the training procedure (Table 1).

Table 1: Dirichlet-enhanced, bias-corrected spam filtering.

Input: labeled data L, unlabeled inboxes U1, . . . , Un.
1. For all users i = 1, . . . , n: train a pivotal density estimator p̂(s = 1|x, w*i) as in Equation 12.
2. Initialize Ĝ^0(w*i) by setting φi = 1 for i = 1, . . . , n.
3. For t = 1, . . . until convergence:
   (a) E step: for all i, j, estimate φ^t_ij from Equation 14 using Ĝ^{t−1} and the density estimators p(s|x, w*i).
   (b) M step: estimate Ĝ^t(w*i) according to Equation 11 using φi = Σ_{j=1}^n φ^t_ij.
4. For all users i:
   (a) For all x ∈ L: determine the empirical sample bias p(s|x, Ui; L, U1, . . . , Un), conditioned on the observables, according to Equation 16.
   (b) Train an SVM classifier fi : X → {spam, ham} by solving Optimization Problem 1.
Return classifiers fi for all users i.

2.5 Training a Bias-Corrected Support Vector Machine

Given the requirement of high accuracy and the need to handle many attributes, SVMs are widely acknowledged to be a good learning mechanism for spam filtering [2]. The final bias-corrected SVM fn+1 can be trained by re-sampling or re-weighting L according to

s(x) = p(s = 1|θi, λ) / p(s = 1|x, Un+1; L, U1, . . . , Un),

where p(s|x, Un+1; L, U1, . . . , Un) is the empirical sample bias and p(s = 1|θi, λ) is the normalizer that assures Σ_{x∈L} s(x) = |L|. Let xk ∈ L be an example that incurs a margin violation (i.e., slack term) of ξk. The expected contribution of xk to the SVM criterion is s(xk)ξk, because xk will be drawn s(xk) times on average into each re-sampled data set. Therefore, training the SVM on the re-sampled data and optimizing with re-scaled slack terms lead to identical optimization problems.

Optimization Problem 1 Given labeled data L, re-sampling weights s(x), and regularization parameter C; over all v, b, ξ1, . . . , ξm, minimize

(1/2)|v|² + C Σ_{k=1}^m s(xk) ξk   (19)

subject to yk(⟨v, xk⟩ + b) ≥ 1 − ξk and ξk ≥ 0 for all k = 1, . . . , m.
(20)

The bias-corrected spam filter is trained in step 4(b) of the algorithm (Table 1).

2.6 Incremental Update

The Dirichlet-enhanced bias correction procedure is intrinsically incremental, which fits the typical application scenario. When a new user n+1 subscribes to the email service, the prior wn+1|L, {Ui}_{i=1}^n ∼ Ĝ is already available. A pivotal model p(s|x, Un+1; L) can be trained; when Un+1 is still empty (the new user has not yet received emails), the regularizer of the density estimate p(s|x, Un+1, L) resolves to the uniform distribution. Inference of p(s|x, Un+1; L, U1, . . . , Un) for the new user proceeds as discussed in Section 2.4. When data Un+1 become available, the prior can be updated. This update is exercised by invoking the EM estimation procedure with additional parameters w*_{n+1} and φ_{n+1}. The estimates of P(s_{Uj,L}|w*i, Uj, L) for all pairs of existing users i and j do not change and can be reused. The EM procedure returns the updated prior wn+2|L, {Ui}_{i=1}^{n+1} ∼ Ĝ for the next new user n+2.

Table 2: Email accounts used for experimentation.

User             | Ham                        | Spam
Williams         | Enron/Williams             | Dornbos spam trap (www.dornbos.com) (part 1)
Beck             | Enron/Beck                 | spam trap of Bruce Guenter (www.em.ca/~bruceg/spam)
Farmer           | Enron/Farmer               | personal spam of Paul Wouters (www.xtdnet.nl/paul/spam)
Kaminski         | Enron/Kaminski             | spam collection of SpamArchive.org (part 1)
Kitchen          | Enron/Kitchen              | personal spam of the second author
Lokay            | Enron/Lokay                | spam collection of SpamAssassin (www.spamassassin.org)
Sanders          | Enron/Sanders              | personal spam of Richard Jones (www.annexia.org/spam)
German traveler  | Usenet/de.rec.reisen.misc  | Dornbos spam trap (www.dornbos.com) (part 2)
German architect | Usenet/de.sci.architektur  | spam collection of SpamArchive.org (part 2)

3 Experiments

In our experiments, we study the relative benefit of the following filters. The baseline is constituted by a filter that is trained under the iid assumption from the labeled data.
The second candidate is a "one size fits all" bias-corrected filter. Here, all users' messages are pooled as unlabeled data and the bias p(s|x, θn+1, λ) is modeled by an estimator p̂O(s|x, θn+1, λ) = p(s|x, ∪_{i=1}^{n+1} Ui, L). An individually bias-corrected filter uses estimators p̂I(s|x, θn+1, λ) = p(s|x, Un+1, L). Finally, we assess the Dirichlet-enhanced bias-corrected filter. It uses the hierarchical Bayesian model to determine the empirical bias p̂D(s|x, θn+1, λ) = p(s|x, Un+1; L, U1, . . . , Un) conditioned on the new user's messages, the labeled data, and all previous users' messages. Evaluating the filters with respect to the personal distributions of messages requires labeled emails from distinct users. We construct nine accounts using real but disclosed messages. Seven of them contain ham emails received by distinct Enron employees from the Enron corpus [8]; we use the individuals with the largest numbers of messages from a set of mails that have been cleaned from spam. We simulate two foreign users: the "German traveler" receives postings to a moderated German traveling newsgroup, the "German architect" postings to a newsgroup on architecture. Each account is augmented with between 2551 and 6530 spam messages from a distinct source; see Table 2. The number of ham emails varies between 1189 and 5983, roughly reflecting natural ham-to-spam ratios. The ham section of the labeled data L contains 4000 ham emails from the SpamAssassin corpus, 1000 newsletters, and 500 emails from Enron employee Taylor. The labeled data contain 5000 spam emails relayed by blacklisted servers. The data are available from the authors. The total of 76,214 messages are transformed into binary term occurrence vectors with a total of 834,661 attributes; charset and base64 decoding are applied, email headers are discarded, and tokens occurring fewer than 4 times are removed.
SVM parameter C, concentration parameter α, and the regularization parameter of the logistic regression are adjusted on a small reserved tuning set. We iterate over all users and let each one play the role of the new user n+1. We then iterate over the size of the new user's inbox and average 10 repetitions of the evaluation process, sampling Un+1 from the inbox and using the remaining messages as hold-out data for performance evaluation. We train the different filters on identical samples and measure the area under the ROC curve (AUC). Figure 1 shows the AUC performance of the iid baseline and the three bias-corrected filters for the first two Enron users and one of the German users. Error bars indicate the standard error of the difference to the iid filter.

Figure 1: AUC of the iid baseline and the three bias-corrected filters versus size of |Un+1| (shown for users Williams, Beck, and the German traveler; curves: Dirichlet, one size fits all, individual, iid baseline).

Figure 2: Average reduction of 1-AUC risk over all nine users (left); reduction of 1-AUC risk dependent on strength of iid violation (center); number of existing users vs. training time (right).
Figure 2 (left) aggregates the results over all nine users by averaging the rate by which the risk 1 − AUC is reduced. We compute this reduction as 1 − (1 − AUC_corrected)/(1 − AUC_baseline), where AUC_corrected is the AUC of one of the bias-corrected filters and AUC_baseline is the AUC of the iid filter. The benefit of the individualized bias correction depends on the number of emails available for that user; the 1 − AUC risk is reduced by 35-40% when many emails are available. The "one size fits all" filter is almost independent of the number of emails of the new user. On average, the Dirichlet-enhanced filter reduces the risk 1 − AUC by about 35% for a newly created account and by almost 40% when many personal emails have arrived. It outperforms the "one size fits all" filter even for an empty Un+1, because fringe accounts (e.g., the German users) can receive a lower weight in the common prior. The baseline AUC of over 0.99 is typical for server-sided spam filtering; a 40% risk reduction that yields an AUC of 0.994 is still a very significant improvement of the filter, which can be spent on a substantial reduction of the false positive rate or on a higher rate of spam recognition. The question arises how strong a violation of the iid assumption the bias correction techniques can compensate for. In order to investigate this, we control the violation of the iid property of the labeled data as follows. We create a strongly biased sample by using only Enron users as test accounts θi and not using any Enron emails in the labeled data. We vary the proportion of strongly biased data versus randomly drawn Enron mails in the labeled training data (no email occurs in the training and testing data at the same time). When this proportion is zero, the labeled sample is drawn iid from the testing distributions; when it reaches 1, the sample is strongly biased. In Figure 2 (center) we observe that, averaged over all users, bias correction is effective when the iid violation lies in a mid-range.
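The risk-reduction measure is simple arithmetic; for instance, the numbers quoted above (baseline AUC 0.99 improved to 0.994) correspond to removing 40% of the 1 − AUC risk.

```python
def risk_reduction(auc_corrected, auc_baseline):
    """Fraction of the 1 - AUC risk removed by the corrected filter."""
    return 1.0 - (1.0 - auc_corrected) / (1.0 - auc_baseline)

# Baseline risk 0.01, corrected risk 0.006: 40% of the risk is removed.
reduction = risk_reduction(auc_corrected=0.994, auc_baseline=0.99)
```

Note that near-perfect baselines make this relative measure far more informative than the raw AUC difference, which here is only 0.004.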
It becomes less effective when the sample violates the iid assumption too strongly. In this case, "gaps" occur in λ; i.e., there are regions that have zero probability in the labeled data L ∼ λ but nonzero probability in the testing data Ui ∼ θi. Such gaps render schemes that aim at reconstructing p(x|θi) by weighting data drawn according to p(x|λ) ineffective. Figure 2 (right) displays the total training time over the number of users. We fix |Un+1| to 16 and vary the number of users that influence the prior. The iid baseline and the individually corrected filter have constant training time. The Dirichlet-enhanced filter scales linearly in the number of users that constitute the common prior; the EM algorithm, with a quadratic complexity in the number of users, contributes only marginally to the training time. The training time is dominated by the training of the pivotal models (linear complexity). The Dirichlet-enhanced filter with incremental update scales favorably compared to the "one size fits all" filter. Figure 2 is limited to the 9 accounts that we have engineered; the execution time is on the order of minutes and allows handling larger numbers of accounts.

4 Conclusion

It is most natural to define the quality criterion of an email spam filter with respect to the distribution that governs the personal emails of its user. It is desirable to utilize available labeled email data, but assuming that these data were governed by the same distribution unduly over-simplifies the problem setting. Training a density estimator to characterize the difference between the labeled training data and the unlabeled inbox of a user, and using this estimator to compensate for this discrepancy, improves the performance of a personalized spam filter, provided that the inbox contains sufficiently many messages.
Pooling the unlabeled inboxes of a group of users, training a density estimator on this pooled data, and using this estimator to compensate for the bias outperforms the individualized bias correction only when very few unlabeled data for the new user are available. We developed a hierarchical Bayesian framework which uses a Dirichlet process to model the common prior for a group of users. The Dirichlet-enhanced bias correction method estimates, and compensates for, the discrepancy between labeled training and unlabeled personal messages, learning from the new user's unlabeled inbox as well as from the data of other users. Empirically, with a 35% reduction of the 1 − AUC risk for a newly created account, the Dirichlet-enhanced filter outperforms all other methods. When many unlabeled personal emails are available, both individualized and Dirichlet-enhanced bias correction reduce the 1 − AUC risk by nearly 40% on average.

Acknowledgment This work has been supported by Strato Rechenzentrum AG and by the German Science Foundation DFG under grant SCHE540/10-2.

References

[1] D. Blei and M. Jordan. Variational methods for the Dirichlet process. In Proceedings of the International Conference on Machine Learning, 2004.
[2] H. Drucker, D. Wu, and V. Vapnik. Support vector machines for spam categorization. IEEE Transactions on Neural Networks, 10(5):1048–1055, 1999.
[3] M. Dudik, R. Schapire, and S. Phillips. Correcting sample selection bias in maximum entropy density estimation. In Advances in Neural Information Processing Systems, 2005.
[4] C. Elkan. The foundations of cost-sensitive learning. In Proceedings of the International Joint Conference on Artificial Intelligence, 2001.
[5] T. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1:209–230, 1973.
[6] J. Heckman. Sample selection bias as a specification error. Econometrica, 47:153–161, 1979.
[7] N. Japkowicz and S. Stephen. The class imbalance problem: A systematic study.
Intelligent Data Analysis, 6:429–449, 2002.
[8] B. Klimt and Y. Yang. The Enron corpus: A new dataset for email classification research. In Proceedings of the European Conference on Machine Learning, 2004.
[9] P. Komarek. Logistic Regression for Data Mining and High-Dimensional Classification. Doctoral dissertation, Carnegie Mellon University, 2004.
[10] R. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
[11] M. Prince, B. Dahl, L. Holloway, A. Keller, and E. Langheinrich. Understanding how spammers steal your e-mail address: An analysis of the first six months of data from Project Honey Pot. In Proceedings of the Conference on Email and Anti-Spam, 2005.
[12] M. Sugiyama and K.-R. Müller. Model selection under covariate shift. In Proceedings of the International Conference on Artificial Neural Networks, 2005.
[13] V. Tresp and K. Yu. An introduction to nonparametric hierarchical Bayesian modelling with a focus on multi-agent learning. In Switching and Learning in Feedback Systems, volume 3355 of Lecture Notes in Computer Science, pages 290–312. Springer, 2004.
[14] B. Zadrozny. Learning and evaluating classifiers under sample selection bias. In Proceedings of the International Conference on Machine Learning, 2004.
[15] T. Zhang and F. Oles. Text categorization based on regularized linear classifiers. Information Retrieval, 4(1):5–31, 2001.
Stability of K-Means Clustering

Alexander Rakhlin
Department of Computer Science, UC Berkeley, Berkeley, CA 94720
rakhlin@cs.berkeley.edu

Andrea Caponnetto
Department of Computer Science, University of Chicago, Chicago, IL 60637
and D.I.S.I., Università di Genova, Italy
caponnet@uchicago.edu

Abstract

We phrase K-means clustering as an empirical risk minimization procedure over a class HK and explicitly calculate the covering number for this class. Next, we show that stability of K-means clustering is characterized by the geometry of HK with respect to the underlying distribution. We prove that in the case of a unique global minimizer, the clustering solution is stable with respect to complete changes of the data, while for the case of multiple minimizers, the change of Ω(n^{1/2}) samples defines the transition between stability and instability. While for a finite number of minimizers this result follows from multinomial distribution estimates, the case of infinitely many minimizers requires more refined tools. We conclude by proving that stability of the functions in HK implies stability of the actual centers of the clusters. Since stability is often used for selecting the number of clusters in practice, we hope that our analysis serves as a starting point for finding theoretically grounded recipes for the choice of K.

1 Introduction

Identification of clusters is the most basic tool for data analysis and unsupervised learning. While people are extremely good at pointing out the relevant structure in the data just by looking at 2-D plots, learning algorithms struggle to match this performance. Part of the difficulty comes from the absence, in general, of an objective way to assess the clustering quality and to compare two groupings of the data. Ben-David et al [1, 2, 3] put forward the goal of establishing a Theory of Clustering.
In particular, attempts have been made by [4, 2, 3] to study and theoretically justify the stability-based approach of evaluating the quality of clustering solutions. Building upon these ideas, we present a characterization of clustering stability in terms of the geometry of the function class associated with minimizing the objective function. To simplify the exposition, we focus on K-means clustering, although the analogous results can be derived for K-medians and other clustering algorithms which minimize an objective function. Let us first motivate the notion of clustering stability. While for a fixed K, two clustering solutions can be compared according to the K-means objective function (see the next section), it is not meaningful to compare the value of the objective function for different K. How can one decide, then, on the value of K? If we assume that the observed data is distributed independently according to some unknown distribution, the number of clusters K should correspond to the number of modes of the associated probability density. Since density estimation is a difficult task, another approach is needed. A stability-based solution has been used for at least a few decades by practitioners. The approach stipulates that, for each K in some range, several clustering solutions should be computed by sub-sampling or perturbing the data. The best value of K is that for which the clustering solutions are most “similar”. This rule of thumb is used in practice, although, to our knowledge, there is very little theoretical justification in the literature. The precise details of data sub-sampling in the method described above differ from one paper to another. For instance, Ben-Hur et al [5] randomly choose overlapping portions of the data and evaluate the distance between the resulting clustering solutions on the common samples. Lange et al [6], on the other hand, divide the sample into disjoint subsets. 
Similarly, Ben-David et al. [3, 2] study stability with respect to a complete change of the data (independent draw). These different approaches to choosing K prompted us to give a precise characterization of clustering stability with respect to both complete and partial changes of the data. It has been noted by [6, 4, 3] that the stability of clustering with respect to a complete change of the data is characterized by the uniqueness of the minimum of the objective function with respect to the true distribution. Indeed, minimization of the K-means objective function can be phrased as an empirical risk minimization procedure (see [7]). Stability then follows, under some regularity assumptions, from the convergence of empirical and expected means over a Glivenko-Cantelli class of functions. We prove stability in the case of a unique minimizer by explicitly computing the covering number in the next section and noting that the resulting class is VC-type. We go further in our analysis by considering the other two interesting cases: a finite and an infinite number of minimizers of the objective function. With the help of a stability result of [8, 9] for empirical risk minimization, we are able to prove that K-means clustering is stable with respect to changes of o(√n) samples, where n is the total number of samples. In fact, the rate of Ω(√n) changes is a sharp transition between stability and instability in these cases.

2 Preliminaries

Let (Z, A, P) be a probability space with an unknown probability measure P. Let ‖·‖ denote the Euclidean norm. We assume from the outset that the data live in a Euclidean ball in R^m, i.e. Z ⊆ B_2(0, R) ⊂ R^m for some R > 0, and that Z is closed. A partition function C : Z → {1, . . . , K} assigns to each point of Z its "cluster identity". The goal of clustering is to find a good partition based on a sample Z_1, . . . , Z_n of n points, distributed independently according to P. In particular, for K-means clustering, the quality of C on Z_1, . . .
, Z_n is measured by the within-point scatter¹ (see [10])

$$W(C) = \frac{1}{2n} \sum_{k=1}^{K} \sum_{i,j\,:\,C(Z_i)=C(Z_j)=k} \|Z_i - Z_j\|^2. \qquad (1)$$

It is easy to verify that the (scaled) within-point scatter can be rewritten as

$$W(C) = \frac{1}{n} \sum_{k=1}^{K} \sum_{i\,:\,C(Z_i)=k} \|Z_i - c_k\|^2 \qquad (2)$$

where c_k is the mean of the k-th cluster under the assignment C (see Figure 1). We are interested in the minimizers of the within-point scatter. Such assignments have to map each point to its nearest cluster center. Since in this case the partition function C is completely determined by the K centers, we will often abuse notation by identifying C with the set {c_1, . . . , c_K}. The K-means clustering algorithm is an alternating procedure minimizing the within-point scatter W(C). The centers {c_k}, k = 1, . . . , K, are computed in the first step, followed by the assignment of each Z_i to its closest center c_k; the procedure is then repeated. The algorithm can get trapped in local minima, and various strategies, such as restarting from several random assignments, are employed to overcome this problem. In this paper we are not concerned with the algorithmic issues of the minimization procedure. Rather, we study stability properties of the minimizers of W(C). The problem of minimizing W(C) can be phrased as empirical risk minimization [7] over the function class

$$H_K = \Big\{ h_A(z) = \|z - a_i\|^2,\; i = \operatorname{argmin}_{j \in \{1,\dots,K\}} \|z - a_j\|^2 \;:\; A = \{a_1, \dots, a_K\} \in Z^K \Big\}, \qquad (3)$$

¹We have scaled the within-point scatter by 1/n compared to [10].

Figure 1: The clustering objective is to place the centers c_k to minimize the sum of squared distances from points to their closest centers.

where the functions are obtained by selecting all possible sets of K centers. Functions h_A(z) in H_K can also be written as

$$h_A(z) = \sum_{i=1}^{K} \|z - a_i\|^2 \, I(z \text{ is closest to } a_i),$$

where ties are broken, for instance, in the order of the a_i's. Hence, the functions h_A ∈ H_K are K parabolas glued together with centers at a_1, . . . , a_K, as shown in Figure 1.
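The alternating procedure described above is easy to sketch in NumPy. The following is a minimal illustration of ours (not the authors' code; the function names are ours):

```python
import numpy as np

def within_point_scatter(Z, centers):
    """Scaled within-point scatter W(C) of Eq. (2): average over the n points
    of the squared distance to the nearest center."""
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, K) squared distances
    return d2.min(axis=1).mean()

def kmeans(Z, K, n_iter=50, seed=0):
    """Alternating minimization of W(C): assign each Z_i to its closest
    center, then recompute each center as the mean of its cluster."""
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), size=K, replace=False)].astype(float)
    for _ in range(n_iter):
        labels = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for k in range(K):
            if (labels == k).any():  # an empty cluster keeps its old center
                centers[k] = Z[labels == k].mean(axis=0)
    return centers, labels
```

As the text notes, this procedure can get trapped in local minima; in practice one restarts from several random initializations and keeps the run with the smallest W(C).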
With this notation, one can see that

$$\min_C W(C) = \min_{h \in H_K} \frac{1}{n} \sum_{i=1}^{n} h(Z_i).$$

Moreover, if C minimizes the left-hand side, then h_C minimizes the right-hand side, and vice versa. Hence, we will use C and h_C interchangeably as minimizers of the within-point scatter. Several recent papers (e.g. [11]) have addressed the question of finding a distance metric for clusterings. Fortunately, in our case there are several natural choices. One choice is to measure the similarity between the centers {a_k} and {b_k} of clusterings A and B. Another choice is to measure the L_q(P) distance between h_A and h_B for some q ≥ 1. In fact, we show that these two choices are essentially equivalent.

3 Covering Number for H_K

The following technical lemma shows that a covering of the ball B_2(0, R) induces a cover of H_K in the L_∞ distance, because small shifts of the centers imply small changes of the corresponding functions in H_K.

Lemma 3.1. For any ε > 0,

$$N(H_K, L_\infty, \varepsilon) \le \left( \frac{16R^2K + \varepsilon}{\varepsilon} \right)^{mK}.$$

Proof. It is well known that a Euclidean ball of radius R in R^m can be covered by N = ((4R + δ)/δ)^m balls of radius δ (see Lemma 2.5 in [12]). Let T = {t_1, . . . , t_N} be the set of centers of such a cover. Consider an arbitrary function h_A ∈ H_K with centers at {a_1, . . . , a_K}. By the definition of the cover, there exists t_{i_1} ∈ T such that ‖a_1 − t_{i_1}‖ ≤ δ. Let A_1 = {t_{i_1}, a_2, . . . , a_K}. Since Z ⊆ B_2(0, R),

$$\|h_A - h_{A_1}\|_\infty \le (2R)^2 - (2R - \delta)^2 \le 4R\delta.$$

We iterate through all the a_i's, replacing them by members of T. After K steps, ‖h_A − h_{A_K}‖_∞ ≤ 4RKδ and all centers of A_K belong to T. Hence, each function h_A ∈ H_K can be approximated to within 4RKδ by functions with centers in the finite set T. The number of functions in H_K with centers in T is at most N^K. Hence, N^K = ((4R + δ)/δ)^{mK} functions cover H_K to within 4RKδ in the L_∞ norm. The lemma follows by setting ε = 4RKδ.

4 Geometry of H_K and Stability

The above lemma shows that H_K is not too rich, as its covering numbers are polynomial.
This is the first important aspect in the study of clustering stability. The second aspect is the geometry of H_K with respect to the measure P. In particular, stability of K-means clustering depends on the number of functions h ∈ H_K with minimum expectation Eh. Note that the number of minimizers depends only on P and K, and not on the data. Since Z is closed, the number of minimizers is at least one. The three important cases are: a unique minimizer, a finite number of minimizers (greater than one), and an infinite number of minimizers. The first case is the simplest one and is a good starting point.

Definition 4.1. For ε > 0 define

$$Q^\epsilon_P = \{ h \in H_K : \mathrm{E}h \le \inf_{h' \in H_K} \mathrm{E}h' + \epsilon \},$$

the set of almost-minimizers of the expected error.

In the case of a unique minimizer of Eh, one can show that the diameter of Q^ε_P tends to zero as ε → 0.² Lemma 3.1 implies that the class H_K is VC-type. In particular, it is uniform Donsker, as well as uniform Glivenko-Cantelli. Hence, empirical averages of functions in H_K converge uniformly to their expectations:

$$\lim_{n\to\infty} P\left( \sup_{h \in H_K} \left| \mathrm{E}h - \frac{1}{n}\sum_{i=1}^n h(Z_i) \right| > \varepsilon \right) = 0.$$

Therefore, for any ε, δ > 0,

$$P\left( \sup_{h \in H_K} \left| \mathrm{E}h - \frac{1}{n}\sum_{i=1}^n h(Z_i) \right| > \varepsilon \right) < \delta$$

for n > n_{ε,δ}. Denote by h_A the function corresponding to a minimizer of W(C) on Z_1, . . . , Z_n. Suppose h_{C*} = argmin_{h ∈ H_K} Eh, i.e. C* is the best clustering, which can be computed only with knowledge of P. Then, with probability at least 1 − δ,

$$\mathrm{E}h_A \le \frac{1}{n}\sum_{i=1}^n h_A(Z_i) + \varepsilon \quad\text{and}\quad \frac{1}{n}\sum_{i=1}^n h_{C^*}(Z_i) \le \mathrm{E}h_{C^*} + \varepsilon$$

for n > n_{ε,δ}. Furthermore,

$$\frac{1}{n}\sum_{i=1}^n h_A(Z_i) \le \frac{1}{n}\sum_{i=1}^n h_{C^*}(Z_i)$$

by the optimality of h_A on the data. Combining the above, Eh_A ≤ Eh_{C*} + 2ε with probability at least 1 − δ for n > n_{ε,δ}. Another way to state this result is

$$\mathrm{E}h_A \xrightarrow{P} \inf_{h' \in H_K} \mathrm{E}h'.$$

Assuming the existence of a unique minimizer, i.e. diam_{L_1(P)} Q^ε_P → 0, we obtain

$$\|h_A - h_{C^*}\|_{L_1(P)} \xrightarrow{P} 0.$$

By the triangle inequality, we immediately obtain the following proposition.

²This can be easily proved by contradiction. Let us assume that the diameter does not tend to zero.
Then there is a sequence of functions {h^{(t)}} in Q^{ε(t)}_P with ε(t) → 0 such that ‖h^{(t)} − h*‖_{L_1(P)} ≥ ξ for some ξ > 0. Hence, by the compactness of H_K, the sequence {h^{(t)}} has an accumulation point h**, and by the continuity of expectation, Eh** = inf_{h' ∈ H_K} Eh'. Moreover, ‖h* − h**‖_{L_1} ≥ ξ, which contradicts the uniqueness of the minimizer.

Proposition 4.1. Let Z_1, . . . , Z_n, Z'_1, . . . , Z'_n be i.i.d. samples. Suppose the clustering A minimizes W(C) over the set {Z_1, . . . , Z_n} while B is the minimizer over {Z'_1, . . . , Z'_n}. Then ‖h_A − h_B‖_{L_1(P)} → 0 in probability.

We have shown that, in the case of a unique minimizer of the objective function (with respect to the distribution), two clusterings over independently drawn sets of points become arbitrarily close to each other, with increasing probability, as the number of points increases. If there is a finite (but greater than one) number of minimizers h ∈ H_K of Eh, multinomial distribution estimates tell us to expect stability with respect to o(√n) changes of points, while no stability is expected for Ω(√n) changes, as the next example shows.

Example 1. Consider 1-mean minimization over Z = {x_1, x_2}, x_1 ≠ x_2, and P = ½(δ_{x_1} + δ_{x_2}). It is clear that, given the training set Z_1, . . . , Z_n, the center of the minimizer of W(C) is either x_1 or x_2, according to the majority vote over the training set. Since the difference between the numbers of points at x_1 and at x_2 is distributed according to a binomial with zero mean and variance scaling as n, it is clear that by changing Ω(√n) points of Z_1, . . . , Z_n it is possible to swap the majority vote with constant probability. Moreover, with probability approaching one, it is not possible to achieve the swap by changing o(√n) points. A similar result can be shown for any K-means problem over a finite Z.

The above example shows that, in general, it is not possible to prove closeness of clusterings over two sets of samples differing in Ω(√n) elements. In fact, this is a sharp threshold.
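Example 1 is easy to check by simulation. The sketch below is our own illustration (not from the paper): it estimates the probability that moving at most c√n sample points can flip the majority vote, and hence the minimizing center:

```python
import numpy as np

def swap_probability(n, c, trials=2000, seed=0):
    """Estimate the probability that the majority vote over n fair draws from
    {x1, x2} can be flipped by moving at most c*sqrt(n) points to the other
    location. Moving one point changes the count difference by 2."""
    rng = np.random.default_rng(seed)
    budget = int(c * np.sqrt(n))
    n1 = rng.binomial(n, 0.5, size=trials)   # points landing on x1
    margin = np.abs(2 * n1 - n)              # |#x1 - #x2|
    return float(np.mean(2 * budget > margin))
```

With a budget of order √n moves (e.g. c = 2) the vote flips with probability close to one, while a budget of o(√n) moves almost never suffices, matching the sharp threshold described above.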
Indeed, by employing the following theorem, proven in [8, 9], we can show that even in the case of an infinite number of minimizers, clusterings over two sets of samples differing in o(√n) elements become arbitrarily close, with increasing probability, as the number of samples increases. This result cannot be deduced from the multinomial estimates, as it relies on control of the fluctuations of empirical means over a Donsker class. Recall that a class is Donsker if it satisfies a version of the central limit theorem for function classes.

Theorem 4.1 (Corollary 11 in [9] or Corollary 2 in [8]). Assume that the class of functions F over Z is uniformly bounded and P-Donsker for some probability measure P over Z. Let f^{(S)} and f^{(T)} be minimizers over F of the empirical averages with respect to the sets S and T of n points i.i.d. according to P. Then, if |S △ T| = o(√n), it holds that

$$\|f^{(S)} - f^{(T)}\|_{L_1(P)} \xrightarrow{P} 0.$$

We apply the above theorem to H_K, which is P-Donsker for any P because its covering numbers in L_∞ scale polynomially (see Lemma 3.1). The boundedness condition is implied by the assumption that Z ⊆ B_2(0, R). We note that if the class H_K were richer than P-Donsker, the stability result would not necessarily hold.

Corollary 4.1. Suppose the clusterings A and B are minimizers of the K-means objective W(C) over the sets S and T, respectively. Suppose that |S △ T| = o(√n). Then

$$\|h_A - h_B\|_{L_1(P)} \xrightarrow{P} 0.$$

The above corollary holds even if the number of minimizers h ∈ H_K of Eh is infinite. This concludes the analysis of stability of K-means for the three interesting cases: a unique minimizer, a finite number (greater than one) of minimizers, and an infinite number of minimizers. We remark that the distribution P and the number K alone determine which of these cases is in evidence. We have proved that stability of K-means clustering is characterized by the geometry of the class H_K with respect to P.
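The quantity ‖h_A − h_B‖_{L_1(P)} appearing in these results is straightforward to estimate numerically. The sketch below is our own illustration (function names are ours): it evaluates h_A from Eq. (3) and estimates the L_1(P) distance by Monte Carlo over a sample drawn from P:

```python
import numpy as np

def h(A, z):
    """h_A(z) from Eq. (3): squared distance from each row of z to its
    nearest center in A -- K parabolas glued together."""
    return ((z[:, None, :] - A[None, :, :]) ** 2).sum(-1).min(axis=1)

def l1_distance(A, B, sample):
    """Monte Carlo estimate of ||h_A - h_B||_{L1(P)} from a sample drawn from P."""
    return np.abs(h(A, sample) - h(B, sample)).mean()
```

For two clusterings with nearby centers the estimate is small, consistent with the stability statements above.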
It is evident that the choice of K maximizing stability of clustering aims to select a K for which there is a unique minimizer. Unfortunately, for "small" n, stability with respect to a complete change of the data and stability with respect to o(√n) changes are indistinguishable, making this rule of thumb questionable. Moreover, as noted in [3], small changes of P can lead to drastic changes in the number of minimizers.

5 Stability of the Centers

Intuitively, stability of the functions h_A with respect to perturbations of the data Z_1, . . . , Z_n implies stability of the centers of the clusters. This intuition is made precise in this section. Let us first define a notion of distance between the centers of two clusterings.

Definition 5.1. Suppose {a_1, . . . , a_K} and {b_1, . . . , b_K} are the centers of two clusterings A and B, respectively. Define the distance between these clusterings as

$$d_{\max}(\{a_1, \dots, a_K\}, \{b_1, \dots, b_K\}) := \max_{1 \le i \le K} \min_{1 \le j \le K} \big( \|a_i - b_j\| + \|a_j - b_i\| \big).$$

Lemma 5.1. Assume the density of P (with respect to the Lebesgue measure λ over Z) is bounded away from 0, i.e. dP > c dλ for some c > 0. Suppose ‖h_A − h_B‖_{L_1(P)} ≤ ε. Then

$$d_{\max}(\{a_1, \dots, a_K\}, \{b_1, \dots, b_K\}) \le \left( \frac{\varepsilon}{c_{c,m}} \right)^{\frac{1}{m+2}}$$

where c_{c,m} depends only on c and m.

Proof. First, we note that

$$d_{\max}(\{a_1, \dots, a_K\}, \{b_1, \dots, b_K\}) \le 2 \max\left( \max_{1\le i\le K}\min_{1\le j\le K} \|a_i - b_j\|, \; \max_{1\le i\le K}\min_{1\le j\le K} \|a_j - b_i\| \right).$$

Without loss of generality, assume that the maximum on the right-hand side is attained at a_1 and b_1, where b_1 is the closest center to a_1 among {b_1, . . . , b_K}. Suppose ‖a_1 − b_1‖ = d. Since d_max({a_1, . . . , a_K}, {b_1, . . . , b_K}) ≤ 2d, it is enough to show that d is small (scales as a power of ε). Consider B_2(a_1, d/2), the ball of radius d/2 centered at a_1. Since any point z ∈ B_2(a_1, d/2) is closer to a_1 than to b_1, we have ‖z − a_1‖² ≤ ‖z − b_1‖². Refer to Figure 2 for a pictorial representation of the proof. Note that b_j ∉ B_2(a_1, d/2) for any j ∈ {2, . . . , K}.
Also note that, for any z ∈ Z,

$$\|z - a_1\|^2 \ge \sum_{i=1}^{K} \|z - a_i\|^2 \, I(a_i \text{ is closest to } z) = h_A(z).$$

Figure 2: To prove Lemma 5.1 it is enough to show that the shaded area is upper-bounded by the L_1(P) distance between the functions h_A and h_B and lower-bounded by a power of d. We deduce that d cannot be large.

Combining all this information, we obtain the following chain of inequalities:

$$\begin{aligned}
\|h_A - h_B\|_{L_1(P)} &= \int |h_A(z) - h_B(z)| \, dP(z) \;\ge\; \int_{B_2(a_1, d/2)} |h_A(z) - h_B(z)| \, dP(z) \\
&= \int_{B_2(a_1, d/2)} \big| h_A(z) - \|z - b_1\|^2 \big| \, dP(z) \;=\; \int_{B_2(a_1, d/2)} \big( \|z - b_1\|^2 - h_A(z) \big) \, dP(z) \\
&= \int_{B_2(a_1, d/2)} \left( \|z - b_1\|^2 - \sum_{i=1}^K \|z - a_i\|^2 \, I(a_i \text{ is closest to } z) \right) dP(z) \\
&\ge \int_{B_2(a_1, d/2)} \big( \|z - b_1\|^2 - \|z - a_1\|^2 \big) \, dP(z) \;\ge\; \int_{B_2(a_1, d/2)} \big( (d/2)^2 - \|z - a_1\|^2 \big) \, dP(z) \\
&\ge c \cdot \frac{2\pi^{m/2}}{\Gamma(m/2)} \int_0^{d/2} \big( (d/2)^2 - r^2 \big) r^{m-1} \, dr \;=\; c \cdot \frac{2\pi^{m/2}}{\Gamma(m/2)} \cdot \frac{2}{m(m+2)} (d/2)^{m+2} \;=\; c_{c,m} \cdot d^{m+2}.
\end{aligned}$$

Since, by assumption, ‖h_A − h_B‖_{L_1(P)} ≤ ε, we obtain

$$d \le \left( \frac{\varepsilon}{c_{c,m}} \right)^{\frac{1}{m+2}}.$$

From the above lemma, we immediately obtain the following proposition.

Proposition 5.1. Assume the density of P (with respect to the Lebesgue measure λ over Z) is bounded away from 0, i.e. dP > c dλ for some c > 0. Suppose the clusterings A and B are minimizers of the K-means objective W(C) over the sets S and T, respectively. Suppose that |S △ T| = o(√n). Then

$$d_{\max}(\{a_1, \dots, a_K\}, \{b_1, \dots, b_K\}) \xrightarrow{P} 0.$$

Hence, the centers of the minimizers of the within-point scatter are stable with respect to perturbations of o(√n) points. Similar results can be obtained for other procedures which optimize some function of the data, by applying Theorem 4.1.

6 Conclusions

We showed that K-means clustering can be phrased as empirical risk minimization over a class H_K. Furthermore, stability of clustering is determined by the geometry of H_K with respect to P. We proved that in the case of a unique minimizer, K-means is stable with respect to a complete change of the data, while for multiple minimizers we still expect stability with respect to o(√n) changes.
The rule for choosing K by maximizing stability can then be viewed as an attempt to select K such that H_K has a unique minimizer with respect to P. Although used in practice, this choice of K is questionable, especially for small n. We hope that our analysis serves as a starting point for finding theoretically grounded recipes for choosing the number of clusters.

References

[1] Shai Ben-David. A framework for statistical clustering with a constant time approximation algorithms for k-median clustering. In COLT, pages 415–426, 2004.
[2] Ulrike von Luxburg and Shai Ben-David. Towards a statistical theory of clustering. PASCAL Workshop on Statistics and Optimization of Clustering, 2005.
[3] Shai Ben-David, Ulrike von Luxburg, and David Pal. A sober look at clustering stability. In COLT, 2006.
[4] A. Rakhlin. Stability of clustering methods. NIPS Workshop "Theoretical Foundations of Clustering", December 2005.
[5] A. Ben-Hur, A. Elisseeff, and I. Guyon. A stability based method for discovering structure in clustered data. In Pacific Symposium on Biocomputing, volume 7, pages 6–17, 2002.
[6] T. Lange, M. Braun, V. Roth, and J. Buhmann. Stability-based model selection. In NIPS, 2003.
[7] Joachim M. Buhmann. Empirical risk approximation: An induction principle for unsupervised learning. Technical Report IAI-TR-98-3, 1998.
[8] A. Caponnetto and A. Rakhlin. Some properties of empirical risk minimization over Donsker classes. AI Memo 2005-018, Massachusetts Institute of Technology, May 2005.
[9] A. Caponnetto and A. Rakhlin. Stability properties of empirical risk minimization over Donsker classes. Journal of Machine Learning Research. Accepted. Available at http://cbcl.mit.edu/people/rakhlin/erm.pdf, 2006.
[10] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2002.
[11] Marina Meilă. Comparing clusterings: an axiomatic view.
In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 577–584, New York, NY, USA, 2005. ACM Press.
[12] S. A. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000.
Convergence of Laplacian Eigenmaps

Mikhail Belkin, Department of Computer Science, Ohio State University, Columbus, OH 43210, mbelkin@cse.ohio-state.edu
Partha Niyogi, Department of Computer Science, The University of Chicago, Hyde Park, Chicago, IL 60637, niyogi@cs.uchicago.edu

Abstract

Geometrically based methods for various tasks of machine learning have attracted considerable attention over the last few years. In this paper we show convergence of the eigenvectors of the point-cloud Laplacian to the eigenfunctions of the Laplace-Beltrami operator on the underlying manifold, thus establishing the first convergence results for a spectral dimensionality reduction algorithm in the manifold setting.

1 Introduction

The last several years have seen significant activity in geometrically motivated approaches to data analysis and machine learning. The unifying premise behind these methods is the assumption that many types of high-dimensional natural data lie on or near a low-dimensional manifold. Collectively, this class of learning algorithms is often referred to as manifold learning algorithms. Some recent manifold algorithms include Isomap [14] and Locally Linear Embedding (LLE) [13]. In this paper we provide a theoretical analysis of Laplacian Eigenmaps, introduced in [2], a framework based on eigenvectors of the graph Laplacian associated to point-cloud data. More specifically, we prove that under certain conditions, eigenvectors of the graph Laplacian converge to eigenfunctions of the Laplace-Beltrami operator on the underlying manifold. We note that in mathematics the manifold Laplacian is a classical object of differential geometry with a rich tradition of inquiry. It is one of the key objects associated to a general differentiable Riemannian manifold. Indeed, several recent manifold learning algorithms are closely related to the Laplacian.
The eigenfunctions of the Laplacian are also eigenfunctions of heat diffusions, which is the point of view explored by Coifman and colleagues at Yale University in a series of recent papers on data analysis (e.g., [6]). The Hessian Eigenmaps approach, which uses eigenfunctions of the Hessian operator for data representation, was proposed by Donoho and Grimes in [7]; the Laplacian is the trace of the Hessian. Finally, as observed in [2], the cost function that is minimized to obtain the LLE embedding is an approximation to the squared Laplacian. In the manifold learning setting, the underlying manifold is usually unknown. Therefore functional maps from the manifold need to be estimated using point-cloud data. The common approximation strategy in these methods is to construct an adjacency graph associated to the point cloud. The underlying intuition is that since the graph is a proxy for the manifold, inference based on the structure of the graph corresponds to the desired inference based on the geometric structure of the manifold. Theoretical results to justify this intuition have been developed over the last few years. Building on recent results on the functional convergence of heat-kernel approximations to the Laplace-Beltrami operator, and on results on consistency of eigenfunctions for empirical approximations of such operators, we show convergence of the Laplacian Eigenmaps algorithm. We note that in order to prove convergence of a spectral method, one needs to demonstrate convergence of the empirical eigenvalues and eigenfunctions. To our knowledge this is the first complete convergence proof for a spectral manifold learning method.

1.1 Prior and Related Work

This paper relies on results obtained in [3, 1] on functional convergence of operators. It turns out, however, that considerably more careful analysis is required to ensure spectral convergence, which is necessary to guarantee convergence of the corresponding algorithms.
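The adjacency-graph construction mentioned above is simple to write down. The sketch below is our own illustration (not the paper's code; it uses Gaussian heat-kernel weights and omits the overall normalization factor that appears in the paper's empirical operator):

```python
import numpy as np

def graph_laplacian(X, t):
    """Graph Laplacian L = D - W with heat-kernel weights
    W_ij = exp(-||x_i - x_j||^2 / (4t)), the matrix through which the
    empirical operator acts on functions restricted to the point cloud."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (4.0 * t))
    return np.diag(W.sum(axis=1)) - W

def laplacian_eigenmaps(X, t, dim):
    """Embed the point cloud with the eigenvectors of the graph Laplacian
    belonging to the smallest nonzero eigenvalues; the constant eigenvector
    (eigenvalue 0) is skipped."""
    vals, vecs = np.linalg.eigh(graph_laplacian(X, t))
    return vecs[:, 1:dim + 1]
```

Note that L annihilates constant vectors (its rows sum to zero) and is positive semi-definite, mirroring the properties of the Laplace-Beltrami operator discussed in Section 2.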
To the best of our knowledge, previous results are not sufficient to guarantee convergence for any spectral method in the manifold setting. Lafon in [10] generalized the pointwise convergence results from [1] to the important case of an arbitrary probability distribution on the manifold. We also note [4], where a similar result is shown for the case of a domain in R^n. Those results were further generalized and presented with an empirical pointwise convergence theorem for the manifold case in [9]. We observe that the arguments in this paper are likely to allow one to use these results to show convergence of eigenfunctions for a wide class of probability distributions on the manifold. Empirical convergence of spectral clustering for a fixed kernel parameter t was analyzed in [11] and is used in this paper; however, the geometric case requires t → 0. The results in this paper, as well as those in [3, 1], are for the case of a uniform probability distribution on the manifold. Recently [8] provided a deeper probabilistic analysis of that case. Finally, we point out that while the analogies between the geometry of manifolds and the geometry of graphs are well known in spectral graph theory and in certain areas of differential geometry (see, e.g., [5]), the exact nature of that parallel is usually not made precise.

2 Main Result

The main result of this paper is to show convergence of the eigenvectors of the graph Laplacian associated to a point-cloud dataset to the eigenfunctions of the Laplace-Beltrami operator, when the data is sampled from the uniform probability distribution on an embedded manifold. In what follows we will assume that the manifold M is a compact infinitely differentiable Riemannian submanifold of R^N without boundary. Recall now that the Laplace-Beltrami operator ∆ on M is a differential operator ∆ : C² → L², defined as ∆f = −div(∇f), where ∇f is the gradient vector field and div denotes divergence.
∆ is a positive semi-definite self-adjoint operator and has a discrete spectrum on a compact manifold. We will generally denote its i-th smallest eigenvalue by λ_i and the corresponding eigenfunction by e_i. See [12] for a thorough introduction to the subject. We define the operator L^t : L²(M) → L²(M) as follows (µ is the standard measure):

$$L^t(f)(p) = (4\pi t)^{-\frac{k+2}{2}} \left( \int_M e^{-\frac{\|p-q\|^2}{4t}} f(p) \, d\mu_q - \int_M e^{-\frac{\|p-q\|^2}{4t}} f(q) \, d\mu_q \right)$$

If x_i are the data points, the corresponding empirical version is given by

$$\hat{L}^t_n(f)(p) = \frac{(4\pi t)^{-\frac{k+2}{2}}}{n} \left( \sum_i e^{-\frac{\|p-x_i\|^2}{4t}} f(p) - \sum_i e^{-\frac{\|p-x_i\|^2}{4t}} f(x_i) \right)$$

The operator L̂^t_n is (the extension of) the point-cloud Laplacian that forms the basis of the Laplacian Eigenmaps algorithm for manifold learning. It is easy to see that it acts by matrix multiplication on functions restricted to the point cloud, the matrix being the corresponding graph Laplacian. We will assume that the x_i are sampled i.i.d. from M according to the uniform distribution. Our main theorem shows that there is a way to choose a sequence t_n such that the eigenfunctions of the empirical operators L̂^{t_n}_n converge to the eigenfunctions of the Laplace-Beltrami operator ∆ in probability.

Theorem 2.1 Let λ^t_{n,i} be the i-th eigenvalue of L̂^t_n and let e^t_{n,i} be the corresponding eigenfunction (which, for each fixed i, will be shown to exist for t sufficiently small). Let λ_i and e_i be the corresponding eigenvalue and eigenfunction of ∆, respectively. Then there exists a sequence t_n → 0 such that

$$\lim_{n\to\infty} \lambda^{t_n}_{n,i} = \lambda_i \qquad \lim_{n\to\infty} \|e^{t_n}_{n,i}(x) - e_i(x)\|_2 = 0$$

where the limits are taken in probability.

3 Overview of the Proof

The proof of the main theorem consists of two main parts: spectral convergence of the functional approximation L^t to ∆ as t → 0, and spectral convergence of the empirical approximation L̂^t_n to L^t as the number of data points n tends to infinity. These two types of convergence are then put together to obtain the main Theorem 2.1.

Part 1.
The more difficult part of the proof is to show convergence of the eigenvalues and eigenfunctions of the functional approximation L^t to those of ∆ as t → 0. To demonstrate convergence, we will take a different functional approximation (1 − H^t)/t of ∆, where H^t is the heat operator. While (1 − H^t)/t does not converge uniformly to ∆, the two share an eigenbasis, and for each fixed i the i-th eigenvalue of (1 − H^t)/t converges to the i-th eigenvalue of ∆. We will then consider the operator

$$R^t = \frac{1 - H^t}{t} - L^t.$$

A careful analysis of this operator, which constitutes the bulk of the proof, shows that R^t is a small relatively bounded perturbation of (1 − H^t)/t, in the sense that for any function f we have

$$\frac{\|R^t f\|_2}{\left\| \frac{1 - H^t}{t} f \right\|_2} \ll 1 \quad \text{as } t \to 0.$$

This will imply spectral convergence and lead to the following

Theorem 3.1 Let λ_i, λ^t_i, e_i, e^t_i be the i-th smallest eigenvalues and the corresponding eigenfunctions of ∆ and L^t, respectively. Then

$$\lim_{t\to 0} |\lambda_i - \lambda^t_i| = 0 \qquad \lim_{t\to 0} \|e_i - e^t_i\|_2 = 0$$

Part 2. The second part is to show that the eigenfunctions of the empirical operator L̂^t_n converge to the eigenfunctions of L^t as n → ∞ in probability. That result follows readily from previous work in [11], together with an analysis of the essential spectrum of L^t. The following theorem is obtained:

Theorem 3.2 For a fixed, sufficiently small t, let λ^t_{n,i} and λ^t_i be the i-th eigenvalues of L̂^t_n and L^t, respectively. Let e^t_{n,i} and e^t_i be the corresponding eigenfunctions. Then

$$\lim_{n\to\infty} \lambda^t_{n,i} = \lambda^t_i \qquad \lim_{n\to\infty} \|e^t_{n,i}(x) - e^t_i(x)\|_2 = 0$$

assuming that λ^t_i ≤ 1/(2t). The convergence is almost sure.

Observe that this implies convergence for any fixed i as soon as t is sufficiently small. Symbolically, these two theorems can be represented by the top line of the following diagram:

Eig L̂^t_n → Eig L^t (n → ∞, probabilistic), Eig L^t → Eig ∆ (t → 0, deterministic)

After demonstrating the two types of convergence results in the top line of the diagram, a simple argument shows that a sequence t_n can be chosen to guarantee convergence as in the final Theorem 2.1, which provides the bottom arrow of the diagram: Eig L̂^{t_n}_n → Eig ∆ (n → ∞, t_n → 0).

4 Spectral Convergence of Functional Approximations

4.1 Main Objects and the Outline of the Proof

Let M be a compact, smooth, smoothly embedded k-dimensional manifold in R^N with the induced Riemannian structure and the corresponding induced measure µ.
As above, we define the operator L^t : L²(M) → L²(M) as follows:

$$L^t(f)(x) = (4\pi t)^{-\frac{k+2}{2}} \left( \int_M e^{-\frac{\|x-y\|^2}{4t}} f(x) \, d\mu_y - \int_M e^{-\frac{\|x-y\|^2}{4t}} f(y) \, d\mu_y \right)$$

As shown in previous work, this operator serves as a functional approximation to the Laplace-Beltrami operator on M. The purpose of this paper is to extend the previous results to the eigenvalues and eigenfunctions, which turns out to require some careful estimates. We start by reviewing certain properties of the Laplace-Beltrami operator and its connection to the heat equation. Recall that the heat equation on the manifold M is given by

$$\Delta h(x, t) = -\frac{\partial h(x, t)}{\partial t}$$

(the sign reflects our convention that ∆ is positive semi-definite), where h(x, t) is the heat at time t at point x. Let f(x) = h(x, 0) be the initial heat distribution. We observe that, from the definition of the derivative,

$$\Delta f = \lim_{t\to 0} \frac{1}{t} \big( f(x) - h(x, t) \big)$$

It is well known (e.g., [12]) that the solution to the heat equation at time t can be written as

$$H^t f(x) := h(x, t) = \int_M H^t(x, y) f(y) \, d\mu_y$$

Here H^t is the heat operator and H^t(x, y) is the heat kernel of M. It is also well known that the heat operator can be written as H^t = e^{−t∆}. We immediately see that ∆ = lim_{t→0} (1 − H^t)/t, and that the eigenfunctions of H^t, and hence of (1 − H^t)/t, coincide with the eigenfunctions of the Laplace operator. The i-th eigenvalue of (1 − H^t)/t is equal to (1 − e^{−tλ_i})/t, where λ_i as usual is the i-th eigenvalue of ∆. It is easy to observe that once the heat kernel H^t(x, y) is known, finding the Laplace operator poses no difficulty:

$$\Delta f = \lim_{t\to 0} \frac{1}{t} \left( f(x) - \int_M H^t(x, y) f(y) \, d\mu_y \right) = \lim_{t\to 0} \frac{1 - H^t}{t} f \qquad (1)$$

Reconstructing the Laplacian from a point cloud is possible because of the fundamental fact that the manifold heat kernel H^t(x, y) can be approximated by the Gaussian kernel of the ambient space; hence L^t is an approximation to (1 − H^t)/t and can be shown to converge, for each fixed f, to ∆. This pointwise operator convergence is discussed in [10, 3, 1]. To obtain convergence of eigenfunctions, however, one typically needs the stronger uniform convergence.
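The eigenvalue statement above is easy to check numerically. On the unit circle the Laplace-Beltrami eigenvalues are λ_i = i², and the corresponding eigenvalues (1 − e^{−tλ_i})/t of (1 − H^t)/t converge to λ_i as t → 0; the following is a small numeric illustration of ours:

```python
import numpy as np

# Laplace-Beltrami eigenvalues on the unit circle: lam_i = i^2.
lam = np.array([1.0, 4.0, 9.0, 16.0])

def heat_eigenvalue(lam, t):
    """i-th eigenvalue of (1 - H^t)/t, where H^t = exp(-t*Delta)."""
    return (1.0 - np.exp(-t * lam)) / t

for t in (1e-1, 1e-2, 1e-3):
    err = np.abs(heat_eigenvalue(lam, t) - lam).max()
    print(f"t = {t:g}: max deviation from Delta's eigenvalues = {err:.3g}")
```

The deviation behaves like tλ²/2, so it vanishes for each fixed i but is not uniform over i: for λ_i much larger than 1/t the approximation breaks down, which is exactly the obstruction discussed next.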
If $A_n$ is a sequence of operators, we say that $A_n \to A$ uniformly in $L^2$ if $\sup_{\|f\|_2=1} \|A_n f - A f\|_2 \to 0$. This is sufficient for convergence of eigenfunctions and other spectral properties. It turns out that this type of convergence does not hold for the functional approximation $L_t$ as $t \to 0$, which presents a serious technical obstruction to proving convergence of spectral properties.

To see that $L_t$ does not converge uniformly to $\Delta$, observe that while $\frac{1-H_t}{t}$ converges to $\Delta$ for each fixed function $f$, even this convergence is not uniform. Indeed, for a small $t$ we can always choose a sufficiently large $\lambda_i \gg 1/t$ and the corresponding eigenfunction $e_i$ of $\Delta$, such that
\[
\left\| \left( \frac{1-H_t}{t} - \Delta \right) e_i \right\|_2 = \left| \frac{1-e^{-t\lambda_i}}{t} - \lambda_i \right| \approx \left| \frac{1}{t} - \lambda_i \right| \gg 1.
\]
Since $L_t$ is an approximation to $\frac{1-H_t}{t}$, uniform convergence cannot be expected, and the standard perturbation theory techniques do not apply. To overcome this obstacle we need the following two key ingredients:

Observation 1. Eigenfunctions of $\frac{1-H_t}{t}$ coincide with eigenfunctions of $\Delta$.

Observation 2. $L_t$ is a small relatively bounded perturbation of $\frac{1-H_t}{t}$.

While the first of these observations is immediate, the second is the technical core of this work. The relative boundedness of the perturbation will imply convergence of eigenfunctions of $L_t$ to those of $\frac{1-H_t}{t}$ and hence, by Observation 1, to eigenfunctions of $\Delta$. We now define the perturbation operator
\[
R_t = \frac{1-H_t}{t} - L_t.
\]
The relative boundedness of the self-adjoint perturbation operator $R_t$ is formalized as follows:

Theorem 4.1 For any $0 < \epsilon < \frac{2}{k+2}$ there exists a constant $C$, such that for all $t$ sufficiently small
\[
\frac{|\langle R_t f, f\rangle|}{\langle \frac{1-H_t}{t} f, f\rangle} \le C \max\left( t^{\frac{2}{k+2}-\epsilon},\; t^{\frac{k+2}{2}\epsilon} \right).
\]
In particular,
\[
\lim_{t\to 0}\; \sup_{\|f\|_2=1}\; \frac{\langle R_t f, f\rangle}{\langle \frac{1-H_t}{t} f, f\rangle} = 0,
\]
and hence $R_t$ is dominated by $\frac{1-H_t}{t}$ on $L^2$ as $t$ tends to $0$. This result implies that for small values of $t$, the bottom eigenvalues and eigenfunctions of $L_t$ are close to those of $\frac{1-H_t}{t}$, which in turn implies convergence.
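The failure of uniform convergence can be seen numerically. This short sketch (an illustration, not from the paper) evaluates the eigenvalue $(1-e^{-t\lambda})/t$ of $\frac{1-H_t}{t}$ against $\lambda$ for one small and one very large eigenvalue:

```python
import math

def phi(lam, t):
    # Eigenvalue of (1 - H_t)/t on the eigenspace of Delta with eigenvalue lam.
    return (1.0 - math.exp(-t * lam)) / t

t = 1e-3
# For lam << 1/t the approximation is tight (the error is O(t * lam^2)) ...
err_small = abs(phi(1.0, t) - 1.0)
# ... but for lam >> 1/t, phi saturates near 1/t, so the error is enormous.
err_large = abs(phi(1e6, t) - 1e6)
```

For any fixed $t$, the discrepancy can be made as large as desired by taking $\lambda$ large, which is exactly why the sup over the unit ball of $L^2$ does not go to zero.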
To establish this result, we will need two key estimates on the size of the perturbation $R_t$ in two different norms.

Proposition 4.2 Let $f \in L^2$. There exists $C \in \mathbb{R}$, such that for all sufficiently small values of $t$
\[
\|R_t f\|_2 \le C \|f\|_2.
\]
Proposition 4.3 Let $f \in H^{\frac{k}{2}+1}$, where $H^{\frac{k}{2}+1}$ is a Sobolev space. Then there is $C \in \mathbb{R}$, such that for all sufficiently small values of $t$
\[
\|R_t f\|_2 \le C \sqrt{t}\, \|f\|_{H^{\frac{k}{2}+1}}.
\]
In what follows we give the proof of Theorem 4.1 assuming the two Propositions above. The proof of the Propositions requires technical estimates of the heat kernel and can be found in the longer version of the paper.

4.2 Proof of Theorem 4.1

Lemma 4.4 Let $e$ be an eigenvector of $\Delta$ with eigenvalue $\lambda$. Then for some universal constant $C$
\[
\|e\|_{H^{\frac{k}{2}+1}} \le C \lambda^{\frac{k+2}{4}}. \tag{2}
\]
The details can be found in the long version. Now we can proceed with the proof.

Proof: [Theorem 4.1] Let $e_i(x)$ be the $i$th eigenfunction of $\Delta$ and let $\lambda_i$ be the corresponding eigenvalue. Recall that the $e_i$ form an orthonormal basis of $L^2(M)$. Thus any function $f \in L^2(M)$ can be written uniquely as $f(x) = \sum_{i=0}^{\infty} a_i e_i(x)$, where $\sum a_i^2 < \infty$. For technical reasons we will assume that all our functions are perpendicular to the constant function and that the lowest eigenvalue is nonzero. Recall also that
\[
H_t f = e^{-t\Delta} f, \qquad H_t e_i = e^{-t\lambda_i} e_i, \qquad \frac{1-H_t}{t}\, e_i = \frac{1-e^{-\lambda_i t}}{t}\, e_i. \tag{3}
\]
Now let us fix $t$ and consider the function $\varphi(x) = \frac{1-e^{-xt}}{t}$ for positive $x$. It is easy to check that $\varphi$ is a concave and increasing function of $x$. Put $x_0 = 1/\sqrt{t}$. We have
\[
\varphi(0) = 0, \qquad \varphi(x_0) = \frac{1-e^{-\sqrt{t}}}{t}, \qquad \frac{\varphi(x_0)}{x_0} = \frac{1-e^{-\sqrt{t}}}{\sqrt{t}}.
\]
Splitting the positive real line into the two intervals $[0, x_0]$ and $[x_0, \infty)$, and using concavity and monotonicity, we observe that
\[
\varphi(x) \ge \min\left( \frac{1-e^{-\sqrt{t}}}{\sqrt{t}}\, x,\; \frac{1-e^{-\sqrt{t}}}{t} \right).
\]
Note that $\lim_{t\to 0} \frac{1-e^{-\sqrt{t}}}{\sqrt{t}} = 1$. Therefore for $t$ sufficiently small
\[
\varphi(x) \ge \min\left( \tfrac{1}{2}x,\; \tfrac{1}{2\sqrt{t}} \right).
\]
Thus
\[
\left\langle \frac{1-H_t}{t}\, e_i, e_i \right\rangle = \frac{1-e^{-\lambda_i t}}{t} \ge \frac{1}{2}\min\left( \lambda_i,\; \frac{1}{\sqrt{t}} \right). \tag{4}
\]
Now take $f \in L^2$, $f(x) = \sum_{i=1}^{\infty} a_i e_i(x)$. Without loss of generality we can assume that $\|f\|_2 = 1$.
Taking $\alpha > 0$, we split $f$ as a sum of $f_1$ and $f_2$ as follows:
\[
f_1 = \sum_{\lambda_i \le \alpha} a_i e_i, \qquad f_2 = \sum_{\lambda_i > \alpha} a_i e_i.
\]
It is clear that $f = f_1 + f_2$ and, since $f_1$ and $f_2$ are orthogonal, $\|f\|_2^2 = \|f_1\|_2^2 + \|f_2\|_2^2$. We will now deal separately with $f_1$ and with $f_2$.

From the inequality (4) above, we observe that
\[
\left\langle \frac{1-H_t}{t} f, f \right\rangle \ge \frac{1}{2}\lambda_1.
\]
On the other hand, from the inequality (2), we see that if $e_i$ is a basis element present in the basis expansion of $f_1$, then
\[
\|e_i\|_{H^{\frac{k}{2}+1}} \le C \alpha^{\frac{k+2}{4}}.
\]
Since $\Delta$ acts by rescaling basis elements, we have $\|f_1\|_{H^{\frac{k}{2}+1}} \le C \alpha^{\frac{k+2}{4}}$. Therefore, by Proposition 4.3, for $t$ sufficiently small and some constant $C'$
\[
\|R_t f_1\|_2 \le C' \sqrt{t}\, \alpha^{\frac{k+2}{4}}. \tag{5}
\]
Hence we see that
\[
\frac{\|R_t f_1\|_2}{\langle \frac{1-H_t}{t} f, f\rangle} \le \frac{2C'}{\lambda_1}\, \sqrt{t}\, \alpha^{\frac{k+2}{4}}. \tag{6}
\]
Consider now the second summand $f_2$. Recalling that $f_2$ only has basis components with eigenvalues greater than $\alpha$, and using the inequality (4), we see that
\[
\left\langle \frac{1-H_t}{t} f, f \right\rangle \ge \left\langle \frac{1-H_t}{t} f_2, f_2 \right\rangle \ge \frac{1}{2}\min\left( \alpha, \frac{1}{\sqrt{t}} \right) \|f_2\|_2^2. \tag{7}
\]
On the other hand, by Proposition 4.2,
\[
\|R_t f_2\|_2 \le C_1 \|f_2\|_2. \tag{8}
\]
Thus
\[
\frac{|\langle R_t f_2, f_2\rangle|}{\langle \frac{1-H_t}{t} f, f\rangle} \le \frac{\|R_t f_2\|_2\, \|f_2\|_2}{\langle \frac{1-H_t}{t} f_2, f_2\rangle} \le C_1' \max\left( \frac{1}{\alpha},\; \sqrt{t} \right). \tag{9}
\]
Finally, collecting inequalities (6) and (9), we see that
\[
\frac{|\langle R_t f, f\rangle|}{\langle \frac{1-H_t}{t} f, f\rangle} \le \frac{\|R_t f_1\|_2 + \|R_t f_2\|_2}{\langle \frac{1-H_t}{t} f, f\rangle} \le C\left( \max\left( \frac{1}{\alpha}, \sqrt{t} \right) + \sqrt{t}\, \alpha^{\frac{k+2}{4}} \right), \tag{10}
\]
where $C$ is a constant independent of $t$ and $\alpha$. Choosing $\alpha = t^{-\frac{2}{k+2}+\epsilon}$, where $0 < \epsilon < \frac{2}{k+2}$, yields the desired result. $\Box$

5 Spectral Convergence of Empirical Approximation

Proposition 5.1 For $t$ sufficiently small,
\[
\mathrm{Spec}_{\mathrm{Ess}}(L_t) \subset \left[ \tfrac{1}{2} t^{-1},\, \infty \right),
\]
where $\mathrm{Spec}_{\mathrm{Ess}}$ denotes the essential spectrum of the operator.

Proof: As noted before, $L_t f$ is the difference of a multiplication operator and a compact operator,
\[
L_t f(p) = g(p)\, f(p) - K f, \tag{11}
\]
where
\[
g(p) = (4\pi t)^{-\frac{k+2}{2}} \int_M e^{-\frac{\|p-q\|^2}{4t}}\, d\mu_q
\]
and $K f$ is a convolution with a Gaussian. As noted in [11], it is a fact of basic perturbation theory that $\mathrm{Spec}_{\mathrm{Ess}}(L_t) = \mathrm{rg}\, g$, where $\mathrm{rg}\, g$ is the range of the function $g : M \to \mathbb{R}$.
To estimate $\mathrm{rg}\, g$, observe first that
\[
\lim_{t\to 0}\, (4\pi t)^{-\frac{k}{2}} \int_M e^{-\frac{\|p-q\|^2}{4t}}\, d\mu_q = 1.
\]
We thus see that for $t$ sufficiently small
\[
(4\pi t)^{-\frac{k}{2}} \int_M e^{-\frac{\|p-q\|^2}{4t}}\, d\mu_q > \frac{1}{2},
\]
and hence $g(p) > \frac{1}{2} t^{-1}$. $\Box$

Lemma 5.2 Let $e_t$ be an eigenfunction of $L_t$ with $L_t e_t = \lambda_t e_t$ and $\lambda_t < \frac{1}{2} t^{-1}$. Then $e_t \in C^\infty$.

We see that Theorem 3.2 follows easily:

Proof: [Theorem 3.2] By Proposition 5.1 we see that the part of the spectrum of $L_t$ between $0$ and $\frac{1}{2} t^{-1}$ is discrete. It is a standard fact of functional analysis that such points are eigenvalues, with corresponding eigenspaces of finite dimension. Consider now $\lambda_i^t \in [0, \frac{1}{2} t^{-1}]$ and the corresponding eigenfunction $e_i^t$. The theorem then follows from Theorem 23 and Proposition 25 in [11], which show convergence of spectral properties for the empirical operators. $\Box$

6 Main Theorem

We are finally in a position to prove the main Theorem 2.1:

Proof: [Theorem 2.1] From Theorems 3.2 and 3.1 we obtain the following convergence results:

[Diagram: Eig $\hat L^t_n$ $\xrightarrow{\;n\to\infty\;}$ Eig $L_t$ $\xrightarrow{\;t\to 0\;}$ Eig $\Delta$]

where the first convergence is almost sure for $\lambda_i \le \frac{1}{2} t^{-1}$. Given any $i \in \mathbb{N}$ and any $\epsilon > 0$, we can choose $t' < 2\lambda_i^{-1}$, such that for all $t < t'$ we have $\|e_i - e_i^t\|_2 < \frac{\epsilon}{2}$. On the other hand, by using the first arrow, we see that
\[
\lim_{n\to\infty} P\left\{ \|e_{n,i}^t - e_i^t\|_2 \ge \tfrac{\epsilon}{2} \right\} = 0.
\]
Thus for any $p > 0$ and for each $t$ there exists an $N$, such that for all $n > N$
\[
P\left\{ \|e_{n,i}^t - e_i\|_2 > \epsilon \right\} < p.
\]
Inverting this relationship, we see that for any $N$ and for any probability $p(N)$ there exists a $t_N$, such that
\[
\forall n > N: \quad P\left\{ \|e_{n,i}^{t_N} - e_i\|_2 > \epsilon \right\} < p(N).
\]
Making $p(N)$ tend to zero, we obtain convergence in probability. $\Box$

References

[1] M. Belkin. Problems of Learning on Manifolds. Ph.D. dissertation, University of Chicago, 2003.
[2] M. Belkin, P. Niyogi. Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering. NIPS, 2001.
[3] M. Belkin, P. Niyogi. Towards a Theoretical Foundation for Laplacian-Based Manifold Methods. COLT, 2005.
[4] O. Bousquet, O. Chapelle, M. Hein. Measure Based Regularization. NIPS, 2003.
[5] F. R. K. Chung. Spectral Graph Theory. Regional Conference Series in Mathematics, no. 92, 1997.
[6] R. R. Coifman, S. Lafon, A. Lee, M. Maggioni, B. Nadler, F. Warner, S. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data. Submitted to the Proceedings of the National Academy of Sciences, 2004.
[7] D. L. Donoho, C. E. Grimes. Hessian Eigenmaps: new locally linear embedding techniques for high-dimensional data. PNAS, vol. 100, pp. 5591-5596.
[8] E. Gine, V. Kolchinski. Empirical Graph Laplacian Approximation of Laplace-Beltrami Operators: Large Sample Results. Preprint.
[9] M. Hein, J.-Y. Audibert, U. von Luxburg. From Graphs to Manifolds - Weak and Strong Pointwise Consistency of Graph Laplacians. COLT, 2005.
[10] S. Lafon. Diffusion Maps and Geodesic Harmonics. Ph.D. thesis, Yale University, 2004.
[11] U. von Luxburg, M. Belkin, O. Bousquet. Consistency of Spectral Clustering. Max Planck Institute for Biological Cybernetics, Technical Report TR-134, 2004.
[12] S. Rosenberg. The Laplacian on a Riemannian Manifold. Cambridge University Press, 1997.
[13] S. T. Roweis, L. K. Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, vol. 290, 2000.
[14] J. B. Tenenbaum, V. de Silva, J. C. Langford. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science, vol. 290, 2000.
2006
Bayesian Policy Gradient Algorithms Mohammad Ghavamzadeh Yaakov Engel Department of Computing Science, University of Alberta Edmonton, Alberta, Canada T6E 4Y8 {mgh,yaki}@cs.ualberta.ca Abstract Policy gradient methods are reinforcement learning algorithms that adapt a parameterized policy by following a performance gradient estimate. Conventional policy gradient methods use Monte-Carlo techniques to estimate this gradient. Since Monte Carlo methods tend to have high variance, a large number of samples is required, resulting in slow convergence. In this paper, we propose a Bayesian framework that models the policy gradient as a Gaussian process. This reduces the number of samples needed to obtain accurate gradient estimates. Moreover, estimates of the natural gradient as well as a measure of the uncertainty in the gradient estimates are provided at little extra cost. 1 Introduction Policy Gradient (PG) methods are Reinforcement Learning (RL) algorithms that maintain a parameterized action-selection policy and update the policy parameters by moving them in the direction of an estimate of the gradient of a performance measure. Early examples of PG algorithms are the class of REINFORCE algorithms of Williams [1] which are suitable for solving problems in which the goal is to optimize the average reward. Subsequent work (e.g., [2, 3]) extended these algorithms to the cases of infinite-horizon Markov decision processes (MDPs) and partially observable MDPs (POMDPs), and provided much needed theoretical analysis. However, both the theoretical results and empirical evaluations have highlighted a major shortcoming of these algorithms, namely, the high variance of the gradient estimates. This problem may be traced to the fact that in most interesting cases, the time-average of the observed rewards is a high-variance (although unbiased) estimator of the true average reward, resulting in the sample-inefficiency of these algorithms. 
One solution proposed for this problem was to use a small (i.e., smaller than 1) discount factor in these algorithms [2, 3]; however, this creates another problem by introducing bias into the gradient estimates. Another solution, which does not involve biasing the gradient estimate, is to subtract a reinforcement baseline from the average reward estimate in the updates of PG algorithms (e.g., [4, 1]). Another approach for speeding up policy gradient algorithms was recently proposed in [5] and extended in [6, 7]. The idea is to replace the policy-gradient estimate with an estimate of the so-called natural policy-gradient. This is motivated by the requirement that a change in the way the policy is parameterized should not influence the result of the policy update. In terms of the policy update rule, the move to a natural-gradient rule amounts to linearly transforming the gradient using the inverse Fisher information matrix of the policy. However, both conventional and natural policy gradient methods rely on Monte-Carlo (MC) techniques to estimate the gradient of the performance measure. Monte-Carlo estimation is a frequentist procedure, and as such violates the likelihood principle [8].¹ Moreover, although MC estimates are unbiased, they tend to produce high variance estimates, or alternatively, require excessive sample sizes (see [9] for a discussion).

¹ The likelihood principle states that in a parametric statistical model, all the information about a data sample that is required for inferring the model parameters is contained in the likelihood function of that sample.

In [10] a Bayesian alternative to MC estimation is proposed. The idea is to model integrals of the form $\int f(x)\, p(x)\, dx$ as Gaussian Processes (GPs). This is done by treating the first term $f$ in the integrand as a random function, the randomness of which reflects our subjective uncertainty concerning its true identity. This allows us to incorporate our prior knowledge on $f$ into its prior distribution.
Observing (possibly noisy) samples of $f$ at a set of points $(x_1, x_2, \ldots, x_M)$ allows us to employ Bayes' rule to compute a posterior distribution of $f$, conditioned on these samples. This, in turn, induces a posterior distribution over the value of the integral. In this paper, we propose a Bayesian framework for policy gradient, by modeling the gradient as a GP. This reduces the number of samples needed to obtain accurate gradient estimates. Moreover, estimates of the natural gradient and the gradient covariance are provided at little extra cost.

2 Reinforcement Learning and Policy Gradient Methods

Reinforcement Learning (RL) [11, 12] is a class of learning problems in which an agent interacts with an unfamiliar, dynamic and stochastic environment, where the agent's goal is to optimize some measure of its long-term performance. This interaction is conventionally modeled as an MDP. Let $P(S)$ be the set of probability distributions on (Borel) subsets of a set $S$. An MDP is a tuple $(X, A, q, P, P_0)$, where $X$ and $A$ are the state and action spaces, respectively; $q(\cdot|a,x) \in P(\mathbb{R})$ is the probability distribution over rewards; $P(\cdot|a,x) \in P(X)$ is the transition probability distribution (we assume that $P$ and $q$ are stationary); and $P_0(\cdot) \in P(X)$ is the initial state distribution. We denote the random variable distributed according to $q(\cdot|a,x)$ as $r(x,a)$. In addition, we need to specify the rule according to which the agent selects actions at each possible state. We assume that this rule does not depend explicitly on time. A stationary policy $\mu(\cdot|x) \in P(A)$ is a probability distribution over actions, conditioned on the current state. The MDP controlled by the policy $\mu$ induces a Markov chain over state-action pairs. We generically denote by $\xi = (x_0, a_0, x_1, a_1, \ldots, x_{T-1}, a_{T-1}, x_T)$ a path generated by this Markov chain. The probability (or density) of such a path is given by
\[
\Pr(\xi|\mu) = P_0(x_0) \prod_{t=0}^{T-1} \mu(a_t|x_t)\, P(x_{t+1}|x_t, a_t).
\]
(1)

We denote by $R(\xi) = \sum_{t=0}^{T} \gamma^t r(x_t, a_t)$ the (possibly discounted, $\gamma \in [0,1]$) cumulative return of the path $\xi$. $R(\xi)$ is a random variable both because the path $\xi$ is a random variable, and because, even for a given path, each of the rewards sampled in it may be stochastic. The expected value of $R(\xi)$ for a given $\xi$ is denoted by $\bar{R}(\xi)$. Finally, let us define the expected return,
\[
\eta(\mu) = E(R(\xi)) = \int \bar{R}(\xi)\, \Pr(\xi|\mu)\, d\xi. \tag{2}
\]
Gradient-based approaches to policy search in RL have recently received much attention as a means to sidetrack problems of partial observability and of policy oscillations and even divergence encountered in value-function based methods (see [11], Sec. 6.4.2 and 6.5.3). In policy gradient (PG) methods, we define a class of smoothly parameterized stochastic policies $\{\mu(\cdot|x;\theta),\ x \in X,\ \theta \in \Theta\}$, estimate the gradient of the expected return (2) with respect to the policy parameters $\theta$ from observed system trajectories, and then improve the policy by adjusting the parameters in the direction of the gradient [1, 2, 3]. The gradient of the expected return $\eta(\theta) = \eta(\mu(\cdot|\cdot;\theta))$ is given by²
\[
\nabla \eta(\theta) = \int \bar{R}(\xi)\, \frac{\nabla \Pr(\xi;\theta)}{\Pr(\xi;\theta)}\, \Pr(\xi;\theta)\, d\xi, \tag{3}
\]
where $\Pr(\xi;\theta) = \Pr(\xi|\mu(\cdot|\cdot;\theta))$. The quantity $\frac{\nabla \Pr(\xi;\theta)}{\Pr(\xi;\theta)} = \nabla \log \Pr(\xi;\theta)$ is known as the score function or likelihood ratio. Since the initial state distribution $P_0$ and the transition distribution $P$ are independent of the policy parameters $\theta$, we can write the score of a path $\xi$ using Eq. 1 as
\[
u(\xi) = \frac{\nabla \Pr(\xi;\theta)}{\Pr(\xi;\theta)} = \sum_{t=0}^{T-1} \frac{\nabla \mu(a_t|x_t;\theta)}{\mu(a_t|x_t;\theta)} = \sum_{t=0}^{T-1} \nabla \log \mu(a_t|x_t;\theta). \tag{4}
\]
² Throughout the paper, we use the notation $\nabla$ to denote $\nabla_\theta$, the gradient w.r.t. the policy parameters.

Previous work on policy gradient methods used classical Monte-Carlo to estimate the gradient in Eq. 3. These methods generate i.i.d. sample paths $\xi_1, \ldots, \xi_M$ according to $\Pr(\xi;\theta)$, and estimate the gradient $\nabla \eta(\theta)$ using the MC estimator
\[
\widehat{\nabla \eta}_{MC}(\theta) = \frac{1}{M} \sum_{i=1}^{M} R(\xi_i)\, \nabla \log \Pr(\xi_i;\theta) = \frac{1}{M} \sum_{i=1}^{M} R(\xi_i) \sum_{t=0}^{T_i - 1} \nabla \log \mu(a_{t,i}|x_{t,i};\theta).
\]
(5)

3 Bayesian Quadrature

Bayesian quadrature (BQ) [10] is a Bayesian method for evaluating an integral using samples of its integrand. We consider the problem of evaluating the integral
\[
\rho = \int f(x)\, p(x)\, dx. \tag{6}
\]
If $p(x)$ is a probability density function, this becomes the problem of evaluating the expected value of $f(x)$. In MC estimation of such expectations, samples $(x_1, x_2, \ldots, x_M)$ are drawn from $p(x)$, and the integral is estimated as $\hat{\rho}_{MC} = \frac{1}{M}\sum_{i=1}^{M} f(x_i)$. $\hat{\rho}_{MC}$ is an unbiased estimate of $\rho$, with variance that diminishes to zero as $M \to \infty$. However, as O'Hagan points out, MC estimation is fundamentally unsound, as it violates the likelihood principle and, moreover, does not make full use of the data at hand [9]. The alternative proposed in [10] is based on the following reasoning: in the Bayesian approach, $f(\cdot)$ is random simply because it is numerically unknown. We are therefore uncertain about the value of $f(x)$ until we actually evaluate it. In fact, even then, our uncertainty is not always completely removed, since measured samples of $f(x)$ may be corrupted by noise. Modeling $f$ as a Gaussian process (GP) means that our uncertainty is completely accounted for by specifying a Normal prior distribution over functions. This prior distribution is specified by its mean and covariance, and is denoted by $f(\cdot) \sim \mathcal{N}\{f_0(\cdot), k(\cdot,\cdot)\}$. This is shorthand for the statement that $f$ is a GP with prior mean $E(f(x)) = f_0(x)$ and covariance $\mathrm{Cov}(f(x), f(x')) = k(x, x')$, respectively. The choice of kernel function $k$ allows us to incorporate prior knowledge on the smoothness properties of the integrand into the estimation procedure. When we are provided with a set of samples $\mathcal{D}_M = \{(x_i, y_i)\}_{i=1}^{M}$, where $y_i$ is a (possibly noisy) sample of $f(x_i)$, we apply Bayes' rule to condition the prior on these sampled values. If the measurement noise is normally distributed, the result is a Normal posterior distribution of $f\,|\,\mathcal{D}_M$.
The expressions for the posterior mean and covariance are standard:
\[
E(f(x)|\mathcal{D}_M) = f_0(x) + \mathbf{k}_M(x)^\top C_M (\mathbf{y}_M - \mathbf{f}_0), \qquad
\mathrm{Cov}(f(x), f(x')|\mathcal{D}_M) = k(x, x') - \mathbf{k}_M(x)^\top C_M \mathbf{k}_M(x'). \tag{7}
\]
Here and in the sequel, we make use of the definitions: $\mathbf{f}_0 = (f_0(x_1), \ldots, f_0(x_M))^\top$, $\mathbf{y}_M = (y_1, \ldots, y_M)^\top$, $\mathbf{k}_M(x) = (k(x_1, x), \ldots, k(x_M, x))^\top$, $[K_M]_{i,j} = k(x_i, x_j)$, $C_M = (K_M + \Sigma_M)^{-1}$, and $[\Sigma_M]_{i,j}$ is the measurement noise covariance between the $i$th and $j$th samples. Typically, it is assumed that the measurement noise is i.i.d., in which case $\Sigma_M = \sigma^2 I$, where $\sigma^2$ is the noise variance and $I$ is the identity matrix. Since integration is a linear operation, the posterior distribution of the integral in Eq. 6 is also Gaussian, and the posterior moments are given by
\[
E(\rho|\mathcal{D}_M) = \int E(f(x)|\mathcal{D}_M)\, p(x)\, dx, \qquad
\mathrm{Var}(\rho|\mathcal{D}_M) = \iint \mathrm{Cov}(f(x), f(x')|\mathcal{D}_M)\, p(x)\, p(x')\, dx\, dx'. \tag{8}
\]
Substituting Eq. 7 into Eq. 8, we get
\[
E(\rho|\mathcal{D}_M) = \rho_0 + \mathbf{z}_M^\top C_M (\mathbf{y}_M - \mathbf{f}_0), \qquad
\mathrm{Var}(\rho|\mathcal{D}_M) = z_0 - \mathbf{z}_M^\top C_M \mathbf{z}_M, \tag{9}
\]
where we made use of the definitions
\[
\rho_0 = \int f_0(x)\, p(x)\, dx, \qquad
\mathbf{z}_M = \int \mathbf{k}_M(x)\, p(x)\, dx, \qquad
z_0 = \iint k(x, x')\, p(x)\, p(x')\, dx\, dx'. \tag{10}
\]
Note that $\rho_0$ and $z_0$ are the prior mean and variance of $\rho$, respectively.

Table 1: Summary of the Bayesian policy gradient Models 1 and 2.

                       | Model 1                                                   | Model 2
Known part             | $p(\xi;\theta) = \Pr(\xi;\theta)$                         | $p(\xi;\theta) = \nabla\Pr(\xi;\theta)$
Uncertain part         | $f(\xi;\theta) = \bar R(\xi)\,\nabla\log\Pr(\xi;\theta)$  | $f(\xi) = \bar R(\xi)$
Measurement            | $y(\xi) = R(\xi)\,\nabla\log\Pr(\xi;\theta)$              | $y(\xi) = R(\xi)$
Prior mean of $f$      | $E(f(\xi;\theta)) = 0$                                    | $E(f(\xi)) = 0$
Prior cov. of $f$      | $\mathrm{Cov}(f(\xi;\theta), f(\xi';\theta)) = k(\xi,\xi')\,I$ | $\mathrm{Cov}(f(\xi), f(\xi')) = k(\xi,\xi')$
Posterior mean         | $E(\nabla\eta_B(\theta)|\mathcal{D}_M) = Y_M C_M \mathbf{z}_M$ | $E(\nabla\eta_B(\theta)|\mathcal{D}_M) = Z_M C_M \mathbf{y}_M$
Posterior cov.         | $\mathrm{Cov}(\nabla\eta_B(\theta)|\mathcal{D}_M) = (z_0 - \mathbf{z}_M^\top C_M \mathbf{z}_M)\,I$ | $\mathrm{Cov}(\nabla\eta_B(\theta)|\mathcal{D}_M) = Z_0 - Z_M C_M Z_M^\top$
Kernel function        | $k(\xi_i,\xi_j) = \bigl(1 + u(\xi_i)^\top G^{-1} u(\xi_j)\bigr)^2$ | $k(\xi_i,\xi_j) = u(\xi_i)^\top G^{-1} u(\xi_j)$
$\mathbf{z}_M$         | $(\mathbf{z}_M)_i = 1 + u(\xi_i)^\top G^{-1} u(\xi_i)$    | $Z_M = U_M$
$z_0$                  | $z_0 = 1 + n$                                             | $Z_0 = G$

(With these choices, the Model 2 posterior covariance becomes $G - U_M C_M U_M^\top$.)

In order to prevent the problem from "degenerating into infinite regress", as phrased by O'Hagan [10], we should choose the functions $p$, $k$, and $f_0$ so as to allow us to solve the integrals in Eq. 10 analytically.
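As a concrete instance of Eqs. 7-10, the following sketch (illustrative, not the paper's Fisher-kernel construction) runs one-dimensional Bayesian quadrature with a squared-exponential kernel and $p = \mathcal{N}(0,1)$, for which $\mathbf{z}_M$ has a closed form, and estimates $\rho = \int x^2\, p(x)\, dx = 1$. The function name, length-scale, and jitter value are assumptions:

```python
import numpy as np

def bq_estimate(xs, ys, ell=1.0, noise=1e-6):
    """Posterior mean of rho = int f(x) p(x) dx with p = N(0, 1), prior mean
    f0 = 0, and kernel k(x, x') = exp(-(x - x')^2 / (2 ell^2)).

    For this kernel, z_i = int k(x_i, x) p(x) dx is available in closed form:
    z_i = ell / sqrt(ell^2 + 1) * exp(-x_i^2 / (2 (ell^2 + 1)))."""
    K = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (2.0 * ell**2))
    z = ell / np.sqrt(ell**2 + 1.0) * np.exp(-xs**2 / (2.0 * (ell**2 + 1.0)))
    # rho_hat = z^T C_M y_M with C_M = (K_M + sigma^2 I)^{-1}  (Eq. 9, f0 = 0)
    return z @ np.linalg.solve(K + noise * np.eye(len(xs)), ys)

# As noted below, the sample points need not be drawn from p: a fixed design works.
xs = np.linspace(-4.0, 4.0, 40)
rho_hat = bq_estimate(xs, xs**2)   # true value: E[x^2] = 1 under N(0, 1)
```

This is exactly the Bayes-Hermite situation: the integrals in Eq. 10 involve only Gaussians, so they can be solved analytically rather than estimated.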
For instance, O'Hagan provides the analysis required for the case where the integrands in Eq. 10 are products of multivariate Gaussians and polynomials, referred to as Bayes-Hermite quadrature. One of the contributions of the present paper is in providing an analogous analysis for kernel functions that are based on the Fisher kernel [13, 14]. It is important to note that in MC estimation, samples must be drawn from the distribution $p(x)$, whereas in the Bayesian approach, samples may be drawn from arbitrary distributions. This affords us flexibility in the choice of sample points, allowing us, for instance, to actively design the samples $(x_1, x_2, \ldots, x_M)$.

4 Bayesian Policy Gradient

In this section, we use Bayesian quadrature to estimate the gradient of the expected return with respect to the policy parameters, and propose Bayesian policy gradient (BPG) algorithms. In the frequentist approach to policy gradient, our performance measure was $\eta(\theta)$ from Eq. 2, which is the result of averaging the cumulative return $R(\xi)$ over all possible paths $\xi$ and all possible returns accumulated in each path. In the Bayesian approach we have an additional source of randomness, which is our subjective Bayesian uncertainty concerning the process generating the cumulative returns. Let us denote
\[
\eta_B(\theta) = \int R(\xi)\, \Pr(\xi;\theta)\, d\xi. \tag{11}
\]
$\eta_B(\theta)$ is a random variable both because of the noise in $R(\xi)$ and the Bayesian uncertainty. Under the quadratic loss, our Bayesian performance measure is $E(\eta_B(\theta)|\mathcal{D}_M)$. Since we are interested in optimizing performance rather than evaluating it, we evaluate the posterior distribution of the gradient of $\eta_B(\theta)$. For the mean we have
\[
\nabla E(\eta_B(\theta)|\mathcal{D}_M) = E(\nabla \eta_B(\theta)|\mathcal{D}_M) = E\left( \int R(\xi)\, \frac{\nabla \Pr(\xi;\theta)}{\Pr(\xi;\theta)}\, \Pr(\xi;\theta)\, d\xi \;\Big|\; \mathcal{D}_M \right). \tag{12}
\]
Consequently, in BPG we cast the problem of estimating the gradient of the expected return in the form of Eq. 6. As described in Sec. 3, we partition the integrand into two parts, $f(\xi;\theta)$ and $p(\xi;\theta)$.
We will place the GP prior over $f$ and assume that $p$ is known. We will then proceed by calculating the posterior moments of the gradient $\nabla \eta_B(\theta)$ conditioned on the observed data. Next, we investigate two different ways of partitioning the integrand in Eq. 12, resulting in two distinct Bayesian models. Table 1 summarizes the two models we use in this work. Our choice of Fisher-type kernels was motivated by the notion that a good representation should depend on the data generating process (see [13, 14] for a thorough discussion). Our particular choices of linear and quadratic Fisher kernels were guided by the requirement that the posterior moments of the gradient be analytically tractable. In Table 1 we made use of the following definitions:
\[
F_M = (f(\xi_1;\theta), \ldots, f(\xi_M;\theta)) \sim \mathcal{N}(0, K_M), \qquad
Y_M = (y(\xi_1), \ldots, y(\xi_M)) \sim \mathcal{N}(0, K_M + \sigma^2 I),
\]
\[
U_M = \bigl[\, u(\xi_1),\, u(\xi_2),\, \ldots,\, u(\xi_M) \,\bigr], \qquad
Z_M = \int \nabla \Pr(\xi;\theta)\, \mathbf{k}_M(\xi)^\top d\xi, \qquad
Z_0 = \iint k(\xi, \xi')\, \nabla \Pr(\xi;\theta)\, \nabla \Pr(\xi';\theta)^\top d\xi\, d\xi'.
\]
Finally, $n$ is the number of policy parameters, and $G = E\bigl(u(\xi)\, u(\xi)^\top\bigr)$ is the Fisher information matrix. We can now use Models 1 and 2 to define algorithms for evaluating the gradient of the expected return with respect to the policy parameters. The pseudo-code for these algorithms is shown in Alg. 1. The generic algorithm (for either model) takes a set of policy parameters $\theta$ and a sample size $M$ as input, and returns an estimate of the posterior moments of the gradient of the expected return.
Algorithm 1: A Bayesian Policy Gradient Evaluation Algorithm

1: BPG_Eval(θ, M)  // policy parameters θ ∈ R^n, sample size M > 0
2: Set G = G(θ), D_0 = ∅
3: for i = 1 to M do
4:   Sample a path ξ_i using the policy µ(θ)
5:   D_i = D_{i−1} ∪ {ξ_i}
6:   Compute u(ξ_i) = Σ_{t=0}^{T_i−1} ∇ log µ(a_t|s_t; θ)
7:   R(ξ_i) = Σ_{t=0}^{T_i−1} r(s_t, a_t)
8:   Update K_i using K_{i−1} and ξ_i
9:   y(ξ_i) = R(ξ_i) u(ξ_i) and (z_M)_i = 1 + u(ξ_i)^⊤ G^{−1} u(ξ_i)  (Model 1),
     or y(ξ_i) = R(ξ_i) and Z_M(:, i) = u(ξ_i)  (Model 2)
10: end for
11: C_M = (K_M + σ² I)^{−1}
12: Compute the posterior mean and covariance:
     E(∇η_B(θ)|D_M) = Y_M C_M z_M,  Cov(∇η_B(θ)|D_M) = (z_0 − z_M^⊤ C_M z_M) I  (Model 1),
     or E(∇η_B(θ)|D_M) = Z_M C_M y_M,  Cov(∇η_B(θ)|D_M) = Z_0 − Z_M C_M Z_M^⊤  (Model 2)
13: return E(∇η_B(θ)|D_M), Cov(∇η_B(θ)|D_M)

The kernel functions used in Models 1 and 2 are both based on the Fisher information matrix $G(\theta)$. Consequently, every time we update the policy parameters we need to recompute $G$. In Alg. 1 we assume that $G$ is known; however, in most practical situations this will not be the case. Let us briefly outline two possible approaches for estimating the Fisher information matrix.

MC Estimation: At each step $j$, our BPG algorithm generates $M$ sample paths using the current policy parameters $\theta_j$ in order to estimate the gradient $\nabla \eta_B(\theta_j)$. We can use these generated sample paths to estimate the Fisher information matrix $G(\theta_j)$ by replacing the expectation in $G$ with empirical averaging:
\[
\hat{G}_{MC}(\theta_j) = \frac{1}{\sum_{i=1}^{M} T_i} \sum_{i=1}^{M} \sum_{t=0}^{T_i-1} \nabla \log \mu(a_t|x_t;\theta_j)\, \nabla \log \mu(a_t|x_t;\theta_j)^\top.
\]
Model-Based Policy Gradient: The Fisher information matrix depends on the probability distribution over paths. This distribution is a product of two factors, one corresponding to the current policy, and the other corresponding to the MDP dynamics $P_0$ and $P$ (see Eq. 1). Thus, if the MDP dynamics are known, the Fisher information matrix can be evaluated off-line. We can model the MDP dynamics using some parameterized model, and estimate the model parameters using maximum likelihood or Bayesian methods.
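The MC estimate $\hat{G}_{MC}$ can be sketched in a few lines. This illustration (not from the paper) uses the single-state Gaussian policy that appears later in Sec. 5.1, $a \sim \mathcal{N}(\theta_1, \theta_2^2)$ with $\theta = (0, 1)$ and paths of length one; for that policy the true Fisher matrix is $\mathrm{diag}(1, 2)$:

```python
import numpy as np

rng = np.random.default_rng(0)

theta1, theta2 = 0.0, 1.0
a = rng.normal(theta1, theta2, size=200_000)      # one action per sample path

# Score u(xi) = grad_theta log N(a; theta1, theta2^2)
#             = [(a - theta1)/theta2^2, ((a - theta1)^2 - theta2^2)/theta2^3]
u = np.column_stack([(a - theta1) / theta2**2,
                     ((a - theta1)**2 - theta2**2) / theta2**3])

# G_hat = (1 / sum_i T_i) * sum_i sum_t u_t u_t^T, with T_i = 1 here
G_hat = u.T @ u / len(a)
```

With many sample paths the empirical average concentrates around $E(u\,u^\top) = \mathrm{diag}(1, 2)$, matching the closed-form Fisher matrix quoted for this policy in Sec. 5.1.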
This would be a model-based approach to policy gradient, which would allow us to transfer information between different policies. Alg. 1 can be made significantly more efficient, both in time and memory, by sparsifying the solution. Such sparsification may be performed incrementally, and helps to numerically stabilize the algorithm when the kernel matrix is singular, or nearly so. Here we use an on-line sparsification method from [15] to selectively add a new observed path to a set of dictionary paths $\mathcal{D}_M$, which are used as a basis for approximating the full solution. Lack of space prevents us from discussing this method in further detail (see Chapter 2 in [15] for a thorough discussion).

The Bayesian policy gradient (BPG) algorithm is described in Alg. 2. This algorithm starts with an initial vector of policy parameters $\theta_0$ and updates the parameters in the direction of the posterior mean of the gradient of the expected return, computed by Alg. 1. This is repeated $N$ times, or alternatively, until the gradient estimate is sufficiently close to zero.

Algorithm 2: A Bayesian Policy Gradient Algorithm

1: BPG(θ_0, α, N, M)  // initial policy parameters θ_0, learning rates (α_j)_{j=0}^{N−1}, number of policy updates N > 0, BPG_Eval sample size M > 0
2: for j = 0 to N − 1 do
3:   Δθ_j = E(∇η_B(θ_j)|D_M) from BPG_Eval(θ_j, M)
4:   θ_{j+1} = θ_j + α_j Δθ_j  (regular gradient), or θ_{j+1} = θ_j + α_j G^{−1}(θ_j) Δθ_j  (natural gradient)
5: end for
6: return θ_N

5 Experimental Results

In this section, we compare the BQ and MC gradient estimators in a continuous-action bandit problem and a continuous state and action linear quadratic regulation (LQR) problem. We also evaluate the performance of the BPG algorithm (Alg. 2) on the LQR problem, and compare it with a standard MC-based policy gradient (MCPG) algorithm.

5.1 A Bandit Problem

In this simple example, we compare the BQ and MC estimates of the gradient (for a fixed set of policy parameters) using the same samples.
Our simple bandit problem has a single state and $A = \mathbb{R}$. Thus, each path $\xi_i$ consists of a single action $a_i$. The policy, and therefore also the distribution over paths, is given by $a \sim \mathcal{N}(\theta_1 = 0,\, \theta_2^2 = 1)$. The score function of the path $\xi = a$ and the Fisher information matrix are given by $u(\xi) = [a,\; a^2 - 1]^\top$ and $G = \mathrm{diag}(1, 2)$, respectively. Table 2 shows the exact gradient of the expected return and its MC and BQ estimates (using 10 and 100 samples) for two versions of the simple bandit problem, corresponding to two different deterministic reward functions, $r(a) = a$ and $r(a) = a^2$. The averages over $10^4$ runs of the MC and BQ estimates and their standard deviations are reported in Table 2. The true gradient is analytically tractable and is reported as "Exact" in Table 2 for reference.

Table 2: The true gradient of the expected return and its MC and BQ estimates for two bandit problems.

r(a) = a
  Exact:    (1, 0)
  MC (10):  (0.9950 ± 0.438,  −0.0011 ± 0.977)
  BQ (10):  (0.9856 ± 0.050,   0.0006 ± 0.060)
  MC (100): (1.0004 ± 0.140,   0.0040 ± 0.317)
  BQ (100): (1.000 ± 0.000001, 0.000 ± 0.000004)

r(a) = a²
  Exact:    (0, 2)
  MC (10):  (0.0136 ± 1.246,   2.0336 ± 2.831)
  BQ (10):  (0.0010 ± 0.082,   1.9250 ± 0.226)
  MC (100): (0.0051 ± 0.390,   1.9869 ± 0.857)
  BQ (100): (0.000 ± 0.000003, 2.000 ± 0.000011)

As shown in Table 2, the BQ estimate has much lower variance than the MC estimate for both small and large sample sizes. The BQ estimate also has a lower bias than the MC estimate for the large sample size ($M = 100$), and almost the same bias for the small sample size ($M = 10$).

5.2 A Linear Quadratic Regulator

In this section, we consider the following linear system, in which the goal is to minimize the expected return over 20 steps. Thus, it is an episodic problem with paths of length 20.
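The bandit comparison can be reproduced in a few lines. The following is an illustrative re-implementation (not the authors' code) of the MC estimator of Eq. 5 and the Model 1 posterior mean $Y_M C_M \mathbf{z}_M$ from Table 1, for $r(a) = a$, where the exact gradient is $(1, 0)^\top$; the small jitter added to $K_M$ in place of the measurement-noise term is an assumption for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 100
a = rng.normal(0.0, 1.0, size=M)          # policy: a ~ N(theta1 = 0, theta2^2 = 1)
R = a                                     # deterministic reward r(a) = a
u = np.column_stack([a, a**2 - 1.0])      # scores u(xi_i), one row per path (M x 2)
G_inv = np.diag([1.0, 0.5])               # G = diag(1, 2), known in closed form

# MC estimate (Eq. 5): average of R(xi_i) * u(xi_i)
grad_mc = (R[:, None] * u).mean(axis=0)

# BQ Model 1: k(xi_i, xi_j) = (1 + u_i^T G^{-1} u_j)^2,
#             (z_M)_i = 1 + u_i^T G^{-1} u_i, estimate = Y_M C_M z_M
Q = 1.0 + u @ G_inv @ u.T
K = Q**2
z = np.diag(Q).copy()
Y = (R[:, None] * u).T                    # 2 x M matrix of y(xi_i) = R(xi_i) u(xi_i)
grad_bq = Y @ np.linalg.solve(K + 1e-6 * np.eye(M), z)
```

Both estimates land near $(1, 0)$; consistent with Table 2, the BQ estimate is essentially exact here because the integrand lies in the finite-dimensional feature space of the quadratic Fisher kernel.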
System:
  Initial state: x_0 ∼ N(0.3, 0.001)
  Rewards: r_t = x_t² + 0.1 a_t²
  Transitions: x_{t+1} = x_t + a_t + n_x,  with n_x ∼ N(0, 0.01)

Policy:
  Actions: a_t ∼ µ(·|x_t; θ) = N(λ x_t, σ²)
  Parameters: θ = (λ, σ)^⊤

We first compare the BQ and MC estimates of the gradient of the expected return for the policy induced by the parameters $\lambda = -0.2$ and $\sigma = 1$. We use several different sample sizes (the number of paths used for gradient estimation), $M = 5j,\ j = 1, \ldots, 20$, for the BQ and MC estimates. For each sample size, we compute both the MC and BQ estimates $10^4$ times, using the same samples. The true gradient is estimated using MC with $10^7$ sample paths for comparison purposes. Figure 1 shows the mean squared error (MSE) (first column) and the mean absolute angular error (second column) of the MC and BQ estimates of the gradient for several different sample sizes. The absolute angular error is the absolute value of the angle between the true gradient and the estimated gradient. In this figure, the BQ gradient estimate was calculated using Model 1 without sparsification. With a good choice of sparsification threshold, we can attain almost identical results much faster and more efficiently with sparsification. These results are not shown here due to space limitations. To give an intuition concerning the speed and efficiency attained by sparsification, we should mention that the dimension of the feature space for the kernel used in Model 1 is 6 (Proposition 9.2 in [14]). Therefore, with sparsification we deal with a kernel matrix of size 6, versus a kernel matrix of size $M = 5j,\ j = 1, \ldots, 20$, without sparsification. We ran another set of experiments, in which we add i.i.d. Gaussian noise to the rewards: $r_t = x_t^2 + 0.1 a_t^2 + n_r$, with $n_r \sim \mathcal{N}(0, \sigma_r^2 = 0.1)$. In Model 2, we can model this by the measurement noise covariance matrix $\Sigma = T \sigma_r^2 I$, where $T = 20$ is the path length.
Since each reward $r_t$ is a Gaussian random variable with variance $\sigma_r^2$, the return $R(\xi) = \sum_{t=0}^{T-1} r_t$ will also be a Gaussian random variable, with variance $T \sigma_r^2$. The results are presented in the third and fourth columns of Figure 1. These experiments indicate that the BQ gradient estimate has lower variance than its MC counterpart. In fact, whereas the performance of the MC estimate improves as $\frac{1}{M}$, the performance of the BQ estimate improves at a higher rate.

[Figure 1: Results for the LQR problem using Model 1 (left) and Model 2 (right), without sparsification. The Model 2 results are for a LQR problem in which the rewards are corrupted by i.i.d. Gaussian noise. For each algorithm, the MSE (left) and the mean absolute angular error (right) are shown as functions of the number of sample paths M. The errors are plotted on a logarithmic scale. All results are averages over 10⁴ runs.]

Next, we use BPG to optimize the policy parameters in the LQR problem. Figure 2 shows the performance of the BPG algorithm with the regular (BPG) and the natural (BPNG) gradient estimates, versus a MC-based policy gradient (MCPG) algorithm, for the sample sizes (the number of sample paths used for estimating the gradient of a policy) $M = 5, 10, 20$, and $40$. We use Alg. 2 with the number of updates set to $N = 100$, and Model 1 for the BPG and BPNG methods. Since Alg. 2 computes the Fisher information matrix for each set of policy parameters, an estimate of the natural gradient is provided at little extra cost at each step. The returns obtained by these methods are averaged over $10^4$ runs for sample sizes 5 and 10, and over $10^3$ runs for sample sizes 20 and 40.
The policy parameters are initialized randomly at each run. In order to ensure that the learned parameters do not exceed an acceptable range, the policy parameters are defined as λ = −1.999 + 1.998/(1 + e^{ν_1}) and σ = 0.001 + 1/(1 + e^{ν_2}). The optimal solution is λ* ≈ −0.92 and σ* = 0.001 (η_B(λ*, σ*) = 0.1003), corresponding to ν_1* ≈ −0.16 and ν_2* → ∞.

[Figure 2 appears here: four log-scale panels plotting average expected return against the number of updates, for sample sizes 5, 10, 20, and 40; curves for MC, BPG, BPNG, and the optimal return.]

Figure 2: A comparison of the average expected returns of BPG using regular (BPG) and natural (BPNG) gradient estimates, with the average expected return of the MCPG algorithm, for sample sizes 5, 10, 20, and 40.

Figure 2 shows that MCPG performs better than the BPG algorithm for the smallest sample size (M = 5), whereas for larger samples BPG dominates MCPG. This phenomenon is also reported in [16]. We use two different learning rates for the two components of the gradient. For a fixed sample size, each method starts with an initial learning rate and decreases it according to the schedule α_j = α_0 (20/(20 + j)). Figure 3 summarizes the best initial learning rates for each algorithm. The selected learning rates for BPNG are significantly larger than those for BPG and MCPG, which explains why BPNG initially learns faster than BPG and MCPG but, contrary to our expectations, eventually performs worse.

         M = 5        M = 10       M = 20       M = 40
MCPG     0.01, 0.05   0.05, 0.10   0.05, 0.10   0.10, 0.15
BPG      0.01, 0.03   0.07, 0.10   0.15, 0.20   0.10, 0.30
BPNG     0.03, 0.50   0.09, 0.30   0.45, 0.90   0.80, 0.90

Figure 3: Initial learning rates used by the PG algorithms.

So far we have assumed that the Fisher information matrix is known.
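The reparameterization and learning-rate schedule above are straightforward to transcribe; a minimal sketch (the function names are ours):

```python
import math

def squash(nu1, nu2):
    """Map unconstrained (nu1, nu2) to the bounded policy parameters:
    lam in (-1.999, -0.001) and sigma in (0.001, 1.001)."""
    lam = -1.999 + 1.998 / (1.0 + math.exp(nu1))
    sigma = 0.001 + 1.0 / (1.0 + math.exp(nu2))
    return lam, sigma

def learning_rate(alpha0, j):
    """Decaying schedule alpha_j = alpha0 * 20 / (20 + j)."""
    return alpha0 * 20.0 / (20.0 + j)
```

For instance, squash(-0.16, nu2) with large nu2 recovers the optimal (λ*, σ*) ≈ (−0.92, 0.001) quoted above.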
In the next experiment, we estimate it using both MC and maximum likelihood (ML) methods, as described in Sec. 4. In ML estimation, we assume that the transition probability function is P(x_{t+1}|x_t, a_t) = N(β_1 x_t + β_2 a_t + β_3, β_4^2), and then estimate its parameters by observing state transitions. Figure 4 shows that when the Fisher information matrix is estimated using MC (BPG-MC), the BPG algorithm still performs better than MCPG, and outperforms the BPG algorithm in which the Fisher information matrix is estimated using ML (BPG-ML). Moreover, as we increase the sample size, the performance of BPG-MC converges to that of the BPG algorithm in which the Fisher information matrix is known (BPG).

[Figure 4 appears here: three log-scale panels plotting average expected return against the number of updates, for sample sizes 10, 20, and 40; curves for MC, BPG, BPG-ML, BPG-MC, and the optimal return.]

Figure 4: A comparison of the average return of BPG when the Fisher information matrix is known (BPG), and when it is estimated using MC (BPG-MC) and ML (BPG-ML) methods, for sample sizes 10, 20, and 40 (from left to right). The average return of the MCPG algorithm is also provided for comparison.

6 Discussion

In this paper we proposed an alternative to conventional frequentist policy gradient estimation procedures, based on the Bayesian view. Our algorithms use GPs to define a prior distribution over the gradient of the expected return, and compute the posterior conditioned on the observed data. The experimental results are encouraging, but we conjecture that even higher gains may be attained using this approach. This calls for additional theoretical and empirical work. Although the proposed policy updating algorithm (Alg.
2) uses only the posterior mean of the gradient in its updates, we hope that more elaborate algorithms can be devised that would make judicious use of the covariance information provided by the gradient estimation algorithm (Alg. 1). Two obvious possibilities are: 1) risk-aware selection of the update step-size and direction, and 2) using the variance in a termination condition for Alg. 1. Other interesting directions include: 1) investigating other possible partitions of the integrand in the expression for ∇η_B(θ) into a GP term f and a known term p; 2) using other types of kernel functions, such as sequence kernels; 3) combining our approach with MDP model estimation, to allow transfer of learning between different policies; 4) investigating methods for learning the Fisher information matrix; and 5) extending the Bayesian approach to Actor-Critic types of algorithms, possibly by combining BPG with the Gaussian process temporal difference (GPTD) algorithms of [15].

Acknowledgments

We thank Rich Sutton and Dale Schuurmans for helpful discussions. M.G. would like to thank Shie Mannor for his useful comments at the early stages of this work. M.G. is supported by iCORE, and Y.E. is partially supported by an Alberta Ingenuity fellowship.

References

[1] R. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
[2] P. Marbach. Simulation-Based Methods for Markov Decision Processes. PhD thesis, MIT, 1998.
[3] J. Baxter and P. Bartlett. Infinite-horizon policy-gradient estimation. JAIR, 15:319–350, 2001.
[4] R. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of NIPS 12, pages 1057–1063, 2000.
[5] S. Kakade. A natural policy gradient. In Proceedings of NIPS 14, 2002.
[6] J. Bagnell and J. Schneider. Covariant policy search. In Proceedings of the 18th IJCAI, 2003.
[7] J. Peters, S. Vijayakumar, and S. Schaal.
Reinforcement learning for humanoid robotics. In Proceedings of the Third IEEE-RAS International Conference on Humanoid Robots, 2003.
[8] J. Berger and R. Wolpert. The Likelihood Principle. Institute of Mathematical Statistics, Hayward, CA, 1984.
[9] A. O'Hagan. Monte Carlo is fundamentally unsound. The Statistician, 36:247–249, 1987.
[10] A. O'Hagan. Bayes-Hermite quadrature. Journal of Statistical Planning and Inference, 29, 1991.
[11] D. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[12] R. Sutton and A. Barto. An Introduction to Reinforcement Learning. MIT Press, 1998.
[13] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Proceedings of NIPS 11. MIT Press, 1998.
[14] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[15] Y. Engel. Algorithms and Representations for Reinforcement Learning. PhD thesis, The Hebrew University of Jerusalem, Israel, 2005.
[16] C. Rasmussen and Z. Ghahramani. Bayesian Monte Carlo. In Proceedings of NIPS 15. MIT Press, 2003.
2006
Handling Advertisements of Unknown Quality in Search Advertising

Sandeep Pandey (Carnegie Mellon University, spandey@cs.cmu.edu)
Christopher Olston (Yahoo! Research, olston@yahoo-inc.com)

Abstract

We consider how a search engine should select advertisements to display with search results, in order to maximize its revenue. Under the standard “pay-per-click” arrangement, revenue depends on how well the displayed advertisements appeal to users. The main difficulty stems from new advertisements whose degree of appeal has yet to be determined. Often the only reliable way of determining appeal is exploration via display to users, which detracts from exploitation of other advertisements known to have high appeal. Budget constraints and finite advertisement lifetimes make it necessary to explore as well as exploit. In this paper we study the tradeoff between exploration and exploitation, modeling advertisement placement as a multi-armed bandit problem. We extend traditional bandit formulations to account for budget constraints that occur in search engine advertising markets, and derive theoretical bounds on the performance of a family of algorithms. We measure empirical performance via extensive experiments over real-world data.

1 Introduction

Search engines are invaluable tools for society. Their operation is supported in large part through advertising revenue. Under the standard “pay-per-click” arrangement, search engines earn revenue by displaying appealing advertisements that attract user clicks. Users benefit as well from this arrangement, especially when searching for commercial goods or services. Successful advertisement placement relies on knowing the appeal or “clickability” of advertisements. The main difficulty is that the appeal of new advertisements that have not yet been “vetted” by users can be difficult to estimate. In this paper we study the problem of placing advertisements to maximize a search engine's revenue, in the presence of uncertainty about appeal.
1.1 The Advertisement Problem

Consider the following advertisement problem [8], illustrated in Figure 1. There are m advertisers A_1, A_2, ..., A_m who wish to advertise on a search engine. The search engine runs a large auction in which each advertiser submits its bids to the search engine for the query phrases in which it is interested. Advertiser A_i submits advertisement a_{i,j} to target query phrase Q_j, and promises to pay b_{i,j} amount of money for each click on this advertisement, where b_{i,j} is A_i's bid for advertisement a_{i,j}. Advertiser A_i can also specify a daily budget (d_i), which is the total amount of money it is willing to pay for the clicks on its advertisements in a day. Given a user search query on phrase Q_j, the search engine selects a constant number C ≥ 1 of advertisements from the candidate set of advertisements {a_{*,j}} targeted to Q_j. The objective in selecting advertisements is to maximize the search engine's total daily revenue. The arrival sequence of user queries is not known in advance. For now we assume that each day a new set of advertisements is given to the search engine and the set remains fixed throughout the day; we drop both of these assumptions later in Section 4.

[Figure 1 appears here, depicting advertisers A_1...A_5 with budgets d_1...d_5, their ads a_{i,j}, and the query phrases Q_1...Q_5 they target.]

Figure 1: Advertiser and query model.

[Figure 2 appears here, a grid of problem variants (Cells I-VI) crossing CTR knowledge (CTR = 1, CTR known, CTR unknown) against budget constraints (none, budget constraints for all ads), annotated with the best known policies: GREEDY (ratio=1) for Cells I and III; MSVV (ratio=1−1/e) and GREEDY (ratio=1/2) for the budget-constrained Cells II and IV; this paper for Cells V and VI.]

Figure 2: Problem variants.

High revenue is achieved by displaying advertisements that have high bids as well as a high likelihood of being clicked on by users. Formally, the click-through rate (CTR) c_{i,j} of advertisement a_{i,j} is the probability that a user clicks on advertisement a_{i,j} given that the advertisement was displayed to the user for query phrase Q_j.
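The entities above map naturally onto a small data model. The following sketch uses hypothetical names; the budget-capped charging rule reflects the convention, stated later in Section 4.1, that reward beyond a depleted budget is not earned:

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str   # A_i
    phrase: str       # Q_j, the targeted query phrase
    bid: float        # b_ij, paid per click
    ctr: float        # c_ij, unknown to the engine in Cells V and VI

@dataclass
class Advertiser:
    budget: float     # daily budget d_i
    spent: float = 0.0

    def charge(self, bid):
        """Charge one click; revenue beyond the daily budget is not earned."""
        earned = min(bid, self.budget - self.spent)
        self.spent += earned
        return earned
```

The `charge` helper is illustrative only; the paper's policies are defined over these quantities rather than over any particular data structure.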
In the absence of budget constraints, revenue is maximized by displaying the advertisements with the highest c_{i,j} · b_{i,j} values. The work of [8] showed how to maximize revenue in the presence of budget constraints, but under the assumption that all CTRs are known in advance. In this paper we tackle the more difficult but realistic problem of maximizing advertisement revenue when CTRs are not necessarily known at the outset, and must be learned on the fly. We show the space of problem variants (along with the best known advertisement policies) in Figure 2. GREEDY refers to selection of advertisements according to expected revenue (i.e., c_{i,j} · b_{i,j}). In Cells I and III, GREEDY performs as well as the optimal policy, where the optimal policy also knows the arrival sequence of queries in advance. We write “ratio=1” in Figure 2 to indicate that GREEDY has a competitive ratio of 1. For Cells II and IV the greedy policy is not optimal, but is nevertheless 1/2-competitive. An alternative policy for Cell II was given in [8], which we refer to as MSVV; it achieves a competitive ratio of 1 − 1/e. In this paper we give the first policies for Cells V and VI, where we must choose which advertisements to display while simultaneously estimating the click-through rates of advertisements.

1.2 Exploration/Exploitation Tradeoff

The main issue we face while addressing Cells V and VI is balancing the exploration/exploitation tradeoff. To maximize short-term revenue, the search engine should exploit its current, imperfect CTR estimates by displaying advertisements whose estimated CTRs are large. On the other hand, to maximize long-term revenue, the search engine needs to explore, i.e., identify which advertisements have the largest CTRs. This kind of exploration entails displaying advertisements whose current CTR estimates are of low confidence, which inevitably leads to displaying some low-CTR ads in the short term.
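For the known-CTR cells, the GREEDY rule just described amounts to sorting by expected revenue. A minimal sketch, with budget handling omitted and a hypothetical tuple representation of ads:

```python
def greedy_select(ads, C):
    """GREEDY: pick the C candidate ads with the largest expected revenue
    c * b (CTR times bid). `ads` is a list of (ad_id, ctr, bid) tuples."""
    ranked = sorted(ads, key=lambda ad: ad[1] * ad[2], reverse=True)
    return [ad[0] for ad in ranked[:C]]
```

For example, an ad with CTR 0.05 and bid 5.0 (expected revenue 0.25) outranks one with CTR 0.1 and bid 2.0 (expected revenue 0.2).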
This kind of tradeoff between exploration and exploitation shows up often in practice, e.g., in clinical trials, and has been extensively studied in the context of the multi-armed bandit problem [4]. In this paper we draw upon and extend the existing bandit literature to solve the advertisement problem in the case of unknown CTRs. In particular, first, in Section 3, we show that the unbudgeted variant of the problem (Cell V in Figure 2) is an instance of the multi-armed bandit problem. Then, in Section 4, we introduce a new kind of bandit problem that we term the budgeted multi-armed multi-bandit problem (BMMP), and show that the budgeted unknown-CTR advertisement problem (Cell VI) is an instance of BMMP. We propose policies for BMMP and give performance bounds. We evaluate our policies empirically over real-world data in Section 5. In the extended technical version of the paper [9] we show how to extend our policies to address various practical considerations, e.g., exploiting any prior information available about the CTRs of ads, and permitting advertisers to submit and revoke advertisements at any time, not just at day boundaries.

2 Related Work

We have already discussed the work of [8], which addresses the advertisement problem under the assumption that CTRs are known. There has not been much published work on estimating CTRs. Reference [7] discusses how contextual information such as user demographics or ad topic can be used to estimate CTRs, and makes connections to the recommender and bandit problems, but stops short of presenting technical solutions. Some methods for estimating CTRs are proposed in [5], with the focus of thwarting click fraud. Reference [1] studies how to maximize user clicks on banner ads. The key problem addressed in [1] is to satisfy the contracts made with the advertisers in terms of the minimum guaranteed number of impressions (as opposed to the budget constraints in our problem).
Reference [10] looks at the advertisement problem from an advertiser's point of view, and gives an algorithm for identifying the most profitable set of keywords for the advertiser.

3 Unbudgeted Unknown-CTR Advertisement Problem

In this section we address Cell V of Figure 2, where click-through rates are initially unknown and budget constraints are absent (i.e., d_i = ∞ for all advertisers A_i). Our unbudgeted problem is an instance of the multi-armed bandit problem [4], which is the following: we have K arms, where each arm has an associated reward and payoff probability. The payoff probability is not known to us, while the reward may or may not be known (both versions of the bandit problem exist). With each invocation we activate exactly C ≤ K arms.¹ Each activated arm then yields the associated reward with its payoff probability and nothing with the remaining probability. The objective is to determine a policy for activating the arms so as to maximize the total reward over some number of invocations. To solve the unbudgeted unknown-CTR advertisement problem, we create a multi-armed bandit problem instance for each query phrase Q, where the ads targeted for the query phrase are the arms, bid values are the rewards, and CTRs are the payoff probabilities of the bandit instance. Since there are no budget constraints, we can treat each query phrase independently and solve each bandit instance in isolation.² The number of invocations for a bandit instance is not known in advance because the number of queries of phrase Q in a given day is not known in advance. A variety of policies have been proposed for the bandit problem, e.g., [2, 3, 6], any of which can be applied to our unbudgeted advertisement problem. The policies proposed in [3] are particularly attractive because they have a known performance bound for any number of invocations not known in advance (in our context the number of queries is not known a priori).
In the case of C = 1, the policies of [3] make O(ln n) mistakes, on expectation, in n invocations (which is also the asymptotic lower bound on the number of mistakes [6]). A mistake occurs when a suboptimal arm is chosen by a policy (the optimal arm is the one with the highest expected reward). We consider a specific policy from [3] called UCB and apply it to our problem (other policies from [3] can also be used). UCB is proposed under a slightly different reward model; we adapt it to our context to produce the following policy, which we call MIX (for mixing exploration with exploitation). In [9] we prove a performance bound of O(ln n) mistakes for MIX for any C ≥ 1.

Policy MIX. Each time a query for phrase Q_j arrives:

1. Display the C ads targeted for Q_j that have the highest priority. The priority P_{i,j} of ad a_{i,j} is a function of its current CTR estimate (ĉ_{i,j}), its bid value (b_{i,j}), the number of times it has been displayed so far (n_{i,j}), and the number of times phrase Q_j has been queried so far in the day (n_j). Formally, priority P_{i,j} is defined as:

   P_{i,j} = (ĉ_{i,j} + √(2 ln n_j / n_{i,j})) · b_{i,j}   if n_{i,j} > 0
   P_{i,j} = ∞                                             otherwise

2. Monitor the clicks made by users and update the CTR estimates ĉ_{i,j} accordingly. ĉ_{i,j} is the average click-through rate observed so far, i.e., the number of times ad a_{i,j} has been clicked on divided by the total number of times it has been displayed.

¹ The conventional multi-armed bandit problem is defined for C = 1. We generalize it to any C ≥ 1 in this paper.
² We assume CTRs to be independent of one another.

Policy MIX manages the exploration/exploitation tradeoff in the following way. The priority function has two factors: an exploration factor √(2 ln n_j / n_{i,j}) that diminishes with time, and an exploitation factor (ĉ_{i,j}). Since ĉ_{i,j} can be estimated only when n_{i,j} ≥ 1, the priority value is set to ∞ for an ad which has never been displayed before.
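A direct transcription of Policy MIX for a single query phrase might look as follows; this is a sketch, and the class and method names are ours:

```python
import math

class Mix:
    """MIX policy for one query phrase Q_j. Tracks, per ad: the display
    count n_ij and click count (giving the CTR estimate c_hat), plus the
    phrase's query count n_j. `bids` maps ad_id -> b_ij."""
    def __init__(self, bids):
        self.bids = bids
        self.shown = {a: 0 for a in bids}
        self.clicks = {a: 0 for a in bids}
        self.n_j = 0

    def priority(self, a):
        n = self.shown[a]
        if n == 0:
            return float("inf")   # force at least one display of every new ad
        c_hat = self.clicks[a] / n
        return (c_hat + math.sqrt(2.0 * math.log(self.n_j) / n)) * self.bids[a]

    def select(self, C=1):
        """Step 1: on a query arrival, display the C highest-priority ads."""
        self.n_j += 1
        return sorted(self.bids, key=self.priority, reverse=True)[:C]

    def update(self, a, clicked):
        """Step 2: record the display and any click for ad a."""
        self.shown[a] += 1
        self.clicks[a] += int(clicked)
```

As the text notes, this needs only a single pass over the candidate ads per query and three counters per ad.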
Importantly, the MIX policy is practical to implement because it can be evaluated efficiently using a single pass over the ads targeted for a query phrase. Furthermore, it incurs minimal storage overhead because it keeps only three numbers (ĉ_{i,j}, n_{i,j} and b_{i,j}) with each ad and one number (n_j) with each query phrase.

4 Budgeted Unknown-CTR Advertisement Problem

We now turn to the more challenging case in which advertisers can specify daily budgets (Cell VI of Figure 2). Recall from Section 3 that in the absence of budget constraints, we were able to treat the bandit instance created for a query phrase independently of the other bandit instances. However, budget constraints create dependencies between query phrases targeted by an advertiser. To model this situation, we introduce a new kind of bandit problem that we call the Budgeted Multi-armed Multi-bandit Problem (BMMP), in which multiple bandit instances are run in parallel under overarching budget constraints. We derive generic policies for BMMP and give performance bounds.

4.1 Budgeted Multi-armed Multi-bandit Problem

BMMP consists of a finite set of multi-armed bandit instances, B = {B_1, B_2, ..., B_|B|}. Each bandit instance B_i has a finite number of arms and associated rewards and payoff probabilities, as described in Section 3. In BMMP each arm also has an associated type. Each type T_i ∈ T has a budget d_i ∈ [0, ∞] which specifies the maximum amount of reward that can be generated by activating all the arms of that type. Once the specified budget is reached for a type, the corresponding arms can still be activated but no further reward is earned. With each invocation of the bandit system, one bandit instance from B is invoked; the policy has no control over which bandit instance is invoked. Then the policy activates C arms of the invoked bandit instance, and the activated arms generate some (possibly zero) total reward. It is easy to see that the budgeted unknown-CTR advertisement problem is an instance of BMMP.
Each query phrase acts as a bandit instance and the ads targeted for it act as bandit arms, as described in Section 3. Each advertiser defines a unique type of arms and gives a budget constraint for that type; all ads submitted by an advertiser belong to the type defined by it. When a query is submitted by a user, the corresponding bandit instance is invoked. We now show how to derive a policy for BMMP given as input a policy POL for the regular multi-armed bandit problem, such as one of the policies from [3]. The derived policy, denoted by BPOL (Budget-aware POL), is as follows:

• Run |B| instances of POL in parallel, denoted POL_1, POL_2, ..., POL_|B|.
• Whenever bandit instance B_i is invoked:
  1. Discard any arm(s) of B_i whose type's budget is newly depleted, i.e., has become depleted since the last invocation of B_i.
  2. If one or more arms of B_i was discarded during step 1, restart POL_i.
  3. Let POL_i decide which of the remaining arms of B_i to activate.

Observe that in the second step of BPOL, when POL is restarted, POL loses any state it has built up, including any knowledge gained about the payoff probabilities of bandit arms. Surprisingly, despite this seemingly imprudent behavior, we can still derive a good performance bound for BPOL, provided that POL has certain properties, as we discuss in the next section. In practice, since most bandit policies can take prior information about the payoff probabilities as input, when restarting POL we can supply the previous payoff probability estimates as the prior (as done in our experiments).

4.2 Performance Bound for BMMP Policies

Let S denote the sequence of bandit instances that are invoked, i.e., S = {S(1), S(2), ..., S(N)}, where S(n) denotes the index of the bandit instance invoked at the nth invocation. We compare the performance of BPOL with that of the optimal policy, denoted by OPT, where OPT has advance knowledge of S and the exact payoff probabilities of all bandit instances.
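Before the analysis, the BPOL wrapper of Section 4.1 can be sketched generically. Here `make_pol`, `arms_of`, and `budget_of` are hypothetical interfaces (a POL factory, the arm set of a bandit instance, and the remaining budget of an arm's type), and the restart-with-prior refinement mentioned above is omitted:

```python
class BPol:
    """BPOL wrapper: one POL instance per bandit instance (query phrase),
    restarted whenever an arm's budget type becomes newly depleted."""
    def __init__(self, make_pol, arms_of, budget_of):
        self.make_pol, self.arms_of, self.budget_of = make_pol, arms_of, budget_of
        self.pols, self.live = {}, {}

    def invoke(self, b, C=1):
        arms = self.live.setdefault(b, list(self.arms_of(b)))
        alive = [a for a in arms if self.budget_of(a) > 0]
        if b not in self.pols or len(alive) < len(arms):
            self.live[b] = alive                 # step 1: drop depleted arms
            self.pols[b] = self.make_pol(alive)  # step 2: restart POL_b
        return self.pols[b].select(C)            # step 3: let POL_b choose
```

Any bandit policy exposing a `select(C)` method, such as a MIX-style implementation, could serve as the underlying POL.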
We claim that bpol(N) ≥ opt(N)/2 − O(f(N)) for any N, where bpol(N) and opt(N) denote the total expected reward obtained after N invocations by BPOL and OPT, respectively, and f(n) denotes the expected number of mistakes made by POL after n invocations of the regular multi-armed bandit problem (for UCB, f(n) is O(ln n) [3]). Our complete proof is rather involved. Here we give a high-level outline of the proof (the complete proof is given in [9]). For simplicity we focus on the C = 1 case; C ≥ 1 is a simple extension thereof. Since bandit arms generate rewards stochastically, it is not clear how we should compare BPOL and OPT. For example, even if BPOL and OPT behave in exactly the same way (activate the same arm on each bandit invocation), we cannot guarantee that both will have the same total reward in the end. To enable a meaningful comparison, we define a payoff instance, denoted by I, such that I(i, n) denotes the reward generated by arm i of bandit instance S(n) for invocation n in payoff instance I. The outcome of running BPOL or OPT on a given payoff instance is deterministic because the rewards are fixed in the payoff instance. Hence, we can compare BPOL and OPT on a per-payoff-instance basis. Since each payoff instance arises with a certain probability, denoted P(I), by taking the expectation over all possible payoff instances of execution we can compare the expected performance of BPOL and OPT. Let us consider invocation n in payoff instance I. Let B(I, n) and O(I, n) denote the arms of bandit instance S(n) activated under BPOL and OPT, respectively. Based on the different possibilities that can arise, we classify invocation n into one of three categories:

• Category 1: The arm activated by OPT, O(I, n), is of smaller or equal expected reward in comparison to the arm activated by BPOL, B(I, n). The expected reward of an arm is the product of its payoff probability and reward.
• Category 2: Arm O(I, n) is of greater expected reward than B(I, n), but O(I, n) is not available for BPOL to activate at invocation n due to budget restrictions.
• Category 3: Arm O(I, n) is of greater expected reward than B(I, n) and both arms O(I, n) and B(I, n) are available for BPOL to activate, but BPOL prefers to activate B(I, n) over O(I, n).

Let us denote the invocations of category k (1, 2 or 3) by N^k(I) for payoff instance I. Let bpol_k(N) and opt_k(N) denote the expected reward obtained during the invocations of category k by BPOL and OPT, respectively. In [9] we show that

   bpol_k(N) = Σ_{I∈I} P(I) · Σ_{n∈N^k(I)} I(B(I, n), n)

Similarly,

   opt_k(N) = Σ_{I∈I} P(I) · Σ_{n∈N^k(I)} I(O(I, n), n)

Then for each k we bound opt_k(N) in terms of bpol(N). In [9] we provide a proof of each of the following bounds:

Lemma 1. opt_1(N) ≤ bpol_1(N).
Lemma 2. opt_2(N) ≤ bpol(N) + (|T| · r_max), where |T| denotes the number of arm types and r_max denotes the maximum reward.
Lemma 3. opt_3(N) = O(f(N)).

From the above bounds we obtain our overall claim:

Theorem 1. bpol(N) ≥ opt(N)/2 − O(f(N)), where bpol(N) and opt(N) denote the total expected reward obtained under BPOL and OPT, respectively.

Proof: opt(N) = opt_1(N) + opt_2(N) + opt_3(N) ≤ bpol_1(N) + (bpol(N) + |T| · r_max) + O(f(N)) ≤ 2 · bpol(N) + O(f(N)). Hence, bpol(N) ≥ opt(N)/2 − O(f(N)).

If we supply MIX (Section 3) as input to our generic BPOL framework, we obtain BMIX, a policy for the budgeted unknown-CTR advertisement problem. Due to the way MIX structures and maintains its internal state, it is not necessary to restart a MIX instance when an advertiser's budget is depleted in BMIX, as specified in the generic BPOL framework (the exact steps of BMIX are given in [9]). So far, for modeling purposes, we have assumed the search engine receives an entirely new batch of advertisements each day. In reality, ads may persist over multiple days.
With BMIX, we can carry forward an ad's CTR estimate (ĉ_{i,j}) and display count (n_{i,j}) from day to day until the ad is revoked, to avoid having to re-learn CTRs from scratch each day. Of course, the daily budgets reset daily, regardless of how long each ad persists. In fact, with a little care we can permit ads to be submitted and revoked at arbitrary times (not just at day boundaries). We describe this extension, as well as how we can incorporate and leverage prior beliefs about CTRs, in [9].

5 Experiments

From our general result of Section 4, we have a theoretical performance guarantee for BMIX. In this section we study BMIX empirically. In particular, we compare it with the greedy policy proposed for the known-CTR advertisement problem (Cells I-IV in Figure 2). GREEDY displays the C ads targeted for a query phrase that have the highest ĉ_{i,j} · b_{i,j} values among the ads whose advertisers have enough remaining budget; to induce a minimal amount of exploration, for an ad which has never been displayed before, GREEDY treats ĉ_{i,j} as ∞ (our policies do this as well). GREEDY is geared exclusively toward exploitation. Hence, by comparing GREEDY with our policies, we can gauge the importance of exploration. We also propose and evaluate the following variants of BMIX that we expect to perform well in practice:

1. Varying the Exploration Factor. Internally, BMIX runs instances of MIX to select which ads to display. As mentioned in Section 4, the priority function of MIX consists of an exploration factor √(2 ln n_j / n_{i,j}) and an exploitation factor (ĉ_{i,j}). In [3] it was shown empirically that the following heuristical exploration factor performs well, despite the absence of a known performance guarantee:

   √( (ln n_j / n_{i,j}) · min{1/4, V_{i,j}(n_{i,j}, n_j)} )

   where V_{i,j}(n_{i,j}, n_j) = ĉ_{i,j} · (1 − ĉ_{i,j}) + √(2 ln n_j / n_{i,j})

Substituting this expression in place of √(2 ln n_j / n_{i,j}) in the priority function of BMIX gives us a new (heuristical) policy we call BMIX-E.

2. Budget Throttling.
It is shown in [8] that in the presence of budget constraints, it is beneficial to display the ads of an advertiser less often as the advertiser's remaining budget decreases. In particular, they propose to multiply the bids from advertiser A_i by the following discount factor:

   φ(d′_i) = 1 − e^{−d′_i/d_i}

where d′_i is the current remaining budget of advertiser A_i for the day and d_i is its total daily budget. Following this idea, we can replace b_{i,j} by φ(d′_i) · b_{i,j} in the priority function of BMIX, yielding a variant we call BMIX-T. Policy BMIX-ET refers to the use of heuristics 1 and 2 together.

5.1 Experiment Setup

We evaluate advertisement policies by conducting simulations over real-world data. Our data set consists of a sample of 85,000 query phrases selected at random from the Yahoo! query log for the date of February 12, 2006. Since we have the frequency counts of these query phrases but not their actual order, we ran the simulations multiple times with random orderings of the query instances, and report the average revenue in all our experiment results. The total number of query instances is 2 million. For each query phrase we have the list of advertisers interested in it and the ads submitted by them to Yahoo!. We also have the budget constraints of the advertisers. Roughly 60% of the advertisers in our data set impose daily budget constraints. In our simulation, when an ad is displayed, we decide whether a click occurs by flipping a coin weighted by the true CTR of the ad. Since true CTRs are not known to us (this is the problem we are trying to solve!), we took the following approach to assigning CTRs to ads: from a larger set of Yahoo! ads we selected those that have been displayed more than a thousand times, and for which we therefore have highly accurate CTR estimates. We regarded the distribution of these CTR estimates as the true CTR distribution. Then for each ad a_{i,j} in the dataset we sampled a random value from this distribution and assigned it as the CTR c_{i,j} of the ad.
(Although this method may introduce some skew compared with the (unknown) true distribution, it is the best we could do short of serving live ads just for the purpose of measuring CTRs.) We are now ready to present our results. Due to lack of space, we consider a simple setting here where the set of ads is fixed and no prior information about CTRs is available. We study the more general setting in [9].

5.2 Exploration/Exploitation Tradeoff

We ran each of the policies for a time horizon of ten days; each policy carries over its CTR estimates from one day to the next. Budget constraints are renewed each day. For now we fix the number of displayed ads (C) to 1. Figure 3 plots the revenue generated by each policy after a given number of days (for confidentiality reasons we have changed the unit of revenue). All policies (including GREEDY) estimate CTRs based on past observations, so as time passes their estimates become more reliable and their performance improves. Note that the exploration factor of BMIX-E causes it to perform substantially better than BMIX. The budget throttling heuristic (BMIX-T and BMIX-ET) did not make much difference in our experiments. All of our proposed policies perform significantly better than GREEDY, which underscores the importance of balancing exploration and exploitation. GREEDY is geared exclusively toward exploitation, so one might expect that early on it would outperform the other policies. However, that does not happen, because GREEDY immediately fixates on ads that are not very profitable (i.e., have low c_{i,j} · b_{i,j}). Next we vary the number of ads displayed for each query (C). Figure 4 plots total revenue over ten days on the y-axis, and C on the x-axis. Each policy earns more revenue when more ads are displayed (larger C). Our policies outperform GREEDY consistently across different values of C. In fact, GREEDY must display almost twice as many ads as BMIX-E to generate the same amount of revenue.
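The two heuristics behind BMIX-E and BMIX-T, compared above, are small closed-form expressions. A sketch transcribing them from Section 5 (the function names are ours):

```python
import math

def tuned_exploration(c_hat, n_ij, n_j):
    """BMIX-E's heuristic exploration factor (UCB-tuned style):
    sqrt((ln n_j / n_ij) * min(1/4, V_ij)), with
    V_ij = c_hat * (1 - c_hat) + sqrt(2 ln n_j / n_ij)."""
    v = c_hat * (1.0 - c_hat) + math.sqrt(2.0 * math.log(n_j) / n_ij)
    return math.sqrt((math.log(n_j) / n_ij) * min(0.25, v))

def throttle(bid, remaining, daily):
    """BMIX-T's budget-throttling discount: phi(d') = 1 - exp(-d'/d),
    applied multiplicatively to the bid."""
    return (1.0 - math.exp(-remaining / daily)) * bid
```

Note that the tuned factor is never larger than MIX's plain √(2 ln n_j / n_{i,j}) bonus, which is one way to see why it explores less aggressively.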
Figure 3: Revenue generated by different advertisement policies (C=1).

Figure 4: Effect of C (number of ads displayed per query).

6 Summary and Future Work

In this paper we studied how a search engine should select which ads to display in order to maximize revenue, when click-through rates are not initially known. We dealt with the underlying exploration/exploitation tradeoff using multi-armed bandit theory. In the process we contributed to bandit theory by proposing a new variant of the bandit problem that we call the budgeted multi-armed multi-bandit problem (BMMP). We proposed a policy for solving BMMP and derived a performance guarantee. Practical extensions of our advertisement policies are given in the extended version of the paper. Extensive experiments over real ad data demonstrate substantial revenue gains compared to a greedy strategy that has no provision for exploration. Several useful extensions of this problem can be conceived. One such extension would be to exploit similarity in ad attributes while inferring CTRs, as suggested in [7], instead of estimating the CTR of each ad independently. Also, an adversarial formulation of this problem merits study, perhaps leading to general consideration of how to manage exploration versus exploitation in game-theoretic scenarios.

References

[1] N. Abe and A. Nakamura. Learning to Optimally Schedule Internet Banner Advertisements. In ICML, 1999.
[2] R. Agrawal. Sample Mean Based Index Policies with O(log n) Regret for the Multi-Armed Bandit Problem. Advances in Applied Probability, 27:1054-1078, 1995.
[3] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time Analysis of the Multi-Armed Bandit Problem. Machine Learning, 47:235-256, 2002.
[4] D. A. Berry and B. Fristedt. Bandit Problems: Sequential Allocation of Experiments. Chapman and Hall, London, 1985.
[5] N. Immorlica, K. Jain, M. Mahdian, and K. Talwar. Click Fraud Resistant Methods for Learning Click-Through Rates. In WINE, 2005.
[6] T. Lai and H. Robbins. Asymptotically Efficient Adaptive Allocation Rules. Advances in Applied Mathematics, 6:4-22, 1985.
[7] O. Madani and D. Decoste. Contextual Recommender Problems. In Proceedings of the 1st International Workshop on Utility-Based Data Mining, 2005.
[8] A. Mehta, A. Saberi, U. Vazirani, and V. Vazirani. AdWords and Generalized On-line Matching. In FOCS, 2005.
[9] S. Pandey and C. Olston. Handling Advertisements of Unknown Quality in Search Advertising. Technical report, October 2006. Available via http://www.cs.cmu.edu/~spandey/publications/ctrEstimation.pdf.
[10] P. Rusmevichientong and D. Williamson. An Adaptive Algorithm for Selecting Profitable Keywords for Search-Based Advertising Services. In EC, 2006.
Part-based Probabilistic Point Matching using Equivalence Constraints

Graham McNeill, Sethu Vijayakumar
Institute of Perception, Action and Behavior, School of Informatics, University of Edinburgh, Edinburgh, UK. EH9 3JZ
[graham.mcneill, sethu.vijayakumar]@ed.ac.uk

Abstract

Correspondence algorithms typically struggle with shapes that display part-based variation. We present a probabilistic approach that matches shapes using independent part transformations, where the parts themselves are learnt during matching. Ideas from semi-supervised learning are used to bias the algorithm towards finding 'perceptually valid' part structures. Shapes are represented by unlabeled point sets of arbitrary size, and a background component is used to handle occlusion, local dissimilarity and clutter. Thus, unlike many shape matching techniques, our approach can be applied to shapes extracted from real images. Model parameters are estimated using an EM algorithm that alternates between finding a soft correspondence and computing the optimal part transformations using Procrustes analysis.

1 Introduction

Shape-based object recognition is a key problem in machine vision and content-based image retrieval (CBIR). Over the last decade, numerous shape matching algorithms have been proposed that perform well on benchmark shape retrieval tests. However, many of these techniques share the same limitations: Firstly, they operate on contiguous shape boundaries (i.e. the ordering of the boundary points matters) and assume that every point on one boundary has a counterpart on the boundary it is being matched to (cf. Fig. 1c). Secondly, they have no principled mechanism for handling occlusion, non-boundary points and clutter. Finally, they struggle to handle shapes that display significant part-based variation.
The first two limitations mean that many algorithms are unsuitable for matching shapes extracted from real images; the latter is important since many common objects (natural and man-made) display part-based variation. Techniques that match unordered point sets (e.g. [1]) are appealing since they do not require ordered boundary information and can work with non-boundary points. The methods described in [2, 3, 4] can handle outliers, occlusions and clutter, but are not designed to handle shapes whose parts are independently transformed. In this paper, we introduce a probabilistic model that retains the desirable properties of these techniques but handles parts explicitly by learning the most likely part structure and correspondence simultaneously. In this framework, a part is defined as a set of points that undergo a common transformation. Learning these variation-based parts from scratch is an underconstrained problem. To address this, we incorporate prior knowledge about valid part assignments using two different mechanisms. Firstly, the distributions of our hierarchical mixture model are chosen so that the learnt parts are spatially localized. Secondly, ideas from semi-supervised learning [5] are used to encourage a perceptually meaningful part decomposition. The algorithm is introduced in Sec. 2 and described in detail in Sec. 3. Examples are given in Sec. 4, and a sequential approach for tackling model selection (the number of parts) and parameter initialization is introduced in Sec. 5.

Figure 1: Examples of probabilistic point matching (PPM) using the technique described in [4]: (a) occlusion, (b) irregular sampling, (c) localized dissimilarity. In each case, the initial alignment and the final match are shown.
2 Part-based Point Matching (PBPM): Motivation and Overview

The PBPM algorithm combines three key ideas:

Probabilistic point matching (PPM): Probabilistic methods that find a soft correspondence between unlabeled point sets [2, 3, 4] are well suited to problems involving occlusion, absent features and clutter (Fig. 1).

Natural part decomposition (NPD): Most shapes have a natural part decomposition (NPD) (Fig. 2), and there are several algorithms available for finding NPDs (e.g. [6]). We note that in tasks such as object recognition and CBIR, the query image is frequently a template shape (e.g. a binary image or line drawing) or a high-quality image with no occlusion or clutter. In such cases, one can apply an NPD algorithm prior to matching. Throughout this paper, it is assumed that we have obtained a sensible NPD for the query shape only (the NPDs used in the examples were constructed manually) - it is not reasonable to assume that an NPD can be computed for each database shape/image.

Variation-based part decomposition (VPD): A different notion of parts has been used in computer vision [7], where a part is defined as a set of pixels that undergo the same transformations across images. We refer to this type of part decomposition (PD) as a variation-based part decomposition (VPD).

Given two shapes (i.e. point sets), PBPM matches them by applying a different transformation to each variation-based part of the generating shape. These variation-based parts are learnt during matching, where the known NPD of the data shape is used to bias the algorithm towards choosing a 'perceptually valid' VPD. This is achieved using the equivalence constraint

Constraint 1 (C1): Points that belong to the same natural part should belong to the same variation-based part.

As we shall see in Sec. 3, this influences the learnt VPD by changing the generative model from one that generates individual data points to one that generates natural parts (subsets of data points). To further increase the perceptual validity of the learnt VPD, we assume that variation-based parts are composed of spatially localized points of the generating shape. PBPM aims to find the correct correspondence at the level of individual points, i.e. each point of the generating shape should be mapped to the correct position on the data shape despite the lack of an exact pointwise correspondence (e.g. Fig. 1b). Soft correspondence techniques that achieve this using a single nonlinear transformation [2, 3] perform well on some challenging problems. However, the smoothness constraints used to control the nonlinearity of the transformation will prevent these techniques from selecting the discontinuous transformations associated with part-based movements. PBPM learns an independent linear transformation for each part and hence can find the correct global match.

In relation to the point matching literature, PBPM is motivated by the success of the techniques described in [8, 2, 3, 4] on non-part-based problems. It is perhaps most similar to the work of Hancock and colleagues (e.g. [8]) in that we use 'structural information' about the point sets to constrain the matching problem. In addition to learning multiple parts and transformations, our work differs in the type of structural information used (the NPD rather than the Delaunay triangulation) and the way in which this information is incorporated. With respect to the shape-matching literature, PBPM can be seen as a novel correspondence technique for use with established NPD algorithms. Despite the large number of NPD algorithms, there are relatively few NPD-based correspondence techniques.

Figure 2: The natural part decomposition (NPD) (b-d) for different representations of a shape (a).

Siddiqi and Kimia show that the parts used in their NPD algorithm [6] correspond to specific types of shocks when shock graph representations are used.
Consequently, shock graphs implicitly capture ideas about natural parts. The Inner-Distance method of Ling and Jacobs [9] handles part articulation without explicitly identifying the parts.

3 Part-based Point Matching (PBPM): Algorithm

3.1 Shape Representation

Shapes are represented by point sets of arbitrary size. The points need not belong to the shape boundary and the ordering of the points is irrelevant. Given a generating shape X = (x_1, x_2, \ldots, x_M)^T \in R^{M \times 2} and a data shape Y = (y_1, y_2, \ldots, y_N)^T \in R^{N \times 2} (generally M \neq N), our task is to compute the correspondence between X and Y. We assume that an NPD of Y is available, expressed as a partition of Y into subsets (parts): Y = \bigcup_{l=1}^{L} Y_l.

3.2 The Probabilistic Model

We assume that a data point y is generated by the mixture model

p(y) = \sum_{v=0}^{V} p(y \mid v)\, \pi_v,   (1)

where v indexes the variation-based parts. A uniform background component, y \mid (v{=}0) \sim Uniform, ensures that all data points are explained to some extent and hence robustifies the model against outliers. The distribution of y given a foreground component v is itself a mixture model:

p(y \mid v) = \sum_{m=1}^{M} p(y \mid m, v)\, p(m \mid v),  v = 1, 2, \ldots, V,   (2)

with

y \mid (m, v) \sim N(T_v x_m, \sigma^2 I).   (3)

Here, T_v is the transformation used to match points of part v on X to points of part v on Y. Finally, we define p(m|v) in such a way that the variation-based parts v are forced to be spatially coherent:

p(m \mid v) = \frac{\exp\{-(x_m - \mu_v)^T \Sigma_v^{-1} (x_m - \mu_v)/2\}}{\sum_{m'} \exp\{-(x_{m'} - \mu_v)^T \Sigma_v^{-1} (x_{m'} - \mu_v)/2\}},   (4)

where \mu_v \in R^2 is a mean vector and \Sigma_v \in R^{2 \times 2} is a covariance matrix. In words, we identify m \in \{1, \ldots, M\} with the point x_m that it indexes and assume that the x_m follow a bivariate Gaussian distribution. Since m must take a value in \{1, \ldots, M\}, the distribution is normalized using the points x_1, \ldots, x_M only. This assumption means that the x_m themselves are essentially generated by a GMM with V components.
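Equations (2)-(4) can be sketched numerically in a few lines. The point set, transformation and Gaussian parameters below are toy values of our own, not data from the paper's experiments.

```python
import numpy as np

def spatial_prior(X, mu, Sigma):
    """Eq. (4): p(m|v) -- a Gaussian over the generating points x_m,
    normalized over the M points only (not over the whole plane)."""
    d = X - mu                                     # (M, 2) offsets
    q = np.einsum('mi,ij,mj->m', d, np.linalg.inv(Sigma), d)
    w = np.exp(-0.5 * q)
    return w / w.sum()

def p_y_given_v(y, X, T_v, mu, Sigma, sigma2):
    """Eqs. (2)-(3): mixture over points m of isotropic Gaussians
    centred at the transformed points T_v(x_m)."""
    pm = spatial_prior(X, mu, Sigma)
    centres = T_v(X)                               # (M, 2)
    sq = np.sum((y - centres) ** 2, axis=1)
    dens = np.exp(-sq / (2.0 * sigma2)) / (2.0 * np.pi * sigma2)
    return float(dens @ pm)

# Toy check: three points, identity transformation, one part.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
p = p_y_given_v(np.array([0.0, 0.0]), X, lambda Z: Z,
                mu=np.zeros(2), Sigma=np.eye(2), sigma2=0.25)
```

Note that `spatial_prior` sums to one over the M points, matching the normalization in eq. (4), and the point nearest the mean receives the largest weight.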
However, this GMM is embedded in the larger model, and maximizing the data likelihood will balance this GMM's desire for coherent parts against the need for the parts and transformations to explain the actual data (the y_n). Having defined all the distributions, the next step is to estimate the parameters whilst making use of the known NPD of Y.

3.3 Parameter Estimation

With respect to the model defined in the previous section, C1 states that all y_n that belong to the same subset Y_l were generated by the same mixture component v. This requirement can be enforced using the technique introduced by Shental et al. [5] for incorporating equivalence constraints between data points in mixture models. The basic idea is to estimate the model parameters using the EM algorithm; however, when taking the expectation (of the complete log-likelihood), we now sum only over assignments of data points to components which are valid with respect to the constraints. Assuming that subsets, and points within subsets, are sampled i.i.d., it can be shown that the expectation is given by:

E = \sum_{v=0}^{V} \sum_{l=1}^{L} p(v \mid Y_l) \log \pi_v + \sum_{v=0}^{V} \sum_{l=1}^{L} \sum_{y_n \in Y_l} p(v \mid Y_l) \log p(y_n \mid v).   (5)

Note that eq. (5) involves p(v|Y_l) - the responsibility of a component v for a subset Y_l - rather than the term p(v|y_n) that would be present in an unconstrained mixture model. Using the expression for p(y_n|v) in eq. (2) and rearranging slightly, we have

E = \sum_{v=0}^{V} \sum_{l=1}^{L} p(v \mid Y_l) \log \pi_v + \sum_{l=1}^{L} p(v{=}0 \mid Y_l) \log\{u^{|Y_l|}\} + \sum_{v=1}^{V} \sum_{l=1}^{L} \sum_{y_n \in Y_l} p(v \mid Y_l) \log\left\{ \sum_{m=1}^{M} p(y_n \mid m, v)\, p(m \mid v) \right\},   (6)

where u is the constant associated with the uniform distribution p(y_n|v{=}0). The parameters to be estimated are \pi_v (eq. (1)), \mu_v, \Sigma_v (eq. (4)) and the transformations T_v (eq. (3)). With the exception of \pi_v, these are found by maximizing the final term in eq. (6). For a fixed v, this term is the log-likelihood of data points y_1, \ldots, y_N under a mixture model, with the modification that there is a weight, p(v|Y_l), associated with each data point.
Thus, we can treat this subproblem as a standard maximum likelihood problem and derive the EM updates as usual. The resulting EM algorithm is given below.

E-step. Compute the responsibilities using the current parameters:

p(m \mid y_n, v) = \frac{p(y_n \mid m, v)\, p(m \mid v)}{\sum_{m'} p(y_n \mid m', v)\, p(m' \mid v)},  v = 1, 2, \ldots, V   (7)

p(v \mid Y_l) = \frac{\pi_v \prod_{y_n \in Y_l} p(y_n \mid v)}{\sum_{v'} \pi_{v'} \prod_{y_n \in Y_l} p(y_n \mid v')}   (8)

M-step. Update the parameters using the responsibilities:

\pi_v = \frac{1}{L} \sum_{l=1}^{L} p(v \mid Y_l)   (9)

\mu_v = \frac{\sum_{n,m} p(v \mid Y_{l,n})\, p(m \mid y_n, v)\, x_m}{\sum_{n,m} p(v \mid Y_{l,n})\, p(m \mid y_n, v)}   (10)

\Sigma_v = \frac{\sum_{n,m} p(v \mid Y_{l,n})\, p(m \mid y_n, v)\, (x_m - \mu_v)(x_m - \mu_v)^T}{\sum_{n,m} p(v \mid Y_{l,n})\, p(m \mid y_n, v)}   (11)

T_v = \arg\min_T \sum_{n,m} p(v \mid Y_{l,n})\, p(m \mid y_n, v)\, \|y_n - T x_m\|^2   (12)

where Y_{l,n} is the subset Y_l containing y_n. Here, we define T_v x \equiv s_v \Gamma_v x + c_v, where s_v is a scale parameter, c_v \in R^2 is a translation vector and \Gamma_v is a 2D rotation matrix. Thus, eq. (12) becomes a weighted Procrustes matching problem between two point sets, each of size N \times M - the extent to which x_m corresponds to y_n in the context of part v is given by p(v|Y_{l,n}) p(m|y_n, v). This least-squares problem for the optimal transformation parameters s_v, \Gamma_v and c_v can be solved analytically [8]. The weights associated with the updates in eqs. (10)-(12) are similar to p(v|y_n) p(m|y_n, v) = p(m, v|y_n), the responsibility of the hidden variables (m, v) for the observed data y_n. The difference is that p(v|y_n) is replaced by p(v|Y_{l,n}), and hence the impact of the equivalence constraints is propagated throughout the model. The same fixed variance \sigma^2 (eq. (3)) is used in all experiments. For the examples in Sec. 4, we initialize \pi_v, \mu_v and \Sigma_v by fitting a standard GMM to the x_m. In Sec. 5, we describe a sequential algorithm that can be used to select the number of parts V as well as provide initial estimates for all parameters.

Figure 3: An example of applying PBPM with V=3. (Input: X with the initial Gaussians for p(m|v), Y, and the NPD of Y; output: the VPD of X with the final Gaussians for p(m|v), the VPD of Y, the NPD of X, the transformed X, and the final match.)
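Equation (12) has the closed-form weighted Procrustes solution mentioned above. The following is a sketch for the 2D similarity-transform case (variable names are ours, not from the paper), where `W[n, m]` holds the weight p(v|Y_{l,n}) p(m|y_n, v) for a fixed part v:

```python
import numpy as np

def weighted_procrustes(X, Y, W):
    """Solve eq. (12) for T(x) = s * Gamma @ x + c minimizing
    sum_{n,m} W[n, m] * ||y_n - T(x_m)||^2, with X (M,2), Y (N,2),
    W (N,M). Assumes 2D points and a proper rotation Gamma."""
    wx = W.sum(axis=0)                 # total weight on each x_m
    wy = W.sum(axis=1)                 # total weight on each y_n
    w = W.sum()
    mx = (wx @ X) / w                  # weighted centroids
    my = (wy @ Y) / w
    Xc, Yc = X - mx, Y - my
    # Weighted cross-covariance between the centred point sets.
    C = Yc.T @ W @ Xc
    U, S, Vt = np.linalg.svd(C)
    # Restrict to a proper rotation (det = +1).
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])
    Gamma = U @ D @ Vt
    s = np.trace(D @ np.diag(S)) / (wx @ np.sum(Xc ** 2, axis=1))
    c = my - s * (Gamma @ mx)
    return s, Gamma, c
```

With a weight matrix that pairs each x_m with a single y_n, this reduces to ordinary Procrustes and recovers a known similarity transform exactly in the noise-free case.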
Figure 4: Results for the problem in Fig. 3 using PPM [4] and PBPM with V = 2, 4, 5 and 6.

4 Examples

As discussed in Secs. 1 and 2, unsupervised matching of shapes with moving parts is a relatively unexplored area - particularly for shapes not composed of single closed boundaries. This makes it difficult to quantitatively assess the performance of our algorithm. Here, we provide illustrative examples which demonstrate the various properties of PBPM and then consider more challenging problems involving shapes extracted from real images. The number of parts, V, is fixed prior to matching in these examples; a technique for estimating V is described in Sec. 5. To visualize the matches found by PBPM, each point y_n is assigned to a part v using max_v p(v|y_n). Points assigned to v=0 are removed from the figure. For each y_n assigned to some v \in \{1, \ldots, V\}, we find m_n \equiv \arg\max_m p(m|y_n, v) and assign x_{m_n} to v. Those x_m not assigned to any part are removed from the figure. The means and the ellipses of constant probability density associated with the distributions N(\mu_v, \Sigma_v) are plotted on the original shape X. We also assign the x_m to natural parts using the known natural part labels of the y_n that they are assigned to. Fig. 3 shows an example of matching two human body shapes using PBPM with V=3. The learnt VPD is intuitive and the match is better than that found using PPM (Fig. 4). The results obtained using different values of V are shown in Fig. 4. Predictably, the match improves as V increases, but the improvement is negligible beyond V=4. When V=5, one of the parts is effectively repeated, suggesting that four parts are sufficient to cover all the interesting variation. However, when V=6 all parts are used and the VPD looks very similar to the NPD - only the lower leg and foot on each side are grouped together. In Fig.
5, there are two genuine variation-based parts and X contains additional features. PBPM effectively ignores the extra points of X and finds the correct parts and matches. In Fig. 6, the left leg is correctly identified and rotated, whereas the right leg of Y is 'deleted'. We find that deletion from the generating shape tends to be very precise (e.g. Fig. 5), whereas PBPM is less inclined to delete points from the data shape when it involves breaking up natural parts (e.g. Fig. 6).

Figure 5: Some features of X are not present on Y; the main building of X is smaller and the tower is more central.

Figure 6: The left legs do not match and most of the right leg of X is missing.

This is largely due to the equivalence constraints trying to keep natural parts intact, though the value of the uniform density, u, and the way in which points are assigned to parts are also important. In Figs. 7 and 8, a template shape is matched to the edge detector output from two real images. We have not focused on optimizing the parameters of the edge detector, since the aim is to demonstrate the ability of PBPM to handle suboptimal shape representations. The correct correspondence and PDs are estimated in all cases, though the results are less precise for these difficult problems. Six parts are used in Fig. 8, but two of these are initially assigned to clutter and end up playing no role in the final match. The object of interest in X is well matched to the template using the other four parts. Note that the left shoulder is not assigned to the same variation-based part as the other points of the torso, i.e.
the soft equivalence constraint has been broken in the interests of finding the best match. We have not yet considered the choice of V. Figs. 4 (with V=5) and 8 indicate that it may be possible to start with more parts than are required and either allow extraneous parts to go unused or perhaps prune parts during matching. Alternatively, one could run PBPM for a range of V and use a model selection technique based on a penalized log-likelihood function (e.g. BIC) to select a V. Finally, one could attempt to learn the parts in a sequential fashion. This is the approach considered in the next section.

5 Sequential Algorithm for Initialization

When part variation is present, one would expect PBPM with V=1 to find the most significant part and allow the background to explain the remaining parts. This suggests a sequential approach whereby a single part is learnt and removed from further consideration at each stage. Each new part/component should focus on data points that are currently explained by the background. This is achieved by modifying the technique described in [7] for fitting mixture models sequentially. Specifically, assume that the first part (v=1) has been learnt, and now learn the second part using the weighted log-likelihood

J_2 = \sum_{l=1}^{L} z_l^1 \log\{ p(Y_l \mid v{=}2)\, \pi_2 + u^{|Y_l|} (1 - \pi_1 - \pi_2) \}.   (13)

Figure 7: Matching a template shape to an object in a cluttered scene (X = edge detector output).

Figure 8: Matching a template shape to a real image (X = edge detector output).

Here, \pi_1 is known and

z_l^1 \equiv \frac{u^{|Y_l|} (1 - \pi_1)}{p(Y_l \mid v{=}1)\, \pi_1 + u^{|Y_l|} (1 - \pi_1)}   (14)

is the responsibility of the background component for the subset Y_l after learning the first part - the superscript of z indicates the number of components that have already been learnt. Using the modified log-likelihood in eq. (13) has the desired effect of forcing the new component (v=2) to explain the data currently explained by the uniform component. Note that we use the responsibilities for the subsets Y_l rather than the individual y_n [7], in line with the assumption that complete subsets belong to the same part. Also, note that eq. (13) is a weighted sum of log-likelihoods over the subsets; it cannot be written as a sum over data points, since these are not sampled i.i.d. due to the equivalence constraints. Maximizing eq. (13) leads to EM updates similar to those given in eqs. (7)-(12). Having learnt the second part, additional components v = 3, 4, \ldots are learnt in the same way, except for minor adjustments to eqs. (13) and (14) to incorporate all previously learnt components. The sequential algorithm terminates when the uniform component is not significantly responsible for any data or the most recently learnt component is not significantly responsible for any data. As discussed in [7], the sequential algorithm is expected to have fewer problems with local minima, since the objective function will be smoother (a single component competes against a uniform component at each stage) and the search space smaller (fewer parameters are learnt at each stage). Preliminary experiments suggest that the sequential algorithm is capable of solving the model selection problem (choosing the number of parts) and providing good initial parameter values for the full model described in Sec. 3. Some examples are given in Figs. 9 and 10 - the initial transformations for each part are not shown. The outcome of the sequential algorithm is highly dependent on the value of the uniform density, u.
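Equations (13) and (14) reduce to a few lines of code. The subset likelihoods fed in below are placeholder numbers, since computing p(Y_l|v) properly requires the full model of Sec. 3.

```python
import math

def background_resp(p_yl_v1, pi1, u, n_points):
    """Eq. (14): responsibility z_l^1 of the uniform background for a
    subset Y_l of n_points points, after the first part is learnt."""
    bg = (u ** n_points) * (1.0 - pi1)
    return bg / (p_yl_v1 * pi1 + bg)

def weighted_loglik_J2(subsets, pi1, pi2, u):
    """Eq. (13): objective for learning the second part. Each subset
    is a tuple (p(Y_l|v=1), p(Y_l|v=2), |Y_l|) with made-up values."""
    J = 0.0
    for p1, p2, n in subsets:
        z = background_resp(p1, pi1, u, n)
        J += z * math.log(p2 * pi2 + (u ** n) * (1.0 - pi1 - pi2))
    return J
```

A subset well explained by the first part gets a small background responsibility and therefore contributes little to J_2, which is exactly the mechanism that pushes the new component toward data the background still owns.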
We are currently investigating how the model can be made more robust to this value, and also how the used x_m should be subtracted (in a probabilistic sense) at each step.

Figure 9: Results for PBPM; V and the initial parameters were found using the sequential approach.

Figure 10: Results for PBPM; V and the initial parameters were found using the sequential approach.

6 Summary and Discussion

Despite the prevalence of part-based objects/shapes, there has been relatively little work on the associated correspondence problem. In the absence of class models and training data (i.e. the unsupervised case), this is a particularly difficult task. In this paper, we have presented a probabilistic correspondence algorithm that handles part-based variation by learning the parts and correspondence simultaneously. Ideas from semi-supervised learning are used to bias the algorithm towards finding a 'perceptually valid' part decomposition. Future work will focus on robustifying the sequential approach described in Sec. 5.

References

[1] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. PAMI, 24:509-522, 2002.
[2] H. Chui and A. Rangarajan. A new point matching algorithm for non-rigid registration. Computer Vision and Image Understanding, 89:114-141, 2003.
[3] Z. Tu and A. L. Yuille. Shape matching and recognition using generative models and informative features. In ECCV, 2004.
[4] G. McNeill and S. Vijayakumar. A probabilistic approach to robust shape matching. In ICIP, 2006.
[5] N. Shental, A. Bar-Hillel, T. Hertz, and D. Weinshall. Computing Gaussian mixture models with EM using equivalence constraints. In NIPS, 2004.
[6] K. Siddiqi and B. B. Kimia. Parts of visual form: Computational aspects. PAMI, 17(3):239-251, 1995.
[7] M. Titsias. Unsupervised Learning of Multiple Objects in Images. PhD thesis, Univ. of Edinburgh, 2005.
[8] B. Luo and E. R. Hancock. A unified framework for alignment and correspondence. Computer Vision and Image Understanding, 92:26-55, 2003.
[9] H. Ling and D. W. Jacobs. Using the inner-distance for classification of articulated shapes. In CVPR, 2005.
Multi-Task Feature Learning

Andreas Argyriou, Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK. a.argyriou@cs.ucl.ac.uk

Theodoros Evgeniou, Technology Management and Decision Sciences, INSEAD, Bd de Constance, Fontainebleau 77300, France. theodoros.evgeniou@insead.edu

Massimiliano Pontil, Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK. m.pontil@cs.ucl.ac.uk

Abstract

We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the well-known 1-norm regularization problem, using a new regularizer which controls the number of learned features common to all the tasks. We show that this problem is equivalent to a convex optimization problem and develop an iterative algorithm for solving it. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the latter step we learn common-across-tasks representations and in the former step we learn task-specific functions using these representations. We report experiments on a simulated and a real data set which demonstrate that the proposed method dramatically improves the performance relative to learning each task independently. Our algorithm can also be used, as a special case, to simply select - not learn - a few common features across the tasks.

1 Introduction

Learning multiple related tasks simultaneously has been empirically [2, 3, 8, 9, 12, 18, 19, 20] as well as theoretically [2, 4, 5] shown to often significantly improve performance relative to learning each task independently. This is the case, for example, when only a few data points per task are available, so that there is an advantage in "pooling" together data across many related tasks. Tasks can be related in various ways.
For example, task relatedness has been modeled by assuming that all functions learned are close to each other in some norm [3, 8, 15, 19]. This may be the case for functions capturing preferences in user modeling problems [9, 13]. Tasks may also be related in that they all share a common underlying representation [4, 5, 6]. For example, in object recognition, it is well known that the human visual system is organized in a way that all objects are represented - at the earlier stages of the visual system - using a common set of learned features, e.g. local filters similar to wavelets [16]. (We consider each object recognition problem within each object category, e.g. recognizing a face among faces, or a car among cars, to be a different task.) In modeling users' preferences/choices, it may also be the case that people make product choices (e.g. of books, music CDs, etc.) using a common set of features describing these products. In this paper, we explore the latter type of task relatedness; that is, we wish to learn a low-dimensional representation which is shared across multiple related tasks. Inspired by the fact that the well-known 1-norm regularization problem provides such a sparse representation for the single-task case, in Section 2 we generalize this formulation to the multiple-task case. Our method learns a few features common across the tasks by regularizing within the tasks while keeping them coupled to each other. Moreover, the method can be used, as a special case, to select (not learn) a few features from a prescribed set. Since the extended problem is nonconvex, we develop an equivalent convex optimization problem in Section 3 and present an algorithm for solving it in Section 4. A similar algorithm was investigated in [9] from the perspective of conjoint analysis. Here we provide a theoretical justification of the algorithm in connection with 1-norm regularization.
The learning algorithm simultaneously learns both the features and the task functions through two alternating steps. The first step consists of independently learning the parameters of the tasks' regression or classification functions. The second step consists of learning, in an unsupervised way, a low-dimensional representation for these task parameters, which we show to be equivalent to learning common features across the tasks. The number of common features learned is controlled, as we empirically show, by the regularization parameter, much like sparsity is controlled in the case of single-task 1-norm regularization. In Section 5, we report experiments on a simulated and a real data set which demonstrate that the proposed method learns a few common features across the tasks while also improving the performance relative to learning each task independently. Finally, in Section 6 we briefly compare our approach with other related multi-task learning methods and draw our conclusions.

2 Learning sparse multi-task representations

We begin by introducing our notation. We let IR be the set of real numbers and IR_+ (IR_{++}) the subset of non-negative (positive) ones. Let T be the number of tasks and define IN_T := \{1, \ldots, T\}. For each task t \in IN_T, we are given m input/output examples (x_{t1}, y_{t1}), \ldots, (x_{tm}, y_{tm}) \in IR^d \times IR. Based on this data, we wish to estimate T functions f_t : IR^d \to IR, t \in IN_T, which approximate well the data and are statistically predictive; see e.g. [11]. If w, u \in IR^d, we define \langle w, u \rangle := \sum_{i=1}^{d} w_i u_i, the standard inner product in IR^d. For every p \geq 1, we define the p-norm of a vector w as \|w\|_p := (\sum_{i=1}^{d} |w_i|^p)^{1/p}. If A is a d \times T matrix, we denote by a^i \in IR^T and a_j \in IR^d the i-th row and the j-th column of A, respectively. For every r, p \geq 1, we define the (r, p)-norm of A as \|A\|_{r,p} := (\sum_{i=1}^{d} \|a^i\|_r^p)^{1/p}. We denote by S^d the set of d \times d real symmetric matrices and by S^d_+ the subset of positive semidefinite ones.
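The (r, p)-norm defined above is straightforward to compute; here is a short numpy sketch (the example matrix is ours, chosen so the value can be checked by hand).

```python
import numpy as np

def rp_norm(A, r, p):
    """(r, p)-norm of a d x T matrix A: the p-norm of the vector of
    r-norms of the rows a^i, as defined in Section 2."""
    row_norms = np.linalg.norm(A, ord=r, axis=1)
    return np.linalg.norm(row_norms, ord=p)

# Rows: (3, 4) -> 2-norm 5; (0, 0) -> 0; (1, 0) -> 1.
A = np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]])
val = rp_norm(A, 2, 1)  # (2,1)-norm: sum of row 2-norms
```

The (2, 2)-norm recovers the Frobenius norm, while the (2, 1)-norm used later as a regularizer sums the row norms and therefore encourages entire rows of A to be zero.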
If $D$ is a $d \times d$ matrix, we define $\mathrm{trace}(D) := \sum_{i=1}^d D_{ii}$. If $X$ is a $p \times q$ real matrix, $\mathrm{range}(X)$ denotes the set $\{x \in \mathbb{R}^p : x = Xz \text{ for some } z \in \mathbb{R}^q\}$. We let $O^d$ be the set of $d \times d$ orthogonal matrices. Finally, $D^+$ denotes the pseudoinverse of a matrix $D$.

2.1 Problem formulation

The underlying assumption in this paper is that the functions $f_t$ are related so that they all share a small set of features. Formally, our hypothesis is that the functions $f_t$ can be represented as

$$f_t(x) = \sum_{i=1}^d a_{it} h_i(x), \qquad t \in \mathbb{N}_T, \qquad (2.1)$$

where $h_i : \mathbb{R}^d \to \mathbb{R}$ are the features and $a_{it} \in \mathbb{R}$ are the regression parameters. Our main assumption is that all the features but a few have zero coefficients across all the tasks. For simplicity, we focus on linear features, that is, $h_i(x) = \langle u_i, x \rangle$, where $u_i \in \mathbb{R}^d$. In addition, we assume that the vectors $u_i$ are orthonormal. Thus, if $U$ denotes the $d \times d$ matrix whose columns are the vectors $u_i$, then $U \in O^d$. The functions $f_t$ are then linear as well, that is, $f_t(x) = \langle w_t, x \rangle$, where $w_t = \sum_i a_{it} u_i$. Extensions to nonlinear functions may be obtained, for example, by using kernels along the lines of [8, 15]. Since this is not central to the present paper we postpone its discussion to a future occasion. Let us denote by $W$ the $d \times T$ matrix whose columns are the vectors $w_t$ and by $A$ the $d \times T$ matrix with entries $a_{it}$. We then have that $W = UA$. Our assumption that the tasks share a "small" set of features means that the matrix $A$ has "many" rows which are identically equal to zero and, so, the corresponding features (columns of matrix $U$) will not be used to represent the task parameters (columns of matrix $W$). In other words, $W$ is a low-rank matrix. We note that the problem of learning a low-rank matrix factorization which approximates a given partially observed target matrix has been considered in [1], [17] and references therein. We briefly discuss its connection to our current work in Section 4.
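The claim that row-sparsity of $A$ makes $W = UA$ low rank is easy to sanity-check numerically (a small illustration; the dimensions and random seed are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 6, 10

# A random orthogonal feature matrix U, obtained via QR decomposition.
U, _ = np.linalg.qr(rng.standard_normal((d, d)))

# A has only two nonzero rows: every task uses the same two features.
A = np.zeros((d, T))
A[0] = rng.standard_normal(T)
A[3] = rng.standard_normal(T)

W = U @ A
print(np.linalg.matrix_rank(W))  # 2: the number of nonzero rows of A
```

The rank of $W$ coincides with the number of nonzero rows of $A$, which is why a row-sparsity penalty on $A$ acts as a low-rank penalty on $W$.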
In the following, we describe our approach to computing the feature vectors $u_i$ and the parameters $a_{it}$. We first consider the case in which there is only one task (say task $t$) and the features $u_i$ are fixed. To learn the parameter vector $a_t \in \mathbb{R}^d$ from data $\{(x_{ti}, y_{ti})\}_{i=1}^m$ we would like to minimize the empirical error $\sum_{i=1}^m L(y_{ti}, \langle a_t, U^\top x_{ti} \rangle)$ subject to an upper bound on the number of nonzero components of $a_t$, where $L : \mathbb{R} \times \mathbb{R} \to \mathbb{R}_+$ is a prescribed loss function which we assume to be convex in the second argument. This problem is intractable and is often relaxed by requiring an upper bound on the 1-norm of $a_t$. That is, we consider the problem $\min \left\{ \sum_{i=1}^m L(y_{ti}, \langle a_t, U^\top x_{ti} \rangle) : \|a_t\|_1^2 \le \alpha^2 \right\}$, or equivalently the unconstrained problem

$$\min \left\{ \sum_{i=1}^m L(y_{ti}, \langle a_t, U^\top x_{ti} \rangle) + \gamma \|a_t\|_1^2 : a_t \in \mathbb{R}^d \right\}, \qquad (2.2)$$

where $\gamma > 0$ is the regularization parameter. It is well known that using the 1-norm leads to sparse solutions, that is, many components of the learned vector $a_t$ are zero, see [7] and references therein. Moreover, the number of nonzero components of a solution to problem (2.2) is "typically" a non-increasing function of $\gamma$ [14]. We now generalize problem (2.2) to the multi-task case. For this purpose, we introduce the regularization error function

$$E(A, U) = \sum_{t=1}^T \sum_{i=1}^m L(y_{ti}, \langle a_t, U^\top x_{ti} \rangle) + \gamma \|A\|_{2,1}^2. \qquad (2.3)$$

The first term in (2.3) is the average of the empirical error across the tasks while the second one is a regularization term which penalizes the $(2,1)$-norm of the matrix $A$. It is obtained by first computing the 2-norms of the rows $a^i$ of matrix $A$ (each row corresponding to feature $i$ across the tasks) and then the 1-norm of the vector $b(A) = (\|a^1\|_2, \dots, \|a^d\|_2)$. This norm couples the tasks and ensures that common features will be selected across them. Indeed, if the features $U$ are prescribed and $\hat{A}$ minimizes the function $E$ over $A$, the number of nonzero components of the vector $b(\hat{A})$ will typically be non-increasing with $\gamma$, as in the case of 1-norm single-task regularization.
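To illustrate the sparsity effect of 1-norm regularization discussed above, here is a minimal proximal-gradient (ISTA) sketch for the closely related problem with the plain (unsquared) 1-norm penalty; the data, gammas and function names are our own choices, not the paper's:

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lasso_ista(X, y, gamma, n_iter=3000):
    """Proximal gradient for min ||Xw - y||^2 + gamma * ||w||_1."""
    lr = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)  # step size below 1/L
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ w - y)
        w = soft_threshold(w - lr * grad, lr * gamma)
    return w

rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 10))
w_true = np.zeros(10)
w_true[:3] = [1.5, -1.0, 0.5]                  # only 3 relevant components
y = X @ w_true + 0.01 * rng.standard_normal(30)

nonzeros = [int(np.sum(np.abs(lasso_ista(X, y, g)) > 1e-6))
            for g in (0.01, 1.0, 300.0)]
print(nonzeros)  # the number of nonzero components shrinks as gamma grows
```

For a large enough $\gamma$ the solution collapses to the zero vector, matching the "typically non-increasing" behavior described in the text.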
Moreover, the components of the vector $b(\hat{A})$ indicate how important each feature is and favor uniformity across the tasks for each feature. Since we do not simply want to select the features but also to learn them, we further minimize the function $E$ over $U$, that is, we consider the optimization problem

$$\min \left\{ E(A, U) : U \in O^d, \ A \in \mathbb{R}^{d \times T} \right\}. \qquad (2.4)$$

This method learns a low-dimensional representation which is shared across the tasks. As in the single-task case, the number of features will typically be non-increasing with the regularization parameter – we shall present experimental evidence of this fact in Section 5 (see Figure 1 therein). We note that when the matrix $U$ is not learned and we set $U = I_{d \times d}$, problem (2.4) computes a common set of variables across the tasks. That is, we have the following convex optimization problem

$$\min \left\{ \sum_{t=1}^T \sum_{i=1}^m L(y_{ti}, \langle a_t, x_{ti} \rangle) + \gamma \|A\|_{2,1}^2 : A \in \mathbb{R}^{d \times T} \right\}. \qquad (2.5)$$

We shall return to problem (2.5) in Section 4 where we present an algorithm for solving it.

3 Equivalent convex optimization formulation

Solving problem (2.4) is a challenging task for two main reasons. First, it is a non-convex problem, although it is separately convex in each of the variables $A$ and $U$. Second, the norm $\|A\|_{2,1}$ is nonsmooth, which makes the problem more difficult to optimize. A main result of this paper is that problem (2.4) can be transformed into an equivalent convex problem. To this end, for every $W \in \mathbb{R}^{d \times T}$ and $D \in S^d_+$, we define the function

$$R(W, D) = \sum_{t=1}^T \sum_{i=1}^m L(y_{ti}, \langle w_t, x_{ti} \rangle) + \gamma \sum_{t=1}^T \langle w_t, D^+ w_t \rangle. \qquad (3.1)$$

Theorem 3.1. Problem (2.4) is equivalent to the problem

$$\min \left\{ R(W, D) : W \in \mathbb{R}^{d \times T}, \ D \in S^d_+, \ \mathrm{trace}(D) \le 1, \ \mathrm{range}(W) \subseteq \mathrm{range}(D) \right\}. \qquad (3.2)$$

That is, $(\hat{A}, \hat{U})$ is an optimal solution of (2.4) if and only if $(\hat{W}, \hat{D}) = (\hat{U}\hat{A}, \ \hat{U}\,\mathrm{Diag}(\hat{\lambda})\,\hat{U}^\top)$ is an optimal solution of (3.2), where

$$\hat{\lambda}_i := \frac{\|\hat{a}^i\|_2}{\|\hat{A}\|_{2,1}}. \qquad (3.3)$$

Proof. Let $W = UA$ and $D = U \,\mathrm{Diag}\!\left(\frac{\|a^i\|_2}{\|A\|_{2,1}}\right) U^\top$.
Then $\|a^i\|_2 = \|W^\top u_i\|_2$ and hence

$$\sum_{t=1}^T \langle w_t, D^+ w_t \rangle = \mathrm{trace}(W^\top D^+ W) = \|A\|_{2,1} \, \mathrm{trace}\big(W^\top U \,\mathrm{Diag}\big((\|W^\top u_i\|_2)^+\big)\, U^\top W\big) = \|A\|_{2,1} \, \mathrm{trace}\Big( \sum_{i=1}^d (\|W^\top u_i\|_2)^+ \, W^\top u_i u_i^\top W \Big) = \|A\|_{2,1} \sum_{i=1}^d \|W^\top u_i\|_2 = \|A\|_{2,1}^2.$$

Therefore, $\min_{W,D} R(W, D) \le \min_{A,U} E(A, U)$. Conversely, let $D = U \,\mathrm{Diag}(\lambda)\, U^\top$. Then

$$\sum_{t=1}^T \langle w_t, D^+ w_t \rangle = \mathrm{trace}\big(W^\top U \,\mathrm{Diag}(\lambda_i^+)\, U^\top W\big) = \mathrm{trace}\big(\mathrm{Diag}(\lambda_i^+)\, A A^\top\big) \ge \|A\|_{2,1}^2,$$

by Lemma 4.2. Note that the range constraint ensures that $W$ is a multiple of the submatrix of $U$ which corresponds to the nonzero eigenvalues of $D$, and hence if $\lambda_i = 0$ then $a^i = 0$ as well. Therefore, $\min_{A,U} E(A, U) \le \min_{W,D} R(W, D)$.

In problem (3.2) we have constrained the trace of $D$; otherwise the optimal solution would be to simply set $D = \infty$ and only minimize the empirical error term in (3.1). Similarly, we have imposed the range constraint to ensure that the penalty term is bounded below and away from zero. Indeed, without this constraint, it would be possible to have $DW = 0$ when $W$ does not have full rank, in which case there is a matrix $D$ for which $\sum_{t=1}^T \langle w_t, D^+ w_t \rangle = \mathrm{trace}(W^\top D^+ W) = 0$. We note that the rank of matrix $D$ indicates how many common relevant features the tasks share. Indeed, it is clear from equation (3.3) that the rank of $\hat{D}$ equals the number of nonzero rows of matrix $\hat{A}$. We now show that the function $R$ in equation (3.1) is jointly convex in $W$ and $D$. For this purpose, we define the function $f(w, D) = w^\top D^+ w$ if $D \in S^d_+$ and $w \in \mathrm{range}(D)$, and $f(w, D) = +\infty$ otherwise. Clearly, $R$ is convex provided $f$ is convex. The latter is true since a direct computation expresses $f$ as the supremum of a family of convex functions, namely we have that $f(w, D) = \sup \{ w^\top v + \mathrm{trace}(ED) : E \in S^d, \ v \in \mathbb{R}^d, \ 4E + v v^\top \preceq 0 \}$.

4 Learning algorithm

We solve problem (3.2) by alternately minimizing the function $R$ with respect to $D$ and to the $w_t$ (recall that $w_t$ is the $t$-th column of matrix $W$).
When we keep $D$ fixed, the minimization over $w_t$ simply consists of learning the parameters $w_t$ independently by a regularization method, for example by an SVM or a ridge-regression-type method. (As noted in the introduction, other multi-task learning methods can be used here; for example, we can also penalize the variance of the $w_t$'s – "forcing" them to be close to each other – as in [8]. This would only slightly change the overall method.) For a fixed value of the vectors $w_t$, we learn $D$ by solving the minimization problem

$$\min \left\{ \sum_{t=1}^T \langle w_t, D^+ w_t \rangle : D \in S^d_+, \ \mathrm{trace}(D) \le 1, \ \mathrm{range}(W) \subseteq \mathrm{range}(D) \right\}. \qquad (4.1)$$

The following theorem characterizes the optimal solution of problem (4.1).

Algorithm 1 (Multi-Task Feature Learning)
Input: training sets $\{(x_{ti}, y_{ti})\}_{i=1}^m$, $t \in \mathbb{N}_T$
Parameters: regularization parameter $\gamma$
Output: $d \times d$ matrix $D$, $d \times T$ regression matrix $W = [w_1, \dots, w_T]$
Initialization: set $D = \frac{I_{d \times d}}{d}$
while convergence condition is not true do
    for $t = 1, \dots, T$ do
        compute $w_t = \mathrm{argmin} \left\{ \sum_{i=1}^m L(y_{ti}, \langle w, x_{ti} \rangle) + \gamma \langle w, D^+ w \rangle : w \in \mathbb{R}^d, \ w \in \mathrm{range}(D) \right\}$
    end for
    set $D = \frac{(W W^\top)^{1/2}}{\mathrm{trace}\,(W W^\top)^{1/2}}$
end while

Theorem 4.1. Let $C = W W^\top$. The optimal solution of problem (4.1) is

$$D = \frac{C^{1/2}}{\mathrm{trace}\, C^{1/2}} \qquad (4.2)$$

and the optimal value equals $(\mathrm{trace}\, C^{1/2})^2$.

We first introduce the following lemma, which is useful in our analysis.

Lemma 4.2. For any $b = (b_1, \dots, b_d) \in \mathbb{R}^d$, we have that

$$\inf \left\{ \sum_{i=1}^d \frac{b_i^2}{\lambda_i} : \lambda_i > 0, \ \sum_{i=1}^d \lambda_i \le 1 \right\} = \|b\|_1^2 \qquad (4.3)$$

and any minimizing sequence converges to $\hat{\lambda}_i = \frac{|b_i|}{\|b\|_1}$, $i \in \mathbb{N}_d$.

Proof. From the Cauchy-Schwarz inequality we have that $\|b\|_1 = \sum_{b_i \neq 0} \lambda_i^{1/2} \lambda_i^{-1/2} |b_i| \le \big(\sum_{b_i \neq 0} \lambda_i\big)^{1/2} \big(\sum_{b_i \neq 0} \lambda_i^{-1} b_i^2\big)^{1/2} \le \big(\sum_{i=1}^d \lambda_i^{-1} b_i^2\big)^{1/2}$. Convergence to the infimum is obtained when $\sum_{i=1}^d \lambda_i \to 1$ and $\frac{|b_i|}{\lambda_i} - \frac{|b_j|}{\lambda_j} \to 0$ for all $i, j \in \mathbb{N}_d$ such that $b_i, b_j \neq 0$. Hence $\lambda_i \to \frac{|b_i|}{\|b\|_1}$. The infimum is attained when $b_i \neq 0$ for all $i \in \mathbb{N}_d$.

Proof of Theorem 4.1. We write $D = U \,\mathrm{Diag}(\lambda)\, U^\top$, with $U \in O^d$ and $\lambda \in \mathbb{R}^d_+$. We first minimize over $\lambda$.
For this purpose, we use Lemma 4.2 to obtain that

$$\inf \left\{ \mathrm{trace}\big(W^\top U \,\mathrm{Diag}(\lambda)^{-1}\, U^\top W\big) : \lambda \in \mathbb{R}^d_{++}, \ \sum_{i=1}^d \lambda_i \le 1 \right\} = \|U^\top W\|_{2,1}^2 = \Big( \sum_{i=1}^d \|W^\top u_i\|_2 \Big)^2.$$

Next we show that $\min \{ \|U^\top W\|_{2,1}^2 : U \in O^d \} = (\mathrm{trace}\, C^{1/2})^2$ and that a minimizing $U$ is a system of eigenvectors of $C$. To see this, note that by the Cauchy-Schwarz inequality for the trace,

$$\|W^\top u_i\|_2^2 = \mathrm{trace}(W W^\top u_i u_i^\top) = \mathrm{trace}(C^{1/2} u_i u_i^\top u_i u_i^\top C^{1/2}) \, \mathrm{trace}(u_i u_i^\top u_i u_i^\top) \ge \big(\mathrm{trace}(C^{1/2} u_i u_i^\top u_i u_i^\top)\big)^2 = \big(\mathrm{trace}(C^{1/2} u_i u_i^\top)\big)^2 = (u_i^\top C^{1/2} u_i)^2,$$

since $u_i u_i^\top u_i u_i^\top = u_i u_i^\top$. Summing over $i$ gives $\|U^\top W\|_{2,1} \ge \mathrm{trace}\, C^{1/2}$. The equality is verified if and only if $C^{1/2} u_i u_i^\top = a\, u_i u_i^\top$ for some scalar $a$, which implies that $C^{1/2} u_i = a u_i$, that is, if $u_i$ is an eigenvector of $C$; the optimal value is then $(\mathrm{trace}\, C^{1/2})^2$.

The expression $\mathrm{trace}\,(W W^\top)^{1/2}$ in (4.2) is simply the sum of the singular values of $W$ and is sometimes called the trace norm. As shown in [10], the trace norm is the convex envelope of $\mathrm{rank}(W)$ on the unit ball, which gives another interpretation of the relationship between the rank and $\gamma$ in our experiments. Using the trace norm, problem (3.2) becomes a regularization problem which depends only on $W$.

Figure 1: Number of features learned versus the regularization parameter $\gamma$ (see text for description).

However, since the trace norm is nonsmooth, we have opted for the above alternating minimization strategy, which is simple to implement and has a natural interpretation. Indeed, Algorithm 1 alternately performs a supervised and an unsupervised step: in the latter step we learn common representations across the tasks and in the former step we learn task-specific functions using these representations. We conclude this section by noting that when the matrix $D$ in problem (3.2) is additionally constrained to be diagonal, problem (3.2) reduces to problem (2.5). Formally, we have the following corollary.

Corollary 4.3.
Problem (2.5) is equivalent to the problem

$$\min \left\{ R(W, \mathrm{Diag}(\lambda)) : W \in \mathbb{R}^{d \times T}, \ \lambda \in \mathbb{R}^d_+, \ \sum_{i=1}^d \lambda_i \le 1, \ \lambda_i \neq 0 \text{ when } w^i \neq 0 \right\} \qquad (4.4)$$

and the optimal $\lambda$ is given by

$$\lambda_i = \frac{\|w^i\|_2}{\|W\|_{2,1}}, \qquad i \in \mathbb{N}_d. \qquad (4.5)$$

Using this corollary we can make a simple modification to Algorithm 1 in order to use it for variable selection. That is, we modify the computation of the matrix $D$ (penultimate line in Algorithm 1) as $D = \mathrm{Diag}(\lambda)$, where the vector $\lambda = (\lambda_1, \dots, \lambda_d)$ is computed using equation (4.5).

5 Experiments

In this section, we present experiments on a synthetic and a real data set. In all of our experiments, we used the square loss function and automatically tuned the regularization parameter $\gamma$ with leave-one-out cross validation.

Synthetic experiments. We created synthetic data sets by generating $T = 200$ task parameters $w_t$ from a 5-dimensional Gaussian distribution with zero mean and covariance equal to $\mathrm{Diag}(1, 0.25, 0.1, 0.05, 0.01)$. These are the relevant dimensions we wish to learn. To these we kept adding up to 20 irrelevant dimensions which are exactly zero. The training and test inputs were selected randomly from $[0, 1]^{25}$ and contained 5 and 10 examples per task respectively. The outputs $y_{ti}$ were computed from the $w_t$ and $x_{ti}$ as $y_{ti} = \langle w_t, x_{ti} \rangle + \nu$, where $\nu$ is zero-mean Gaussian noise with standard deviation equal to 0.1. We first present, in Figure 1, the number of features learned by our algorithm, as measured by $\mathrm{rank}(D)$. The plot on the left corresponds to a data set of 200 tasks with 25 input dimensions and that on the right to a real data set of 180 tasks described in the next subsection. As expected, the number of features decreases with $\gamma$. Figure 2 depicts the performance of our algorithm for $T = 10, 25, 100$ and 200 tasks along with the performance of 200 independent standard ridge regressions on the data. For $T = 10, 25$ and 100, we averaged the performance metrics over runs on all the data so that our estimates have comparable variance.
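The alternating scheme of Algorithm 1 with the square loss, applied to a toy version of such shared-feature data, can be sketched as follows. This is our own NumPy implementation, not the authors' code: we smooth $D$ with a small ridge term $\epsilon I$ instead of handling the range constraint explicitly, which is an assumption of this sketch.

```python
import numpy as np

def sym_sqrt(C):
    """Square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(C)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def multi_task_feature_learning(Xs, ys, gamma, n_iter=50, eps=1e-6):
    """Alternating minimization of R(W, D) with the square loss."""
    d = Xs[0].shape[1]
    D = np.eye(d) / d
    for _ in range(n_iter):
        D_inv = np.linalg.inv(D + eps * np.eye(d))  # smoothed pseudoinverse of D
        # Supervised step: one regularized regression per task, D held fixed.
        W = np.column_stack([np.linalg.solve(X.T @ X + gamma * D_inv, X.T @ y)
                             for X, y in zip(Xs, ys)])
        # Unsupervised step: D = (W W^T)^{1/2} / trace((W W^T)^{1/2}).
        C_sqrt = sym_sqrt(W @ W.T)
        D = C_sqrt / np.trace(C_sqrt)
    return W, D

# Toy data: 20 tasks whose parameter vectors all lie along one direction.
rng = np.random.default_rng(0)
d, T, m = 6, 20, 20
u = np.eye(d)[0]
Xs = [rng.standard_normal((m, d)) for _ in range(T)]
ws = [rng.standard_normal() * u for _ in range(T)]
ys = [X @ w + 0.01 * rng.standard_normal(m) for X, w in zip(Xs, ws)]

W, D = multi_task_feature_learning(Xs, ys, gamma=1.0)
# When the tasks share a single feature, the top eigenvalue of D dominates.
print(np.linalg.eigvalsh(D)[-1])
```

The eigenvalues of the learned $D$ play the role of $\hat\lambda$ in (3.3): with a single shared direction, $D$ is close to rank one.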
In agreement with past empirical and theoretical evidence (see e.g. [4]), learning multiple tasks together significantly improves on learning the tasks independently. Moreover, the performance of the algorithm improves as more tasks become available. This improvement is moderate for low dimensionalities but increases with the number of irrelevant dimensions.

Figure 2: Test error (left) and residual of learned features (right) vs. dimensionality of the input.

Figure 3: Test error vs. number of tasks (left) for the computer survey data set. Significance of features (middle) and attributes learned by the most important feature (right).

On the right, we have plotted a residual measure of how well the learned features approximate the actual ones used to generate the data. More specifically, we depict the Frobenius norm of the difference of the learned and actual $D$'s versus the input dimensionality. We observe that adding more tasks leads to better estimates of the underlying features.

Conjoint analysis experiment. We then tested the method using a real data set of people's ratings of products from [13]. The data were taken from a survey of 180 persons who rated the likelihood of purchasing one of 20 different personal computers. Here the persons correspond to tasks and the PC models to examples. The input is represented by the following 13 binary attributes: telephone hot line (TE), amount of memory (RAM), screen size (SC), CPU speed (CPU), hard disk (HD), CD-ROM/multimedia (CD), cache (CA), color (CO), availability (AV), warranty (WA), software (SW), guarantee (GU) and price (PR).
We also added an input component accounting for the bias term. The output is an integer rating on the scale 0–10. Following [13], we used 4 examples per task as the test data and 8 examples per task as the training data. As shown in Figure 3, the performance of our algorithm improves with the number of tasks. It also performs much better than independent ridge regressions, whose test error is equal to 16.53. In this particular problem, it is also important to investigate which features are significant to all consumers and how they weight the 13 computer attributes. We demonstrate the results in the two adjacent plots, which were obtained with the data for all 180 tasks. In the middle, the distribution of the eigenvalues of $D$ is depicted, indicating that there is a single most important feature which is shared by all persons. The plot on the right shows the weight of each input dimension in this most important feature. This feature seems to weight the technical characteristics of a computer (RAM, CPU and CD-ROM) against its price. Therefore, in this application our algorithm is able to discern interesting patterns in people's decision process.

School data. Preliminary experiments with the school data used in [3] achieved an explained variance of 37.1%, compared to 29.5% in that paper. These results will be reported in future work.

6 Conclusion

We have presented an algorithm which learns common sparse function representations across a pool of related tasks. To our knowledge, our approach provides the first convex optimization formulation for multi-task feature learning. Although convex optimization methods have been derived for the simpler problem of feature selection [12], prior work on multi-task feature learning has been based on more complex optimization problems which are not convex [2, 4, 6] and, so, are at best only guaranteed to converge to a local minimum.
Our algorithm shares some similarities with recent work in [2], where the task parameters and the features are also updated alternately. Two main differences are that their formulation is not convex and that, in our formulation, the number of learned features is not a parameter but is controlled by the regularization parameter. This work may be extended in different directions. For example, it would be interesting to explore whether our formulation can be extended to more general models of the structure across the tasks, as in [20] where ICA-type features are learned, or to hierarchical feature models as in [18].

Acknowledgments

We wish to thank Yiming Ying and Raphael Hauser for observations on the convexity of (3.2), Charles Micchelli for valuable suggestions and the anonymous reviewers for their useful comments. This work was supported by EPSRC Grants GR/T18707/01 and EP/D071542/1, and by the IST Programme of the European Commission, under the PASCAL Network of Excellence IST-2002-506778.

References

[1] J. Abernethy, F. Bach, T. Evgeniou and J-P. Vert. Low-rank matrix factorization with attributes. Technical report N24/06/MM, Ecole des Mines de Paris, 2006.
[2] R.K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. J. of Machine Learning Research, 6: 1817–1853, 2005.
[3] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multi-task learning. J. of Machine Learning Research, 4: 83–99, 2003.
[4] J. Baxter. A model for inductive bias learning. J. of Artificial Intelligence Research, 12: 149–198, 2000.
[5] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. Proceedings of Computational Learning Theory (COLT), 2003.
[6] R. Caruana. Multi-task learning. Machine Learning, 28: 41–75, 1997.
[7] D. Donoho. For most large underdetermined systems of linear equations, the minimal l1-norm near-solution approximates the sparsest near-solution. Preprint, Dept. of Statistics, Stanford University, 2004.
[8] T. Evgeniou, C.A. Micchelli and M. Pontil. Learning multiple tasks with kernel methods. J. of Machine Learning Research, 6: 615–637, 2005.
[9] T. Evgeniou, M. Pontil and O. Toubia. A convex optimization approach to modeling consumer heterogeneity in conjoint estimation. INSEAD N 2006/62/TOM/DS.
[10] M. Fazel, H. Hindi and S.P. Boyd. A rank minimization heuristic with application to minimum order system approximation. Proceedings, American Control Conference, 6, 2001.
[11] T. Hastie, R. Tibshirani and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer Verlag Series in Statistics, New York, 2001.
[12] T. Jebara. Multi-task feature and kernel selection for SVMs. Proc. of ICML, 2004.
[13] P.J. Lenk, W.S. DeSarbo, P.E. Green and M.R. Young. Hierarchical Bayes conjoint analysis: recovery of partworth heterogeneity from reduced experimental designs. Marketing Science, 15(2): 173–191, 1996.
[14] C.A. Micchelli and A. Pinkus. Variational problems arising from balancing several error criteria. Rendiconti di Matematica, Serie VII, 14: 37–86, 1994.
[15] C.A. Micchelli and M. Pontil. On learning vector-valued functions. Neural Computation, 17: 177–204, 2005.
[16] T. Serre, M. Kouh, C. Cadieu, U. Knoblich, G. Kreiman and T. Poggio. Theory of object recognition: computations and circuits in the feedforward path of the ventral stream in primate visual cortex. AI Memo No. 2005-036, MIT, Cambridge, MA, October 2005.
[17] N. Srebro, J.D.M. Rennie and T.S. Jaakkola. Maximum-margin matrix factorization. NIPS, 2004.
[18] A. Torralba, K.P. Murphy and W.T. Freeman. Sharing features: efficient boosting procedures for multiclass object detection. Proc. of CVPR'04, pages 762–769, 2004.
[19] K. Yu, V. Tresp and A. Schwaighofer. Learning Gaussian processes from multiple tasks. Proc. of ICML, 2005.
[20] J. Zhang, Z. Ghahramani and Y. Yang. Learning multiple related tasks using latent independent component analysis. NIPS, 2006.
Greedy Layer-Wise Training of Deep Networks

Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle
Université de Montréal, Montréal, Québec
{bengioy,lamblinp,popovicd,larocheh}@iro.umontreal.ca

Abstract

Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.

1 Introduction

Recent analyses (Bengio, Delalleau, & Le Roux, 2006; Bengio & Le Cun, 2007) of modern non-parametric machine learning algorithms that are kernel machines, such as Support Vector Machines (SVMs) and graph-based manifold and semi-supervised learning algorithms, suggest fundamental limitations of some learning algorithms.
The problem is clear in kernel-based approaches when the kernel is “local” (e.g., the Gaussian kernel), i.e., K(x, y) converges to a constant when ||x −y|| increases. These analyses point to the difficulty of learning “highly-varying functions”, i.e., functions that have a large number of “variations” in the domain of interest, e.g., they would require a large number of pieces to be well represented by a piecewise-linear approximation. Since the number of pieces can be made to grow exponentially with the number of factors of variations in the input, this is connected with the well-known curse of dimensionality for classical non-parametric learning algorithms (for regression, classification and density estimation). If the shapes of all these pieces are unrelated, one needs enough examples for each piece in order to generalize properly. However, if these shapes are related and can be predicted from each other, “non-local” learning algorithms have the potential to generalize to pieces not covered by the training set. Such ability would seem necessary for learning in complex domains such as Artificial Intelligence tasks (e.g., related to vision, language, speech, robotics). Kernel machines (not only those with a local kernel) have a shallow architecture, i.e., only two levels of data-dependent computational elements. This is also true of feedforward neural networks with a single hidden layer (which can become SVMs when the number of hidden units becomes large (Bengio, Le Roux, Vincent, Delalleau, & Marcotte, 2006)). A serious problem with shallow architectures is that they can be very inefficient in terms of the number of computational units (e.g., bases, hidden units), and thus in terms of required examples (Bengio & Le Cun, 2007). One way to represent a highly-varying function compactly (with few parameters) is through the composition of many non-linearities, i.e., with a deep architecture. 
For example, the parity function with $d$ inputs requires $O(2^d)$ examples and parameters to be represented by a Gaussian SVM (Bengio et al., 2006), $O(d^2)$ parameters for a one-hidden-layer neural network, $O(d)$ parameters and units for a multi-layer network with $O(\log_2 d)$ layers, and $O(1)$ parameters with a recurrent neural network. More generally, boolean functions (such as the function that computes the multiplication of two numbers from their $d$-bit representation) expressible by $O(\log d)$ layers of combinatorial logic with $O(d)$ elements in each layer may require $O(2^d)$ elements when expressed with only 2 layers (Utgoff & Stracuzzi, 2002; Bengio & Le Cun, 2007). When the representation of a concept requires an exponential number of elements, e.g., with a shallow circuit, the number of training examples required to learn the concept may also be impractical. Formal analyses of the computational complexity of shallow circuits can be found in (Hastad, 1987) or (Allender, 1996). They point in the same direction: shallow circuits are much less expressive than deep ones. However, until recently, it was believed too difficult to train deep multi-layer neural networks. Empirically, deep networks were generally found to be not better, and often worse, than neural networks with one or two hidden layers (Tesauro, 1992). As this is a negative result, it has not been much reported in the machine learning literature. A reasonable explanation is that gradient-based optimization starting from random initialization may get stuck near poor solutions. An approach that has been explored with some success in the past is based on constructively adding layers. This was previously done using a supervised criterion at each stage (Fahlman & Lebiere, 1990; Lengellé & Denoeux, 1996). Hinton, Osindero, and Teh (2006) recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables.
The training strategy for such networks may hold great promise as a principle to help address the problem of training deep networks. Upper layers of a DBN are supposed to represent more "abstract" concepts that explain the input observation $x$, whereas lower layers extract "low-level features" from $x$. They learn simpler concepts first, and build on them to learn more abstract concepts. This strategy, studied in detail here, has not yet been much exploited in machine learning. We hypothesize that three aspects of this strategy are particularly important: first, pre-training one layer at a time in a greedy way; second, using unsupervised learning at each layer in order to preserve information from the input; and finally, fine-tuning the whole network with respect to the ultimate criterion of interest. We first extend DBNs and their component layers, Restricted Boltzmann Machines (RBM), so that they can more naturally handle continuous values in input. Second, we perform experiments to better understand the advantage brought by the greedy layer-wise unsupervised learning. The basic question to answer is whether or not this approach helps to solve a difficult optimization problem. In DBNs, RBMs are used as building blocks, but applying the same strategy using auto-encoders yielded similar results. Finally, we discuss a problem that occurs with the layer-wise greedy unsupervised procedure when the input distribution is not revealing enough of the conditional distribution of the target variable given the input variable. We evaluate a simple and successful solution to this problem.

2 Deep Belief Nets

Let $x$ be the input, and $g^i$ the hidden variables at layer $i$, with joint distribution

$$P(x, g^1, g^2, \ldots, g^\ell) = P(x|g^1)\, P(g^1|g^2) \cdots P(g^{\ell-2}|g^{\ell-1})\, P(g^{\ell-1}, g^\ell),$$

where all the conditional layers $P(g^i|g^{i+1})$ are factorized conditional distributions for which computation of probability and sampling are easy. In Hinton et al.
(2006) one considers the hidden layer $g^i$ a binary random vector with $n_i$ elements $g^i_j$:

$$P(g^i|g^{i+1}) = \prod_{j=1}^{n_i} P(g^i_j|g^{i+1}), \quad \text{with} \quad P(g^i_j = 1|g^{i+1}) = \mathrm{sigm}\Big( b^i_j + \sum_{k=1}^{n_{i+1}} W^i_{kj}\, g^{i+1}_k \Big), \qquad (1)$$

where $\mathrm{sigm}(t) = 1/(1 + e^{-t})$, the $b^i_j$ are biases for unit $j$ of layer $i$, and $W^i$ is the weight matrix for layer $i$. If we denote $g^0 = x$, the generative model for the first layer $P(x|g^1)$ also follows (1).

2.1 Restricted Boltzmann machines

The top-level prior $P(g^{\ell-1}, g^\ell)$ is a Restricted Boltzmann Machine (RBM) between layer $\ell-1$ and layer $\ell$. To lighten notation, consider a generic RBM with input layer activations $v$ (for visible units) and hidden layer activations $h$ (for hidden units). It has the following joint distribution: $P(v, h) = \frac{1}{Z} e^{h'Wv + b'v + c'h}$, where $Z$ is the normalization constant for this distribution, $b$ is the vector of biases for visible units, $c$ is the vector of biases for the hidden units, and $W$ is the weight matrix for the layer. Minus the argument of the exponential is called the energy function,

$$\mathrm{energy}(v, h) = -h'Wv - b'v - c'h. \qquad (2)$$

We denote the RBM parameters together by $\theta = (W, b, c)$. We denote by $Q(h|v)$ and $P(v|h)$ the layer-to-layer conditional distributions associated with the above RBM joint distribution. The layer-to-layer conditionals associated with the RBM factorize as in (1) and give rise to $P(v_k = 1|h) = \mathrm{sigm}(b_k + \sum_j W_{jk} h_j)$ and $Q(h_j = 1|v) = \mathrm{sigm}(c_j + \sum_k W_{jk} v_k)$.

2.2 Gibbs Markov chain and log-likelihood gradient in an RBM

To obtain an estimator of the gradient of the log-likelihood of an RBM, we consider a Gibbs Markov chain on the (visible units, hidden units) pair of variables. Gibbs sampling from an RBM proceeds by sampling $h$ given $v$, then $v$ given $h$, etc. Denote by $v_t$ the $t$-th $v$ sample from that chain, starting at $t = 0$ with $v_0$, the "input observation" for the RBM. Therefore, $(v_k, h_k)$ for $k \to \infty$ is a sample from the joint $P(v, h)$.
The log-likelihood of a value $v_0$ under the model of the RBM is

$$\log P(v_0) = \log \sum_h P(v_0, h) = \log \sum_h e^{-\mathrm{energy}(v_0, h)} - \log \sum_{v,h} e^{-\mathrm{energy}(v, h)}$$

and its gradient with respect to $\theta = (W, b, c)$ is

$$\frac{\partial \log P(v_0)}{\partial \theta} = - \sum_{h_0} Q(h_0|v_0) \frac{\partial\, \mathrm{energy}(v_0, h_0)}{\partial \theta} + \sum_{v_k, h_k} P(v_k, h_k) \frac{\partial\, \mathrm{energy}(v_k, h_k)}{\partial \theta}$$

for $k \to \infty$. An unbiased sample is

$$- \frac{\partial\, \mathrm{energy}(v_0, h_0)}{\partial \theta} + E_{h_k}\!\left[ \frac{\partial\, \mathrm{energy}(v_k, h_k)}{\partial \theta} \,\Big|\, v_k \right],$$

where $h_0$ is a sample from $Q(h_0|v_0)$ and $(v_k, h_k)$ is a sample of the Markov chain, and the expectation can be easily computed thanks to $P(h_k|v_k)$ factorizing. The idea of the Contrastive Divergence algorithm (Hinton, 2002) is to take $k$ small (typically $k = 1$). Pseudo-code for Contrastive Divergence training (with $k = 1$) of an RBM with binomial input and hidden units is presented in the Appendix (Algorithm RBMupdate($x, \epsilon, W, b, c$)). This procedure is called repeatedly with $v_0 = x$ sampled from the training distribution for the RBM. To decide when to stop one may use a proxy for the training criterion, such as the reconstruction error $-\log P(v_1 = x | v_0 = x)$.

2.3 Greedy layer-wise training of a DBN

A greedy layer-wise training algorithm was proposed (Hinton et al., 2006) to train a DBN one layer at a time. One first trains an RBM that takes the empirical data as input and models it. Denote by $Q(g^1|g^0)$ the posterior over $g^1$ associated with that trained RBM (we recall that $g^0 = x$ with $x$ the observed input). This gives rise to an "empirical" distribution $\hat{p}^1$ over the first layer $g^1$, when $g^0$ is sampled from the data empirical distribution $\hat{p}$: we have $\hat{p}^1(g^1) = \sum_{g^0} \hat{p}(g^0)\, Q(g^1|g^0)$. Note that a 1-level DBN is an RBM. The basic idea of the greedy layer-wise strategy is that after training the top-level RBM of an $\ell$-level DBN, one changes the interpretation of the RBM parameters to insert them in an $(\ell+1)$-level DBN: the distribution $P(g^{\ell-1}|g^\ell)$ from the RBM associated with layers $\ell-1$ and $\ell$ is kept as part of the DBN generative model.
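The CD-1 update of Section 2.2 can be sketched in NumPy as follows. This is one common variant, not a transcription of the paper's appendix pseudo-code: the positive phase uses the sampled $h_0$, and the negative phase uses the hidden probabilities $Q(h_1 = 1|v_1)$, following the expectation trick described in the text; the toy data and hyperparameters are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigm(t):
    return 1.0 / (1.0 + np.exp(-t))

def rbm_update(x, lr, W, b, c):
    """One CD-1 step for an RBM with binomial units; updates W, b, c in place.

    x: visible vector v0; W: (n_hidden, n_visible); b, c: visible/hidden biases.
    """
    q0 = sigm(c + W @ x)                                   # Q(h0 = 1 | v0)
    h0 = (rng.uniform(size=q0.shape) < q0).astype(float)   # sample h0 ~ Q(.|v0)
    p1 = sigm(b + W.T @ h0)                                # P(v1 = 1 | h0)
    v1 = (rng.uniform(size=p1.shape) < p1).astype(float)   # sample v1 ~ P(.|h0)
    q1 = sigm(c + W @ v1)                                  # E[h1 | v1], no sampling needed
    W += lr * (np.outer(h0, x) - np.outer(q1, v1))
    b += lr * (x - v1)
    c += lr * (h0 - q1)

# Fit a tiny RBM to a single binary pattern.
x = np.array([1.0, 0.0, 1.0, 0.0])
W = 0.01 * rng.standard_normal((3, 4))
b, c = np.zeros(4), np.zeros(3)
for _ in range(3000):
    rbm_update(x, 0.1, W, b, c)

recon = sigm(b + W.T @ sigm(c + W @ x))  # mean-field reconstruction of x
print(np.round(recon, 2))                # close to the training pattern
```

After repeated updates on the single pattern, the reconstruction probabilities move toward the data, which is the reconstruction-error proxy mentioned above.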
In the RBM between layers $\ell-1$ and $\ell$, $P(g^\ell)$ is defined in terms of the parameters of that RBM, whereas in the DBN $P(g^\ell)$ is defined in terms of the parameters of the upper layers. Consequently, $Q(g^\ell \mid g^{\ell-1})$ of the RBM does not correspond to $P(g^\ell \mid g^{\ell-1})$ in the DBN, except when that RBM is the top layer of the DBN. However, we use $Q(g^\ell \mid g^{\ell-1})$ of the RBM as an approximation of the posterior $P(g^\ell \mid g^{\ell-1})$ for the DBN. The samples of $g^{\ell-1}$, with empirical distribution $\hat{p}^{\ell-1}$, are converted stochastically into samples of $g^\ell$ with distribution $\hat{p}^\ell$ through $\hat{p}^\ell(g^\ell) = \sum_{g^{\ell-1}} \hat{p}^{\ell-1}(g^{\ell-1})\, Q(g^\ell \mid g^{\ell-1})$. Although $\hat{p}^\ell$ cannot be represented explicitly, it is easy to sample unbiasedly from it: pick a training example and propagate it stochastically through the $Q(g^i \mid g^{i-1})$ at each level. As a nice side benefit, one obtains an approximation of the posterior for all the hidden variables in the DBN, at all levels, given an input $g^0 = x$. Mean-field propagation (see below) gives a fast deterministic approximation of the posteriors $P(g^\ell \mid x)$. Note that if we consider all the layers of a DBN from level $i$ to the top, we have a smaller DBN, which generates the marginal distribution $P(g^i)$ for the complete DBN. The motivation for the greedy procedure is that a partial DBN with $\ell - i$ levels starting above level $i$ may provide a better model for $P(g^i)$ than does the RBM initially associated with level $i$ itself. The above greedy procedure is justified using a variational bound (Hinton et al., 2006). As a consequence of that bound, when inserting an additional layer, if it is initialized appropriately and has enough units, one can guarantee that initial improvements on the training criterion for the next layer (fitting $\hat{p}^\ell$) will yield improvement on the training criterion for the previous layer (likelihood with respect to $\hat{p}^{\ell-1}$). The greedy layer-wise training algorithm for DBNs is quite simple, as illustrated by the pseudo-code in Algorithm TrainUnsupervisedDBN of the Appendix.
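As a concrete (hypothetical) sketch of this pipeline, the following combines a CD-1 update in the spirit of RBMupdate with the stochastic upward propagation just described; the layer sizes, learning rate, and training loop are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def sigm(t):
    return 1.0 / (1.0 + np.exp(-t))

def cd1_update(x, eps, W, b, c, rng):
    """One CD-1 step in the spirit of RBMupdate(x, eps, W, b, c):
    positive phase on (v0, h0), negative phase on (v1, Q(h1=1|v1))."""
    v0 = x
    h0 = (rng.random(len(c)) < sigm(c + W @ v0)).astype(float)
    v1 = (rng.random(len(b)) < sigm(b + W.T @ h0)).astype(float)
    q1 = sigm(c + W @ v1)
    W += eps * (np.outer(h0, v0) - np.outer(q1, v1))
    b += eps * (v0 - v1)
    c += eps * (h0 - q1)

def propagate_up(x, layers, rng):
    """Unbiased sample from p-hat_l: propagate a training example
    stochastically through Q(g^i | g^{i-1}) at each trained level."""
    g = x
    for (W, b, c) in layers:
        g = (rng.random(len(c)) < sigm(c + W @ g)).astype(float)
    return g

def train_unsupervised_dbn(data, sizes, eps=0.05, n_steps=200, rng=None):
    """Greedy layer-wise loop: each new RBM models the stochastic
    up-propagation of the data through the layers below it."""
    rng = rng if rng is not None else np.random.default_rng(0)
    layers = []
    nv = data.shape[1]
    for nh in sizes:
        W, b, c = rng.normal(0, 0.1, (nh, nv)), np.zeros(nv), np.zeros(nh)
        for _ in range(n_steps):
            x = propagate_up(data[rng.integers(len(data))], layers, rng)
            cd1_update(x, eps, W, b, c, rng)
        layers.append((W, b, c))
        nv = nh
    return layers

# toy usage: binary data with 8 inputs, a 2-level DBN of sizes 6 and 4
rng = np.random.default_rng(1)
data = (rng.random((50, 8)) < 0.5).astype(float)
layers = train_unsupervised_dbn(data, sizes=[6, 4], rng=rng)
```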
2.4 Supervised fine-tuning

As a last training stage, it is possible to fine-tune the parameters of all the layers together. For example, Hinton et al. (2006) propose to use the wake-sleep algorithm (Hinton, Dayan, Frey, & Neal, 1995) to continue unsupervised training. Hinton et al. (2006) also propose to optionally use a mean-field approximation of the posteriors $P(g^i \mid g^0)$, replacing the samples $g^{i-1}_j$ at level $i-1$ by their bit-wise mean-field expected value $\mu^{i-1}_j$, with $\mu^i = \mathrm{sigm}(b^i + W^i \mu^{i-1})$. According to these propagation rules, the whole network now deterministically computes internal representations as functions of the network input $g^0 = x$. After unsupervised pre-training of the layers of a DBN following Algorithm TrainUnsupervisedDBN (see Appendix), the whole network can be further optimized by gradient descent with respect to any deterministically computable training criterion that depends on these representations. For example, this can be used (Hinton & Salakhutdinov, 2006) to fine-tune a very deep auto-encoder, minimizing a reconstruction error. It is also possible to use this as initialization of all except the last layer of a traditional multi-layer neural network, using gradient descent to fine-tune the whole network with respect to a supervised training criterion. Algorithm DBNSupervisedFineTuning in the Appendix contains pseudo-code for supervised fine-tuning, as part of the global supervised learning algorithm TrainSupervisedDBN. Note that better results were obtained when using a 20-fold larger learning rate for the supervised-criterion updates (here, squared error or cross-entropy) than for the contrastive divergence updates.

3 Extension to continuous-valued inputs

With the binary units introduced for RBMs and DBNs in Hinton et al. (2006), one can "cheat" and handle continuous-valued inputs by scaling them to the (0,1) interval and considering each input continuous value as the probability for a binary random variable to take the value 1.
This has worked well for pixel gray levels, but it may be inappropriate for other kinds of input variables. Previous work on continuous-valued inputs in RBMs includes (Chen & Murray, 2003), in which noise is added to sigmoidal units, and the RBM forms a special form of Diffusion Network (Movellan, Mineiro, & Williams, 2002). We concentrate here on simple extensions of the RBM framework in which only the energy function and the allowed range of values are changed.

Linear energy: exponential or truncated exponential

Consider a unit with value $y$ in an RBM, connected to units $z$ of the other layer. $p(y \mid z)$ can be obtained from the terms in the exponential that contain $y$, which can be grouped into $y\, a(z)$ for linear energy functions as in (2), where $a(z) = b + w'z$, with $b$ the bias of unit $y$ and $w$ the vector of weights connecting unit $y$ to units $z$. If we allow $y$ to take any value in an interval $I$, the conditional density of $y$ becomes

$$p(y \mid z) = \frac{e^{y\, a(z)}\, \mathbf{1}_{y \in I}}{\int e^{v\, a(z)}\, \mathbf{1}_{v \in I}\, dv}.$$

When $I = [0, \infty)$, this is an exponential density with parameter $a(z)$, and the normalizing integral equals $-1/a(z)$, but it only exists if $\forall z,\ a(z) < 0$. Computing the density, computing the expected value ($= -1/a(z)$) and sampling would all be easy. Alternatively, if $I$ is a closed interval (as in many applications of interest), or if we would like to use such a unit as a hidden unit with a non-linear expected value, the above density is a truncated exponential. For simplicity we consider the case $I = [0, 1]$ here, for which the normalizing integral, which always exists, is $(e^{a(z)} - 1)/a(z)$. The conditional expectation of $y$ given $z$ is interesting because it is a sigmoidal-like, saturating, monotone non-linearity:

$$E[y \mid z] = \frac{1}{1 - e^{-a(z)}} - \frac{1}{a(z)}.$$

A sample from the truncated exponential is easily obtained from a uniform sample $U$, using the inverse cumulative distribution function $F^{-1}$ of the conditional density $y \mid z$:

$$F^{-1}(U) = \frac{\log\big(1 - U \times (1 - e^{a(z)})\big)}{a(z)}.$$
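Under the formulas above, the conditional mean and the inverse-CDF sampler of a truncated-exponential unit on $[0, 1]$ can be checked numerically (a sketch; the Monte-Carlo check is ours, not from the paper):

```python
import numpy as np

def trunc_exp_mean(a):
    """E[y|z] = 1/(1 - exp(-a)) - 1/a for a truncated exponential on [0, 1]
    with natural parameter a = a(z)."""
    return 1.0 / (1.0 - np.exp(-a)) - 1.0 / a

def trunc_exp_sample(a, rng, n):
    """Inverse-CDF sampling: F^{-1}(U) = log(1 - U*(1 - exp(a))) / a."""
    u = rng.random(n)
    return np.log(1.0 - u * (1.0 - np.exp(a))) / a

rng = np.random.default_rng(0)
a = 2.0
samples = trunc_exp_sample(a, rng, 50000)
```

Note the saturating behavior: the mean tends toward 1 for large positive $a$ and toward 0 for large negative $a$, much like a sigmoid of $a$.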
In both truncated and non-truncated cases, the Contrastive Divergence updates have the same form as for binomial units (input value times output value), since the updates only depend on the derivative of the energy with respect to the parameters. Only sampling is changed, according to the unit's conditional density.

Quadratic energy: Gaussian units

To obtain Gaussian-distributed units, one adds quadratic terms to the energy. Adding $\sum_i d_i^2 y_i^2$ gives rise to a diagonal covariance matrix between units of the same layer, where $y_i$ is the continuous value of a Gaussian unit and $d_i^2$ is a positive parameter equal to the inverse of the variance of $y_i$. In this case the variance is unconditional, whereas the mean depends on the inputs of the unit: for a unit $y$ with inputs $z$ and inverse variance $d^2$, $E[y \mid z] = a(z)/(2d^2)$.

[Figure 1: Training classification error vs. training iteration, on the Cotton price task, for a deep network without pre-training, a DBN with unsupervised pre-training, and a DBN with partially supervised pre-training. Illustrates the optimization difficulty of deep networks and the advantage of partially supervised training.]

Table 1: Mean squared prediction error on the Abalone task and classification error on the Cotton task, showing the improvement brought by Gaussian units.

                                                  Abalone                  Cotton
                                           train. valid. test.     train. valid. test.
1. Deep Network with no pre-training        4.23   4.43   4.2       45.2%  42.9%  43.0%
2. Logistic regression                       ·      ·      ·        44.0%  42.6%  45.0%
3. DBN, binomial inputs, unsupervised       4.59   4.60   4.47      44.0%  42.6%  45.0%
4. DBN, binomial inputs, partially superv.  4.39   4.45   4.28      43.3%  41.1%  43.7%
5. DBN, Gaussian inputs, unsupervised       4.25   4.42   4.19      35.7%  34.9%  35.8%
6. DBN, Gaussian inputs, partially superv.  4.23   4.43   4.18      27.5%  28.4%  31.4%
The Contrastive Divergence updates are easily obtained by computing the derivative of the energy with respect to the parameters. For the parameters in the linear terms of the energy function (e.g., $b$ and $w$ above), the derivatives have the same form (input unit value times output unit value) as in the case of binomial units. For a quadratic parameter $d > 0$, the derivative is simply $2dy^2$. Gaussian units were previously used as hidden units of an RBM (with binomial or multinomial inputs) applied to an information retrieval task (Welling, Rosen-Zvi, & Hinton, 2005). Our interest here is to use them for continuous-valued inputs.

Using continuous-valued hidden units

Although we have introduced RBM units with continuous values to better deal with the representation of input variables, they could also be considered for use in the hidden layers, replacing or complementing the binomial units used in the past. However, Gaussian and exponential hidden units have a weakness: the mean-field propagation through a Gaussian unit gives rise to a purely linear transformation. Hence, if we had only such linear hidden units in a multi-layered network, the mean-field propagation function that maps inputs to internal representations would be completely linear. In addition, in a DBN containing only Gaussian units, one would only be able to model Gaussian data. On the other hand, combining Gaussian units with other types of units could be interesting. In contrast with Gaussian or exponential units, note that the conditional expectation of truncated exponential units is non-linear, and in fact involves a sigmoidal form of non-linearity applied to the weighted sum of its inputs.

Experiment 1

This experiment was performed on two data sets: the UCI repository Abalone data set (split into 2177 training examples, 1000 validation examples, 1000 test examples) and a financial data set.
The latter has real-valued input variables representing averages of returns and squared returns, for which the binomial approximation would seem inappropriate. The target variable is next month's return of a Cotton futures contract. There are 13 continuous input variables that are averages of returns over different time windows up to 504 days. There are 3135 training examples, 1000 validation examples, and 1000 test examples. The dataset is publicly available at http://www.iro.umontreal.ca/~lisa/fin_data/. In Table 1 (rows 3 and 5), we show the improvements brought by DBNs with Gaussian inputs over DBNs with binomial inputs (with binomial hidden units in both cases). The networks have two hidden layers. All hyper-parameters are selected based on validation set performance.

4 Understanding why the layer-wise strategy works

A reasonable explanation for the apparent success of the layer-wise training strategy for DBNs is that unsupervised pre-training helps to mitigate the difficult optimization problem of deep networks by better initializing the weights of all layers. Here we present experiments that support and clarify this.

Training each layer as an auto-encoder

We want to verify that the layer-wise greedy unsupervised pre-training principle can be applied when using an auto-encoder instead of the RBM as a layer building block. Let $x$ be the input vector with $x_i \in (0, 1)$. For a layer with weight matrix $W$, hidden-bias column vector $b$, and input-bias column vector $c$, the reconstruction probability for bit $i$ is $p_i(x)$, with the vector of probabilities

$$p(x) = \mathrm{sigm}\big(c + W\, \mathrm{sigm}(b + W'x)\big).$$

The training criterion for the layer is the average of negative log-likelihoods for predicting $x$ from $p(x)$. For example, if $x$ is interpreted either as a sequence of bits or a sequence of bit probabilities, we minimize the reconstruction cross-entropy:

$$R = -\sum_i \big[\, x_i \log p_i(x) + (1 - x_i) \log(1 - p_i(x)) \,\big].$$
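A sketch of this per-layer criterion with tied weights (the clipping is added for numerical safety, and the sizes are illustrative):

```python
import numpy as np

def sigm(t):
    return 1.0 / (1.0 + np.exp(-t))

def reconstruction_cross_entropy(x, W, b, c):
    """R = -sum_i [x_i log p_i(x) + (1 - x_i) log(1 - p_i(x))]
    with p(x) = sigm(c + W sigm(b + W'x)); encoder and decoder share W."""
    h = sigm(b + W.T @ x)                             # hidden code sigm(b + W'x)
    p = np.clip(sigm(c + W @ h), 1e-12, 1 - 1e-12)    # reconstruction probs
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

rng = np.random.default_rng(0)
nv, nh = 8, 5                        # illustrative layer sizes
W = rng.normal(0.0, 0.1, (nv, nh))
b, c = np.zeros(nh), np.zeros(nv)
x = (rng.random(nv) < 0.5).astype(float)
R = reconstruction_cross_entropy(x, W, b, c)
```

Each layer is then trained by (stochastic) gradient descent on the average of $R$ over the training set, playing the role that CD plays in the RBM case.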
We report several experimental results using this training criterion for each layer, in comparison to the contrastive divergence algorithm for an RBM. Pseudo-code for a deep network obtained by training each layer as an auto-encoder is given in the Appendix (Algorithm TrainGreedyAutoEncodingDeepNet). One question that arises with auto-encoders, in comparison with RBMs, is whether they will fail to learn a useful representation when the number of units is not strictly decreasing from one layer to the next (since the network could theoretically just learn the identity and perfectly minimize the reconstruction error). However, our experiments suggest that networks with non-decreasing layer sizes generalize well. This might be due to weight decay and stochastic gradient descent preventing large weights: optimization falls into a local minimum that corresponds to a good transformation of the input (one that provides a good initialization for supervised training of the whole net).

Greedy layer-wise supervised training

A reasonable question to ask is whether the fact that each layer is trained in an unsupervised way is critical. An alternative algorithm is supervised, greedy, and layer-wise: train each new hidden layer as the hidden layer of a one-hidden-layer supervised neural network NN (taking as input the output of the last previously trained layer), then throw away the output layer of NN and use the parameters of its hidden layer as the pre-training initialization of the new top layer of the deep net, to map the output of the previous layers to a hopefully better representation. Pseudo-code for a deep network obtained by training each layer as the hidden layer of a supervised one-hidden-layer neural network is given in the Appendix (Algorithm TrainGreedySupervisedDeepNet).

Experiment 2
We compared the performance on the MNIST digit classification task obtained with five algorithms: (a) a DBN, (b) a deep network whose layers are initialized as auto-encoders, (c) the above-described supervised greedy layer-wise algorithm for pre-training each layer, (d) a deep network with no pre-training (random initialization), and (e) a shallow network (1 hidden layer) with no pre-training. The final fine-tuning is done by adding a logistic regression layer on top of the network and training the whole network by stochastic gradient descent on the cross-entropy with respect to the target classification. The networks have the following architecture: 784 inputs, 10 outputs, 3 hidden layers with a variable number of hidden units, selected by validation set performance (typically selected layer sizes are between 500 and 1000). The shallow network has a single hidden layer. An L2 weight decay hyper-parameter is also optimized. The DBN was slower to train and fewer experiments were performed, so longer training and more appropriately chosen layer sizes and learning rates could yield better results (Hinton 2006, unpublished, reports 1.15% error on the MNIST test set).

Table 2: Classification error on MNIST training, validation, and test sets, with the best hyper-parameters according to validation error, with and without pre-training, using purely supervised or purely unsupervised pre-training. In Experiment 3, the size of the top hidden layer was set to 20. On MNIST, differences of more than .1% are statistically significant.

                                          Experiment 2           Experiment 3
                                        train. valid. test     train. valid. test
DBN, unsupervised pre-training            0%   1.2%   1.2%       0%   1.5%   1.5%
Deep net, auto-associator pre-training    0%   1.4%   1.4%       0%   1.4%   1.6%
Deep net, supervised pre-training         0%   1.7%   2.0%       0%   1.8%   1.9%
Deep net, no pre-training               .004%  2.1%   2.4%     .59%   2.1%   2.2%
Shallow net, no pre-training            .004%  1.8%   1.9%     3.6%   4.7%   5.0%
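The fine-tuning stage described above (a deterministic mean-field forward pass, a logistic output layer, cross-entropy SGD through the whole stack) can be sketched as follows; the sizes, learning rate, and single-example training loop are illustrative assumptions:

```python
import numpy as np

def sigm(t):
    return 1.0 / (1.0 + np.exp(-t))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, layers, V, d):
    """Mean-field pass mu^i = sigm(c^i + W^i mu^{i-1}) through the
    pre-trained layers, then a logistic-regression (softmax) output."""
    mus = [x]
    for (W, c) in layers:
        mus.append(sigm(c + W @ mus[-1]))
    return mus, softmax(V @ mus[-1] + d)

def finetune_step(x, target, layers, V, d, eps):
    """One SGD step on the cross-entropy -log p(target | x),
    backpropagated through the whole stack."""
    mus, p = forward(x, layers, V, d)
    delta = p.copy()
    delta[target] -= 1.0                 # gradient at the logits
    back = V.T @ delta                   # gradient w.r.t. top hidden mu
    V -= eps * np.outer(delta, mus[-1])
    d -= eps * delta
    for i in range(len(layers) - 1, -1, -1):
        W, c = layers[i]
        grad_pre = back * mus[i + 1] * (1.0 - mus[i + 1])  # through sigm
        back = W.T @ grad_pre            # pass down before updating W
        W -= eps * np.outer(grad_pre, mus[i])
        c -= eps * grad_pre
    return -np.log(p[target])

# toy usage: a 32 -> 16 -> 8 stack with 10 output classes
rng = np.random.default_rng(0)
layers = [(rng.normal(0, 0.1, (16, 32)), np.zeros(16)),
          (rng.normal(0, 0.1, (8, 16)), np.zeros(8))]
V, d = rng.normal(0, 0.1, (10, 8)), np.zeros(10)
x = rng.random(32)
losses = [finetune_step(x, 3, layers, V, d, 0.5) for _ in range(100)]
```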
The results in Table 2 suggest that the auto-encoding criterion can yield performance comparable to the DBN when the layers are finally tuned in a supervised fashion. They also clearly show that greedy unsupervised layer-wise pre-training gives much better results than the standard way to train a deep network (with no greedy pre-training) or a shallow network, and that, without pre-training, deep networks tend to perform worse than shallow networks. The results also suggest that unsupervised greedy layer-wise pre-training can perform significantly better than purely supervised greedy layer-wise pre-training. A possible explanation is that the greedy supervised procedure is too greedy: the learned hidden-unit representation may discard some of the information about the target, information that cannot be captured easily by a one-hidden-layer neural network but could be captured by composing more hidden layers.

Experiment 3

However, there is something troubling in the Experiment 2 results (Table 2): all the networks, even those without greedy layer-wise pre-training, perform almost perfectly on the training set, which would appear to contradict the hypothesis that the main effect of the layer-wise greedy strategy is to help the optimization (with poor optimization one would expect poor training error). A possible explanation, coherent with our initial hypothesis and with the above results, is the following. Without pre-training, the lower layers are initialized poorly, yet the top two layers can still learn the training set almost perfectly, because the output layer and the last hidden layer form a standard shallow but fat neural network. Consider the top two layers of the deep network with pre-training: they presumably take as input a better representation, one that allows for better generalization.
Instead, the network without pre-training sees a "random" transformation of the input, one that preserves enough information about the input to fit the training set, but that does not help to generalize. To test this hypothesis, we performed a second series of experiments in which we constrained the top hidden layer to be small (20 hidden units). The Experiment 3 results (Table 2) clearly confirm the hypothesis. With no pre-training, training error degrades significantly when there are only 20 hidden units in the top hidden layer. In addition, the results obtained without pre-training were found to have extremely large variance, indicating high sensitivity to initial conditions. Overall, the results in the tables and in Figure 1 are consistent with the hypothesis that the greedy layer-wise procedure essentially helps to better optimize the deep networks, probably by initializing the hidden layers so that they represent more meaningful representations of the input, which also yields better generalization.

Continuous training of all layers of a DBN

With the layer-wise training algorithm for DBNs (TrainUnsupervisedDBN in the Appendix), one element that we would like to dispense with is having to decide the number of training iterations for each layer. It would be good if we did not have to explicitly add layers one at a time, i.e., if we could train all layers simultaneously, while keeping the "greedy" idea that each layer is pre-trained to model its input, ignoring the effect of higher layers. To achieve this, it is sufficient to insert a line in TrainUnsupervisedDBN, so that RBMupdate is called on all the layers and the stochastic hidden values are propagated all the way up. Experiments with this variant demonstrated that it works at least as well as the original algorithm. The advantage is that we can now use a single stopping criterion (for the whole network).
Computation time is slightly greater, since we do more computations initially (on the upper layers), which might be wasted (before the lower layers converge to a decent representation), but time is saved on optimizing hyper-parameters. This variant may be more appealing for on-line training on very large data sets, where one would never cycle back over the training data.

5 Dealing with uncooperative input distributions

In classification problems such as MNIST, where classes are well separated, the structure of the input distribution $p(x)$ naturally contains much information about the target variable $y$. Imagine instead a supervised learning task in which the input distribution is mostly unrelated to $y$. In regression problems, which we are interested in studying here, this situation could be much more prevalent. For example, imagine a task in which $x \sim p(x)$ and the target $y = f(x) + \text{noise}$ (e.g., $p$ is Gaussian and $f = \sin$), with no particular relation between $p$ and $f$. In such settings we cannot expect the unsupervised greedy layer-wise pre-training procedure to help in training deep supervised networks. To deal with such uncooperative input distributions, we propose to train each layer with a mixed training criterion that combines the unsupervised objective (modeling or reconstructing the input) with a supervised objective (helping to predict the target). A simple algorithm thus adds the update on the hidden-layer weights from the unsupervised algorithm (Contrastive Divergence or the reconstruction error gradient) to the update from the gradient of a supervised prediction error, computed through a temporary output layer, as in the greedy layer-wise supervised training algorithm. In our experiments it appeared sufficient to perform this partial supervision on the first layer only: once the predictive information about the target is "forced" into the representation of the first layer, it tends to stay in the upper layers.
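A sketch of one such mixed update for the first layer (our illustration of the idea behind TrainPartiallySupervisedLayer; the temporary linear output layer (V, d), the squared-error criterion, and all sizes are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def sigm(t):
    return 1.0 / (1.0 + np.exp(-t))

def partially_supervised_step(x, y, eps, W, b, c, V, d, rng):
    """Add the CD-1 (unsupervised) update on (W, b, c) to the gradient of a
    supervised squared error that reaches W and c through a temporary output
    layer (V, d); the temporary layer is trained too and discarded later."""
    # --- unsupervised part: CD-1 on the layer viewed as an RBM ---
    h0 = (rng.random(len(c)) < sigm(c + W @ x)).astype(float)
    v1 = (rng.random(len(b)) < sigm(b + W.T @ h0)).astype(float)
    q1 = sigm(c + W @ v1)
    W += eps * (np.outer(h0, x) - np.outer(q1, v1))
    b += eps * (x - v1)
    c += eps * (h0 - q1)
    # --- supervised part: backprop of 0.5 * ||V h + d - y||^2 ---
    h = sigm(c + W @ x)                   # mean-field hidden activation
    err = V @ h + d - y
    delta = (V.T @ err) * h * (1.0 - h)   # gradient through the sigmoid
    W -= eps * np.outer(delta, x)
    c -= eps * delta
    V -= eps * np.outer(err, h)
    d -= eps * err

# toy usage: 6 inputs, 4 hidden units, 1 regression target
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, (4, 6))
b, c = np.zeros(6), np.zeros(4)
V, d = rng.normal(0.0, 0.1, (1, 4)), np.zeros(1)
x = (rng.random(6) < 0.5).astype(float)
y = np.array([0.3])
for _ in range(200):
    partially_supervised_step(x, y, 0.05, W, b, c, V, d, rng)
```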
The results in Figure 1 and Table 1 clearly show the advantage of this partially supervised greedy training algorithm in the case of the financial dataset. Pseudo-code for partially supervising the first (or a later) layer is given in Algorithm TrainPartiallySupervisedLayer (in the Appendix).

6 Conclusion

This paper is motivated by the need to develop good training algorithms for deep architectures, since these can be much more representationally efficient than shallow ones such as SVMs and one-hidden-layer neural nets. We study Deep Belief Networks applied to supervised learning tasks, and the principles that could explain their good performance. The three principal contributions of this paper are the following. First, we extended RBMs and DBNs in new ways to naturally handle continuous-valued inputs, showing examples where much better predictive models can thus be obtained. Second, we performed experiments which support the hypothesis that the greedy unsupervised layer-wise training strategy helps to optimize deep networks, but suggest that better generalization is also obtained because this strategy initializes upper layers with better representations of relevant high-level abstractions. These experiments suggest a general principle that can be applied beyond DBNs, and we obtained similar results when each layer is initialized as an auto-associator instead of an RBM. Finally, although we found that it is important to have an unsupervised component to train each layer (a fully supervised greedy layer-wise strategy performed worse), we studied supervised tasks in which the structure of the input distribution is not revealing enough of the conditional density of $y$ given $x$. In that case the DBN unsupervised greedy layer-wise strategy appears inadequate, and we proposed a simple fix based on partial supervision that can yield significant improvements.

References

Allender, E. (1996). Circuit complexity before the dawn of the new millennium.
In 16th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 1–18. Lecture Notes in Computer Science 1180.

Bengio, Y., Delalleau, O., & Le Roux, N. (2006). The curse of highly variable functions for local kernel machines. In Weiss, Y., Schölkopf, B., & Platt, J. (Eds.), Advances in Neural Information Processing Systems 18, pp. 107–114. MIT Press, Cambridge, MA.

Bengio, Y., & Le Cun, Y. (2007). Scaling learning algorithms towards AI. In Bottou, L., Chapelle, O., DeCoste, D., & Weston, J. (Eds.), Large Scale Kernel Machines. MIT Press.

Bengio, Y., Le Roux, N., Vincent, P., Delalleau, O., & Marcotte, P. (2006). Convex neural networks. In Weiss, Y., Schölkopf, B., & Platt, J. (Eds.), Advances in Neural Information Processing Systems 18, pp. 123–130. MIT Press, Cambridge, MA.

Chen, H., & Murray, A. (2003). A continuous restricted Boltzmann machine with an implementable training algorithm. IEE Proceedings of Vision, Image and Signal Processing, 150(3), 153–158.

Fahlman, S., & Lebiere, C. (1990). The cascade-correlation learning architecture. In Touretzky, D. (Ed.), Advances in Neural Information Processing Systems 2, pp. 524–532, Denver, CO. Morgan Kaufmann, San Mateo.

Håstad, J. (1987). Computational Limitations for Small Depth Circuits. MIT Press, Cambridge, MA.

Hinton, G. E., Osindero, S., & Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.

Hinton, G. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8), 1771–1800.

Hinton, G., Dayan, P., Frey, B., & Neal, R. (1995). The wake-sleep algorithm for unsupervised neural networks. Science, 268, 1158–1161.

Hinton, G., & Salakhutdinov, R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504–507.

Lengellé, R., & Denoeux, T. (1996). Training MLPs layer by layer using an objective function for internal representations. Neural Networks, 9, 83–97.
Movellan, J., Mineiro, P., & Williams, R. (2002). A Monte-Carlo EM approach for partially observable diffusion processes: Theory and applications to neural networks. Neural Computation, 14, 1501–1544.

Tesauro, G. (1992). Practical issues in temporal difference learning. Machine Learning, 8, 257–277.

Utgoff, P., & Stracuzzi, D. (2002). Many-layered learning. Neural Computation, 14, 2497–2539.

Welling, M., Rosen-Zvi, M., & Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems, Vol. 17, Cambridge, MA. MIT Press.
Optimal Change-Detection and Spiking Neurons

Angela J. Yu
CSBMB, Princeton University
Princeton, NJ 08540
ajyu@princeton.edu

Abstract

Survival in a non-stationary, potentially adversarial environment requires animals to detect sensory changes rapidly yet accurately, two oft-competing desiderata. Neurons subserving such detections are faced with the corresponding challenge of discerning "real" changes in inputs as quickly as possible, while ignoring noisy fluctuations. Mathematically, this is an example of a change-detection problem that is actively researched in the controlled stochastic processes community. In this paper, we utilize sophisticated tools developed in that community to formalize an instantiation of the problem faced by the nervous system, and characterize the Bayes-optimal decision policy under certain assumptions. We derive from this optimal strategy an information accumulation and decision process that remarkably resembles the dynamics of a leaky integrate-and-fire neuron. This correspondence suggests that neurons are optimized for tracking input changes, and sheds new light on the computational import of intracellular properties such as resting membrane potential, voltage-dependent conductance, and post-spike reset voltage. We also explore the influence that factors such as timing, uncertainty, neuromodulation, and reward should and do have on neuronal dynamics and sensitivity, as the optimal decision strategy depends critically on these factors.

1 Introduction

Animals interacting with a changeable, potentially adversarial environment need to excel at detecting changes in their sensory inputs. This detection, however, is riddled by the inherently competing goals of accuracy and speed. Due to the noisy and incomplete nature of sensory inputs, the animal can generally achieve more accurate detection by waiting for more sensory inputs.
However, gathering this extra information incurs an opportunity cost, as the extra time could be used to gather more food, attract a mate, or escape a predator. Neurons subserving the detection process face a similar speed-accuracy trade-off. In this work, we aim to understand the computations performed by a neuron at the time-scale of single spikes. How sensitive a neuron is to each input spike should depend on the relative probabilities of the input representing noise versus useful information, and on the relative costs of mis-interpretation. We formulate the problem as an example of change-detection, and characterize the optimal decision policy in this context. The formal tools we utilize are built upon work in the area of controlled stochastic processes. Controlled stochastic processes refer to decision-making in environments plagued not only by inferential uncertainty about the state of the world, but also by uncertainty associated with the consequences of an action or decision on the world itself. Finding optimal decision policies for such processes is an actively researched problem in financial mathematics and operations research. As we discuss below, neuronal change-detection is a prime example of such a problem. In Sec. 2, we introduce the general framework of change-detection. In Sec. 3, we apply the framework to a specific scenario similar to that faced by the neuron, characterize the optimal solution, and demonstrate that the optimal information accumulation and decision process has dynamics remarkably resembling those of a spiking neuron; we also examine the computational import of certain intracellular properties, characterize the input-output firing rate relationship, and extend the framework to multi-source detection. In Sec. 4, we explore the behavioral consequences of optimal change-detection and examine issues such as the speed-accuracy trade-off, temporal and spatial cueing, and neuromodulation.
2 A Bayesian Formulation of the Change-Detection Problem

The Generative Model

Suppose we have sequential inputs $x_1, x_2, \ldots$, generated i.i.d. by a distribution $f_0(x)$ before time $\theta \in \{0, 1, \ldots\}$, and by a distribution $f_1(x)$ afterwards, where the random variable (r.v.) $\theta$ denotes the sudden, hidden change time. $\theta$ has an initial probability $P(\theta = 0) = q_0$, and a geometric distribution thereafter: $P(\theta = t) = (1 - q_0)(1 - q)^{t-1} q$ for $t > 0$. The change-detection problem is concerned with finding the optimal decision policy for reporting the change from $f_0$ to $f_1$ as early as possible while minimizing false alarms [1]. A decision policy $\pi$ is a mapping, possibly stochastic, from all observations made so far to the control (or action) set, $\pi(\mathbf{x}_t \triangleq \{x_1, \ldots, x_t\}) \mapsto \{a_1, a_2\}$. The action $a_1$ terminates the observation process and reports $\theta \le t$; $a_2$ continues the observation for another time step. Every unique decision policy is identified by a corresponding r.v. of stopping times $\tau \in \{0, 1, \ldots\}$. In the following, we use $\pi$ and $\tau$ interchangeably to refer to a policy.

The Loss Function

Following convention [2], we assume a loss function linear in false alarms and detection delay:

$$l_\pi(\theta, \tau) = \mathbf{1}_{\{\tau < \theta;\, \pi\}} + \mathbf{1}_{\{\tau \ge \theta;\, \pi\}}\, c\,(\tau - \theta) \qquad (1)$$

where $\mathbf{1}$ is the indicator function, and $c > 0$ is a constant that specifies the relative importance of speed and accuracy. The total loss is the expectation of this loss function over $\theta$ and $\tau$:

$$L_\pi \triangleq \langle l_\pi(\theta, \tau);\, \pi \rangle = \sum_{\theta=0}^{\infty} \left( \sum_{\tau=0}^{\theta-1} P(\theta, \tau) + \sum_{\tau=\theta}^{\infty} c\,(\tau - \theta)\, P(\theta, \tau) \right) = P(\tau < \theta) + c\, \langle (\tau - \theta)^+ \rangle \qquad (2)$$

An optimal policy $\pi^*$ minimizes $L_\pi$. Due to the linear loss in detection delay, the expected loss blows up for all policies that do not stop almost surely (a.s.; with probability 1) in finite time; therefore, we restrict the optimization problem in the following to the class of almost-surely finite-time policies.
Using the notation $P_t \triangleq P(\theta \le t \mid \mathbf{x}_t)$, we have the following:

$$P(\theta > \tau) = \sum_{t=0}^{\infty} P(\tau = t, \theta > \tau) = \sum_{t=0}^{\infty} \int P(\theta > \tau \mid \mathbf{x}_\tau)\, P(\tau = t \mid \mathbf{x}_t)\, p(\mathbf{x}_t)\, d\mathbf{x}_t = \langle 1 - P_\tau \rangle_{\tau, \mathbf{x}_\tau}$$

$$\langle (\tau - \theta)^+ \rangle = \sum_{t=0}^{\infty} \langle \mathbf{1}_{\{\tau > t\}} \cdot \mathbf{1}_{\{\theta \le t\}} \rangle_{\theta, \tau} = \sum_{\tau=0}^{\infty} P(\tau) \sum_{t=0}^{\tau-1} \langle P_t \rangle_{\theta, \mathbf{x}_t} = \Big\langle \sum_{t=0}^{\tau-1} P_t \Big\rangle_{\theta, \mathbf{x}_t, \tau}$$

The cumulative posterior probability $P_\tau$ at the detection time $\tau$, therefore, is the critical quantity in loss evaluation and policy optimization:

$$L_\pi = \Big\langle c \sum_{k=0}^{\tau-1} P_k + (1 - P_\tau) \Big\rangle_{\theta, P_k, \tau;\, \pi}. \qquad (3)$$

Bayes' rule gives us the iterative update rule for the cumulative posterior $P_t \triangleq P(\theta \le t \mid \mathbf{x}_t)$:

$$P_{t+1} = \frac{\big(P_t + (1 - P_t)q\big)\, f_1(x_{t+1})}{\big(P_t + (1 - P_t)q\big)\, f_1(x_{t+1}) + (1 - P_t)(1 - q)\, f_0(x_{t+1})}, \qquad P_0 = q_0. \qquad (4)$$

$P_{t+1}$ is a deterministic function of $P_t$ and $x_{t+1}$, but it follows a stochastic trajectory since $x_{t+1}$ is a random variable. The expectation $\langle P_{t+1} \mid \mathbf{x}_t \rangle$ is $P_t + (1 - P_t)q$. We also define the monotonically related posterior ratio $\Phi_t = \frac{P_t}{1 - P_t}$, which has the update rule

$$\Phi_{t+1} = \frac{f_1(x_{t+1})\, (\Phi_t + q)}{f_0(x_{t+1})\, (1 - q)}, \qquad \Phi_0 = \frac{q_0}{1 - q_0}. \qquad (5)$$

Optimal Policy: Threshold Crossing

In order to optimize over the space of all possible stopping rules (policies), we define the following: (1) the conditional termination cost $C_t$ associated with stopping at time $t$ after observing $\mathbf{x}_t$: $C_t \triangleq c \sum_{i=0}^{t-1} P_i + (1 - P_t)$; (2) the minimal conditional cost $\gamma_t$ to be expected after observation $\mathbf{x}_t$: $\gamma_t \triangleq \operatorname{ess\,inf}_\tau \langle C_\tau \mid \mathbf{x}_t \rangle$, where $\tau$ ranges over all stopping rules that terminate no earlier than $t$, and the expectation is taken over all future observations (which can be a function of the decision taken at every time step); (3) $\operatorname{ess\,inf}$, the essential infimum: the largest (a.s.) r.v. that is (a.s.) less than every r.v. $X_n$, $n \in \mathbb{N}$, in the family. As an instance of Bellman's equation, $\gamma_t$ satisfies the dynamic programming recursion $\gamma_t = \min\{C_t, \langle \gamma_{t+1} \mid \mathbf{x}_t \rangle\}$, and the stationary, deterministic stopping rule $\tau^* = \min\{t \ge 1 \mid \gamma_t = C_t\}$ achieves optimality (Eq. 2). This implies that the optimal policy consists of a stopping region $S \subset [0, 1]$ and a continuation region $C = [0, 1] \setminus S$, such that $\pi(P_t : P_t \in S) = a_1$ and $\pi(P_t : P_t \in C) = a_2$.
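Eq. 4 above is a one-line recursion; a sketch with generic likelihood functions follows (the Bernoulli instantiation anticipates Sec. 3, and its rates are illustrative numbers of ours):

```python
def posterior_update(P, x, q, f0, f1):
    """One step of Eq. 4 for P_t = P(theta <= t | x_t).  f0 and f1 return
    the likelihood of observation x before and after the change."""
    prior = P + (1.0 - P) * q        # predictive probability <P_{t+1} | x_t>
    num = prior * f1(x)
    return num / (num + (1.0 - P) * (1.0 - q) * f0(x))

# Bernoulli likelihoods with pre/post-change spike rates 0.1 and 0.6
lam0, lam1 = 0.1, 0.6
f0 = lambda x: lam0 if x == 1 else 1.0 - lam0
f1 = lambda x: lam1 if x == 1 else 1.0 - lam1

P = 0.2
P_spike = posterior_update(P, 1, 0.01, f0, f1)    # a spike raises P_t
P_silence = posterior_update(P, 0, 0.01, f0, f1)  # silence lowers it (leak)
```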
We now state and prove a useful theorem, which will imply that C and S fall neatly into two contiguous blocks, such that the optimal policy takes the termination action as soon as P_t exceeds some fixed threshold B*; that is, the optimal policy is a first-passage process in P_t. Before presenting the theorem, we first introduce the method of truncation. The difficulty of solving the dynamic programming equation for γ_t lies in its infinite recursiveness. If we impose a finite horizon T on τ, then the finitely recursive relation $\gamma_t^T = \min\{C_t, \langle\gamma_{t+1}^T\,|\,\mathbf{x}_t\rangle\}$, with $\gamma_T^T = C_T$, has a corresponding finite-horizon optimal policy $\pi_T^*$. Taking the infinite limit $\gamma_t^\infty \triangleq \lim_{T\to\infty}\gamma_t^T$, it has been shown [2] that when the expected loss is finite (which is the case here, since the expression in Eq. 2 is finite for all decision policies that stop a.s. in finite time), $\gamma_t = \gamma_t^\infty$, and $\pi_T^*$ converges to the infinite-horizon optimal policy π*. We also note the following self-evident lemma.

Lemma. Suppose $\{g_i(t)\}_{i\in I}$ is a family of decreasing functions of t, and $h(t) = \sum_i g_i(t)\,w_i(t)$, where $\sum_i w_i(t) = 1$ for all t. If $g_i(t) \le g_j(t)$ implies $w_i'(t) \ge w_j'(t)$, then h(t) decreases with t.

Theorem. $C_t - \langle\gamma_{t+1}^T\,|\,\mathbf{x}_t\rangle$ is a decreasing function of P_t.

Proof. The base case holds: $C_{T-1} - \langle\gamma_T^T\,|\,\mathbf{x}_{T-1}\rangle$ decreases with $P_{T-1}$. Assume that the theorem holds for t+1, and note:

$$C_t - \langle\gamma_{t+1}^T\,|\,\mathbf{x}_t\rangle = -(c+q)P_t + q + \sum_i g_i w_i$$

where $g_i \triangleq \max(0, l_i)$, $l_i \triangleq C_{t+1} - \langle\gamma_{t+2}^T\,|\,\mathbf{x}_t, x_{t+1} = i\rangle$, and $w_i \triangleq P(x_{t+1} = i\,|\,\mathbf{x}_t)$. Each $g_i$ decreases with P_t, since $l_i$ decreases with $P_{t+1}$ by the inductive hypothesis, and $P_{t+1}$ increases with P_t by Eq. 4. Suppose i, j are such that $f_1(i) - f_0(i) > f_1(j) - f_0(j)$; then $\Phi_{t+1}(i) > \Phi_{t+1}(j)$, and $P_{t+1}(i) > P_{t+1}(j)$, for any given $\mathbf{x}_t$. The inductive hypothesis then implies $g_i \le g_j$. Also note $dw_k/dP_t = (f_1(k) - f_0(k))(1-q)$, so $dw_i/dP_t \ge dw_j/dP_t$. By the Lemma, $C_t - \langle\gamma_{t+1}^T\,|\,\mathbf{x}_t\rangle$ decreases with P_t.

This theorem states that the cost of stopping at time t, relative to continuing, gets smaller as it becomes more certain that θ ≤ t.
This is true for any finite horizon T, and therefore also in the infinite-horizon limit. If $C_t - \langle\gamma_{t+1}\,|\,\mathbf{x}_t\rangle$ is negative for some value of P_t, then the optimal policy is to select action a_1; by the theorem, the same holds for any larger value of P_t. Define B* ∈ [0, 1] as the greatest lower bound of all such P_t; then the stopping and continuation regions have the form [B*, 1] and [0, B*), respectively. Ideally, we would like an exact solution for the optimal policy as a function of the generative and cost parameters of the change-detection problem defined above. While the explicit form of B* is not known in general, the theorem allows us to find the optimal policy numerically by evaluating and minimizing the empirical loss as a function of the decision threshold B ∈ [0, 1].

3 Neuronal Change-Detection

In the following, we focus on the specific case where f_0 and f_1 are Bernoulli processes with respective rate parameters λ_0 and λ_1. This case resembles the problem faced by neurons, which receive sequential binary inputs (spike = 1, no spike = 0) with approximately Poisson statistics. The Bernoulli process is a discrete-time analog of the Poisson process, and obviates the problematic assumption (made by the Poisson model) that spikes could be fired infinitely close to one another. For now, we assume that the generative parameters λ_1, λ_0, q_0, q and the cost parameter c are known. We also assume, without loss of generality, that λ_1 > λ_0 (the rate increases), since otherwise we can simply relabel the inputs (swap 0 and 1). When the parameters satisfy $c \ge (\lambda_1 - \lambda_0 - q(1-\lambda_0))/(1-\lambda_1)$, we have the explicit solution $B^* = q/(q+c)$, or equivalently the threshold $\Phi \ge q/c$ on the posterior ratio (proof omitted). This corresponds to the one-step look-ahead policy, and is optimal when the cost of detection delay is large or when the probability of the change taking place is very high. This turns out not to be a very interesting case, as the detection process is driven to cross the threshold even in the absence of any input spikes.
Although we do not have an explicit solution for the optimal detection threshold B* in general, we can numerically compare different values of B for any specific problem. Fig. 1(a) shows the empirical cost, averaged over 1000 trials, for different threshold values. For these particular parameters, the minimum is around B = 0.65, although the cost function is quite shallow over a large range of values of B around the optimum, implying that performance is not particularly sensitive to relatively large perturbations around the optimal value.

Repeated Change-Detection and Firing Rate. From the problem formulation in Sec. 2, it might seem that the framework only applies to detecting a single change, or multiple unrelated changes. However, the same policy formulation applies to the repeated detection of changes, one after another, in a temporally contiguous fashion. As long as each detection event is generated from the same model parameters (q, q_0, f_1, f_0), and the cost parameter c remains constant, the threshold-crossing policy is still optimal in minimizing the expected loss over these repeated events. The only generative parameter affected by the repetition is q_0, which represents the probability that the inputs are already being generated from f_1 before the current observation process begins. In this repeated detection scenario, q_0 should in general be high if the detection threshold B* is high, and low if B* is low. However, the strength of this coupling is tempered by (i) whether each detection termination resets the generative process, as happens when visual detection leads to saccades and thus to a resetting of the input statistics, and (ii) the amount of time elapsed during the refractory period after a detection spike. Fortunately, while q_0 is influenced by the detection policy, the optimization of the policy is not influenced by q_0, since it consists of comparing C_t and $\langle\gamma_{t+1}\,|\,\mathbf{x}_t\rangle$ at every time step.
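The numerical threshold comparison described above can be reproduced with a short Monte-Carlo sketch. Parameter defaults follow Fig. 1(a); the function name, trial count, and horizon cap are our choices:

```python
import random

def empirical_cost(B, lam0=0.13, lam1=0.17, q=0.0125, q0=0.05,
                   c=0.0005, n_trials=200, t_max=5000, seed=0):
    """Monte-Carlo estimate of the expected loss (Eq. 2) for the
    threshold policy 'stop as soon as P_t >= B'."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        # sample the change time theta (P(theta=0)=q0, geometric afterwards)
        theta = 0 if rng.random() < q0 else 1
        while theta and rng.random() >= q:
            theta += 1
        P, t = q0, 0
        while P < B and t < t_max:
            t += 1
            lam = lam1 if t >= theta else lam0   # Bernoulli input rate
            x = 1 if rng.random() < lam else 0
            f0 = lam0 if x else 1 - lam0
            f1 = lam1 if x else 1 - lam1
            prior = P + (1 - P) * q              # Eq. 4
            P = prior * f1 / (prior * f1 + (1 - P) * (1 - q) * f0)
        total += 1.0 if t < theta else c * (t - theta)
    return total / n_trials
```

Sweeping B over a grid then approximates Fig. 1(a): with the parameters above, thresholds near 0.65 should incur lower cost than trivially low thresholds, which alarm immediately.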
This comparison does not depend on q_0, which simply adds a linear factor to both terms. In this repeated firing scenario, where spikes are frequent relative to changes, the loss function of Eq. 2 can be rewritten as $L_\pi = p_0 r_0 + c/r_1$, where $r_i$ is the mean firing rate when the inputs are generated from $f_i$, and $p_0$ is the fraction of time during which f_0 is applicable (as opposed to f_1). In other words, if the rate of change is slow compared to neuronal firing rates, then optimal processing amounts to minimizing the "spontaneous" firing rate during f_0 and maximizing the "stimulus-evoked" firing rate during f_1.

Optimality and Dynamics of Leaky Integrate-and-Fire. Fig. 1(b) illustrates this concept of repeated firing. The top panel shows an example trace of the dynamical variable Φ_t in the repeated optimal change-detection process. Whenever Φ_t reaches the threshold 0.65/(1−0.65) (equivalently, whenever P_t reaches 0.65, the optimal threshold determined in the last section), a change is reported and the whole process resets to Φ_0. The dynamics of Φ_t are remarkably similar to those of a leaky integrate-and-fire neuron. The bottom panel shows a raster plot of input and output spikes over 25 trials, and again the resemblance to spiking neurons is striking. Closer inspection shows that the update rule for the posterior ratio in Eq. 5 indeed approximates the dynamics of a leaky integrate-and-fire neuron [3]. Letting $a \triangleq \frac{f_1(x_t)}{(1-q)f_0(x_t)}$, we can rewrite Eq. 5 as

$$\Phi_t = a(\Phi_{t-1} + q) \tag{6}$$

When $x_t = 1$, $a = \frac{\lambda_1}{(1-q)\lambda_0} > 1$, so Φ_t increases, and the rate of increase is larger when Φ_t itself is larger. This is reminiscent of the near-threshold dynamics of the Hodgkin-Huxley model, in which the voltage-dependent activation of the sodium conductance drives the neuron to fire [4]. When $x_t = 0$, Φ_t converges to $\Phi_\infty^0 = f_1 q/(f_0(1-q) - f_1)$ (by Eq. 5), where $f_i$ is shorthand for $f_i(0)$; this limit is greater than 0 when $f_0(0)/f_1(0) > 1/(1-q)$. We can think of $\Phi_\infty^0$ as the resting membrane potential.
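The leaky-I&F analogy can be made concrete with a tiny simulation of Eq. 6 with threshold-and-reset. This is a hedged sketch with our own naming; the reset-to-Φ_0 convention follows the repeated-detection description above:

```python
def run_detector(spikes, lam0=0.13, lam1=0.17, q=0.0125, q0=0.05, B=0.65):
    """Iterate the posterior-ratio dynamics of Eq. 6 over a binary input
    train, emitting an output 'spike' and resetting to Phi_0 whenever
    Phi_t crosses the threshold B/(1 - B)."""
    threshold = B / (1 - B)
    Phi = q0 / (1 - q0)
    out = []
    for x in spikes:
        # a = f1(x) / ((1-q) f0(x)) for Bernoulli f0, f1
        a = (lam1 / lam0 if x else (1 - lam1) / (1 - lam0)) / (1 - q)
        Phi = a * (Phi + q)           # Eq. 6
        if Phi >= threshold:
            out.append(1)
            Phi = q0 / (1 - q0)       # post-spike reset
        else:
            out.append(0)
    return out
```

A sustained burst of input spikes drives Φ across threshold within roughly a dozen steps at these parameters, while input silence lets Φ relax toward a sub-threshold resting value, mirroring the membrane-potential analogy above.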
Since $\Phi_\infty^0$ increases with q, the resting potential should be higher and closer to the firing threshold, making the neuron more sensitive to synaptic inputs, when there is a stronger expectation that a change is imminent.

Relationship Between Input and Output Firing Rates. We can also look at the input-output relationship at the firing-rate level. The state-dependent rate parameter a has the expected values:

$$a_0 \triangleq \langle a\,|\,f_0\rangle = \frac{1}{1-q}, \qquad a_1 \triangleq \langle a\,|\,f_1\rangle = \frac{1}{1-q}\cdot\frac{\lambda_1^2 + \lambda_0 - 2\lambda_0\lambda_1}{\lambda_0 - \lambda_0^2}.$$

Given Eqs. 5 and 6, we can write down an approximate, explicit expression for $\langle\Phi_t\,|\,f_i\rangle$:

$$\langle\Phi_t\,|\,f_i\rangle \approx a_i(\langle\Phi_{t-1}\rangle + q) = a_i^t\langle\Phi_0\rangle + a_i q\sum_{k=0}^{t-1} a_i^k = a_i^t\langle\Phi_0\rangle + \frac{a_i q(1-a_i^t)}{1-a_i} \approx a_i^t\left(\Phi_0 + \frac{q}{a_i - 1}\right). \tag{7}$$

[Figure 1 appears here: panel (a) "Cost as a function of thresholds" (threshold vs. cost); panel (b) "Dynamics of Φ" and "Input and output spikes" (traces and raster plots over time); panel (c) "Distribution of input spikes" and "Distribution of output spikes" (frequency histograms over time).]

Figure 1: Optimal change-detection and dynamics. (a) The empirical average cost (over 1000 trials) has a single shallow minimum at B = 0.65. λ_0 = 0.13, λ_1 = 0.17, q = 0.0125, q_0 = 0.05, c = 0.0005; these parameters apply for the remainder of the paper unless otherwise specified. (b) Top panel: a typical example of the dynamics of Φ_t over time. Superimposed on Φ_t are the output spikes, which are arbitrarily set to a fixed high value. Black bars near the bottom indicate input spikes. The green line indicates the time of the actual change. In this example, a chance flurry of input spikes near the start causes the optimal change-detector to fire; after the change, the increased input firing rate induces the change-detector to fire much more frequently. Note that Φ_t decreases whenever there is a lull in input spikes. Bottom panel: raster plot of input (blue) and output (red) spikes; both are more frequent after the change indicated by the green line.
(c) Output spikes (bottom) increase in frequency quickly after the increase in input spikes (top).

Given the decision threshold B, $\langle\Phi_{t_0}\,|\,f_0\rangle = \langle\Phi_{t_1}\,|\,f_1\rangle = B/(1-B)$, where $t_i$ is the average number of time steps it takes to reach the threshold when $x_t \sim f_i$, and can be assumed to be ≫ 1 (it takes many time steps of input integration to reach the threshold). We therefore have

$$a_0^{t_0}\left(\Phi_0 + \frac{q}{a_0-1}\right) = a_1^{t_1}\left(\Phi_0 + \frac{q}{a_1-1}\right) \;\Longrightarrow\; a_1 = a_0^{t_0/t_1}\left(\frac{q/(a_0-1) + \Phi_0}{q/(a_1-1) + \Phi_0}\right)^{1/t_1} \approx a_0^{t_0/t_1}. \tag{8}$$

Therefore the ratio of the output firing rates, $r_i \triangleq 1/t_i$ for i = 0, 1, is

$$\frac{r_1}{r_0} = \frac{t_0}{t_1} = \frac{\log a_1}{\log a_0} = \frac{\log\frac{1}{1-q} + \log\frac{\lambda_1^2 + \lambda_0 - 2\lambda_0\lambda_1}{\lambda_0 - \lambda_0^2}}{\log\frac{1}{1-q}} = 1 + \frac{\log\frac{\lambda_1^2 + \lambda_0 - 2\lambda_0\lambda_1}{\lambda_0 - \lambda_0^2}}{\log\frac{1}{1-q}}. \tag{9}$$

Since the arguments of the logarithms in both the numerator and the denominator are greater than 1, $r_1/r_0 > 1$. Therefore, when the input rates are such that λ_1 > λ_0, the respective output rates also satisfy r_1 > r_0. To see exactly how the output firing-rate ratio changes as a function of the input rates, we define the function $g(\lambda_0, \lambda_1) \triangleq \frac{\lambda_1^2 + \lambda_0 - 2\lambda_0\lambda_1}{\lambda_0 - \lambda_0^2}$ and take its partial derivatives with respect to λ_0 and λ_1. We then see that the output firing ratio of Eq. 9 increases with λ_1 and decreases with λ_0, consistent with intuition. Fig. 1(c) shows the average detection/firing rate over time: the rise in output firing rate closely follows that in the input, despite the small change in the input firing rates.

Multi-Source Change-Detection. So far, we have only considered the case of Bernoulli inputs uniformly changing from one rate to another. However, sometimes the problem at hand is one of multi-source change-detection. For instance, a visual neuron detecting the onset of a stimulus might receive inputs from up-stream neurons sensitive to stimuli with different properties (different colors, orientations, depths of view, etc.). Here, we extend our framework to the case of two independent sources of inputs, using an approach similar to that taken in [5].
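The firing-rate ratio of Eq. 9 is easy to evaluate numerically; the following helper (our naming) confirms the monotonicity claims made for it:

```python
import math

def rate_ratio(lam0, lam1, q):
    """Eq. 9: approximate ratio r1/r0 of output firing rates after vs.
    before the change, for Bernoulli input rates lam1 > lam0."""
    g = (lam1 ** 2 + lam0 - 2 * lam0 * lam1) / (lam0 - lam0 ** 2)
    return 1 + math.log(g) / math.log(1 / (1 - q))
```

Since g(λ_0, λ_1) > 1 whenever λ_1 ≠ λ_0 (its numerator exceeds its denominator by (λ_1 − λ_0)²), the returned ratio is always above 1.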
Source i, for i ∈ {1, 2}, emits observations $x_1^i, x_2^i, \ldots$ from a Bernoulli process that changes from rate $\lambda_0^i$ to $\lambda_1^i$ at an unknown time $\theta^i$, where $\theta^i$ is generated by a geometric distribution with parameter $q^i$, and the prior probability $P(\theta^i = 0)$ is $q_0^i$. The objective is to detect $\theta \triangleq \min(\theta^1, \theta^2)$ with the cost function specified as before (Eqs. 1-2). Defining the individual posteriors $P_t^i \triangleq P(\theta^i \le t\,|\,\mathbf{x}_t^i)$, where $\mathbf{x}_t^i \triangleq x_1^i, \ldots, x_t^i$, we have

$$P_t \triangleq P(\min(\theta^1, \theta^2) \le t\,|\,\mathbf{x}_t^1, \mathbf{x}_t^2) = 1 - (1-P_t^1)(1-P_t^2) = P_t^1 + P_t^2 - P_t^1 P_t^2. \tag{10}$$

We can also define the corresponding overall posterior ratio

$$\Phi_t \triangleq P_t/(1-P_t) = \Phi_t^1 + \Phi_t^2 + \Phi_t^1\Phi_t^2 \tag{11}$$

as a function of the individual posterior ratios $\Phi_t^i \triangleq P_t^i/(1-P_t^i)$. Following reasoning very close to that of Sec. 2, we can show that if the generative and cost parameters are such that Φ_t is lower-bounded by $\Phi_\infty^0$ for t ≫ 1, then the optimal stopping/detection policy is to terminate at the smallest t such that $\Phi_t = \Phi_t^1 + \Phi_t^2 + \Phi_t^1\Phi_t^2 \ge (q^1 + q^2 - q^1 q^2)/c$. Despite the generative independence of the two Bernoulli processes, we note that the optimal policy differs from the naïve strategy of running two single-source change-detectors and reporting a change as soon as one of them does.

Figure 2: Effect of cueing on change-detection. (a) Distribution of first spikes for the optimal stopping policy; spikes are aligned to time 0, when the actual change takes place. (b) This distribution is significantly tightened, with its mean brought closer to the actual change, when there is extra temporal information about an imminent change (q = .02). (c) The distribution of spikes is also slightly tightened and brought closer to the actual change time when there is a stronger prior probability of a stimulus appearing (q_0 = .1), as during spatial cueing. The effect is smaller because the higher prior leads to false alarms as well as reducing detection delay.
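Under our assumed naming, the combined two-source rule of Eqs. 10-11 reads as follows; the test reproduces a setting in which one detector alone would report a change but the combined detector does not:

```python
def combined_ratio(Phi1, Phi2):
    """Eq. 11: overall posterior ratio for two independent input sources."""
    return Phi1 + Phi2 + Phi1 * Phi2

def multi_source_report(Phi1, Phi2, q1, q2, c):
    """Optimal two-source rule: report a change when the combined ratio
    reaches (q1 + q2 - q1*q2)/c."""
    return combined_ratio(Phi1, Phi2) >= (q1 + q2 - q1 * q2) / c
```

Because the combined threshold (q^1 + q^2 − q^1q^2)/c exceeds each single-source threshold q^i/c, a source sitting exactly at its own threshold is not, by itself, enough to trigger the combined detector.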
To see this, consider the case when $\Phi_t^1 = q^1/c$ but $\Phi_t^2 \approx 0$, so that $\Phi_t \approx \Phi_t^1 = q^1/c < (q^1 + q^2(1-q^1))/c$. The individual detector for process 1 would report a change, but the overall detector would not.

4 Optimal Change-Detection and Neuromodulation

A sizeable body of behavioral studies suggests that stimulus processing is influenced by cognitive factors, such as knowledge about the timing of stimulus onset, or about whether a stimulus will appear in a particular location. There is evidence that the neuromodulators norepinephrine [6] and acetylcholine [7] are respectively involved in these two aspects of stimulus processing. Separately, there is a rich literature on the effects of these neuromodulators at the single-cell level [8]. Since we have an explicit model of neuronal dynamics as a function of the statistical properties associated with the stimulus, we are ideally positioned to examine how these properties should affect cellular properties, and whether the known behavioral consequences of neuromodulation are consistent with their observed effects at the cellular level. If the system has some prior knowledge about the onset time of a stimulus, we can model the information accumulation process as starting shortly before the mean change time, with a tight distribution over the random variable θ. Making q larger achieves both effects in our model. Fig. 2(a) shows the distribution of first spikes over 1000 trials; Fig. 2(b) shows that this distribution is more tightly clustered immediately after the actual change time θ for larger q. Experimentally, it has been observed that norepinephrine makes sensory neurons fire more vigorously in response to bottom-up sensory inputs [8]. It is also known from behavioral studies that a temporal cue improves detection performance, and that noradrenergic depletion diminishes this advantage [6].
If there is prior knowledge about the stimulus being in a particular location, we can model this with a higher prior probability q_0 of the stimulus being present. This increases the responsiveness of the change-detection process to input spikes, making the detection (spiking) process more sensitive (Fig. 2(c)). It has been shown experimentally that a (correct) spatial cue improves stimulus detection, that acetylcholine is implicated in this process [7], and that acetylcholine potentiates neurons and increases their responsiveness to sensory inputs [8].

5 Discussion

Responding accurately and rapidly to changes in the environment is a problem confronted by the brain at every level, from single neurons to behavior. In this work, we have presented a formal treatment of the change-detection problem and obtained important properties of the optimal policy: for a broad class of problems, the optimal detection algorithm is a threshold-crossing process based on the posterior probability of the change having taken place, which can be iteratively updated using Bayes' Rule. Applying these ideas to the case of neurons that must rapidly and accurately detect changes in input spike statistics, we saw that the optimal algorithm yields dynamics remarkably similar to the intracellular dynamics of spiking neurons. This suggests that neurons are optimized for tracking discrete, abrupt changes in their inputs. The model yields insight into the computational import of cellular properties such as the resting membrane potential, the post-spike reset potential, voltage-dependent conductances, and the input-output spiking relationship. The basic framework was extended to the case of multi-source change-detection, a problem faced by a neuron tasked with detecting a stimulus that could belong to one of two possible sub-categories.
We also explored the computational consequences of spatial and temporal cueing on stimulus detection, and saw that the behavioral and biophysical effects of neuromodulation (e.g., by acetylcholine and norepinephrine) are consistent within the framework. This framework for modeling single-neuron computations is attractive because it suggests explicit design principles underlying neuronal dynamics, rather than merely providing a descriptive model. Since the computational objects are well-specified at the outset, it provides a natural theoretical link between cellular properties and behavioral constraints. It is also appealing as a self-consistent and elegantly simple model of the computations taking place in single neurons. Every neuron in this scheme simply detects changes in its synaptic inputs, on a spike-to-spike time scale, and propagates its knowledge according to its own speed-accuracy trade-off. All that a down-stream neuron needs from this neuron for its own change-detection computations are this neuron's average firing rate in the different states, the rate of change among these states, and the prior probability of this neuron being in one of those states; all of these quantities can be learned over a longer time scale. In particular, the down-stream neuron does not need to know about this neuron's inputs, its internal dynamics, its decision policy, its objective function, or its model of the world. In this scheme, more sophisticated computations can be achieved by pooling the outputs of different neurons in various configurations; we explored this briefly with the example of multi-source change-detection. Another advantage of this framework is that it eliminates the boundary between inference and decision: neurons make inferences about their inputs and make decisions at every level of processing.
It therefore obviates the question of where in a hierarchical nervous system the nature of the computation changes from input-processing to decision-making. While the incorporation of formal tools from controlled stochastic processes into the modeling of single-cell computations is a novel approach, this work is related to several other theoretical efforts. The idea of neurons processing and representing probabilistic information has received much attention in recent years, with most work focusing on the level of neuronal populations [9-12]. Theoretical work on the representation and processing of probabilistic information in single neurons is comparatively rare. It has been suggested [13] that certain decision-making neurons may accumulate probabilistic information and spike when the evidence exceeds a certain threshold. However, it was typically assumed that the neurons already receive continuously-valued inputs that represent probabilistic information. Moreover, the tasks considered in these earlier works involved stationary discrimination, with no explicit non-stationarity in the state of the world or of the inputs. We note that our framework is a generalization of the commonly studied 2AFC task, which is equivalent to setting the change probability q to 0 in our model. Consistent with this characterization, our optimal policy is a generalization of the SPRT algorithm, which is known to be optimal for stationary 2AFC discrimination [14]. One closely related piece of work proposed that single neurons track the log posterior ratio of the state of an underlying binary variable, and spike when the new inputs imply a value for this log posterior ratio that is sufficiently different from the neuron's current estimate based on previous inputs [15].
The key difference at the conceptual level is that this previous work focused on the explicit propagation of probabilistic information across neurons, which introduces complications into processing and learning that are needed to keep this probabilistic knowledge consistent across neurons. Also, there was no explicit analysis of the optimality of the output spike generation process: how much of a discrepancy merits a spike, and how this depends on the relevant statistical and cost parameters. At the mechanistic level, having the membrane potential represent the log posterior ratio, as opposed to the posterior ratio, requires the dynamical update rule to involve exponentiation. While that work showed that the dynamics are approximately leaky integrate-and-fire during steady state, the approximation does not help in the most interesting case, when the world is rapidly changing and the linear approximation is most detrimental. We showed in this work that there are good reasons for neurons not to integrate inputs linearly: the amount of new evidence provided by each input (spike or no spike) at every time step is state-dependent, and should be so according to optimal information integration. This suggests that the particular types of nonlinearity we see in neuronal dynamics are desirable from a computational point of view. One important assumption we made in our model is that the cost of detection delay is linear in time, parameterized by the constant c. Without this assumption, the controlled dynamic process framework would not apply, as the decision policy would depend not only on a state variable but also on time in an explicit way. However, in general there might not be a fixed c that captures the trade-off between false alarms and detection delay. Intuitively, c should be related to how much reward could be obtained per unit of time if the system were not engaged in prolonging the current observation process.
In particular, if a new "trial" begins as soon as the current "trial" terminates, regardless of detection accuracy, then c should be set to P(θ ≤ τ)/⟨τ⟩, which also places the two cost terms in the same dimension. If we had analytical expressions for P(θ ≤ τ) and ⟨τ⟩ as functions of the decision threshold B, then we could solve the optimization problem through the self-consistency constraint placed on the optimal threshold B* by its dependence on c. Unfortunately, no analytical expressions for P(θ ≤ τ; B) and ⟨τ; B⟩ are known. Alternatively, one might still numerically find the fixed detection threshold that incurs the lowest cost among all thresholds. There is no guarantee, however, that the optimal policy lives in this parameterized family of policies; the best fixed-threshold policy may still be far from optimal detection. There are several important and exciting directions in which we plan to extend the current work. One is the consideration of more complex state transitions. In this work, we assumed that the state transition is always from f_0 to f_1. In more general scenarios, the inputs are likely to revert back to f_0 before another transition into f_1, and so on. Thus, we need at least two populations of detectors: one that detects the onset (f_0 to f_1), and one that detects the offset (f_1 to f_0). Intuitively, there ought to be recurrent connections between them, to propagate and aggregate the total information about which state the inputs are in. A related problem arises when the inputs can be in multiple (> 2) possible states, or even a continuous range of states, with complex transitions among them. Another interesting question is what happens with a different or more complex distribution for the change variable θ. We know, for instance, that animals are capable of utilizing independent temporal information about the mean and variance of the stimulus onset.
In the geometric model we assumed, these two variables are coupled. Finally, we note that the formal framework we presented, that of optimal detection of changes in input statistics, is applicable not only at the level of single neurons, but also to systems- and cognitive-level problems. For example, certain problems in reinforcement learning, such as reversal learning and exploration versus exploitation in general, are also amenable to analysis by a similar approach. We intend to explore some of these problems in the future using similar formal tools from controlled dynamic processes.

Acknowledgments

We thank Bill Bialek, Peter Dayan, Savas Dayanik, and Sophie Deneve for helpful discussions.

References

[1] Shiryaev, A N (1978). Optimal Stopping Rules, Springer-Verlag, New York.
[2] Chow, Y S et al (1971). Great Expectations: The Theory of Optimal Stopping, Houghton Mifflin, Boston.
[3] Dayan, P & Abbott, L F (2001). Theoretical Neuroscience, MIT Press, Cambridge, MA.
[4] Hodgkin, A L & Huxley, A F (1952). J. Physiology 117: 500-44.
[5] Bayraktar, E & Poor, H V (2005). 44th IEEE Conf. on Decision and Control and Eur. Control Conference.
[6] Witte, E A & Marrocco, R T (1997). Psychopharmacology 132: 315-23.
[7] Phillips, J M, McAlonan, K, Robb, W G K & Brown, V (2000). Psychopharmacology 150: 112-6.
[8] Gu, Q (2002). Neuroscience 111: 815-35.
[9] Zemel, R S, Dayan, P & Pouget, A (1998). Neural Computation 10: 403-30.
[10] Sahani, M & Dayan, P (2003). Neural Computation 15: 2255-79.
[11] Rao, R P (2004). Neural Computation 16: 1-38.
[12] Yu, A J & Dayan, P (2005). Advances in Neural Information Processing Systems 17.
[13] Gold, J I & Shadlen, M N (2002). Neuron 36: 299-308.
[14] Wald, A & Wolfowitz, J (1948). Ann. Math. Statist. 19: 326-39.
[15] Deneve, S (2004). Advances in Neural Information Processing Systems 16.
2006
Temporal dynamics of information content carried by neurons in the primary visual cortex

Danko Nikolić*, Department of Neurophysiology, Max-Planck-Institute for Brain Research, Frankfurt (Main), Germany, danko@mpih-frankfurt.mpg.de

Wolf Singer, Department of Neurophysiology, Max-Planck-Institute for Brain Research, Frankfurt (Main), Germany, singer@mpih-frankfurt.mpg.de

Stefan Haeusler*, Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria, haeusler@igi.tugraz.at

Wolfgang Maass, Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria, maass@igi.tugraz.at

Abstract

We use multi-electrode recordings from cat primary visual cortex and investigate whether a simple linear classifier can extract information about the presented stimuli. We find that information is extractable and that it even lasts for several hundred milliseconds after the stimulus has been removed. In a fast sequence of stimulus presentations, information about both new and old stimuli is present simultaneously, and nonlinear relations between these stimuli can be extracted. These results suggest nonlinear properties of cortical representations. The important implications of these properties for nonlinear brain theory are discussed.

1 Introduction

It has recently been argued that the most fundamental aspects of computation in visual cortex are still unknown [1]. This could be partially because of the narrow and reductionist approaches in the design of experiments, and partially because of the nonlinear properties of cortical neurons that are ignored by current theories [1]. It has also been argued that the recurrent neuronal circuits in the visual cortex are highly complex and, thus, that notions such as "feed-forward" and "feedback" are inadequate concepts for the analysis of nonlinear dynamical systems [2].
Furthermore, current theories do not take into account the precise timing of neuronal activity and the synchronicity of responses, which should play an important computational role [3]. Alternative computational models from dynamical systems theory [4] argue that the fading-memory properties of neural circuits are essential for real-time processing of quickly varying visual stimuli. However, an experimental test of this prediction has been missing. An example of an experimental study that may be seen as a step in this direction is [5], where it was shown that the firing activity of neurons in macaque inferior temporal cortex (IT) contains information about an image that has just been presented, and that this information lasts for several hundred milliseconds. This information was extracted by machine learning algorithms that classified the patterns of neuronal responses to different images. The present paper extends the results of [5] in several directions:

*These authors contributed equally to this work.

Figure 1: A: An example of a visual stimulus in relation to the constellation of receptive fields (gray rectangles) from one Michigan probe. B: Spike times recorded from one electrode across 50 stimulus presentations and for two stimulation sequences (ABC and DBC). In this and in all other figures the gray boxes indicate the periods during which the letter-stimuli were visible on the screen. C: Peri-stimulus time histogram for the responses shown in B. Mfr: mean firing rate.

• We show that neurons in cat primary visual cortex (area 17), even under anesthesia, contain information about previously shown images, and that this information lasts even longer.

• We analyze the information content in neuronal activity recorded simultaneously from multiple electrodes.
• We analyze the information about a previously shown stimulus in rapid sequences of images, and how the information about consecutive images in a sequence is superimposed (i.e., we probe the system's memory for images).

2 Methods

2.1 Experiments

In three cats anaesthesia was induced with ketamine and maintained with a mixture of 70% N2O and 30% O2 and with halothane (0.4-0.6%). The cats were paralysed with pancuronium bromide applied intravenously (Pancuronium, Organon, 0.15 mg kg⁻¹ h⁻¹). Multi-unit activity (MUA) was recorded from area 17 by using multiple silicon-based 16-channel probes (organized in a 4 x 4 spatial matrix), which were supplied by the Center for Neural Communication Technology at the University of Michigan (Michigan probes). The inter-contact distances were 200 µm (0.3-0.5 MΩ impedance at 1000 Hz). Signals were amplified 1000x and, to extract unit activity, were filtered between 500 Hz and 3.5 kHz. Digital sampling was performed at 32 kHz, and the waveforms of threshold-detected action potentials were stored for an off-line spike-sorting procedure.

Figure 2: The ability of a linear classifier to determine which of the letters A or D was previously used as a stimulus. The classification performance is shown as a function of the time passed between the initiation of the experimental trial and the moment at which a sample of neuronal activity was taken for training/testing of the classifier. The classification performance peaks at about 200 ms (reaching almost 100% accuracy) and remains high until at least 700 ms. Dash-dotted line: the mean firing rate across the entire population of investigated neurons. Dotted line: performance at the chance level (50% correct).
The probes were inserted approximately perpendicular to the surface of the cortex, allowing us to record simultaneously from neurons in different cortical layers and different columns. This setup resulted in a cluster of overlapping receptive fields (RFs), all RFs being covered by the stimuli (see Fig. 1A) (more details on the recording techniques can be found in [6, 7]). Stimuli were presented binocularly on a 21" computer monitor (HITACHI CM813ET, 100 Hz refresh rate) using the visual-stimulation software ActiveSTIM (www.ActiveSTIM.com). Binocular fusion was achieved by mapping the borders of the respective RFs and then aligning the optical axes with an adjustable prism placed in front of one eye. The stimuli consisted of single white letters with elementary features suitable for area 17, spanning approximately 5° of visual angle. The stimuli were presented on a black background for a brief period of time. Fig. 1A illustrates the spatial relation between the constellation of RFs and the stimulus in one of the experimental setups. In each stimulation condition either a single letter or a sequence of up to three letters was presented. For presentation of single letters we used the letters A and D, each presented for 100 ms. Stimulus sequences were made with the letters A, B, C, D, and E, and we compared the responses either across the sequences ABC, DBC, and ADC (cat 1) or across the sequences ABE, CBE, ADE, and CDE (cats 2 and 3). Each member of a sequence was presented for 100 ms and the blank delay period separating the presentation of letters also lasted 100 ms. Each stimulation condition (single letter or sequence) was presented 50 to 150 times and the order of presentation was randomized across the stimulation conditions. Example raster plots of responses to two different sets of stimuli can be seen in Fig. 1B.
2.2 Data analysis
Typical spike trains prior to the application of the spike-sorting procedure are illustrated in Fig. 1B.
All datasets showed high trial-to-trial variability, with an average Fano factor of about 8. Including all of the single units that resulted from the spike-sorting procedure produced data representations that were too sparse and hence led to overfitting. We therefore used only units with mean firing rates ≥ 10 Hz and pooled single units with less frequent firing into multi-unit signals. This resulted in datasets with 66 to 124 simultaneously recorded units for further analysis. The recorded spike times were convolved with an exponential kernel with a decay time constant of τ = 20 ms. A linear classifier was trained to discriminate between pairs of stimuli on the basis of the convolved spike trains at time points t ∈ {0, 10, ..., 700} ms after stimulus onset (using only the vectors of 66 to 124 values of the convolved time series at time t). We refer to this classifier as R_t. A second type of classifier, which we refer to as R_int, was trained to carry out such classification simultaneously for all time points t ∈ {150, 160, ..., 450} ms (see Fig. 7). If not otherwise stated, the results for type R_t classifiers are reported. A linear classifier applied to the convolved spike data (i.e., an equivalent of low-pass filtering) can be interpreted as an integrate-and-fire (I&F) neuron with synaptic inputs modeled as Dirac delta functions. The time constant of 20 ms reflects the temporal properties of the synaptic receptors and of the membrane. A classification is obtained due to the firing threshold of the I&F neuron. The classifiers were trained as linear-kernel support vector machines with the parameter C chosen to be 10 in the case of 50 samples per stimulus class and 50 in the case of 150 samples per stimulus class. The classification performance was estimated with 10-fold cross-validation in which we balanced the number of examples for the training and test classes. All reported performance data are for the test class.
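The pipeline just described (exponential convolution with τ = 20 ms, then a linear SVM with C = 10 at a single read-out time, evaluated with 10-fold cross-validation) can be sketched as follows. The spike trains here are synthetic Poisson surrogates, not the recorded data; all rates, counts and the read-out time are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def convolve_spikes(spike_times_ms, t_grid_ms, tau=20.0):
    """Causal exponential kernel: each spike adds exp(-(t - t_spike)/tau) for t >= t_spike."""
    s = np.asarray(spike_times_ms, float)[None, :]    # (1, n_spikes)
    t = np.asarray(t_grid_ms, float)[:, None]         # (n_times, 1)
    dt = t - s
    return np.where(dt >= 0, np.exp(-np.clip(dt, 0, None) / tau), 0.0).sum(axis=1)

rng = np.random.default_rng(0)
t_grid = np.arange(0, 701, 10)                        # 0..700 ms in 10 ms steps
n_trials, n_units = 100, 20

def simulate_trial(extra_rate):
    # toy surrogate: each unit fires Poisson spikes, count depends on the stimulus
    return np.stack([convolve_spikes(rng.uniform(0, 700, rng.poisson(8 + extra_rate)),
                                     t_grid)
                     for _ in range(n_units)], axis=1)

X_A = np.stack([simulate_trial(4) for _ in range(n_trials)])   # stimulus "A"
X_D = np.stack([simulate_trial(0) for _ in range(n_trials)])   # stimulus "D"
y = np.r_[np.ones(n_trials), np.zeros(n_trials)]

# classifier R_t: one linear SVM per time point; read out at t = 200 ms
t_idx = int(np.searchsorted(t_grid, 200))
X_t = np.vstack([X_A[:, t_idx, :], X_D[:, t_idx, :]])
scores = cross_val_score(SVC(kernel='linear', C=10), X_t, y, cv=10)
```

The per-trial feature vector at time t has one entry per recorded unit, mirroring the 66-to-124-dimensional vectors used in the study.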
Error bars in the figures denote the average standard error of the mean for one cross-validation run.
3 Results
3.1 High classification performance
As observed in IT [5], the classification performance in area 17 also peaks at about 200 ms after stimulus onset; at that point a classifier can detect the identity of the stimulus with high reliability. In contrast to [5], in our data information about the stimuli is available for much longer and can last up to 700 ms after stimulus onset (Fig. 2).
3.2 Memory for past stimuli
We also find that even when new stimuli are presented, information about old stimuli is not erased. Instead, neuronal activity continues to maintain information about the previously presented stimuli. In Fig. 3 we show that classifiers can extract substantial information about the first image well after this image has been removed and while new images are being shown. Thus, the system maintains a memory of previous activations and this memory lasts at least several hundred milliseconds. Note that the information remains in memory even if neuronal rate responses decrease for a brief period of time and approach a level close to that of spontaneous activity.
3.3 Simultaneous availability of different pieces of information
The simultaneous presence of information about different stimuli is a necessary prerequisite for efficient coding. Fig. 4A shows that classifiers can identify the second letter in a sequence. In Fig. 4B we show the results of an experiment in which both the first and the second letter were varied simultaneously. Two classifiers, each for the identity of the letter at one time slot, both performed very well. During the period from 250 to 300 ms, information about both letters was available.
This information can be used to perform a nonlinear XOR classification function, i.e., return one if the sequence ADE or CBE has been presented, but return zero if both A and B or neither of them was presented in the same sequence. In Fig. 4C we show XOR classification based on the information extracted from the two classifiers in Fig. 4B (dashed line). In this case, the nonlinear component of the XOR computation is made externally by the observer and not by the brain. We compared these results with the performance of a single linear classifier that was trained to extract the XOR information directly from the brain responses (solid line). As this classifier was linear, the nonlinear component of the computation could have been performed only by the brain. The classification performance was in both cases well above the chance level (horizontal dotted line in Fig. 4C). More interestingly, the two performance functions were similar, with the brain slightly outperforming the external computation of XOR in this nonlinear task. Therefore, the brain can also perform nonlinear computations.
Figure 3: Classifiers were trained to identify the first letter in the sequences ABC vs. DBC in one experiment (cat 1) and in the sequences ABE vs. CBE in the other two experiments (cats 2 and 3). In all cases, the performance reached its maximum shortly before or during the presentation of the second letter. In one case (cat 1) information about the first letter remained present even after multiple exchanges of the stimuli, i.e., during the presentation of the third letter.
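The "external" XOR read-out of Fig. 4C can be sketched as follows. This is a toy simulation: the two linear classifiers are replaced by hypothetical predictors with an assumed per-bit accuracy, so the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400                                 # hypothetical number of trials

# bit1 = 1 if the first letter was A (vs. C); bit2 = 1 if the second was B (vs. D)
bit1 = rng.integers(0, 2, n)
bit2 = rng.integers(0, 2, n)
xor_true = bit1 ^ bit2                  # 1 exactly for the sequences ADE or CBE

def linear_readout(bits, acc=0.9):
    """Stand-in for one of the two linear classifiers of Fig. 4B,
    assumed to recover its bit with probability acc."""
    flips = (rng.random(len(bits)) > acc).astype(int)
    return bits ^ flips

# External XOR: the nonlinear combination of the two linear read-outs
# is done by the observer, not by the brain.
xor_external = linear_readout(bit1) ^ linear_readout(bit2)
acc_external = float((xor_external == xor_true).mean())
# with two independent read-outs of accuracy p, expect about p^2 + (1 - p)^2
```

The comparison in the text pits this construction against a single linear classifier trained directly on the XOR labels, where the nonlinearity must come from the brain's responses themselves.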
Notation is the same as in Fig. 2.
3.4 Neural code
It is also important to understand how this information is encoded in neuronal activity. Fig. 5 shows lower bounds on the information content of neuronal firing rates. The ability of the classifiers to distinguish between two stimuli was positively correlated with the difference in the average firing-rate responses to these stimuli. For the three experiments (cats 1 to 3), the Pearson correlation coefficients between these two variables were 0.37, 0.42 and 0.46, respectively (14-21% of explained variance). The correlation coefficients with the absolute rate responses were always larger (0.45, 0.68 and 0.66). In contrast to [5], we also found that, in addition to rate responses, the precise timing relationships between neuronal spiking events carry important information about stimulus identity. To show this, we perturbed the recorded data by jittering the spike timings by various amounts. Only a few milliseconds of jitter were sufficient to decrease the performance of the classifiers significantly (Fig. 6). Therefore, information is also contained in the timing of spikes; spike timing is thus also a neuronal code. Moreover, like rate, timing also carried information about the past: jitter induces a significant drop in classification performance even for time points as far as 200 ms past the stimulus onset (the rightmost panel of Fig. 6). We also investigated the 'synaptic' weights of the classifiers, which enabled us to study the temporal evolution of the code. We asked the following question: do the same pieces of information indicate the identity of a stimulus early and late in the stimulation sequence? To this end, we compared the performance of R_t classifiers, for which the weights were allowed to change along the stimulation sequence, against the performance of R_int classifiers, for which the weights were fixed.
The results indicated that the neuronal code was invariant during a single stimulation-response event (e.g., the on-response to the presentation of a letter) but changed across such events (e.g., the off-response to the same letter or the on-response to the subsequent letter) (Fig. 7).
Figure 4: Classification of the second letter in a sequence and of a combination of letters. A: Performance of a classifier trained to identify the second letter in the sequences ABC and ADC. Similarly to the results in Fig. 3, the performance is high during the presentation of the subsequent letter. B: Simultaneously available information about two different letters of a sequence. Two classifiers identified either the first or the second letter of the following four sequences: ABE, CBE, ADE, and CDE. C: The same data as in B, but a linear classifier was trained to compute the XOR function of the 2 bits encoded by the 2 choices A|C and B|D (solid line). The dashed line indicates the performance of a control calculation made by an external computation of the XOR function, based on the information extracted by the classifiers whose performance functions are plotted in B.
Finally, as in [5], an application of nonlinear radial-basis kernels did not produce a significant improvement in the number of correct classifications when compared to linear kernels; for type R_t classifiers the improvement never exceeded 2% (results not shown).
However, the performance of type R_int classifiers increased considerably (by about 8%) when they were trained with nonlinear as opposed to linear kernels (time interval t = [150, 450] ms; results not shown).
4 Discussion
In the present study we find that information about preceding visual stimuli is preserved for several hundred ms in neurons of the primary visual cortex of anaesthetized cats. These results are consistent with those reported by [5], who investigated neuronal activity in the awake state and in a higher cortical area (IT cortex). We show that information about a previously shown stimulus can last in visual cortex up to 700 ms, much longer than reported for IT cortex. Hence, we can conclude that it is a general property of cortical networks to contain information about the stimuli in a distributed and time-dynamic manner. Thus, a trained classifier is capable of reliably determining, from a short sample of this distributed activity, the identity of previously presented stimuli.
Figure 5: The relation between the classifiers' performance and i) the mean firing rates (dash-dotted lines) and ii) the difference in the mean firing rates between two stimulation conditions (8-fold magnified, dashed lines). The results are for the same data as in Fig. 3.
Figure 6: Drop in performance for the classifiers in Fig. 3 due to Gaussian jitter of the spiking times. The drop in performance was computed at three points in time, corresponding to the original peaks in performance in Fig. 3. For cat 1 these peaks were t ∈ {60, 120, 200} ms and for cat 3, t ∈ {40, 120, 230} ms. The performance drops for these three points in time are shown in the three panels, in order from left to right. SD: standard deviation of the jitter. A standard deviation of only a few milliseconds decreased the classification performance significantly.
Furthermore, the system's memory for past stimulation is not necessarily erased by the presentation of a new stimulus; instead, it is possible to extract information about multiple stimuli simultaneously. We show that different pieces of information are superimposed on each other and that they allow the extraction of nonlinear relations between the stimuli, such as the XOR function.
Figure 7: Temporal evolution of the weights needed for optimal linear classification. A: Comparison in performance between R_t and R_int classifiers. The R_int classifier was trained on the time interval t = [150, 450] ms and on the data in Fig. 3. The performance drop of a type R_int classifier during the presentation of the third letter indicates that the neural code has changed since the presentation of the second letter. B: Weight vector of the type R_int classifier used in A. C: Weight vectors of the type R_t classifier shown in A, for t ∈ {200, 300, 400} ms.
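The jitter control of Fig. 6 can be sketched as follows: each spike time is perturbed with independent Gaussian noise of a given standard deviation before the convolution and classification steps are repeated. The spike train below is synthetic; only the perturbation itself follows the procedure described above.

```python
import numpy as np

def jitter_spikes(spike_times_ms, sd_ms, rng):
    """Add independent Gaussian jitter (SD in ms) to every spike time."""
    t = np.asarray(spike_times_ms, float)
    return t + rng.normal(0.0, sd_ms, t.shape)

rng = np.random.default_rng(2)
spikes = np.sort(rng.uniform(0, 700, 50))        # one synthetic spike train

# The root-mean-square displacement grows with the jitter SD; after jittering,
# the data would be re-convolved and the classifiers retrained to measure
# the resulting drop in performance.
rms = {sd: float(np.sqrt(np.mean((jitter_spikes(spikes, sd, rng) - spikes) ** 2)))
       for sd in (0, 5, 10, 20)}
```

Because the classifier input is the kernel-convolved trace, jitter comparable to the 20 ms kernel time constant washes out any information carried by precise spike timing while leaving the mean rate untouched.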
Our results indicate that the neuronal code is not contained only in rate responses: the precise spike-timing relations matter as well and carry additional, important information about the stimulus. Furthermore, almost all of the information extracted by state-of-the-art nonlinear classifiers can be extracted using simple linear classification mechanisms. This is in agreement with the results reported in [5] for IT cortex. Hence, similarly to our classifiers, cortical neurons should also be able to read out such information from distributed neuronal activity. These results have important implications for theories of brain function and for understanding the nature of the computations performed by natural neuronal circuits. In agreement with recent criticism [1, 2], the present results are not compatible with computational models that require a precise "frame by frame" processing of visual inputs or that focus on comparing each frame with an internally generated reconstruction or prediction: such models require a more precise temporal organisation of the information about subsequent frames of visual input. Instead, our results support the view recently put forward by theoretical studies [4, 8], in which computations are performed by complex dynamical systems while the results of these computations are read out by simple linear classifiers. These theoretical systems show memory and information-superposition properties similar to those reported here for the cerebral cortex.
References
[1] B. A. Olshausen and D. J. Field. What is the other 85% of V1 doing? In J. L. van Hemmen and T. J. Sejnowski, editors, 23 Problems in Systems Neuroscience, pages 182-211. Oxford Univ. Press, Oxford, UK, 2006.
[2] A. M. Sillito and H. E. Jones. Feedback systems in visual processing. In L. M. Chalupa and J. S. Werner, editors, The Visual Neurosciences, pages 609-624. MIT Press, 2004.
[3] W. Singer.
Neuronal synchrony: a versatile code for the definition of relations? Neuron, 24(1):49-65, 111-125, 1999.
[4] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002.
[5] C. P. Hung, G. Kreiman, T. Poggio, and J. J. DiCarlo. Fast readout of object identity from macaque inferior temporal cortex. Science, 310(5749):863-866, 2005.
[6] G. Schneider, M. N. Havenith, and D. Nikolić. Spatio-temporal structure in large neuronal networks detected from cross correlation. Neural Computation, 18(10):2387-2413, 2006.
[7] G. Schneider and D. Nikolić. Detection and assessment of near-zero delays in neuronal spiking activity. J Neurosci Methods, 152(1-2):97-106, 2006.
[8] H. Jaeger and H. Haas. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science, 304:78-80, 2004.
A Switched Gaussian Process for Estimating Disparity and Segmentation in Binocular Stereo
Oliver Williams
Microsoft Research Ltd., Cambridge, UK
omcw2@cam.ac.uk
Abstract
This paper describes a Gaussian process framework for inferring pixel-wise disparity and bi-layer segmentation of a scene given a stereo pair of images. The Gaussian process covariance is parameterized by a foreground-background-occlusion segmentation label to model both smooth regions and discontinuities. As such, we call our model a switched Gaussian process. We propose a greedy incremental algorithm for adding observations from the data and assigning segmentation labels. Two observation schedules are proposed: the first treats scanlines as independent; the second uses an active learning criterion to select a sparse subset of points to measure. We show that this probabilistic framework has performance comparable to the state of the art.
1 Introduction
Given two views of the same scene, this paper addresses the dual objectives of inferring depth and segmentation in scenes with perceptually distinct foreground and background layers. We do this in a probabilistic framework, using a Gaussian process prior to model the geometry of typical scenes of this type. Our approach has two properties of interest to practitioners: firstly, it can be employed incrementally, which is useful when the time allowed for processing is constrained or variable; secondly, it is probabilistic, enabling fusion with other sources of scene information. Segmentation and depth estimation are well-studied areas (e.g., [1] and [2, 3, 4]). However, the inspiration for the work in this paper is [5], in which both segmentation and depth are estimated in a unified framework based around graph cuts. In [5] the target application was video conferencing; however, such an algorithm is also applicable to areas such as robotics and augmented reality.
Gaussian process regression has previously been used in connection with stereo images in [6], to learn the non-linear mapping between matched left-right image points and scene points as an alternative to photogrammetric camera calibration [7]. In this paper we use a Gaussian process to help discover the initially unknown left-right matches in a complex scene; a camera calibration procedure might then be used to determine the actual 3D scene geometry. The paper is organized as follows: Sec. 2 describes our Gaussian process framework for inferring depth (disparity) and segmentation from stereo measurements. Sec. 3 proposes and demonstrates two observation schedules: the first operates along image scanlines independently; the second treats the whole image jointly and makes a sparse set of stereo observations at locations selected by an active learning criterion [8]. We also show how colour information may be fused with the predictions of the switched GP, the results of which are comparable to those of [5]. Sec. 4 concludes the paper.
Figure 1: Anatomy of a disparity map. This schematic shows some of the important features of short-baseline binocular stereo for a horizontal strip of pixels. Transitions between foreground and background at the right edge of a foreground object induce a discontinuity from high to low disparity. Background-foreground transitions at the left edge of the foreground induce an occlusion region in which scene points visible in the left image are not visible in the right.
We use the data from [5], which are available on their web site: http://research.microsoft.com/vision/cambridge/i2i
2 Single frame disparity estimation
This framework is intended for use with short-baseline stereo, in which the two images are taken slightly to the left and the right of a midpoint (see Fig. 1).
This means that most features visible in one image are visible in the other, albeit at a different location: for a given point x in the left image L(x), our aim is therefore to infer the location of the same scene point in the right image R(x′). We assume that both L and R have been rectified [7] such that all corresponding points have the same vertical coordinate; hence if x = [x y]ᵀ then x′ = [x − d(x), y]ᵀ, where d(x) is called the disparity map for points x in the left image. Because objects typically have smooth variations in depth, d(x) is generally smooth. However, there are two important exceptions to this and, because they occur at the boundaries between an object and the background, it is essential that they be modelled correctly (see also Fig. 1):
Discontinuity: Discontinuities occur where one pixel belongs to the foreground and its neighbour belongs to the background.
Occlusion: At background-foreground transitions (travelling horizontally from left to right), there will be a region of pixels in the left image that are not visible in the right, since they are occluded by the foreground [3]. Such locations correspond to scene points in the background layer, but their disparity is undefined.
The next subsection describes a prior for disparity that attempts to capture these characteristics by modelling the bi-layer segmentation.
2.1 A Gaussian process prior for disparity
We model the prior distribution of a disparity map as a Gaussian process (GP) [9]. GPs are defined by a mean function f(·) and a covariance function c(·, ·), which in turn define the joint distribution of disparities at a set of points {x1, ..., xn} as a multivariate Gaussian
P(d(x1), ..., d(xn) | f, c) = Normal(f, C),    (1)
where fi = f(xi) and Cij = c(xi, xj). In order to specify a mean and covariance function that give typical disparity maps a high probability, we introduce a latent segmentation variable s(x) ∈ {F, B, O} for each point in the left image.
This encodes whether a point belongs to the foreground (F), to the background (B) or is occluded (O), and makes it possible to model the fact that disparities in the background/foreground are smooth (spatially correlated) within their layers and independent across layers. For a given segmentation, the covariance function is
c(xi, xj; s) = D exp(−α‖xi − xj‖²)  if s(xi) = s(xj) ≠ O;  D δ(xi − xj)  if s(xi) = s(xj) = O;  0  if s(xi) ≠ s(xj),    (2)
where D is the maximum disparity in the scene and δ is the Dirac delta function. The covariance of two points is zero (i.e., the disparities are independent) unless they share the same segmentation label. Disparity is undefined within occlusion regions, so these points are treated as independent with high variance, to capture the noisy observations that occur there; pixels with other labels have disparities whose covariance falls off with distance, engendering smoothness in the disparity map. The parameter α controls the smoothness and is set to α = 0.01 for all of the experiments shown in this paper (the points x are measured in pixel units). It will be convenient in what follows to define the covariance for sets of points such that c(X, X′; s) = C(s) ∈ R^{n×n′}, where the element Cij is the covariance of the ith element of X and the jth element of X′. The prior mean is also defined according to the segmentation, to reflect the fact that the foreground is at greater disparity (nearer the camera) than the background:
f(x; s) = 0.2D if s(x) = B;  0.8D if s(x) = F;  0.5D if s(x) = O.    (3)
Because of the independence induced by the discrete labels s(x), we call this prior model a switched Gaussian process. Switching between Gaussian processes for different parts of the input space has been discussed previously in [10], where switching was demonstrated for a 1D regression problem.
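A minimal NumPy sketch of the switched covariance (2) and prior mean (3). The value of D and the integer label encoding are illustrative assumptions (the paper only fixes α = 0.01), and the Dirac delta term is approximated by an exact match of point coordinates.

```python
import numpy as np

F, B, O = 0, 1, 2          # foreground / background / occluded (encoding assumed)
D, alpha = 64.0, 0.01      # D: max disparity (assumed); alpha as in the paper

def switched_cov(X1, s1, X2, s2):
    """Covariance of Eq. (2): smooth within the F or B layer, white noise of
    variance D within O, and zero across different segmentation labels."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)   # squared distances
    same = s1[:, None] == s2[None, :]
    smooth = same & (s1[:, None] != O)                      # same non-occluded label
    delta = same & (s1[:, None] == O) & (d2 == 0.0)         # same occluded point
    C = np.where(smooth, D * np.exp(-alpha * d2), 0.0)
    return np.where(delta, D, C)

def prior_mean(s):
    """Prior mean of Eq. (3): background far, foreground near, occluded in between."""
    return np.select([s == B, s == F, s == O], [0.2 * D, 0.8 * D, 0.5 * D])

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.0]])
s = np.array([F, F, O])
C = switched_cov(X, s, X, s)
```

Note how the two foreground points are strongly correlated while the occluded point is independent of both, exactly the independence structure that gives the model its "switched" character.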
2.2 Stereo measurement process
A proposed disparity d(x) is compared to the data via the normalized sum of squared differences (NSSD) matching cost over a region Ω (here a 5 × 5 pixel patch centred at the origin), using the normalized intensities L̄(x) = L(x) − (1/25) Σ_{a∈Ω} L(x + a) (and likewise for R̄(x)):
m(x, d) = [ Σ_{a∈Ω} (L̄(x + a) − R̄(x + a − d))² ] / [ 2 Σ_{a∈Ω} (L̄²(x + a) + R̄²(x + a − d)) ].    (4)
This cost has been shown in practice to be effective for disparity estimation [11]. To incorporate this information with the GP prior, it must be expressed probabilistically. We follow the approach of [12], in which a parabola m(x, d) ≈ ad² + bd + c is fitted around the disparity with minimum score. Interpreting this as the negative logarithm of a Gaussian distribution gives
d(x) = µ(x) + ϵ  where  ϵ ∼ Normal(0, v(x)),    (5)
with µ(x) = −b/(2a) and v(x) = 1/(2a) being the observation mean and variance. Given a segmentation and a set of noisy measurements at locations X = {x1, ..., xn}, the GP can be used to predict the disparity at a new point, P(d(x)|X). This is a Gaussian distribution Normal(µ̃(x), ṽ(x)) with [9]
µ̃(x; s) = µᵀ C̃(s)⁻¹ c(X, x; s)  and  ṽ(x; s) = c(x, x; s) − c(X, x; s)ᵀ C̃(s)⁻¹ c(X, x; s),    (6)
where C̃(s) = c(X, X; s) + diag(v(x1), ..., v(xn)) and µ = [µ(x1), ..., µ(xn)]ᵀ.
2.3 Segmentation likelihood
The previous discussion has assumed that the segmentation is known, yet this will rarely be the case in practice: s must therefore be inferred from the data together with the disparity. For a given set of observations, the probability that they are a sample from the GP prior is given by
E(X) = log P(µ | s, v) = −(1/2)(µ − f(s))ᵀ C̃(s)⁻¹ (µ − f(s)) − (1/2) log det C̃(s) − (n/2) log 2π.    (7)
This is the evidence for the parameters of the prior model and constitutes a data likelihood for the segmentation. The next section describes an algorithm that uses this quantity to infer a segmentation whilst incorporating the observations.
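A sketch of the measurement step: the NSSD cost of Eq. (4) on a 5 × 5 patch, then a three-point parabola fit around the minimum to obtain the observation mean and variance of Eq. (5). The synthetic images, patch location and candidate disparities are all illustrative assumptions.

```python
import numpy as np

def nssd(L, R, x, y, d, half=2):
    """NSSD matching cost of Eq. (4) over a (2*half+1)^2 patch centred at (x, y)."""
    pL = L[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    pR = R[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(float)
    pL, pR = pL - pL.mean(), pR - pR.mean()                 # normalized intensities
    return ((pL - pR) ** 2).sum() / (2 * ((pL ** 2).sum() + (pR ** 2).sum()) + 1e-12)

def gaussian_obs(disparities, costs):
    """Parabola fit m ~ a d^2 + b d + c around the minimum cost, giving the
    observation mean mu = -b/(2a) and variance v = 1/(2a) of Eq. (5)."""
    i = int(np.clip(np.argmin(costs), 1, len(costs) - 2))
    a, b, _ = np.polyfit(disparities[i - 1:i + 2], costs[i - 1:i + 2], 2)
    return -b / (2 * a), 1.0 / (2 * a)

rng = np.random.default_rng(4)
L = rng.uniform(0.0, 1.0, (12, 16))
R = np.roll(L, -3, axis=1)            # synthetic right view: true disparity = 3 px
cands = np.arange(6)
costs = [nssd(L, R, 8, 6, d) for d in cands]
mu, v = gaussian_obs(cands, costs)    # mean near 3, with a finite variance
```

In a textureless patch the cost curve is nearly flat, the fitted curvature a is small, and v = 1/(2a) is large, which is how unreliable matches are down-weighted later.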
3 Incremental incorporation of measurements and model selection
We propose an incremental, greedy algorithm for finding a segmentation. Measurements are incorporated one at a time, and the evidence for adding the ith observation to each of the three segmentation layers is computed on the basis of the preceding i − 1 observations and their labels. The ith point is labelled according to whichever layer gave the greatest evidence. The first i − 1 observation points X_{i−1} = {x1, ..., x_{i−1}} are partitioned according to their labelling into the mutually independent sets XF, XB and XO. Since the three segmentation layers are independent, some of the cost of computing and storing the large matrix C̃⁻¹ is avoided by instead constructing F̃⁻¹ and B̃⁻¹, where F̃ = c(XF, XF) and B̃ = c(XB, XB). Observations assigned to the occlusion layer are independent of all other points and contain no useful information, so there is no need to keep a covariance matrix for them. As shown in [13], the GP framework easily facilitates incremental incorporation of observations by repeatedly updating the matrix inverse required in the prediction equations (6). For example, to add the ith example to the foreground (the process is identical for the background layer), compute
F̃_i⁻¹ = [ F̃_{i−1}, c(XF, xi); c(XF, xi)ᵀ, c(xi, xi) + v(xi) ]⁻¹ = [ F̃_{i−1}⁻¹ + qF qFᵀ/rF, qF; qFᵀ, rF ],    (8)
where
rF⁻¹ = c(xi, xi) + v(xi) − c(XF, xi)ᵀ F̃_{i−1}⁻¹ c(XF, xi)  and  qF = −rF F̃_{i−1}⁻¹ c(XF, xi).    (9)
Similarly, there is an incremental form for computing the evidence of a particular segmentation, E(X_i | s(xi) = j) = E(X_{i−1}) + ΔE_j(xi), where
ΔE_j(xi) = (1/2) [ log rj − (µ(Xj)ᵀ qj)²/rj − 2 µ(xi) qjᵀ µ(Xj) − rj µ(xi)² − log 2π ].    (10)
By computing ΔE_j for the three possible segmentations, a new point can be greedily labelled as the one giving the greatest increase in evidence. Algorithm 1 gives pseudo-code for the incremental incorporation of a measurement and its greedy labelling.
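The block-inverse update of Eqs. (8)-(9) can be checked numerically with a small NumPy sketch; the covariance matrix here is a random symmetric positive-definite stand-in for c(X_F, X_F) with the observation-noise diagonal already added.

```python
import numpy as np

def grow_inverse(F_inv, k_cross, k_new):
    """Eq. (8): given F_{i-1}^{-1}, the covariances k_cross = c(X_F, x_i) and the
    scalar k_new = c(x_i, x_i) + v(x_i), return F_i^{-1} without a full re-inversion."""
    r = 1.0 / (k_new - k_cross @ F_inv @ k_cross)   # Eq. (9)
    q = -r * (F_inv @ k_cross)                      # Eq. (9)
    top = np.hstack([F_inv + np.outer(q, q) / r, q[:, None]])
    bottom = np.hstack([q, [r]])
    return np.vstack([top, bottom])

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 6))
K = A @ A.T + 6.0 * np.eye(6)       # SPD stand-in for the full covariance
F_inv = np.linalg.inv(K[:5, :5])    # inverse over the first i-1 = 5 points
K_inv_inc = grow_inverse(F_inv, K[:5, 5], K[5, 5])   # matches inv(K)
```

The update costs O(n²) per point instead of the O(n³) of a fresh inverse, which is what makes the greedy one-point-at-a-time schedule practical.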
As with Gaussian process regression in general, this algorithm scales as O(n²) in storage and O(n³) in time, and it is therefore impractical to make an observation at every pixel for images of useful size. We propose two mechanisms to overcome this:
1. Factorize the image into several sub-images and treat each one independently. The next subsection demonstrates this with each scanline (row of pixels) handled independently.
2. Make measurements at only a sparse subset of locations. Subsection 3.2 describes an active learning approach for identifying optimally informative observation points.
Algorithm 1: Add and label measurement at xi
  input F̃⁻¹, B̃⁻¹, XF, XB and XO
  Compute matrix building blocks r_{j∈{F,B}} and q_{j∈{F,B}} from (9)
  Compute the change in evidence for adding the point to each layer, ΔE_{j∈{F,B,O}}, from (10)
  Label the point s(xi) = arg max_{j∈{F,B,O}} ΔE_j(xi)
  Add the point to its set: X_{s(xi)} ← X_{s(xi)} ∪ xi
  if s(xi) ∈ F ∪ B then update the matrix F̃⁻¹ or B̃⁻¹ as in (8)
  i = i + 1
  return F̃⁻¹, B̃⁻¹, XF, XB and XO
3.1 Independent scanline observation schedule
By handling the image pixels one row at a time, the problem becomes one-dimensional. Points are processed in order from right to left: for each point the disparity is measured as described in Sec. 2.2 and incorporated/labelled according to Algorithm 1. In this setting there are constraints on which labels may be neighbours along a scanline. Fig. 1 shows the segmentation for a typical image, from which it can be seen that, moving horizontally from right to left, the only "legal" transitions in segmentation are B → F, F → O and O → B. Algorithm 1 is therefore modified to consider legal segmentations only. Fig. 2 shows some results of this approach.
Figure 2: Scanline predictions. Disparity and segmentation maps inferred by treating each scanline independently. (a) 320 × 240 pixel left input images. (b) Mean predicted disparity map µ̃(x). (c) Inferred segmentation s(x) with F = white, B = grey (orange) and O = black.
Both the disparities and the segmentation are, subjectively, accurate; however, there are a number of "streaky" artifacts caused by the fact that no information is shared vertically. There are also a number of artifacts where an incorrect segmentation label has been assigned; in many cases this is where a point in the foreground or background has been labelled as occluded because there is no texture in that part of the image, and measurements made for such points have a high variance. The occlusion class could therefore be more accurately described as a general outlier category.
3.2 Active selection of sparse measurement locations
As shown above, our GP model scales badly with the number of observations. The previous subsection used measurements at all locations by treating each scanline as independent; a shortcoming of this approach is that no information is propagated vertically, which introduces streaky artifacts and reduces the model's ability to reason about occlusions and discontinuities. Rather than introduce artificial independencies, the observation schedule in this section copes with the O(n³) scaling by making measurements at only a sparse set of locations. Obvious ways of implementing this include choosing n locations either at random or in a grid pattern; however, these fail to exploit information that can be readily obtained from both the image data and the current predictions of the model. Hence, we propose an active approach, similar to that in [14]: given the first i − 1 observations, observe the point which maximally reduces the entropy of the GP [8]:
ΔH(x) = H(P(d | X_{i−1})) − H(P(d | X_{i−1} ∪ x)) = (1/2) log det Σ − (1/2) log det Σ′ + const.,    (11)
where Σ and Σ′ are the posterior covariances of the GP over all points in the image before and after making an observation at x.
Computing the entire posterior for each observation would be prohibitively expensive; instead we approximate it by the product of the marginal distributions at each point (i.e., we ignore the off-diagonal elements of Σ), which gives ΔH(x) ≈ ½(log ṽ(x) − log v(x)), where ṽ(x) is the predicted variance from (6) and v(x) is the measurement variance. Since the logarithm is monotonic, an equivalent utility function is used:

U(x|X_{i−1}) = ṽ(x) / v(x).   (12)

Here the numerator drives the system to make observations at points with the greatest predictive uncertainty. However, this is balanced by the denominator to avoid making observations at points where there is no information to be obtained from the data (e.g., the textureless regions in Fig. 2). To initialize the active algorithm, 64 initial observations are made in an evenly spaced grid over the image. Following this, points are selected using the utility function (12) and incorporated into the GP model using Algorithm 1.

Figure 3: Predictions after the sparse active observation schedule. This figure shows the predictions made by the GP model with observations at 1000 image locations for the images used in Fig. 2. (a) Mean predicted disparity μ̃(x); (b) predictive uncertainty ṽ(x); (c) inferred segmentation.

Predicting disparity in the scanline factorization was straightforward because a segmentation label had been assigned to every pixel. With sparse measurements, only the observation points have been labelled, and to predict disparity at an arbitrary location a segmentation label must also be inferred. Our simple strategy for this is to label a point according to whichever label gives the least predictive variance, i.e.:

s(x) = argmin_{j ∈ {F,B,O}} ṽ(x; s(x) = j).   (13)

Fig. 3 shows the results of using this active observation schedule with n = 1000 for the images of Fig. 2. As expected, by restoring 2D spatial coherence the results are smoother and have none of the streaky artifacts induced by the scanline factorization.
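The utility of Eq. (12) above is cheap to evaluate once predictive and measurement variances are available. A minimal sketch of one selection step, with made-up variance arrays standing in for the GP's ṽ(x) and v(x):

```python
import numpy as np

def next_observation(pred_var, meas_var, observed):
    """Pick the next measurement location by the utility of Eq. (12):
    U(x) = predictive variance / measurement variance. Already-observed
    points are excluded. pred_var and meas_var are 1-D arrays over all
    candidate locations (stand-ins for the GP quantities)."""
    utility = pred_var / meas_var
    utility[list(observed)] = -np.inf   # never re-observe a point
    return int(np.argmax(utility))

# Toy run over 6 candidate locations. Point 2 has high predictive
# uncertainty and informative data (low measurement variance), so it wins;
# point 4 is "textureless" (huge measurement variance) and is avoided.
pred_var = np.array([0.5, 0.8, 2.0, 0.3, 1.0, 0.9])
meas_var = np.array([1.0, 1.0, 0.5, 1.0, 4.0, 1.0])
print(next_observation(pred_var, meas_var, observed={0}))  # 2
```

In the full algorithm this step alternates with Algorithm 1, which incorporates and labels each new measurement.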
Despite observing only 1.3% of the points used by the scanline factorization, the active algorithm has still managed to capture the important features in the scenes. Fig. 4a shows the locations of the n observation points: the observations are clustered around the boundary of the foreground object in an attempt to minimize the uncertainty at discontinuities/occlusions; the algorithm is dedicating its computational resources to the parts of the image which are most interesting, important and informative. Fig. 4b demonstrates further the benefits of selecting the points in an active framework compared to taking points at random.

Figure 4: Advantage of active point selection. (a) The inferred segmentation from Fig. 3 with spots (blue) corresponding to observation locations selected by the active criterion. (b) A plot comparing the accuracy of the segmentation (percentage of mislabelled points) against the number of sparse observations n when the observation locations are chosen at random and using our active schedule. Accuracy is measured as the percentage of mislabelled pixels compared to a hand-labelled ground-truth segmentation. The active strategy achieves better accuracy with fewer observations.

Figure 5: Improved segmentation by fusion with colour. (a) Pixel-wise energy term V(x) combining segmentation predictions from both the switched GP posterior and a colour model; (b) segmentation returned by the Viterbi algorithm, containing 0.5% labelling errors by area; (c) inferred foreground image pixels.

3.3 Adding colour information

The best segmentation accuracies using stereo information alone are around 1% labelling errors (with n ≥ 1000). In [5], superior segmentation results are achieved by incorporating colour information.
We do the same here by computing a foreground "energy" V(x) at each location, based on the variances predicted by the foreground/background layers and a known colour distribution P(F|L_c(x)), where L_c(x) is the RGB colour of the left image at x:

V(x) = log ṽ(x; s(x) = B) − log ṽ(x; s(x) = F) − log P(F|L_c(x)).   (14)

We represent the colour distribution using a 10 × 10 × 10 bin histogram in red-green-blue colour space. Fig. 5a shows this energy for the first image in Fig. 2. As in [5], we treat each scanline as a binary HMM and use the Viterbi algorithm to find a segmentation. A result of this is shown in Fig. 5c, which contains 0.58% erroneous labels. This is comparable to the errors in [5], which are around 0.25% for this image. We suspect that our result can be improved with a more sophisticated colour model.

4 Discussion

We have proposed a Gaussian process model for disparity, switched by a latent segmentation variable. We call this a switched Gaussian process and have proposed an incremental greedy algorithm for fitting this model to data and inferring a segmentation. We have demonstrated that by using a sparse model with points selected according to an active learning criterion, an accuracy can be achieved that is comparable to the state of the art [5]. We believe there are four key strengths to this probabilistic framework:

Flexibility: The incremental nature of the algorithm makes it possible to set the number of observations n according to time or quality constraints.

Extensibility: The method is probabilistic, so fusion with other sources of information is possible (e.g., a laser range scanner on a robot).

Efficiency: For small n, this approach is very fast (about 30 ms per pair of images for n = 200 on a 3 GHz PC). However, higher-quality results require n > 1000 observations, which reduces the execution speed to a few seconds per image.

Accuracy: We have shown that (for large n) this technique achieves an accuracy comparable to the state of the art.
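Returning briefly to the colour-fusion step of Sec. 3.3: the per-scanline segmentation amounts to running Viterbi on a two-label chain whose unary terms come from the energy V(x). A sketch with an illustrative cost parameterization (the switching cost and sign conventions are ours, not the paper's):

```python
import numpy as np

def viterbi_binary(V, switch_cost=2.0):
    """Minimum-cost binary labelling of one scanline (1 = foreground,
    0 = background). V[i] plays the role of the foreground energy of
    Eq. (14): large positive values favour foreground. Each label
    change along the scanline pays switch_cost."""
    V = np.asarray(V, dtype=float)
    n = len(V)
    unary = np.stack([np.zeros(n), -V])   # cost of label 0, label 1
    prev = unary[:, 0].copy()
    back = np.zeros((2, n), dtype=int)
    for i in range(1, n):
        cur = np.empty(2)
        for lab in (0, 1):
            stay, switch = prev[lab], prev[1 - lab] + switch_cost
            back[lab, i] = lab if stay <= switch else 1 - lab
            cur[lab] = unary[lab, i] + min(stay, switch)
        prev = cur
    labels = [int(np.argmin(prev))]       # backtrack from the best end label
    for i in range(n - 1, 0, -1):
        labels.append(int(back[labels[-1], i]))
    return labels[::-1]

# A weak background dip inside a strong foreground run is smoothed over,
# because two label switches would cost more than absorbing the dip:
print(viterbi_binary([5, 5, -1, 5, 5]))  # [1, 1, 1, 1, 1]
```

The switching cost plays the role of the HMM transition penalty; the paper's exact parameterization is not reproduced here.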
Future work will investigate the use of approximate techniques to overcome the O(n³) scaling problem [15]. The framework described in this paper can operate in real time for low n, but any technique that combats the scaling will allow higher accuracy for the same execution time. Also, improving the approximation to the likelihood in (5), e.g., by expectation propagation [16], may increase accuracy.

References

[1] D. Comaniciu and P. Meer. Robust analysis of feature spaces: color image segmentation. In Proc. Conf. Computer Vision and Pattern Recognition, pages 750–755, 1997.
[2] Y. Ohta and T. Kanade. Stereo by intra- and inter-scanline search using dynamic programming. IEEE Trans. on Pattern Analysis and Machine Intelligence, 7(2):139–154, 1985.
[3] D. Geiger, B. Ladendorf, and A. Yuille. Occlusions and binocular stereo. Int. J. Computer Vision, 14:211–226, 1995.
[4] V. Kolmogorov and R. Zabih. Computing visual correspondence with occlusions using graph cuts. In Proc. Int. Conf. Computer Vision, 2001.
[5] V. Kolmogorov, A. Criminisi, A. Blake, G. Cross, and C. Rother. Bi-layer segmentation of binocular stereo video. In Proc. Conf. Computer Vision and Pattern Recognition, 2005.
[6] F. Sinz, J. Quiñonero-Candela, G.H. Bakir, C.E. Rasmussen, and M.O. Franz. Learning depth from stereo. In Pattern Recognition, Proc. 26th DAGM Symposium, pages 245–252, 2004.
[7] R. Hartley and A. Zisserman. Multiple View Geometry. Cambridge University Press, 2000.
[8] D.J.C. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):589–603, 1992.
[9] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[10] A. Storkey. Gaussian processes for switching regimes. In Proc. ICANN, 1998.
[11] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Computer Vision, 47(1):7–42, 2002.
[12] L. Matthies, R. Szeliski, and T. Kanade.
Incremental estimation of dense depth maps from image sequences. In Proc. Conf. Computer Vision and Pattern Recognition, 1988.
[13] M. Gibbs and D.J.C. MacKay. Efficient implementation of Gaussian processes. Technical report, University of Cambridge, 1997.
[14] M. Seeger, C.K.I. Williams, and N. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Proc. AI-STATS, 2003.
[15] J. Quiñonero-Candela and C.E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. J. Machine Learning Research, 6:1939–1959, 2005.
[16] T.P. Minka. Expectation propagation for approximate Bayesian inference. In Proc. UAI, pages 362–369, 2001.
2006
Automated Hierarchy Discovery for Planning in Partially Observable Environments

Laurent Charlin & Pascal Poupart
David R. Cheriton School of Computer Science, Faculty of Mathematics, University of Waterloo, Waterloo, Ontario
{lcharlin,ppoupart}@cs.uwaterloo.ca

Romy Shioda
Dept of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo, Waterloo, Ontario
rshioda@math.uwaterloo.ca

Abstract

Planning in partially observable domains is a notoriously difficult problem. However, in many real-world scenarios, planning can be simplified by decomposing the task into a hierarchy of smaller planning problems. Several approaches have been proposed to optimize a policy that decomposes according to a hierarchy specified a priori. In this paper, we investigate the problem of automatically discovering the hierarchy. More precisely, we frame the optimization of a hierarchical policy as a non-convex optimization problem that can be solved with general non-linear solvers, a mixed-integer non-linear approximation or a form of bounded hierarchical policy iteration. By encoding the hierarchical structure as variables of the optimization problem, we can automatically discover a hierarchy. Our method is flexible enough to allow any parts of the hierarchy to be specified based on prior knowledge while letting the optimization discover the unknown parts. It can also discover hierarchical policies, including recursive policies, that are more compact (potentially infinitely fewer parameters) and often easier to understand given the decomposition induced by the hierarchy.

1 Introduction

Planning in partially observable domains is a notoriously difficult problem. However, in many real-world scenarios, planning can be simplified by decomposing the task into a hierarchy of smaller planning problems. Such decompositions can be exploited in planning to temporally abstract subpolicies into macro actions (a.k.a. options). Pineau et al. [17], Theocharous et al.
[22], and Hansen and Zhou [10] proposed various algorithms that speed up planning in partially observable domains by exploiting the decompositions induced by a hierarchy. However, these approaches assume that a policy hierarchy is specified by the user, so an important question arises: how can we automate the discovery of a policy hierarchy? In fully observable domains, there exists a large body of work on hierarchical Markov decision processes and reinforcement learning [6, 21, 7, 15], and several hierarchy discovery techniques have been proposed [23, 13, 11, 20]. However, those techniques rely on the assumption that states are fully observable to detect abstractions and subgoals, which prevents their use in partially observable domains. We propose to frame hierarchy and policy discovery as an optimization problem with variables corresponding to the hierarchy and policy parameters. We present an approach that searches in the space of hierarchical controllers [10] for a good hierarchical policy. The search leads to a difficult non-convex optimization problem that we tackle using three approaches: generic non-linear solvers, a mixed-integer non-linear programming approximation, or an alternating optimization technique that can be thought of as a form of hierarchical bounded policy iteration. We also generalize Hansen and Zhou's hierarchical controllers [10] to allow recursive controllers. These are controllers that may recursively call themselves, with the ability of representing policies with a finite number of parameters that would otherwise require infinitely many parameters. Recursive policies are likely to arise in language processing tasks such as dialogue management and text generation due to the recursive nature of language models.

2 Finite State Controllers

We first review partially observable Markov decision processes (POMDPs) (Sect. 2.1), the framework used throughout the paper for planning in partially observable domains.
Then we review how to represent POMDP policies as finite state controllers (Sect. 2.2) as well as some algorithms to optimize controllers of a fixed size (Sect. 2.3).

2.1 POMDPs

POMDPs have emerged as a popular framework for planning in partially observable domains [12]. A POMDP is formally defined by a tuple (S, O, A, T, Z, R, γ) where S is the set of states, O is the set of observations, A is the set of actions, T(s′, s, a) = Pr(s′|s, a) is the transition function, Z(o, s′, a) = Pr(o|s′, a) is the observation function, R(s, a) = r is the reward function and γ ∈ [0, 1) is the discount factor. It will be useful to view γ as a termination probability. This allows us to absorb γ into the transition probabilities by defining discounted transition probabilities: Pr_γ(s′|s, a) = γ Pr(s′|s, a). Given a POMDP, the goal is to find a course of action that maximizes expected total rewards. To select actions, the system can only use the information available in the past actions and observations. Thus we define a policy π as a mapping from histories of past actions and observations to actions. Since histories may become arbitrarily long, we can alternatively define policies as mappings from beliefs to actions (i.e., π(b) = a). A belief b(s) = Pr(s) is a probability distribution over states, taking into account the information provided by past actions and observations. Given a belief b, after executing a and receiving o, we can compute an updated belief b^{a,o} using Bayes' theorem: b^{a,o}(s′) = k Σ_s b(s) Pr(s′|s, a) Pr(o|s′, a), where k is a normalization constant. The value V^π of policy π when starting with belief b is measured by the expected sum of the future rewards: V^π(b) = Σ_t R(b_t, π(b_t)), where R(b, a) = Σ_s b(s) R(s, a). An optimal policy π* is a policy with the highest value V* for all beliefs (i.e., V*(b) ≥ V^π(b) for all b and π).
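The Bayes belief update above is a one-line matrix computation. A sketch with illustrative two-state dynamics (the array indexing convention T[a][s, s′], Z[a][s′, o] is ours; the paper fixes none):

```python
import numpy as np

def belief_update(b, T, Z, a, o):
    """POMDP belief update: b'(s') = k * sum_s b(s) Pr(s'|s,a) Pr(o|s',a).
    T[a] is the |S|x|S| transition matrix, Z[a] the |S|x|O| observation
    matrix; k normalizes the result over s'."""
    unnorm = (b @ T[a]) * Z[a][:, o]
    return unnorm / unnorm.sum()

# Two-state toy: action 0 mixes the states a little; observation 0 is
# much more likely in state 0. All numbers are illustrative.
T = {0: np.array([[0.9, 0.1], [0.2, 0.8]])}
Z = {0: np.array([[0.8, 0.2], [0.3, 0.7]])}
b = belief_update(np.array([0.5, 0.5]), T, Z, a=0, o=0)
print(b)  # mass shifts toward state 0 after seeing observation 0
```

Repeated application of this update is exactly how the history of actions and observations is compressed into a belief.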
The optimal value function also satisfies Bellman's equation: V*(b) = max_a [R(b, a) + γ Σ_o Pr(o|b, a) V*(b^{a,o})], where Pr(o|b, a) = Σ_{s,s′} b(s) Pr(s′|s, a) Pr(o|s′, a).

2.2 Policy Representation

A convenient representation for an important class of policies is that of finite state controllers [9]. A finite state controller consists of a finite state automaton (N, E) with a set N of nodes and a set E of directed edges. Each node n has one outgoing edge per observation. A controller encodes a policy π = (α, β) by mapping each node to an action (i.e., α(n) = a) and each edge (referred to by its observation label o and its parent node n) to a successor node (i.e., β(n, o) = n′). At runtime, the policy encoded by a controller is executed by performing the action a_t = α(n_t) associated with the node n_t traversed at time step t and following the edge labelled with observation o_t to reach the next node n_{t+1} = β(n_t, o_t). Stochastic controllers [18] can also be used to represent stochastic policies by redefining α and β as distributions over actions and successor nodes. More precisely, let Pr_α(a|n) be the distribution from which an action a is sampled in node n and let Pr_β(n′|n, a, o) be the distribution from which the successor node n′ is sampled after executing a and receiving o in node n. The value of a controller is computed by solving the following system of linear equations:

V^π_n(s) = Σ_a Pr_α(a|n) [R(s, a) + Σ_{s′,o,n′} Pr_γ(s′|s, a) Pr(o|s′, a) Pr_β(n′|n, a, o) V^π_{n′}(s′)]   ∀n, s   (1)

While there always exists an optimal policy representable by a deterministic controller, this controller may have a very large (possibly infinite) number of nodes. Given time and memory constraints, it is common practice to search for the best controller with a bounded number of nodes [18]. However, when the number of nodes is fixed, the best controller is not necessarily deterministic. This explains why searching in the space of stochastic controllers may be advantageous.
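Eq. (1) is a linear system in the |N|·|S| unknowns V^π_n(s), so evaluating a stochastic controller is a single call to a linear solver. A sketch (the array indexing conventions are ours):

```python
import numpy as np

def controller_value(Pa, Pb, T, Z, R, gamma):
    """Solve the linear system of Eq. (1) for a stochastic controller.
    Pa[n, a] = Pr_alpha(a|n); Pb[n, a, o, n2] = Pr_beta(n2|n, a, o);
    T[a, s, s2] and Z[a, s2, o] are the POMDP dynamics; R[s, a] rewards.
    Returns V with V[n, s]."""
    N, A = Pa.shape
    S, O = R.shape[0], Z.shape[2]
    dim = N * S                      # one unknown per (node, state) pair
    M = np.zeros((dim, dim))         # transition part of the system
    r = np.zeros(dim)                # expected immediate reward
    for n in range(N):
        for s in range(S):
            i = n * S + s
            for a in range(A):
                r[i] += Pa[n, a] * R[s, a]
                for s2 in range(S):
                    for o in range(O):
                        p = Pa[n, a] * gamma * T[a, s, s2] * Z[a, s2, o]
                        for n2 in range(N):
                            M[i, n2 * S + s2] += p * Pb[n, a, o, n2]
    return np.linalg.solve(np.eye(dim) - M, r).reshape(N, S)

# Sanity check with one node/state/action/observation: reward 1 per step
# and discount 0.9 give value 1/(1-0.9) = 10.
Pa = np.array([[1.0]]); Pb = np.ones((1, 1, 1, 1))
T = np.ones((1, 1, 1)); Z = np.ones((1, 1, 1)); R = np.array([[1.0]])
print(controller_value(Pa, Pb, T, Z, R, 0.9))  # [[10.]]
```

The quadruple loop is written for clarity; in practice the system matrix would be assembled with tensor operations.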
Table 1: Quadratically constrained optimization program for bounded stochastic controllers [1].

max_{x,y}  Σ_s b_0(s) V_{n_0}(s)
s.t.  V_n(s) = Σ_{a,n′} [Pr(n′, a|n, o_k) R(s, a) + Σ_{s′,o} Pr_γ(s′|s, a) Pr(o|s′, a) Pr(n′, a|n, o) V_{n′}(s′)]   ∀n, s
      Pr(n′, a|n, o) ≥ 0   ∀n′, a, n, o
      Σ_{n′,a} Pr(n′, a|n, o) = 1   ∀n, o
      Σ_{n′} Pr(n′, a|n, o) = Σ_{n′} Pr(n′, a|n, o_k)   ∀a, n, o

(The y variables are the values V_n(s) and the x variables are the joint distributions Pr(n′, a|n, o).)

2.3 Optimization of Stochastic Controllers

The optimization of a stochastic controller with a fixed number of nodes can be formulated as a quadratically constrained optimization problem (QCOP) [1]. The idea is to maximize V^π by varying the controller parameters Pr_α and Pr_β. Table 1 describes the optimization problem with the values V_n(s) and the joint distributions Pr(n′, a|n, o) = Pr_α(a|n) Pr_β(n′|n, a, o) as variables. The first set of constraints corresponds to Eq. 1, while the remaining constraints ensure that Pr(n′, a|n, o) is a proper distribution and that Σ_{n′} Pr(n′, a|n, o) = Pr_α(a|n) for all o. This optimization program is non-convex due to the first set of constraints. Hence, existing techniques can at best guarantee convergence to a local optimum. Several techniques have been tried, including gradient ascent [14], stochastic local search [3], bounded policy iteration (BPI) [18] and a general non-linear solver called SNOPT (based on sequential quadratic programming) [1, 8]. Empirically, biased-BPI (a version of BPI that biases its search to the belief region reachable from a given initial belief state) and SNOPT have been shown to outperform the other approaches on some benchmark problems [19, 1]. We quickly review BPI since it will be extended in Section 3.2 to optimize hierarchical controllers. BPI alternates between policy evaluation and policy improvement. Given a policy with fixed parameters Pr(a, n′|n, o), policy evaluation solves the linear system in Eq. 1 to find V_n(s) for all n, s.
Policy improvement can be viewed as a linear simplification of the program in Table 1, achieved by fixing V_{n′}(s′) on the right-hand side of the first set of constraints. Policy improvement then optimizes the controller parameters Pr(n′, a|n, o) and the value V_n(s) on the left-hand side.¹

3 Hierarchical controllers

Hansen and Zhou [10] recently proposed hierarchical finite-state controllers as a simple and intuitive way of encoding hierarchical policies. A hierarchical controller consists of a set of nodes and edges as in a flat controller; however, some nodes may be abstract, corresponding to sub-controllers themselves. As with flat controllers, concrete nodes are parameterized with an action mapping α, and edges outgoing from concrete nodes are parameterized by a successor node mapping β. In contrast, abstract nodes are parameterized by a child node mapping indicating in which child node the subcontroller should start. Hansen and Zhou consider two schemes for the edges outgoing from abstract nodes: either there is a single outgoing edge labelled with a null observation, or there is one edge per terminal node of the subcontroller labelled with an abstract observation identifying the node in which the subcontroller terminated. Subcontrollers encode full POMDP policies with the addition of a termination condition. In fully observable domains, it is customary to stop the subcontroller once a goal state (from a predefined set of terminal states) is reached. This strategy cannot work in partially observable domains, so Hansen and Zhou propose to terminate a subcontroller when an end node (from a predefined set of terminal nodes) is reached. Since the decision to reach a terminal node is made according to the successor node mapping β, the timing for returning control is implicitly optimized.
¹ Note, however, that this optimization may decrease the value of some nodes, so [18] add an additional constraint to ensure monotonic improvement by forcing V_n(s) on the left-hand side to be at least as high as V_n(s) on the right-hand side.

Hansen and Zhou propose to use |A| terminal nodes, each mapped to a different action. Terminal nodes do not have any outgoing edges nor any action mapping since they already have an action assigned. The hierarchy of the controller is assumed to be finite and specified by the programmer. Subcontrollers are optimized in isolation in a bottom-up fashion. Subcontrollers at the bottom level are made up only of concrete nodes and can therefore be optimized as usual using any controller optimization technique. Controllers at other levels may contain abstract nodes for which we have to define the reward function and the transition probabilities. Recall that abstract nodes are not mapped to concrete actions, but rather to children nodes. Hence, the immediate reward of an abstract node n̄ corresponds to the value V_{α(n̄)}(s) of its child node α(n̄). Similarly, the probability of reaching state s′ after executing the subcontroller of an abstract node n̄ corresponds to the probability Pr(s_end|s, α(n̄)) of terminating the subcontroller in s_end when starting in s at child node α(n̄). This transition probability can be computed by solving the following linear system:

Pr(s_end|s, n) = 1   if n is a terminal node and s = s_end
Pr(s_end|s, n) = 0   if n is a terminal node and s ≠ s_end
Pr(s_end|s, n) = Σ_{o,s′} Pr(s′|s, α(n)) Pr(o|s′, α(n)) Pr(s_end|s′, β(n, o))   otherwise   (2)

Subcontrollers with abstract actions correspond to partially observable semi-Markov decision processes (POSMDPs) since the duration of each abstract action may vary. The duration of an action is important to determine the amount by which future rewards should be discounted. Hansen and Zhou propose to use the mean duration to determine the amount of discounting; however, this approach does not work.
In particular, abstract actions with non-zero probability of never terminating have an infinite mean duration. Instead, we propose to absorb the discount factor into the transition distribution (i.e., Pr_γ(s′|s, a) = γ Pr(s′|s, a)). This avoids all issues related to discounting and allows us to solve POSMDPs with the same algorithms as POMDPs. Hence, given the abstract reward function R(s, α(n̄)) = V_{α(n̄)}(s) and the abstract transition function Pr_γ(s′|s, α(n̄)) obtained by solving the linear system in Eq. 2, we have a POSMDP which can be optimized using any POMDP optimization technique (as long as the discount factor is absorbed into the transition function). Hansen and Zhou's hierarchical controllers have two limitations: the hierarchy must have a finite number of levels and it must be specified by hand. In the next section we describe recursive controllers, which may have infinitely many levels. We also describe an algorithm to discover a suitable hierarchy by simultaneously optimizing the controller parameters and the hierarchy.

3.1 Recursive Controllers

In some domains, policies are naturally recursive in the sense that they decompose into subpolicies that may call themselves. This is often the case in language processing tasks since language models such as probabilistic context-free grammars are composed of recursive rules. Recent work in dialogue management uses POMDPs to make high-level discourse decisions [24]. Assuming POMDP dialogue management eventually handles decisions at the sentence level, recursive policies will naturally arise. Similarly, language generation with POMDPs would naturally lead to recursive policies that reflect the recursive nature of language models. We now propose several modifications to Hansen and Zhou's hierarchical controllers that simplify things while allowing recursive controllers.
First, the subcontrollers of abstract nodes may be composed of any node (including the parent node itself) and transitions can be made to any node anywhere (whether concrete or abstract). This allows recursive controllers and smaller controllers, since nodes may be shared across levels. Second, we use a single terminal node that has no action nor any outer edge. It is a virtual node simply used to signal the termination of a subcontroller. Third, while abstract nodes lead to the execution of a subcontroller, they are also associated with an action. This action is executed upon termination of the subcontroller. Hence, the actions that were associated with the terminal nodes in Hansen and Zhou's proposal are associated with the abstract nodes in our proposal. This allows a uniform parameterization of actions for all nodes while reducing the number of terminal nodes to 1. Fourth, the outer edges of abstract nodes are labelled with regular observations since an observation will be made following the execution of the action of an abstract node. Finally, to circumvent all issues related to discounting, we absorb the discount factor into the transition probabilities (i.e., Pr_γ(s′|s, a)).

Figure 1: The figures represent controllers and transitions as written in Equations 5 and 6b. Alongside the directed edges we have indicated the equivalent parts of the equations to which they correspond.

3.2 Hierarchy and Policy Optimization

We formulate the search for a good stochastic recursive controller, including the automated hierarchy discovery, as an optimization problem (see Table 2). The global maximum of this optimization problem corresponds to the optimal policy (and hierarchy) for a fixed set N of concrete nodes n and a fixed set N̄ of abstract nodes n̄.
The variables consist of the value function V_n(s), the policy parameters Pr(n′, a|n, o), the (stochastic) child node mapping Pr(n′|n̄) for each abstract node n̄, and the occupancy frequency oc(n, s|n_0, s_0) of each (n, s)-pair when starting in (n_0, s_0). The objective (Eq. 3) is the expected value Σ_s b_0(s) V_{n_0}(s) of starting the controller in node n_0 with initial belief b_0. The constraints in Equations 4 and 5 respectively define the expected value of concrete and abstract nodes. The expected value of an abstract node corresponds to the sum of three terms: the expected value V_{n_beg}(s) of its subcontroller given by its child node n_beg, the reward R(s_end, a_n̄) immediately after the termination of the subcontroller, and the future rewards V_n(s′). Figure 1a illustrates graphically the relationship between the variables in Equation 5. Circles are state-node pairs labelled by their expected value. Edges indicate single transitions (solid line), sequences of transitions (dashed line) or the beginning/termination of a subcontroller (bold/dotted line). Edges are labelled with the corresponding transition probability variables. Note that the reward R(s_end, a_n̄) depends on the state s_end in which the subcontroller terminates. Hence we need to compute the probability that the last state visited in the subcontroller is s_end. This probability is given by the occupancy frequency oc(s_end, n_end|s, n_beg), which is recursively defined in Eq. 6 in terms of a preceding state-node pair which may be concrete (6a) or abstract (6b). Figure 1b illustrates graphically the relationship between the variables in Eq. 6b. Eq. 7 prevents infinite loops (without any action execution) in the child node mappings. The label function refers to the labelling of all abstract nodes, which induces an ordering on the abstract nodes. Only the nodes labelled with numbers larger than the label of an abstract node can be children of that abstract node.
This constraint ensures that chains of child node mappings have a finite length, eventually reaching a concrete node where an action is executed. Constraints like the ones in Table 1 are also needed to guarantee that the policy parameters and the child node mappings are proper distributions.

3.3 Algorithms

Since the problem in Table 2 has non-convex (quartic) constraints in Eq. 5 and 6, it is difficult to solve. We consider three approaches inspired by the techniques for non-hierarchical controllers:

Non-convex optimization: Use a general non-linear solver, such as SNOPT, to directly tackle the optimization problem in Table 2. This is the most convenient approach; however, a globally optimal solution may not be found due to the non-convex nature of the problem.

Mixed-Integer Non-Linear Programming (MINLP): We restrict Pr(n′, a|n, o) and Pr(n_beg|n̄) to be binary (i.e., in {0, 1}). Since the optimal controller is often nearly deterministic in practice, this restriction tends to have a negligible effect on the value of the optimal controller. The problem is still non-convex but can be tackled with a mixed-integer non-linear solver such as MINLP BB.²

Bounded Hierarchical Policy Iteration (BHPI): We alternate between (i) solving a simplified version of the optimization where some variables are fixed and (ii) updating the values of the fixed variables. More precisely, we fix V_{n′}(s′) in Eq. 5 and oc(s, n̄|s_0, n_0) in Eq. 6. As a result, Eq. 5 and 6 are now cubic, involving products of variables that include a single continuous variable. This permits the use of disjunctive programming [2] to linearize the constraints without any approximation. The idea is to replace any product BX (where B is binary and X is continuous) by a new continuous variable Y constrained by lb_X B ≤ Y ≤ ub_X B and X + (B − 1) ub_X ≤ Y ≤ X + (B − 1) lb_X, where lb_X and ub_X are lower and upper bounds on X. One can verify that these additional linear constraints force Y to be equal to BX. After applying disjunctive programming, we solve the resulting mixed-integer linear program (MILP) and update V_{n′}(s′) and oc(s, n̄|s_0, n_0) based on the new values for V_n(s) and oc(s′, n′|s_0, n_0). We repeat the process until convergence or until a pre-defined time limit is reached. Although convergence cannot be guaranteed, in practice we have found BHPI to be monotonically increasing. Note that fixing V_{n′}(s′) and oc(s, n̄|s_0, n_0) while varying the policy parameters is reminiscent of policy iteration, hence the name bounded hierarchical policy iteration.

² http://www-unix.mcs.anl.gov/~leyffer/solvers.html

Table 2: Non-convex quartically constrained optimization problem for hierarchy and policy discovery in bounded stochastic recursive controllers.

max_{w,x,y,z}  Σ_{s∈S} b_0(s) V_{n_0}(s)   (3)

s.t.  V_n(s) = Σ_{a,n′} [Pr(n′, a|n, o_k) R(s, a) + Σ_{s′,o} Pr_γ(s′|s, a) Pr(o|s′, a) Pr(n′, a|n, o) V_{n′}(s′)]   ∀s, n   (4)

V_{n̄}(s) = Σ_{n_beg} Pr(n_beg|n̄) [V_{n_beg}(s) + Σ_{s_end,a,n′} oc(s_end, n_end|s, n_beg) [Pr(n′, a|n̄, o_k) R(s_end, a) + Σ_{s′,o} Pr_γ(s′|s_end, a) Pr(o|s′, a) Pr(n′, a|n̄, o) V_{n′}(s′)]]   ∀s, n̄   (5)

oc(s′, n′|s_0, n_0) = δ(s′, n′, s_0, n_0) + Σ_{s,o,a} [
    Σ_n oc(s, n|s_0, n_0) Pr_γ(s′|s, a) Pr(o|s′, a) Pr(n′, a|n, o)   (6a: n concrete)
    + Σ_{s_end,n_beg,n̄} oc(s, n̄|s_0, n_0) Pr_γ(s′|s_end, a) Pr(o|s′, a) oc(s_end, n_end|s, n_beg) Pr(n′, a|n̄, o) Pr(n_beg|n̄)]   ∀s_0, s′, n_0, n′   (6b: n̄ abstract)

Pr(n̄′|n̄) = 0 if label(n̄′) ≤ label(n̄)   ∀n̄, n̄′   (7)

(The w variables are the occupancy frequencies, x the policy parameters, y the values and z the child node mappings.)

3.4 Discussion

Discovering a hierarchy offers many advantages over previous methods that assume the hierarchy is already known.
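As a brief aside on the BHPI linearization above: the claim that the four linear constraints force Y = BX can be checked exhaustively over the two values of B. A minimal sketch (variable names are ours):

```python
# For B in {0, 1} and lb <= X <= ub, the disjunctive-programming constraints
#   lb*B <= Y <= ub*B   and   X + (B-1)*ub <= Y <= X + (B-1)*lb
# leave exactly one feasible value of Y, namely Y = B*X.
def feasible_Y(B, X, lb, ub):
    """Return the interval of Y values satisfying all four constraints."""
    lo = max(lb * B, X + (B - 1) * ub)
    hi = min(ub * B, X + (B - 1) * lb)
    return (lo, hi)

for B in (0, 1):
    for X in (-1.0, 0.0, 0.5, 2.0):
        lo, hi = feasible_Y(B, X, lb=-1.0, ub=2.0)
        assert lo == hi == B * X   # the interval collapses to the product
print("Y = B*X enforced exactly")
```

When B = 0 the first pair of constraints pins Y to 0; when B = 1 the second pair pins Y to X, which is the disjunction the constraints encode.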
In situations where the user is unable to specify the hierarchy, our approach provides a principled way of discovering it. In situations where the user has a hierarchy in mind, it may be possible to find a better one. Note, however, that discovering the hierarchy while optimizing the policy is a much more difficult problem than simply optimizing the policy parameters. Additional variables (e.g., Pr(n′, a|n, o) and oc(s, n|s_0, n_0)) must be optimized and the degree of non-linearity increases. Our approach can also be used when the hierarchy and the policy are partly known. It is fairly easy to set the variables that are known or to reduce their range by specifying upper and lower bounds. This also has the benefit of simplifying the optimization problem. It is also interesting to note that hierarchical policies may be encoded with exponentially fewer nodes in a hierarchical controller than in a flat controller. Intuitively, when a subcontroller is called by k abstract nodes, this subcontroller is shared by all its abstract parents. An equivalent flat controller would have to use k separate copies of the subcontroller. If a hierarchical controller has l levels with subcontrollers shared by k parents in each level, then the equivalent flat controller will need O(k^l) copies. By allowing recursive controllers, policies may be represented even more compactly. Recursive controllers allow abstract nodes to call subcontrollers that may contain themselves. An equivalent non-hierarchical controller would have to unroll the recursion by creating a separate copy of the subcontroller each time it is called. Since recursive controllers essentially call themselves infinitely many times, they can represent infinitely large non-recursive controllers with finitely many nodes. As a comparison, recursive controllers are to non-recursive hierarchical controllers what context-free grammars are to regular expressions. Since the leading approaches for controller optimization fix the number of nodes [18, 1], one may be able to find a much better policy by considering hierarchical recursive controllers. In addition, hierarchical controllers may be easier to understand and interpret than flat controllers given their natural decomposition into subcontrollers and their possibly smaller size.

4 Experiments

We report on some preliminary experiments with three toy problems (paint, shuttle and maze) from the POMDP repository. We used the SNOPT package to directly solve the non-convex optimization problem in Table 2 and bounded hierarchical policy iteration (BHPI) to solve it iteratively. Table 3 reports the running time and the value of the hierarchical policies found. For comparison purposes, the optimal value of each problem (copied from [4]) is reported in the column labelled V*. We optimized hierarchical controllers of two levels with a fixed number of nodes, reported in the column labelled "Num. of Nodes".

Table 3: Experiment results

Problem  | S  | A | O | V*   | Num. of Nodes | SNOPT time | SNOPT V | BHPI time | BHPI V | MINLP BB time | MINLP BB V
Paint    | 4  | 4 | 2 | 3.3  | 4 (3/1)       | 2s         | 0.48    | 13s       | 3.29   | <1s           | 3.29
Shuttle  | 8  | 3 | 5 | 32.7 | 4 (3/1)       | 2s         | 31.87   | 85s       | 18.92  | 4s            | 18.92
Shuttle  | 8  | 3 | 5 | 32.7 | 6 (4/2)       | 6s         | 31.87   | 7459s     | 27.93  | 221s          | 27.68
Shuttle  | 8  | 3 | 5 | 32.7 | 7 (4/3)       | 26s        | 31.87   | 10076s    | 31.87  | N/A           | –
Shuttle  | 8  | 3 | 5 | 32.7 | 9 (5/4)       | 1449s      | 30.27   | 10518s    | 3.73   | N/A           | –
4x4 Maze | 16 | 4 | 2 | 3.7  | 3 (2/1)       | 3s         | 3.15    | 397s      | 3.21   | 30s           | 3.73
The numbers in parentheses indicate the number of nodes at the top level (left) and at the bottom level (right).5 In general, SNOPT finds the optimal solution with minimal computational time. In contrast, BHPI is less robust and takes up to several orders of magnitude longer. MINLP BB returns good solutions for the smaller problems but is unable to find feasible solutions to the larger ones. We also looked at the hierarchy discovered for each problem and verified that it made sense. In particular, the hierarchy discovered for the paint problem matches the one hand-coded by Pineau in her PhD thesis [16]. Given the relatively small size of the test problems, these experiments should be viewed as a proof of concept that demonstrates the feasibility of our approach. More extensive experiments with larger problems will be necessary to demonstrate its scalability.

5 Conclusion & Future Work

This paper proposes the first approach for hierarchy discovery in partially observable planning problems. We model the search for a good hierarchical policy as a non-convex optimization problem with variables corresponding to the hierarchy and policy parameters. We propose to tackle the optimization problem using non-linear solvers such as SNOPT, or by reformulating the problem as an approximate MINLP or as a sequence of MILPs that can be thought of as a form of hierarchical bounded policy iteration. Preliminary experiments demonstrate the feasibility of our approach; however, further research is necessary to improve scalability. The approach can also be used in situations where a user would like to improve or learn part of the hierarchy. Many variables can then be set (or restricted to a smaller range), which simplifies the optimization problem and improves scalability. We also generalize Hansen and Zhou's hierarchical controllers to recursive controllers.
Recursive controllers can encode policies with finitely many nodes that would otherwise require infinitely large non-recursive controllers. Further details about recursive controllers and our other contributions can be found in [5]. We plan to further investigate the use of recursive controllers in dialogue management and text generation, where recursive policies are expected to naturally capture the recursive nature of language models.

3 http://pomdp.org/pomdp/examples/index.shtml
4 N/A refers to a trial in which the solver was unable to return a feasible solution to the problem.
5 Since the problems are simple, the number of levels was restricted to two, though our approach permits any number of levels and does not require the number of levels or the number of nodes per level to be specified.

Acknowledgements: this research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada, the Canada Foundation for Innovation (CFI) and the Ontario Innovation Trust (OIT).

References

[1] C. Amato, D. Bernstein, and S. Zilberstein. Solving POMDPs using quadratically constrained linear programs. To appear in the International Joint Conference on Artificial Intelligence (IJCAI), 2007.
[2] E. Balas. Disjunctive programming. Annals of Discrete Mathematics, 5:3-51, 1979.
[3] D. Braziunas and C. Boutilier. Stochastic local search for POMDP controllers. In AAAI, pages 690-696, 2004.
[4] A. Cassandra. Exact and approximate algorithms for partially observable Markov decision processes. PhD thesis, Brown University, Dept. of Computer Science, 1998.
[5] L. Charlin. Automated hierarchy discovery for planning in partially observable domains. Master's thesis, University of Waterloo, 2006.
[6] T. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. JAIR, 13:227-303, 2000.
[7] M. Ghavamzadeh and S. Mahadevan. Hierarchical policy gradient algorithms. In T. Fawcett and N. Mishra, editors, ICML, pages 226-233. AAAI Press, 2003.
[8] P. Gill, W. Murray, and M. Saunders. SNOPT: An SQP algorithm for large-scale constrained optimization. SIAM Review, 47(1):99-131, 2005.
[9] E. Hansen. An improved policy iteration algorithm for partially observable MDPs. In NIPS, 1998.
[10] E. Hansen and R. Zhou. Synthesis of hierarchical finite-state controllers for POMDPs. In E. Giunchiglia, N. Muscettola, and D. Nau, editors, ICAPS, pages 113-122. AAAI, 2003.
[11] B. Hengst. Discovering hierarchy in reinforcement learning with HEXQ. In ICML, pages 243-250, 2002.
[12] L. Kaelbling, M. Littman, and A. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2):99-134, 1998.
[13] A. McGovern and A. Barto. Automatic discovery of subgoals in reinforcement learning using diverse density. In ICML, pages 361-368, 2001.
[14] N. Meuleau, L. Peshkin, K.-E. Kim, and L. Kaelbling. Learning finite-state controllers for partially observable environments. In UAI, pages 427-436, 1999.
[15] R. Parr. Hierarchical control and learning for Markov decision processes. PhD thesis, University of California at Berkeley, 1998.
[16] J. Pineau. Tractable Planning Under Uncertainty: Exploiting Structure. PhD thesis, Robotics Institute, Carnegie Mellon University, 2004.
[17] J. Pineau, G. Gordon, and S. Thrun. Policy-contingent abstraction for robust robot control. In UAI, pages 477-484, 2003.
[18] P. Poupart and C. Boutilier. Bounded finite state controllers. In NIPS, 2003.
[19] P. Poupart. Exploiting structure to efficiently solve large scale partially observable Markov decision processes. PhD thesis, University of Toronto, 2005.
[20] M. Ryan. Using abstract models of behaviours to automatically generate reinforcement learning hierarchies. In ICML, pages 522-529, 2002.
[21] R. Sutton, D. Precup, and S. Singh. Between MDPs and Semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181-211, 1999.
[22] G. Theocharous, S.
Mahadevan, and L. Kaelbling. Spatial and temporal abstractions in POMDPs applied to robot navigation. Technical Report MIT-CSAIL-TR-2005-058, Computer Science and Artificial Intelligence Laboratory, MIT, 2005.
[23] S. Thrun and A. Schwartz. Finding structure in reinforcement learning. In NIPS, pages 385-392, 1994.
[24] J. Williams and S. Young. Scaling POMDPs for dialogue management with composite summary point-based value iteration (CSPBVI). In AAAI Workshop on Statistical and Empirical Methods in Spoken Dialogue Systems, 2006.
Adaptor Grammars: A Framework for Specifying Compositional Nonparametric Bayesian Models

Mark Johnson, Microsoft Research / Brown University (Mark_Johnson@Brown.edu)
Thomas L. Griffiths, University of California, Berkeley (Tom_Griffiths@Berkeley.edu)
Sharon Goldwater, Stanford University (sgwater@gmail.com)

Abstract

This paper introduces adaptor grammars, a class of probabilistic models of language that generalize probabilistic context-free grammars (PCFGs). Adaptor grammars augment the probabilistic rules of PCFGs with "adaptors" that can induce dependencies among successive uses. With a particular choice of adaptor, based on the Pitman-Yor process, nonparametric Bayesian models of language using Dirichlet processes and hierarchical Dirichlet processes can be written as simple grammars. We present a general-purpose inference algorithm for adaptor grammars, making it easy to define and use such models, and illustrate how several existing nonparametric Bayesian models can be expressed within this framework.

1 Introduction

Probabilistic models of language make two kinds of substantive assumptions: assumptions about the structures that underlie language, and assumptions about the probabilistic dependencies in the process by which those structures are generated. Typically, these assumptions are tightly coupled. For example, in probabilistic context-free grammars (PCFGs), structures are built up by applying a sequence of context-free rewrite rules, where each rule in the sequence is selected independently at random. In this paper, we introduce a class of probabilistic models that weaken the independence assumptions made in PCFGs, which we call adaptor grammars. Adaptor grammars insert additional stochastic processes called adaptors into the procedure for generating structures, allowing the expansion of a symbol to depend on the way in which that symbol has been rewritten in the past.
Introducing dependencies among the applications of rewrite rules extends the set of distributions over linguistic structures that can be characterized by a simple grammar. Adaptor grammars provide a simple framework for defining nonparametric Bayesian models of language. With a particular choice of adaptor, based on the Pitman-Yor process [1, 2, 3], simple context-free grammars specify distributions commonly used in nonparametric Bayesian statistics, such as Dirichlet processes [4] and hierarchical Dirichlet processes [5]. As a consequence, many nonparametric Bayesian models that have been used in computational linguistics, such as models of morphology [6] and word segmentation [7], can be expressed as adaptor grammars. We introduce a general-purpose inference algorithm for adaptor grammars, which makes it easy to define nonparametric Bayesian models that generate different linguistic structures and perform inference in those models. The rest of this paper is structured as follows. Section 2 introduces the key technical ideas we will use. Section 3 defines adaptor grammars, while Section 4 presents some examples. Section 5 describes the Markov chain Monte Carlo algorithm we have developed to sample from the posterior distribution over structures generated by an adaptor grammar. Software implementing this algorithm is available from http://cog.brown.edu/~mj/Software.htm.

2 Background

In this section, we introduce the two technical ideas that are combined in the adaptor grammars discussed here: probabilistic context-free grammars, and the Pitman-Yor process. We adopt a nonstandard formulation of PCFGs in order to emphasize that they are a kind of recursive mixture, and to establish the formal devices we use to specify adaptor grammars.
2.1 Probabilistic context-free grammars

A context-free grammar (CFG) is a quadruple (N, W, R, S) where N is a finite set of nonterminal symbols, W is a finite set of terminal symbols disjoint from N, R is a finite set of productions or rules of the form A → β where A ∈ N and β ∈ (N ∪ W)* (the Kleene closure of the terminal and nonterminal symbols), and S ∈ N is a distinguished nonterminal called the start symbol. A CFG associates with each symbol A ∈ N ∪ W a set T_A of finite, labeled, ordered trees. If A is a terminal symbol then T_A is the singleton set consisting of a unit tree (i.e., containing a single node) labeled A. The sets of trees associated with nonterminals are defined recursively as follows:

$$T_A = \bigcup_{A \to B_1 \ldots B_n \in R_A} \mathrm{TREE}_A(T_{B_1}, \ldots, T_{B_n})$$

where R_A is the subset of productions in R with left-hand side A, and TREE_A(T_{B_1}, ..., T_{B_n}) is the set of all trees whose root node is labeled A, that have n immediate subtrees, and where the ith subtree is a member of T_{B_i}. The set of trees generated by the CFG is T_S, and the language generated by the CFG is the set {YIELD(t) : t ∈ T_S} of terminal strings or yields of the trees in T_S.

A probabilistic context-free grammar (PCFG) is a quintuple (N, W, R, S, θ), where (N, W, R, S) is a CFG and θ is a vector of non-negative real numbers indexed by the productions R such that

$$\sum_{A \to \beta \in R_A} \theta_{A \to \beta} = 1 \quad \text{for each } A \in N.$$

Informally, θ_{A→β} is the probability of expanding the nonterminal A using the production A → β. θ is used to define a distribution G_A over the trees T_A for each symbol A. If A is a terminal symbol, then G_A is the distribution that puts all of its mass on the unit tree labeled A. The distributions G_A for nonterminal symbols are defined recursively over T_A as follows:

$$G_A = \sum_{A \to B_1 \ldots B_n \in R_A} \theta_{A \to B_1 \ldots B_n}\, \mathrm{TREEDIST}_A(G_{B_1}, \ldots, G_{B_n}) \qquad (1)$$

where TREEDIST_A(G_{B_1}, ..., G_{B_n}) is the distribution over TREE_A(T_{B_1}, ..., T_{B_n}) satisfying

$$\mathrm{TREEDIST}_A(G_1, \ldots, G_n)(t) = \prod_{i=1}^{n} G_i(t_i)$$

for every tree t whose root node is labeled A and whose immediate subtrees are t_1, ..., t_n. That is, TREEDIST_A(G_1, . . .
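The recursive definition of G_A corresponds directly to ancestral sampling: expand the start symbol top-down, choosing a production for each nonterminal according to θ. A minimal sketch in plain Python (the toy grammar and helper names are illustrative, not from the paper):

```python
import random

# Toy PCFG written as {nonterminal: [(rhs, probability), ...]}; symbols that
# never appear as keys are terminals. Grammar is illustrative only.
GRAMMAR = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("she",), 0.5), (("sam",), 0.5)],
    "VP": [(("runs",), 0.7), (("sleeps",), 0.3)],
}

def sample_tree(sym, grammar, rng):
    """Ancestral sampling of G_A: pick a production with probability theta,
    then recursively sample each child (the recursion in equation 1)."""
    if sym not in grammar:            # terminal symbol: a unit tree
        return sym
    rules = grammar[sym]
    rhs = rng.choices([r for r, _ in rules], weights=[p for _, p in rules])[0]
    return (sym,) + tuple(sample_tree(child, grammar, rng) for child in rhs)

def tree_yield(t):
    """YIELD(t): the terminal string of a tree."""
    if isinstance(t, str):
        return (t,)
    return tuple(w for child in t[1:] for w in tree_yield(child))
```

Because each child is expanded independently given its label, the probability of a sampled tree factors exactly as in TREEDIST_A; this is the independence assumption that adaptor grammars later relax.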
, G_n) is a distribution over trees where the root node is labeled A and each subtree t_i is generated independently from G_i; it is this assumption that adaptor grammars relax. The distribution over trees generated by the PCFG is G_S, and the probability of a string is the sum of the probabilities of all trees with that string as their yield.

2.2 The Pitman-Yor process

The Pitman-Yor process [1, 2, 3] is a stochastic process that generates partitions of integers. It is most intuitively described using the metaphor of seating customers at a restaurant. Assume we have a numbered sequence of tables, and z_i indicates the number of the table at which the ith customer is seated. Customers enter the restaurant sequentially. The first customer sits at the first table, z_1 = 1, and the (n+1)st customer chooses a table from the distribution

$$z_{n+1} \mid z_1, \ldots, z_n \;\sim\; \frac{m a + b}{n + b}\,\delta_{m+1} \;+\; \sum_{k=1}^{m} \frac{n_k - a}{n + b}\,\delta_k \qquad (2)$$

where m is the number of different indices appearing in the sequence z = (z_1, ..., z_n), n_k is the number of times k appears in z, and δ_k is the Kronecker delta function, i.e., the distribution that puts all of its mass on k. The process is specified by two real-valued parameters, a ∈ [0, 1] and b ≥ 0. The probability of a particular sequence of assignments, z, with a corresponding vector of table counts n = (n_1, ..., n_m) is

$$P(z) = \mathrm{PY}(n \mid a, b) = \frac{\prod_{k=1}^{m}\Big[\big(a(k-1) + b\big)\prod_{j=1}^{n_k-1}(j - a)\Big]}{\prod_{i=0}^{n-1}(i + b)}. \qquad (3)$$

From this it is easy to see that the distribution produced by the Pitman-Yor process is exchangeable, with the probability of z being unaffected by permutation of the indices of the z_i. Equation 2 instantiates a kind of "rich get richer" dynamics, with customers being more likely to sit at more popular tables. We can use the Pitman-Yor process to define distributions with this character on any desired domain.
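Equations 2 and 3 are easy to implement and to cross-check against each other. A minimal sketch in plain Python (function names are ours, not the paper's):

```python
from math import prod  # Python 3.8+

def py_seat_probs(z, a, b):
    """Distribution over the next customer's table given seating z (eq. 2).

    Returns probabilities for existing tables 1..m, then the new table m+1."""
    n, m = len(z), max(z, default=0)
    counts = [z.count(k) for k in range(1, m + 1)]
    return [(nk - a) / (n + b) for nk in counts] + [(m * a + b) / (n + b)]

def py_seq_prob(z, a, b):
    """Probability of an entire seating sequence z (eq. 3)."""
    m = max(z)
    counts = [z.count(k) for k in range(1, m + 1)]
    num = prod((a * (k - 1) + b) * prod(j - a for j in range(1, nk))
               for k, nk in enumerate(counts, start=1))
    return num / prod(i + b for i in range(len(z)))
```

Multiplying the conditionals of equation 2 along a sequence reproduces equation 3, and permuting z leaves the probability unchanged, which is the exchangeability property used throughout the paper.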
Assume that every table in our restaurant has a value x_j placed on it, with those values being generated from an exchangeable distribution G, which we will refer to as the generator. Then, we can sample a sequence of variables y = (y_1, ..., y_n) by using the Pitman-Yor process to produce z and setting y_i = x_{z_i}. Intuitively, this corresponds to customers entering the restaurant, and emitting the values of the tables they choose. The distribution defined on y by this process will be exchangeable, and has two interesting special cases that depend on the parameters of the Pitman-Yor process. When a = 1, every customer is assigned to a new table, and the y_i are drawn from G. When a = 0, the distribution on the y_i is that induced by the Dirichlet process [4], a stochastic process that is commonly used in nonparametric Bayesian statistics, with concentration parameter b and base distribution G. We can also identify another scheme that generates the distribution outlined in the previous paragraph. Let H be a discrete distribution produced by generating a set of atoms x from G and weights on those atoms from the two-parameter Poisson-Dirichlet distribution [2]. We could then generate a sequence of samples y from H. If we integrate over values of H, the distribution on y is the same as that obtained via the Pitman-Yor process [2, 3].

3 Adaptor grammars

In this section, we use the ideas introduced in the previous section to give a formal definition of adaptor grammars. We first state this definition in full generality, allowing any choice of adaptor, and then consider the case where the adaptor is based on the Pitman-Yor process in more detail.

3.1 A general definition of adaptor grammars

Adaptor grammars extend PCFGs by inserting an additional component called an adaptor into the PCFG recursion (Equation 1). An adaptor C is a function from a distribution G to a distribution over distributions with the same support as G.
An adaptor grammar is a sextuple (N, W, R, S, θ, C) where (N, W, R, S, θ) is a PCFG and the adaptor vector C is a vector of (parameters specifying) adaptors indexed by N. That is, C_A maps a distribution over trees T_A to another distribution over T_A, for each A ∈ N. An adaptor grammar associates each symbol with two distributions G_A and H_A over T_A. If A is a terminal symbol then G_A and H_A are distributions that put all their mass on the unit tree labeled A, while G_A and H_A for nonterminal symbols are defined as follows:1

$$G_A = \sum_{A \to B_1 \ldots B_n \in R_A} \theta_{A \to B_1 \ldots B_n}\, \mathrm{TREEDIST}_A(H_{B_1}, \ldots, H_{B_n}) \qquad (4)$$
$$H_A \sim C_A(G_A)$$

The intuition here is that G_A instantiates the PCFG recursion, while the introduction of H_A makes it possible to modify the independence assumptions behind the resulting distribution through the choice of the adaptor, C_A. If the adaptor is the identity function, with H_A = G_A, the result is just a PCFG. However, other distributions over trees can be defined by choosing other adaptors. In practice, we integrate over H_A to define a single distribution on trees for any choice of adaptors C.

1 This definition allows an adaptor grammar to include self-recursive or mutually recursive CFG productions (e.g., X → X Y or X → Y Z, Y → X W). Such recursion complicates inference, so we restrict ourselves to grammars where the adapted nonterminals are not recursive.

3.2 Pitman-Yor adaptor grammars

The definition given above allows the adaptors to be any appropriate process, but our focus in the remainder of the paper will be on the case where the adaptor is based on the Pitman-Yor process. Pitman-Yor processes can cache, i.e., increase the probability of, frequently occurring trees. The capacity to replace the independent selection of rewrite rules with an exchangeable stochastic process enables adaptor grammars based on the Pitman-Yor process to define probability distributions over trees that cannot be expressed using PCFGs.
A Pitman-Yor adaptor grammar (PYAG) is an adaptor grammar where the adaptors C are based on the Pitman-Yor process. A Pitman-Yor adaptor C_A(G_A) is the distribution obtained by generating a set of atoms from the distribution G_A and weights on those atoms from the two-parameter Poisson-Dirichlet distribution. A PYAG has an adaptor C_A with parameters a_A and b_A for each non-terminal A ∈ N. As noted above, if a_A = 1 then the Pitman-Yor process is the identity function, so A is expanded in the standard manner for a PCFG. Each adaptor C_A will also be associated with two vectors, x_A and n_A, that are needed to compute the probability distribution over trees. x_A is the sequence of previously generated subtrees with root nodes labeled A. Having been "cached" by the grammar, these now have higher probability than other subtrees. n_A lists the counts associated with the subtrees in x_A. The adaptor state can thus be summarized as C_A = (a_A, b_A, x_A, n_A). A Pitman-Yor adaptor grammar analysis u = (t, ℓ) is a pair consisting of a parse tree t ∈ T_S together with an index function ℓ(·). If q is a nonterminal node in t labeled A, then ℓ(q) gives the index of the entry in x_A for the subtree t′ of t rooted at q, i.e., such that x_{A,ℓ(q)} = t′. The sequence of analyses u = (u_1, ..., u_n) generated by an adaptor grammar contains sufficient information to compute the adaptor state C(u) after generating u: the elements of x_A are the distinctly indexed subtrees of u with root label A, and their frequencies n_A can be found by performing a top-down traversal of each analysis in turn, only visiting the children of a node q when the subanalysis rooted at q is encountered for the first time (i.e., when it is added to x_A).

4 Examples of Pitman-Yor adaptor grammars

Pitman-Yor adaptor grammars provide a framework in which it is easy to define compositional nonparametric Bayesian models.
The use of adaptors based on the Pitman-Yor process allows us to specify grammars that correspond to Dirichlet processes [4] and hierarchical Dirichlet processes [5]. Once expressed in this framework, a general-purpose inference algorithm can be used to calculate the posterior distribution over analyses produced by a model. In this section, we illustrate how existing nonparametric Bayesian models used for word segmentation [7] and morphological analysis [6] can be expressed as adaptor grammars, and describe the results of applying our inference algorithm in these models. We postpone the presentation of the algorithm itself until Section 5.

4.1 Dirichlet processes and word segmentation

Adaptor grammars can be used to define Dirichlet processes with discrete base distributions. It is straightforward to write down an adaptor grammar that defines a Dirichlet process over all strings:

Word → Chars
Chars → Char
Chars → Chars Char     (5)

The productions expanding Char to all possible characters are omitted to save space. The start symbol for this grammar is Word. The parameters a_Char and a_Chars are set to 1, so the adaptors for Char and Chars are the identity function and H_Chars = G_Chars is the distribution over words produced by sampling each character independently (i.e., a "monkeys at typewriters" model). Finally, a_Word is set to 0, so the adaptor for Word is a Dirichlet process with concentration parameter b_Word. This grammar generates all possible strings of characters and assigns them simple right-branching structures of no particular interest, but the Word adaptor changes their distribution to one that reflects the frequencies of previously generated words. Initially, the Word adaptor is empty (i.e., x_Word is empty), so the first word s_1 generated by the grammar is distributed according to G_Chars.
However, the second word can be generated in two ways: either it is retrieved from the adaptor's cache (and hence is s_1) with probability 1/(1 + b_Word), or else with probability b_Word/(1 + b_Word) it is a new word generated by G_Chars. After n words have been emitted, Word puts mass n/(n + b_Word) on those words and reserves mass b_Word/(n + b_Word) for new words (i.e., generated by Chars). We can extend this grammar to a simple unigram word segmentation model by adding the following productions, changing the start label to Words and setting a_Words = 1:

Words → Word
Words → Word Words

This grammar generates sequences of Word subtrees, so it implicitly segments strings of terminals into a sequence of words, and in fact implements the word segmentation model of [7]. We applied the grammar above with the algorithm described in Section 5 to a corpus of unsegmented child-directed speech [8]. The input strings are sequences of phonemes such as WAtIzIt. A typical parse might consist of Words dominating three Word subtrees, each in turn dominating the phoneme sequences Wat, Iz and It respectively. Using the sampling procedure described in Section 5 with b_Word = 30, we obtained a segmentation which identified words in unsegmented input with 0.64 precision, 0.51 recall, and 0.56 f-score, which is consistent with the results presented for the unigram model of [7] on the same data.

4.2 Hierarchical Dirichlet processes and morphological analysis

An adaptor grammar with more than one adapted nonterminal can implement a hierarchical Dirichlet process. A hierarchical Dirichlet process that uses the Word process as a generator can be defined by adding the production Word1 → Word to (5) and making Word1 the start symbol. Informally, Word1 generates words either from its own cache x_Word1 or from the Word distribution. Word itself generates words either from x_Word or from the "monkeys at typewriters" model Chars.
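The cache-versus-base choice described above for Word (reuse a previous word with probability n/(n + b_Word), otherwise fall back to the character model) can be sketched in a few lines of Python. The function names and the toy character model are ours, for illustration only:

```python
import random

def monkeys_base(rng, alphabet="abcd", stop=0.3):
    """Toy 'monkeys at typewriters' base distribution over strings:
    emit characters i.i.d. and stop with a fixed probability after each."""
    word = rng.choice(alphabet)
    while rng.random() >= stop:
        word += rng.choice(alphabet)
    return word

def sample_word(cache, b, base, rng):
    """One draw from a Dirichlet-process adaptor (a = 0) over words:
    reuse a cached word with probability n/(n+b) (proportional to its count),
    otherwise draw a fresh word from the base distribution; update the cache."""
    n = sum(cache.values())
    if rng.random() < n / (n + b):
        words, counts = zip(*cache.items())
        word = rng.choices(words, weights=counts)[0]
    else:
        word = base(rng)
    cache[word] = cache.get(word, 0) + 1
    return word
```

Running this for a few hundred draws shows the "rich get richer" behavior: the number of distinct word types grows only logarithmically in the number of tokens.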
A slightly more elaborate grammar can implement the morphological analysis described in [6]. Words are analysed into stem and suffix substrings; e.g., the word jumping is analysed as a stem jump and a suffix ing. As [6] notes, one of the difficulties in constructing a probabilistic account of such suffixation is that the relative frequencies of suffixes vary dramatically depending on the stem. That paper used a Pitman-Yor process to effectively dampen this frequency variation, and the adaptor grammar described here does exactly the same thing. The productions of the adaptor grammar are as follows, where Chars is "monkeys at typewriters" once again:

Word → Stem Suffix
Word → Stem
Stem → Chars
Suffix → Chars

We now give an informal description of how samples might be generated by this grammar. The nonterminals Word, Stem and Suffix are associated with Pitman-Yor adaptors. Stems and suffixes that occur in many words are associated with highly probable cache entries, and so have much higher probability than under the Chars PCFG subgrammar. Figure 1 depicts a possible state of the adaptors in this adaptor grammar after generating the three words walking, jumping and walked. Such a state could be generated as follows. Before any strings are generated all of the adaptors are empty. To generate the first word we must sample from H_Word, as there are no entries in the Word adaptor. Sampling from H_Word requires sampling from G_Stem and perhaps also G_Suffix, and eventually from the Chars distributions. Supposing that these return walk and ing as Stem and Suffix strings respectively, the adaptor entries after generating the first word walking consist of the first entries for Word, Stem and Suffix. In order to generate another Word we first decide whether to select an existing word from the adaptor, or whether to generate the word using G_Word. Suppose we choose the latter. Then we must sample from H_Stem and perhaps also from H_Suffix.
Suppose we choose to generate the new stem jump from G_Stem (resulting in the second entry in the Stem adaptor) but choose to reuse the existing Suffix adaptor entry, resulting in the word jumping. The third word walked is generated in a similar fashion: this time the stem is the first entry in the Stem adaptor, but the suffix ed is generated from G_Suffix and becomes the second entry in the Suffix adaptor.

[Figure 1: A depiction of a possible state of the Pitman-Yor adaptors (Word, Stem and Suffix) in the adaptor grammar of Section 4.2 after generating walking, jumping and walked.]

The model described in [6] is more complex than the one just described because it uses a hidden "morphological class" variable that determines which stem-suffix pair is selected. The morphological class variable is intended to capture morphological variation; e.g., the present continuous form skipping is formed by suffixing ping instead of the ing form used in walking and jumping. This can be expressed using an adaptor grammar with productions that instantiate the following schema:

Word → Wordc
Wordc → Stemc Suffixc
Wordc → Stemc
Stemc → Chars
Suffixc → Chars

Here c ranges over the hidden morphological classes, and the productions expanding Chars and Char are as before. We set the adaptor parameter a_Word = 1 for the start nonterminal symbol Word, so we adapt the Wordc, Stemc and Suffixc nonterminals for each hidden class c. Following [6], we used this grammar with six hidden classes c to segment 170,015 orthographic verb tokens from the Penn Wall Street Journal corpus, and set a = 0 and b = 500 for the adapted nonterminals.
Although we trained on all verbs in the corpus, we evaluated the segmentation produced by the inference procedure described below on just the verbs whose infinitival stems were a prefix of the verb itself (i.e., we evaluated skipping but ignored wrote, since its stem write is not a prefix). Of the 116,129 tokens we evaluated, 70% were correctly segmented, and of the 7,170 verb types, 66% were correctly segmented. Many of the errors were in fact linguistically plausible: e.g., eased was analysed as a stem eas followed by a suffix ed, permitting the grammar to also generate easing as eas plus ing.

5 Bayesian inference for Pitman-Yor adaptor grammars

The results presented in the previous section were obtained by using a Markov chain Monte Carlo (MCMC) algorithm to sample from the posterior distribution over PYAG analyses u = (u_1, ..., u_n) given strings s = (s_1, ..., s_n), where s_i ∈ W* and u_i is the analysis of s_i. We assume we are given a CFG (N, W, R, S), vectors of Pitman-Yor adaptor parameters a and b, and a Dirichlet prior with hyperparameters α over production probabilities θ, i.e.:

$$P(\theta \mid \alpha) = \prod_{A \in N} \frac{1}{B(\alpha_A)} \prod_{A \to \beta \in R_A} \theta_{A \to \beta}^{\alpha_{A \to \beta} - 1} \quad \text{where} \quad B(\alpha_A) = \frac{\prod_{A \to \beta \in R_A} \Gamma(\alpha_{A \to \beta})}{\Gamma\big(\sum_{A \to \beta \in R_A} \alpha_{A \to \beta}\big)}$$

with Γ(x) being the generalized factorial function, and α_A is the subsequence of α indexed by R_A (i.e., corresponding to productions that expand A). The joint probability of u under this PYAG, integrating over the distributions H_A generated from the two-parameter Poisson-Dirichlet distribution associated with each adaptor, is

$$P(u \mid \alpha, a, b) = \prod_{A \in N} \frac{B(\alpha_A + f_A(x_A))}{B(\alpha_A)}\, \mathrm{PY}(n_A(u) \mid a, b) \qquad (6)$$

where f_{A→β}(x_A) is the number of times the root node of a tree in x_A is expanded by production A → β, and f_A(x_A) is the sequence of such counts (indexed by r ∈ R_A).
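The Dirichlet normalization ratio B(α_A + f_A(x_A))/B(α_A) appearing in (6) is best computed in log space using the log-gamma function. A minimal sketch (our helper names, not the paper's):

```python
from math import lgamma, exp

def log_beta(alphas):
    """log B(alpha) = sum_r log Gamma(alpha_r) - log Gamma(sum_r alpha_r)."""
    return sum(lgamma(a) for a in alphas) - lgamma(sum(alphas))

def log_dirichlet_ratio(alphas, counts):
    """log [B(alpha_A + f_A(x_A)) / B(alpha_A)]: the marginal probability of
    the observed root-expansion counts under a Dirichlet prior on the rule
    probabilities of nonterminal A."""
    return log_beta([a + c for a, c in zip(alphas, counts)]) - log_beta(alphas)
```

As a sanity check, with a uniform prior α = (1, 1) and counts (2, 1) the ratio equals the sequential predictive probability 1/2 · 2/3 · 1/4 = 1/12.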
Informally, the first term in (6) is the probability of generating the topmost node in each analysis in adaptor CA (the rest of the tree is generated by another adaptor), while the second term (from Equation 3) is the probability of generating a Pitman-Yor adaptor with counts nA. The posterior distribution over analyses u given strings s is obtained by normalizing P(u | α, a, b) over all analyses u that have s as their yield. Unfortunately, computing this distribution is intractable. Instead, we draw samples from this distribution using a component-wise Metropolis-Hastings sampler, proposing changes to the analysis ui for each string si in turn. The proposal distribution is constructed to approximate the conditional distribution over ui given si and the analyses of all other strings u−i, P(ui|si, u−i). Since there does not seem to be an efficient (dynamic programming) algorithm for directly sampling from P(ui|si, u−i),2 we construct a PCFG G′(u−i) on the fly whose parse trees can be transformed into PYAG analyses, and use this as our proposal distribution. 5.1 The PCFG approximation G′(u−i) A PYAG can be viewed as a special kind of PCFG which adapts its production probabilities depending on its history. The PCFG approximation G′(u−i) = (N, W, R′, S, θ′) is a static snapshot of the adaptor grammar given the sentences s−i (i.e., all of the sentences in s except si). Given an adaptor grammar H = (N, W, R, S, C), let: R′ = R ∪ [ A∈N {A →YIELD(x) : x ∈xA} θ′ A→β = mAaA + bA nA + bA  fA→β(xA) + αA→β mA + P A→β∈RA αA→β ! + X k:YIELD(XAk )=β nAk −aA nA + bA  where YIELD(x) is the terminal string or yield of the tree x and mA is the length of xA. R′ contains all of the productions R, together with productions representing the adaptor entries xA for each A ∈N. These additional productions rewrite directly to strings of terminal symbols, and their probability is the probability of the adaptor CA generating the corresponding value xAk. 
The two terms to the left of the summation specify the probability of selecting a production from the original productions R. The first term is the probability of adaptor C_A generating a new value, and the second term is the MAP estimate of the production's probability, estimated from the root expansions of the trees x_A. It is straightforward to map parses of a string s produced by G′ to corresponding adaptor analyses for the adaptor grammar H (it is possible for a single production of R′ to correspond to several adaptor entries, so this mapping may be non-deterministic). This means that we can use the PCFG G′ with an efficient PCFG sampling procedure [9] to generate possible adaptor grammar analyses for u_i.

5.2 A Metropolis-Hastings algorithm

The previous section described how to sample adaptor analyses u for a string s from a PCFG approximation G′ to an adaptor grammar H. We use this as our proposal distribution in a Metropolis-Hastings algorithm.² If u_i is the current analysis of s_i and u′_i ≠ u_i is a proposal analysis sampled from P(U_i | s_i, G′(u_{-i})), we accept the proposal u′_i with probability A(u_i, u′_i), where:

A(u_i, u'_i) = \min\left\{ 1, \ \frac{P(u' \mid \alpha, a, b) \, P(u_i \mid s_i, G'(u_{-i}))}{P(u \mid \alpha, a, b) \, P(u'_i \mid s_i, G'(u_{-i}))} \right\}

where u′ is the same as u except that u′_i replaces u_i. Except when the number of training strings s is very small, we find that only a tiny fraction (less than 1%) of proposals are rejected, presumably because the probability of an adaptor analysis does not change significantly within a single string. Our inference procedure is as follows. Given a set of training strings s we choose an initial set of analyses for them at random.

² The independence assumptions of PCFGs play an important role in making dynamic programming possible. In PYAGs, the probability of a subtree adapts dynamically depending on the other subtrees in u, including those in u_i.
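The acceptance step A(u_i, u′_i) is an ordinary Metropolis-Hastings decision and is conveniently carried out in log space; a generic sketch (function and argument names are ours):

```python
import random
from math import exp

def mh_accept(log_p_prop, log_p_cur, log_q_prop, log_q_cur,
              rng=random.Random(0)):
    """One Metropolis-Hastings accept/reject decision in log space.
    log_p_*: log joint P(u | alpha, a, b) with the proposed / current
    analysis u_i substituted in; log_q_*: log proposal probability
    P(. | s_i, G'(u_-i)) of the proposed / current analysis."""
    log_A = (log_p_prop + log_q_cur) - (log_p_cur + log_q_prop)
    # accept with probability min(1, exp(log_A))
    return log_A >= 0.0 or rng.random() < exp(log_A)

assert mh_accept(0.0, -1.0, 0.0, 0.0)        # ratio > 1: always accepted
assert not mh_accept(-1000.0, 0.0, 0.0, 0.0) # vanishing ratio: rejected
```

When the proposal distribution were exact (log_q equal to log_p up to a constant), log_A would be zero and every proposal would be accepted; the paper's observed rejection rate below 1% reflects how close G′ is to the true conditional.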
At each iteration we pick a string s_i from s at random, and sample a parse for s_i from the PCFG approximation G′(u_{-i}), updating u when the Metropolis-Hastings procedure accepts the proposed analysis. At convergence the u produced by this procedure are samples from the posterior distribution over analyses given s, and samples from the posterior distribution over adaptor states C(u) and production probabilities θ can be computed from them.

6 Conclusion

The strong independence assumptions of probabilistic context-free grammars tightly couple compositional structure with the probabilistic generative process that produces that structure. Adaptor grammars relax that coupling by inserting an additional stochastic component into the generative process. Pitman-Yor adaptor grammars use adaptors based on the Pitman-Yor process. This choice makes it possible to express Dirichlet process and hierarchical Dirichlet process models over discrete domains as simple context-free grammars. We have proposed a general-purpose inference algorithm for adaptor grammars, which can be used to sample from the posterior distribution over analyses produced by any adaptor grammar. While our focus here has been on demonstrating that this algorithm can be used to produce equivalent results to existing nonparametric Bayesian models used for word segmentation and morphological analysis, the great promise of this framework lies in its simplification of specifying and using such models, providing a basic toolbox that will facilitate the construction of more sophisticated models.

Acknowledgments

This work was performed while all authors were at the Cognitive and Linguistic Sciences Department at Brown University and supported by the following grants: NIH R01-MH60922 and RO1DC000314, NSF 9870676, 0631518 and 0631667, the DARPA CALO project and DARPA GALE contract HR0011-06-2-0001.

References

[1] J. Pitman. Exchangeable and partially exchangeable random partitions.
Probability Theory and Related Fields, 102:145–158, 1995.
[2] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25:855–900, 1997.
[3] H. Ishwaran and L. F. James. Generalized weighted Chinese restaurant processes for species sampling mixture models. Statistica Sinica, 13:1211–1235, 2003.
[4] T. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1:209–230, 1973.
[5] Y. W. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, to appear.
[6] S. Goldwater, T. L. Griffiths, and M. Johnson. Interpolating between types and tokens by estimating power-law generators. In Advances in Neural Information Processing Systems 18, 2006.
[7] S. Goldwater, T. L. Griffiths, and M. Johnson. Contextual dependencies in unsupervised word segmentation. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics, 2006.
[8] M. Brent. An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning, 34:71–105, 1999.
[9] J. Goodman. Parsing inside-out. PhD thesis, Harvard University, 1998. Available from http://research.microsoft.com/~joshuago/.
On Transductive Regression

Corinna Cortes
Google Research
76 Ninth Avenue, New York, NY 10011
corinna@google.com

Mehryar Mohri
Courant Institute of Mathematical Sciences and Google Research
251 Mercer Street, New York, NY 10012
mohri@cs.nyu.edu

Abstract

In many modern large-scale learning applications, the amount of unlabeled data far exceeds that of labeled data. A common instance of this problem is the transductive setting where the unlabeled test points are known to the learning algorithm. This paper presents a study of regression problems in that setting. It presents explicit VC-dimension error bounds for transductive regression that hold for all bounded loss functions and coincide with the tight classification bounds of Vapnik when applied to classification. It also presents a new transductive regression algorithm inspired by our bound that admits a primal and a kernelized closed-form solution and deals efficiently with large amounts of unlabeled data. The algorithm exploits the position of unlabeled points to locally estimate their labels and then uses a global optimization to ensure robust predictions. Our study also includes the results of experiments with several publicly available regression data sets with up to 20,000 unlabeled examples. The comparison with other transductive regression algorithms shows that it performs well and that it can scale to large data sets.

1 Introduction

In many modern large-scale learning applications, the amount of unlabeled data far exceeds that of labeled data. Large amounts of digitized data are widely available but the cost of labeling is often prohibitive since it typically requires human assistance. Semi-supervised learning or transductive inference leverage unlabeled data to achieve better predictions and are thus particularly relevant to modern applications. Semi-supervised learning consists of using both labeled and unlabeled data to find a hypothesis that accurately labels unseen examples.
Transductive inference uses the same information but only aims at predicting the labels of the known unlabeled examples. This paper deals with regression problems in the transductive setting, which arise in a variety of contexts. This may be to predict the real-valued labels of the nodes of a known graph in computational biology, or the scores associated with known documents in information extraction problems. The problem of transductive inference was originally formulated and analyzed by Vapnik [1982], who described it as a simpler task than the traditional induction treated in machine learning. A number of recent publications have dealt with the topic of transductive inference [Vapnik, 1998, Joachims, 1999, Bennett and Demiriz, 1998, Chapelle et al., 1999, Graepel et al., 1999, Schuurmans and Southey, 2002, Corduneanu and Jaakkola, 2003, Zhu et al., 2004, Lanckriet et al., 2004, Derbeko et al., 2004, Belkin et al., 2004, Zhou et al., 2005]. But, with the exception of [Chapelle et al., 1999], [Schuurmans and Southey, 2002], and [Belkin et al., 2004], this work has primarily dealt with classification problems. We present a specific study of transductive regression. We give new error bounds for transductive regression that hold for all bounded loss functions and coincide with the tight classification bounds of Vapnik [1998] when applied to classification. Our results also include explicit VC-dimension bounds for transductive regression. This contrasts with the original regression bound given by Vapnik [1998], which assumes a specific condition of global regularity on the class of functions and is based on a complicated and implicit function of the sample sizes and the confidence parameter. As stated by Vapnik [1998], this function must be "tabulated by a computer".
We also present a new algorithm for transductive regression inspired by our bound which first exploits the position of unlabeled points to locally estimate their labels, and then uses a global optimization to ensure robust predictions. We show that our algorithm admits both a primal and a kernelized closed-form solution. Existing algorithms for the transductive setting require the inversion of a matrix whose dimension is either the total number of unlabeled and labeled examples [Belkin et al., 2004], or the total number of unlabeled examples [Chapelle et al., 1999]. This may be prohibitive for many real-world applications with very large amounts of unlabeled examples. One of the original motivations for our work was to design algorithms dealing precisely with such situations. When the dimension of the feature space N is not too large, our algorithm provides a very efficient solution whose cost is dominated by the construction and inversion of an N × N-matrix. Similarly, when the number of training points m is small compared to the number of unlabeled points, using an empirical kernel map, our algorithm requires only constructing and inverting an m × m-matrix. Our study also includes the results of our experiments with several publicly available regression data sets with up to 20,000 unlabeled examples, limited only by the size of the data sets. We compared our algorithm with those of Belkin et al. [2004] and Chapelle et al. [1999], which are among the very few algorithms described in the literature dealing specifically with the problem of transductive regression. The results show that our algorithm performs well in several data sets compared to these algorithms and that it can scale to large data sets. The paper is organized as follows. Section 2 describes in more detail the transductive regression setting we are studying. New generalization error bounds for transductive regression are presented in Section 3. 
Section 4 describes and analyzes both the primal and dual versions of our algorithm, and the experimental results of our study are reported in Section 5.

2 Definition of the Problem

Assume that a full sample X of m + u examples is given. The learning algorithm further receives the labels of a random subset of X of size m which serves as a training sample:

(x_1, y_1), \ldots, (x_m, y_m) \in X \times \mathbb{R}.   (1)

The remaining u unlabeled examples, x_{m+1}, ..., x_{m+u} ∈ X, serve as test data. The learning problem that we consider consists of predicting accurately the labels y_{m+1}, ..., y_{m+u} of the test examples. No other test examples will ever be considered. This is a transductive regression problem [Vapnik, 1998].¹ It differs from the standard (induction) regression estimation problem by the fact that the learning algorithm is given the unlabeled test examples beforehand. Thus, it may exploit that information and achieve a better result than via the standard induction. In what follows, we consider a hypothesis space H of real-valued functions for regression estimation. For a hypothesis h ∈ H, we denote by R_0(h) its mean squared error on the full sample, by \hat{R}(h) its error on the training data, and by R(h) the error of h on the test examples:

R_0(h) = \frac{1}{m+u} \sum_{i=1}^{m+u} (h(x_i) - y_i)^2, \quad \hat{R}(h) = \frac{1}{m} \sum_{i=1}^{m} (h(x_i) - y_i)^2, \quad R(h) = \frac{1}{u} \sum_{i=m+1}^{m+u} (h(x_i) - y_i)^2.   (2)

For convenience, we will sometimes denote by y_x = y_i the label of a point x = x_i ∈ X.

3 Transductive Regression Generalization Error

This section presents explicit generalization error bounds for transductive regression. Vapnik [1998] introduced and analyzed the problem of transduction and presented transductive inference bounds for both classification and regression. His regression bound assumes however a specific regularity condition on the hypothesis functions, leading in particular to a surprising bound where zero error on the training data implies zero generalization error.
The bound has the multiplicative form R(h) ≤ Ω(m, u, d, δ) \hat{R}(h), where d is the VC-dimension of the class of hypotheses used and δ is the confidence parameter. Furthermore, for certain values of the parameters, for example larger ds or smaller δs, Ω becomes infinite and the bound is ineffective [Vapnik, 1998, page 349]. Ω is also based on a complicated and implicit function of m, u, and δ, which makes its interpretation difficult. For example, it is hard to analyze the asymptotic behavior of the bound for large u. Instead, our bounds simply hold for general bounded loss functions and, when applied to classification, coincide with the tight classification bounds of Vapnik [1998]. Our results also include explicit VC-dimension bounds for transductive regression. To the best of our knowledge, these are the first general explicit bounds for transductive regression.

¹ This is in fact one of the two transduction settings discussed by [Vapnik, 1998], but, under some general conditions, the results proved with this setting carry over to the other.

Our first bound uses the function \bar{Γ} defined as follows. Let Γ(ε, k) be defined by:

\forall \epsilon \geq 0, \ \forall k \in \mathbb{N}, \ u\epsilon \leq k \leq m(1 - \epsilon) + u, \quad \Gamma(\epsilon, k) = \sum_{r \in I(m, u, k, \epsilon)} \frac{\binom{k}{r} \binom{m+u-k}{m-r}}{\binom{m+u}{m}},   (3)

where I(m, u, k, ε) is the set of integers r such that (k − r)/u − r/m > ε and max(0, k − u) ≤ r ≤ min(m, k). Γ(ε, k) represents the probability of observing a difference in error rate of more than ε between the training and test set when the total number of errors is k (see [Cortes and Mohri, 2006]). Then \bar{Γ} is defined as \bar{Γ}(ε) = max_k Γ(\sqrt{k/(m+u)}\, ε, k). \bar{Γ} is used in the transductive classification bound of Vapnik [1998] (see [Cortes and Mohri, 2006, Theorem 2]). [Cortes and Mohri, 2006, Corollary 2] gives an upper bound on \bar{Γ}. For any subset X′ ⊆ X, any non-negative real number t ≥ 0, and hypothesis h ∈ H, let Θ(h, t, X′) denote the fraction of the points x_i ∈ X′ such that (h(x_i) − y_i)² − t > 0.
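The quantity Γ(ε, k) in Equation (3) is a hypergeometric tail probability and can be computed exactly for moderate m and u; a direct sketch (the function name is ours):

```python
from math import comb

def gamma_tail(eps, k, m, u):
    """Gamma(eps, k) of Equation (3): the probability that, when k total
    errors are split at random between a training set of size m and a test
    set of size u, the test error rate exceeds the training error rate by
    more than eps (a hypergeometric tail)."""
    total = comb(m + u, m)
    p = 0.0
    for r in range(max(0, k - u), min(m, k) + 1):
        if (k - r) / u - r / m > eps:      # membership in I(m, u, k, eps)
            p += comb(k, r) * comb(m + u - k, m - r) / total
    return p

# m = u = 1, k = 1: the single error lands on the test point w.p. 1/2
assert abs(gamma_tail(0.5, 1, 1, 1) - 0.5) < 1e-12
assert gamma_tail(2.0, 1, 1, 1) == 0.0     # rate difference never exceeds 2
```

For m = u = 1 and k = 1 the error is on the test point with probability 1/2, in which case the rate difference is 1; the two assertions check exactly this.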
Thus, Θ(h, t, X′) represents the error rate over the sample X′ of the classifier that associates to a point x the value zero if (h(x) − y_x)² ≤ t, and one otherwise. Two classifiers associated in this way to Θ(h, t, X) and Θ(h′, t′, X) can be viewed as equivalent if they label X in an identical way. Since X is finite, there is a finite number of equivalence classes of such classifiers; we will denote that number by N(m + u).

Theorem 1 Let δ > 0, let ε_0 > 0 be the minimum value of ε such that N(m + u) \bar{Γ}(ε) ≤ δ, and assume that the loss function is bounded: for all h ∈ H and x ∈ X, (h(x) − y_x)² ≤ B², where B ∈ ℝ₊. Then, with probability at least 1 − δ, for all h ∈ H,

R(h) \leq \hat{R}(h) + \frac{u \epsilon_0^2 B^2}{2(m+u)} + \epsilon_0 B \sqrt{\hat{R}(h) + \left( \frac{u \epsilon_0 B}{2(m+u)} \right)^2}.   (4)

Proof. For any h ∈ H, let R_1(h) be defined by:

R_1(h) = \int_0^{B^2} \sqrt{\Theta(h, t, X)} \, dt.   (5)

By the Cauchy-Schwarz inequality,

R_1(h) \leq \left( \int_0^{B^2} \Theta(h, t, X) \, dt \right)^{1/2} \left( \int_0^{B^2} 1 \, dt \right)^{1/2} = B \left( \int_0^{B^2} \Theta(h, t, X) \, dt \right)^{1/2}.   (6)

Let D denote the uniform probability distribution associated to the sample X. Thus, D(x) = 1/(m+u) for all x ∈ X. Let Pr_{x∼D}[E_x] denote the probability of event E_x when x is randomly drawn according to D. By definition of R_0 and the Lebesgue integral, for all h ∈ H,

R_0(h) = \int_X (h(x) - y_x)^2 D(x) \, dx = \int_0^{\infty} \Pr_{x \sim D}\left[(h(x) - y_x)^2 > t\right] dt = \int_0^{B^2} \Theta(h, t, X) \, dt.   (7)

Similarly, setting X_m = {x_i ∈ X : i ∈ [1, m]} and X_u = {x_i ∈ X : i ∈ [m + 1, m + u]}, we have

\hat{R}(h) = \int_0^{B^2} \Theta(h, t, X_m) \, dt \quad \text{and} \quad R(h) = \int_0^{B^2} \Theta(h, t, X_u) \, dt.   (8)

In view of Equation 7, Inequality 6 can be rewritten as R_1(h) ≤ B \sqrt{R_0(h)}. By [Cortes and Mohri, 2006, Theorem 2], for all ε > 0 and for any t ≥ 0,

\Pr\left[ \sup_{h \in H} \frac{\Theta(h, t, X_u) - \Theta(h, t, X_m)}{\sqrt{\Theta(h, t, X)}} > \epsilon \right] \leq N(m+u) \, \bar{\Gamma}(\epsilon).   (9)

Fix ε > 0. Then, with probability at least 1 − N(m + u) \bar{Γ}(ε), for all integers n > 1 and i ≥ 0,

\frac{\Theta(h, \frac{iB^2}{n}, X_u) - \Theta(h, \frac{iB^2}{n}, X_m)}{\sqrt{\Theta(h, \frac{iB^2}{n}, X)}} \leq \epsilon.
(10)

Then, the convergence of the Riemann sums to the integral ensures that

R(h) - \hat{R}(h) = \lim_{n \to \infty} \left[ \frac{1}{n} \sum_{i=0}^{n} \Theta(h, \tfrac{iB^2}{n}, X_u) - \frac{1}{n} \sum_{i=0}^{n} \Theta(h, \tfrac{iB^2}{n}, X_m) \right]   (11)

\leq \epsilon \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n} \sqrt{\Theta(h, \tfrac{iB^2}{n}, X)} = \epsilon R_1(h) \leq \epsilon B \sqrt{R_0(h)}.   (12)

Let δ > 0 and select ε = ε_0 as the minimum value of ε such that N(m + u) \bar{Γ}(ε) ≤ δ; then with probability at least 1 − δ,

R(h) - \hat{R}(h) \leq \epsilon_0 B \sqrt{R_0(h)}.   (13)

Plugging in the following expression of R_0(h) with respect to R(h) and \hat{R}(h),

R_0(h) = \frac{m}{m+u} \hat{R}(h) + \frac{u}{m+u} R(h),   (14)

and solving the second-degree equation in R(h) yields directly the statement of the theorem.

Theorem 1 provides a general bound on the regression error within the transduction setting. The theorem can also be used to derive a bound in the classification case by simply setting B = 1. The resulting bound coincides with the tight classification bound given by Vapnik [1998]. The bound given by Theorem 1 depends on the function \bar{Γ} and is implicit. The following provides a general and explicit error bound for transductive regression directly expressed in terms of the empirical error, the number of equivalence classes N(m + u) or the VC-dimension d, and the sample sizes m and u.

Corollary 1 Let H be a set of hypotheses with VC-dimension d. Assume that the loss function is bounded: for all h ∈ H and x ∈ X, (h(x) − y_x)² ≤ B², where B ∈ ℝ₊. Then, with probability at least 1 − δ, for all h ∈ H,

R(h) \leq \hat{R}(h) + \frac{u \alpha^2 B^2}{2(m+u)} + \alpha B \sqrt{\hat{R}(h) + \left( \frac{u \alpha B}{2(m+u)} \right)^2},   (15)

with

\alpha = \sqrt{\frac{2(m+u)}{mu} \left( \log N(m+u) + \log \tfrac{1}{\delta} \right)} \ \leq \ \sqrt{\frac{2(m+u)}{mu} \left( d \log \tfrac{(m+u)e}{d} + \log \tfrac{1}{\delta} \right)}.

Proof. By Theorem 1, Inequality 15 holds for all α > 0 such that N(m + u) \bar{Γ}(α) ≤ δ. By [Cortes and Mohri, 2006, Corollary 2],

\log\left( N(m+u) \, \bar{\Gamma}(\alpha) \right) \leq \log N(m+u) - \frac{1}{2} \frac{mu}{m+u} \alpha^2.

Setting log δ to match this upper bound yields the expression of α given above. Since N(m + u) is bounded by the shattering coefficient of H of order m + u, by Sauer's lemma, log N(m + u) ≤ d log((m+u)e/d). This gives the upper bound on α in terms of the VC-dimension.
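The explicit bound of Corollary 1 is straightforward to evaluate numerically; a sketch follows (names and example values are ours). Note that, unlike the multiplicative bound discussed earlier, zero training error does not force the bound to zero:

```python
from math import log, sqrt, e

def transductive_bound(emp_err, m, u, d, delta, B):
    """Explicit VC-dimension test-error bound of Corollary 1 (Eq. 15):
    emp_err is the training error R_hat(h), m and u the labeled/unlabeled
    sample sizes, d the VC-dimension, delta the confidence parameter and
    B^2 the bound on the loss."""
    alpha = sqrt(2 * (m + u) / (m * u)
                 * (d * log((m + u) * e / d) + log(1.0 / delta)))
    c = u * alpha * B / (2 * (m + u))      # the recurring term u*alpha*B/(2(m+u))
    return emp_err + alpha * B * c + alpha * B * sqrt(emp_err + c * c)

b0 = transductive_bound(0.0, 100, 100, 5, 0.05, 1.0)
assert b0 > 0.0                            # non-trivial even at zero training error
assert transductive_bound(0.1, 100, 100, 5, 0.05, 1.0) > b0
```

The middle term of (15), uα²B²/(2(m+u)), is written here as αB·c with c = uαB/(2(m+u)), matching the term inside the square root.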
The bound is explicit and can be readily used within the Structural Risk Minimization (SRM) framework, either by using the expression of α in terms of the VC-dimension, or the tighter expression with respect to the number of equivalence classes N. In the latter case, a structure of increasing number of equivalence classes can be constructed as in [Vapnik, 1998, page 360]. A more practical algorithm inspired by these concepts is described in the next section.

4 Transductive Regression Algorithm

This section presents an algorithm for the transductive regression problem. Before presenting this algorithm, let us first emphasize that the algorithms introduced for transductive classification problems, e.g., transductive SVMs [Vapnik, 1998, Joachims, 1999], cannot be readily used for regression. These algorithms typically select the hypothesis h, out of a hypothesis space H, that minimizes the following optimization function:

\min_{y^*_{m+i},\, i=1,\ldots,u} \ \Omega(h) + C \, \frac{1}{m} \sum_{i=1}^{m} L\bigl(h(x_i), y_i\bigr) + C' \, \frac{1}{u} \sum_{i=1}^{u} L\bigl(h(x_{m+i}), y^*_{m+i}\bigr),   (16)

where Ω(h) is a capacity measure term, L is the loss function used, C ≥ 0 and C′ ≥ 0 are regularization parameters, and where the minimum is taken over all possible labels y*_{m+1}, ..., y*_{m+u} for the test points. In regression, this scheme would lead to a trivial solution not exploiting the transduction setting. Indeed, let h_0 be the hypothesis minimizing the first two terms, that is, the solution of the induction problem. For the particular choice y*_{m+i} = h_0(x_{m+i}), i = 1, ..., u, the third term vanishes. Thus, h_0 also minimizes the sum of all three terms. In two-group classification, the trivial solution is typically not the solution of the minimization problem because in general h_0(x_{m+i}) is not in {0, 1}. The main idea behind the design of our algorithm is to exploit the additional information provided in transduction, that is, the position of the unlabeled examples. Our algorithm has two stages.
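The degeneracy of objective (16) under squared loss can be checked numerically: with y*_{m+i} = h_0(x_{m+i}) the third term vanishes, so the induction minimizer h_0 also minimizes the transductive objective. A toy one-dimensional sketch with Ω(h) = w² over a grid of linear hypotheses h(x) = wx (all data values are ours):

```python
def induction_obj(w, X, Y, C):
    """Omega(h) + C * (1/m) * sum of squared losses on labeled data."""
    return w * w + C * sum((w * x - y) ** 2 for x, y in zip(X, Y)) / len(X)

def transduction_obj(w, X, Y, Xu, Ystar, C, Cp):
    """Objective (16) with squared loss and fixed guessed labels y*."""
    return (induction_obj(w, X, Y, C)
            + Cp * sum((w * x - ys) ** 2 for x, ys in zip(Xu, Ystar)) / len(Xu))

X, Y = [1.0, 2.0, 3.0], [1.1, 1.9, 3.2]   # labeled sample
Xu = [1.5, 2.5]                           # unlabeled test points
C, Cp = 1.0, 1.0
ws = [i / 1000 for i in range(-2000, 2001)]
w0 = min(ws, key=lambda w: induction_obj(w, X, Y, C))
# choosing y*_i = h0(x_i) zeroes the third term of (16) ...
Ystar = [w0 * x for x in Xu]
wt = min(ws, key=lambda w: transduction_obj(w, X, Y, Xu, Ystar, C, Cp))
# ... so the induction solution also minimizes the transductive objective
assert wt == w0
```

The added third term is then Cp·(w − w0)²·Σx′²/u, which is minimized at w0 as well, so the argmin cannot move.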
The first stage is based on the position of unlabeled points. For each unlabeled point x_i, i = m+1, ..., m+u, a local estimate label \bar{y}_i is determined using the labeled points in the neighborhood of x_i. In the second stage, a global hypothesis h is found that best matches all labels, those of the training data and the estimate labels \bar{y}_i. This second stage is critical and distinguishes our method from other suggested ones. While using local information to determine labels is important (see for example the discussion of Vapnik [1998]), it is not sufficient for a robust prediction. A global estimate of all labels is needed to make predictions less vulnerable to noise.

4.1 Local Estimates

Let Φ be a feature mapping from X to a vector space F provided with a norm. We fix a radius r ≥ 0 and consider, for all x′ ∈ X_u, the ball of radius r centered in Φ(x′), denoted by B(Φ(x′), r). This defines the neighborhood of the image of each unlabeled point. A single radius r is used for all neighborhoods to limit the number of parameters of the algorithm. Labeled points x ∈ X_m whose images Φ(x) fall within the neighborhood of Φ(x′), x′ ∈ X_u, help determine an estimate label of x′. With a very large radius r, the labels of all training examples contribute to the definition of the local estimates. But, with smaller radii, only a limited number of computations are needed. When no such labeled point exists in the neighborhood of x′ ∈ X_u, which depends on the radius r selected, x′ is disregarded in both training stages of the algorithm. There are many possible ways to define the estimate label of x′ ∈ X_u based on the neighborhood points. One simple way consists of defining it as the weighted average of the neighborhood labels y_x, where the weights may be defined as the inverse of the distances of Φ(x) to Φ(x′), or as similarity measures K(x, x′) when a positive definite kernel K is associated to Φ.
Thus, when the set of labeled points with images in the neighborhood of Φ(x′) is not empty, I = {i ∈ [1, m] : Φ(x_i) ∈ B(Φ(x′), r)} ≠ ∅, the estimate label \bar{y}_{x′} of x′ ∈ X_u can be given by:

\bar{y}_{x'} = \frac{\sum_{i \in I} w_i \, y_i}{\sum_{i \in I} w_i}, \quad \text{with } w_i^{-1} = \|\Phi(x') - \Phi(x_i)\| \leq r \ \text{ or } \ w_i = K(x', x_i).   (17)

The estimate labels can also be obtained as the solution of a local linear or kernel ridge regression, which is what we used in most of our experiments. In practice, with a relatively small radius r, the computation of an estimate label \bar{y}_i depends only on a limited number of labeled points and their labels, and is quite efficient.

4.2 Global Optimization

The second stage of our algorithm consists of selecting a hypothesis h that best fits the labels of the training points and the estimate labels provided in the first stage. As suggested by Corollary 1, hypothesis spaces with a smaller number of equivalence classes guarantee a better generalization error. The bound also suggests reducing the empirical error. This leads us to consider the following objective function:

G = \|w\|^2 + C \sum_{i=1}^{m} (h(x_i) - y_i)^2 + C' \sum_{i=m+1}^{m+u} (h(x_i) - \bar{y}_i)^2,   (18)

where h is a linear function with weight vector w ∈ F: ∀x ∈ X, h(x) = w · Φ(x), and where C ≥ 0 and C′ ≥ 0 are regularization parameters. The first two terms of the objective function coincide with those used in standard (kernel) ridge regression. The third term, which restricts the estimate error, can be viewed as imposing a smaller number of equivalence classes on the hypothesis space, as suggested by the error bound of Corollary 1. The constraint explicitly exploits knowledge about the location of all the test points, and limits the range of the hypothesis at these locations, thereby reducing the number of equivalence classes. Our algorithm can be viewed as a generalization of (kernel) ridge regression to the transductive setting. In the following, we will show that this generalized optimization problem admits a closed-form solution and a natural kernel-based solution.
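Before turning to the global optimization, the first-stage local estimation can be sketched as follows, assuming an identity feature map and the inverse-distance weights of Equation (17); the paper also allows kernel weights or a local ridge regression, and the names and data below are ours:

```python
def local_estimates(labeled, unlabeled, r):
    """First stage (Section 4.1): inverse-distance-weighted estimate labels
    (Eq. 17) for scalar inputs with Phi = identity.
    labeled: list of (x, y) pairs; unlabeled: list of x.
    Returns {x': ybar}; unlabeled points with no labeled neighbor within
    radius r are disregarded, as in the paper."""
    est = {}
    for xp in unlabeled:
        nbrs = [(abs(xp - x), y) for x, y in labeled if abs(xp - x) <= r]
        if not nbrs:
            continue                      # disregarded in both training stages
        zeros = [y for d, y in nbrs if d == 0.0]
        if zeros:                         # exact match: average coincident labels
            est[xp] = sum(zeros) / len(zeros)
        else:
            ws = [(1.0 / d, y) for d, y in nbrs]
            est[xp] = sum(w * y for w, y in ws) / sum(w for w, _ in ws)
    return est

est = local_estimates([(0.0, 0.0), (2.0, 2.0)], [1.0, 10.0], r=1.5)
assert abs(est[1.0] - 1.0) < 1e-12        # midpoint of two equidistant neighbors
assert 10.0 not in est                    # no neighbor within r: disregarded
```

With equidistant neighbors the weights cancel and the estimate is the plain average; the isolated point at 10.0 gets no estimate at all, matching the rule that such points are dropped.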
4.2.1 Primal solution

Let N be the dimension of the feature space and let W ∈ ℝ^{N×1} denote the column matrix whose components are the coordinates of w, Y ∈ ℝ^{m×1} the column matrix whose components are the labels y_i of the training examples, and Y′ ∈ ℝ^{u×1} the column matrix whose components are the estimate labels \bar{y}_i of the test examples. Let X = [Φ(x_1), ..., Φ(x_m)] ∈ ℝ^{N×m} denote the matrix whose columns are the components of the images by Φ of the training examples, and similarly X′ = [Φ(x_{m+1}), ..., Φ(x_{m+u})] ∈ ℝ^{N×u} the matrix corresponding to the test examples. G can then be rewritten as:

G = \|W\|^2 + C \, \|X^\top W - Y\|^2 + C' \, \|X'^\top W - Y'\|^2.   (19)

G is convex and differentiable and its gradient is given by

\nabla G = 2W + 2C \, X (X^\top W - Y) + 2C' \, X' (X'^\top W - Y').   (20)

The matrix W minimizing G is the unique solution of ∇G = 0. Since (I_N + C XX^⊤ + C′ X′X′^⊤) is invertible, it is given by the following expression:

W = (I_N + C \, XX^\top + C' \, X'X'^\top)^{-1} (C \, XY + C' \, X'Y').   (21)

This gives a closed-form solution in the primal space based on the inversion of a matrix in ℝ^{N×N}. Let T(N) be the time complexity of computing the inverse of a matrix in ℝ^{N×N}; T(N) = O(N³) using standard methods, or T(N) = O(N^{2.376}) with the method of Coppersmith and Winograd. The time complexity of the computation of W from X, X′, Y, and Y′ is thus in O(T(N) + (m+u)N²). When the dimension N of the feature space is small compared to the number of examples m + u, which is typical in modern learning applications where u is large, this method remains practical and leads to a very efficient computation. The use of the so-called empirical kernel map [Schölkopf and Smola, 2002] also makes this method very attractive. Given a kernel K, the empirical kernel feature vector associated to x is the m-dimensional vector Φ(x) = [K(x, x_1), ..., K(x, x_m)]^⊤. Thus, the dimension of the feature space is then N = m.
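For illustration, here is Equation (21) in the simplest case N = 1, where the matrix inverse reduces to a scalar division; stationarity can be checked directly against the gradient (20). Names and data are ours:

```python
def primal_w(X, Y, Xu, Ybar, C, Cp):
    """Closed-form minimizer of G (Eq. 18) for a one-dimensional feature
    map (N = 1): Eq. (21) with the N x N inverse reduced to a scalar.
    X, Y: labeled inputs and labels; Xu, Ybar: unlabeled inputs and their
    first-stage estimate labels."""
    num = C * sum(x * y for x, y in zip(X, Y)) \
        + Cp * sum(x * yb for x, yb in zip(Xu, Ybar))
    den = 1.0 + C * sum(x * x for x in X) + Cp * sum(x * x for x in Xu)
    return num / den

X, Y = [1.0, 2.0], [1.0, 2.1]
Xu, Ybar = [1.5], [1.6]
C, Cp = 10.0, 5.0
w = primal_w(X, Y, Xu, Ybar, C, Cp)
# stationarity: the gradient of G (Eq. 20) vanishes at the solution
grad = (2 * w
        + 2 * C * sum(x * (w * x - y) for x, y in zip(X, Y))
        + 2 * Cp * sum(x * (w * x - yb) for x, yb in zip(Xu, Ybar)))
assert abs(grad) < 1e-9
```

The same check carries over verbatim to N > 1 with a linear solver in place of the scalar division.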
For relatively small m, even for very large values of u with respect to m, the solution is efficiently computable and yet benefits from the use of kernels. This computational advantage is not shared by other methods such as the manifold regularization techniques [Belkin et al., 2004], or even by the regression technique described by [Chapelle et al., 1999], even though the latter is based on a primal method (we have derived a dual version of that method as well, see Section 5), since it requires among other things the inversion of a matrix in ℝ^{u×u}. Once W is computed, prediction can be done by computing X′^⊤W in time O(uN).

4.2.2 Dual solution

The computation can also be done in the dual space, which is useful in the case of very high-dimensional feature spaces. Let M_X ∈ ℝ^{N×(m+u)} and M_Y ∈ ℝ^{(m+u)×1} be the matrices defined by:

M_X = \left[ \sqrt{C}\, X \quad \sqrt{C'}\, X' \right], \qquad M_Y = \begin{bmatrix} \sqrt{C}\, Y \\ \sqrt{C'}\, Y' \end{bmatrix}.   (22)

Then, Equation 21 can be rewritten as W = (I_N + M_X M_X^⊤)^{-1} M_X M_Y. To determine the dual solution, observe that

M_X^\top (M_X M_X^\top + \gamma I_N)^{-1} = (M_X^\top M_X + \gamma I_{m+u})^{-1} M_X^\top,   (23)

where I_{m+u} denotes the identity matrix of ℝ^{(m+u)×(m+u)}. This can be derived without difficulty from a series expansion of (M_X M_X^⊤ + γ I_N)^{-1}. Thus, W can also be computed via:

W = M_X (I_{m+u} + K)^{-1} M_Y,   (24)

where K is the Gram matrix K = M_X^⊤ M_X. Let K_{21} ∈ ℝ^{u×m} and K_{22} ∈ ℝ^{u×u} be the sub-matrices of the Gram matrix K defined by K_{21} = (K(x_{m+i}, x_j))_{1≤i≤u, 1≤j≤m} and K_{22} = (K(x_{m+i}, x_{m+j}))_{1≤i,j≤u}, and let K_2 ∈ ℝ^{u×(m+u)} be the matrix defined by:

K_2 = \left[ \sqrt{C}\, K_{21} \quad \sqrt{C'}\, K_{22} \right] = X'^\top M_X.   (25)

Then, predictions can be made using kernel functions alone, since X′^⊤W can be computed by:

X'^\top W = X'^\top M_X (I_{m+u} + K)^{-1} M_Y = K_2 (I_{m+u} + K)^{-1} M_Y.   (26)

When the dimension of the feature space N is very large with respect to the total number of examples, this can lead to a faster computation of the solution. (I_{m+u} + K)^{-1} M_Y can be computed in O(T(m+u) + (m+u)² t_K) and predictions are computed in time O(u(m+u)), where t_K is the time complexity of the computation of K(x, x′), x, x′ ∈ X. As already pointed out in the description of the local estimates, in practice, some unlabeled points are disregarded in the training phases because no labeled point falls in their neighborhood. Thus, instead of u, a smaller number of unlabeled examples u′ ≤ u determines the computational cost.

Table 1: Transductive regression experiments. Relative improvement in MSE (%) over the kernel ridge regression baseline.

  Dataset                 No. of unlab. points   Our algorithm   Chapelle et al. [1999]   Belkin et al. [2004]
  Boston Housing [13]     25                     20.2±14.7       4.3±11.3                 2.4±5.4
  California Housing [8]  500                    8.4±6.9         2.7±3.0                  3.9±12.3
                          2,500                  25.9±8.3        0.2±0.3                  0.0±0.0
                          5,000                  17.2±8.7        0.0±0.0                  0.0±0.0
                          20,000                 22.0±11.0       —                        —
  kin-32fh [32]           2,500                  9.4±3.7         2.2±2.6                  2.7±3.1
                          8,000                  18.4±5.9        0.5±0.5                  0.9±0.7
  Elevators [18]          500                    14.4±10.4       1.5±2.7                  2.6±7.7
                          2,500                  9.0±6.9         2.2±2.9                  0.0±0.0
                          15,000                 9.7±5.8         —                        —

The number in brackets after the name indicates the input dimensionality of the data set. The number of training examples was m = 481 for the Boston Housing data set and m = 25 for the other tasks. The number of unlabeled examples was u = 25 for the Boston Housing data set and varied from u = 500 to the maximum of 20,000 examples for the California Housing data set. For u ≥ 10,000, the algorithms of Chapelle et al. [1999] and Belkin et al. [2004] did not terminate within the time period of our experiments.

5 Experimental Results

This section reports the results of our experiments with the transductive regression algorithm just presented with several data sets. For comparison, we also implemented the algorithm of Chapelle et al. [1999] and that of Belkin et al.
[2004], which are among the very few algorithms described in the literature dealing specifically with the problem of transductive regression. For the algorithm of Chapelle et al. [1999], we in fact derived and implemented a dual solution not described in the original paper. With the notation used in that paper, it can be shown that

C = I - \hat{K} \hat{K}^\top (\hat{K} \hat{K}^\top + \gamma I)^{-1}.   (27)

Our comparisons were made using several publicly available regression data sets: Boston Housing; kin-32fh, a data set in the Kinematics family with high unpredictability or noise; California Housing; and Elevators [Torgo, 2006]. For the Boston Housing data set, we used the same partitioning of the training and test sets as in [Chapelle et al., 1999]: 481 training examples and 25 test examples. The input variables were normalized to have zero mean and unit variance. For the kin-32fh, California Housing, and Elevators data sets, 25 training examples were used with varying (large) amounts of test examples: 2,500 and 8,000 for kin-32fh; from 500 up to 20,000 for California Housing; and from 500 to 15,000 for Elevators. The experiments were repeated for 100 random partitions of training and test sets. The kernels used with all algorithms were Gaussian kernels. To measure the improvement produced by the transductive inference algorithms, we used kernel ridge regression as a baseline. The optimal values for the width σ of the Gaussian and the ridge parameter 1/C were determined using cross-validation. These parameters were then fixed at these values. The remaining parameters of our algorithm, r and C′, were determined using a grid search and cross-validation. The parameters of the algorithms of Chapelle et al. [1999] and Belkin et al. [2004] were determined in the same way. Alternatively, the parameters could be selected using the explicit VC-dimension generalization bound of Corollary 1. For our algorithm, we found the best values of r to be typically among the 2.5% smallest distances between training and test points.
Thus, each estimate label was determined by only a small number of labeled points. For our algorithm, we experimented both with the dual solution using Gaussian kernels, and with the primal solution using an empirical Gaussian kernel map as described in Section 4.2.1. The results obtained were very similar; however, the primal method was dramatically faster since it required the inversion of relatively small-dimensional matrices even for a large number of unlabeled examples. For consistency, all the results reported for our method relate to the dual solution, except for those with very large u, e.g., u ≥ 10,000, where the dual method was too time-consuming. Table 1 shows the results of our experiments. For each data set and each algorithm, the relative improvement in mean squared error (MSE) with respect to the baseline, averaged over the random partitions, is indicated, followed by its standard deviation. Some improvements were small or not statistically significant. In general, we observed no significant performance improvement over the baseline on any of these data sets using the Laplacian regularized least squares method of Belkin et al. [2004]. We note that, while positive classification results have been previously reported for this algorithm, no transductive regression experimental result seems to have been published for it. Our results for the method of Chapelle et al. [1999] match those reported by the authors for the Boston Housing data set (both absolute and relative MSE). Our algorithm achieved a significant improvement of the MSE in all data sets and for different amounts of unlabeled data and was shown to be practical for large data sets of 20,000 test examples. This matches many real-world situations where the amount of unlabeled data is orders of magnitude larger than that of labeled data.

6 Conclusion

We presented a general study of transductive regression.
We gave new and general explicit error bounds for transductive regression and described a simple and general algorithm inspired by our bound that can scale to relatively large data sets. The results of our experiments show that our algorithm achieves a smaller error on several tasks compared to other previously published algorithms for transductive regression. The problem of transductive regression arises in a variety of learning contexts, in particular for learning node labels of very large graphs such as the web graph. This leads to computational problems that may require approximations or new algorithms. We hope that our study will be useful for dealing with these and other similar transductive regression problems.

References

Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: a geometric framework for learning from examples. Technical Report TR-2004-06, University of Chicago, 2004.
Kristin Bennett and Ayhan Demiriz. Semi-supervised support vector machines. NIPS 11, pages 368–374, 1998.
Olivier Chapelle, Vladimir Vapnik, and Jason Weston. Transductive inference for estimating values of functions. NIPS 12, pages 421–427, 1999.
Adrian Corduneanu and Tommi Jaakkola. On information regularization. In Christopher Meek and Uffe Kjærulff, editors, Proceedings of the Nineteenth Annual Conference on Uncertainty in Artificial Intelligence, pages 151–158, 2003.
Corinna Cortes and Mehryar Mohri. On transductive regression. Technical Report TR2006-883, Courant Institute of Mathematical Sciences, New York University, November 2006.
Philip Derbeko, Ran El-Yaniv, and Ron Meir. Explicit learning curves for transduction and application to clustering and compression algorithms. J. Artif. Intell. Res. (JAIR), 22:117–142, 2004.
Thore Graepel, Ralf Herbrich, and Klaus Obermayer. Bayesian transduction. NIPS 12, 1999.
Thorsten Joachims. Transductive inference for text classification using support vector machines.
In Ivan Bratko and Saso Dzeroski, editors, Proceedings of ICML-99, 16th International Conference on Machine Learning, pages 200–209. Morgan Kaufmann Publishers, San Francisco, US, 1999.
Gert R. G. Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, and Michael I. Jordan. Learning the kernel matrix with semidefinite programming. J. Mach. Learn. Res., 5:27–72, 2004. ISSN 1533-7928.
Bernhard Schölkopf and Alex Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
Dale Schuurmans and Finnegan Southey. Metric-based methods for adaptive model selection and regularization. Machine Learning, 48:51–84, 2002.
Luís Torgo. Regression datasets, 2006. http://www.liacc.up.pt/~ltorgo/Regression/DataSets.html.
Vladimir N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer, Berlin, 1982.
Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.
Dengyong Zhou, Jiayuan Huang, and Bernhard Schölkopf. Learning from labeled and unlabeled data on a directed graph. In L. De Raedt and S. Wrobel, editors, Proceedings of ICML-05, pages 1041–1048, 2005.
Xiaojin Zhu, Jaz Kandola, Zoubin Ghahramani, and John Lafferty. Nonparametric transforms of graph kernels for semi-supervised learning. NIPS 17, 2004.
2006
76
3,099
Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis

Amit Gore and Shantanu Chakrabartty
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823
{goreamit,shantanu}@egr.msu.edu

Abstract

A key challenge in designing analog-to-digital converters for cortically implanted prostheses is to sense and process the high-dimensional neural signals recorded by micro-electrode arrays. In this paper, we describe a novel architecture for analog-to-digital (A/D) conversion that combines Σ∆ conversion with spatial de-correlation within a single module. The architecture, called multiple-input multiple-output (MIMO) Σ∆, is based on a min-max gradient descent optimization of a regularized linear cost function that naturally lends itself to an A/D formulation. Using an online formulation, the architecture can adapt to slow variations in cross-channel correlations, observed due to relative motion of the micro-electrodes with respect to the signal sources. Experimental results with real recorded multi-channel neural data demonstrate the effectiveness of the proposed algorithm in alleviating cross-channel redundancy across electrodes and performing data compression directly at the A/D converter.

1 Introduction

The design of cortically implanted neural prosthetic sensors (CINPS) is an active area of research in the rapidly emerging field of brain-machine interfaces (BMI) [1, 2]. The core technology used in these sensors is micro-electrode arrays (MEAs), which facilitate real-time recording from thousands of neurons simultaneously. These recordings are then actively processed at the sensor (shown in Figure 1) and transmitted to an off-scalp neural processor which controls the movement of a prosthetic limb [1]. A key challenge in designing implanted integrated circuits (ICs) for CINPS is to efficiently process the high-dimensional signals generated at the interface of the micro-electrode arrays [3, 4].
Sensor arrays consisting of more than 1000 recording elements are common [5, 6], which significantly increases the transmission rate at the sensor. A simple strategy of recording, parallel data conversion, and transmission of the recorded neural signals (at a sampling rate of 10 kHz) can easily exceed the power dissipation limit of 80 mW/cm² determined by local heating of biological tissue [7]. In addition to increased power dissipation, a high transmission rate also adversely affects the real-time control of neural prostheses [3]. One solution that has been proposed by several researchers is to perform compression of the neural signals directly at the sensor, to reduce its wireless transmission rate and hence its power dissipation [8, 4]. In this paper we present an approach where de-correlation, or redundancy elimination, is performed directly at the analog-to-digital converter. It has been shown that neural cross-talk and common-mode effects introduce unwanted redundancy at the output of the electrode array [4]. As a result, neural signals typically occupy only a small sub-space within the high-dimensional space spanned by the micro-electrode signals. An optimal strategy for designing a multi-channel analog-to-digital converter is to identify and operate within the sub-space spanned by the neural signals and, in the process, eliminate cross-channel redundancy. To achieve this goal, in this paper we propose to use large margin principles [10], which have been highly successful in high-dimensional information processing [11, 10]. Our approach will be to formalize a cost function consisting of the L1 norm of an internal state vector whose gradient updates naturally lend themselves to a digital time-series expansion.

Figure 1: Functional architecture of a cortically implanted neural prosthesis illustrating the interface of the data converter to micro-electrode arrays and signal processing modules.
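A quick back-of-the-envelope calculation makes the transmission problem concrete. The 1000-channel count and the 10 kHz sampling rate come from the text above; the 16-bit sample width is our own assumption for illustration.

```python
def raw_data_rate_bps(channels, fs_hz, bits_per_sample):
    # Raw uncompressed telemetry rate if every channel is digitized
    # and transmitted in parallel, in bits per second.
    return channels * fs_hz * bits_per_sample

# 1000 electrodes x 10 kHz x 16 bits (assumed) = 160 Mbit/s of raw
# telemetry, which motivates compressing at (or before) the converter.
rate = raw_data_rate_bps(1000, 10_000, 16)
```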
Within this framework the correlation distance between the channels will be minimized, which amounts to searching for signal spaces that are maximally separated from each other. The architecture, called the multiple-input multiple-output (MIMO) Σ∆ converter, is the first reported data conversion technique to embed large margin principles. The approach, however, is generic and can be extended to designing higher-order ADCs. To illustrate the concept of MIMO A/D conversion, the paper is organized as follows: Section 2 introduces a regularization framework for the proposed MIMO data converter and presents the min-max gradient descent approach. Section 3 applies the technique to simulated and recorded neural data. Section 4 concludes with final remarks and future directions.

2 Regularization Framework and Generalized Σ∆ Converters

In this section we introduce an optimization framework for deriving MIMO Σ∆ converters. For the sake of simplicity we will first assume that the input to the converter is an M-dimensional vector x ∈ R^M, where each dimension represents a single channel of the multi-electrode array. It is also assumed that the vector x is stationary with respect to discrete time instances n. The validity and limitations of this assumption are explained briefly at the end of this section. Also denote by A ∈ R^{M×M} a linear transformation matrix and by w ∈ R^M a regression weight vector. Consider the following optimization problem:

min_w f(w, A) (1)

where

f(w, A) = |w|⊤1 − w⊤Ax (2)

and 1 represents a column vector whose elements are all unity. The cost function in equation (2) consists of two terms: the first is an L1 regularizer which constrains the norm of the vector w, and the second maximizes the correlation between the vector w and the input vector x transformed by the linear projection A. The choice of the L1 norm and the form of the cost function in equation (2) will become clear when we present the corresponding gradient update rule.
To ensure that the optimization problem in equation (1) is well defined, the input vector will be assumed to be bounded, ||x||∞ ≤ 1. Under this bounded condition, the closed-form solution to the optimization problem in equation (1) can be found to be w* = 0. From the perspective of A/D conversion, we will show that the iterative steps leading towards the solution of the optimization problem in equation (1) are more important than the final solution itself. Given an initial estimate of the state vector w[0], the online gradient descent step for minimizing (1) at iteration n is given by

w[n] = w[n−1] − η ∂f/∂w (3)

where η > 0 is defined as the learning rate. The choice of the L1 norm in the optimization function in equation (1) ensures that for η > 0 the iteration (3) exhibits oscillatory behavior around the solution w*. Combining equation (3) with equation (2), the following recursion is obtained:

w[n] = w[n−1] + η(Ax − d[n]) (4)

where

d[n] = sgn(w[n−1]) (5)

and sgn(u) denotes an element-wise signum operation, such that d[n] ∈ {+1, −1}^M represents a digital time-series. The recursion in (3) represents the update step of M first-order Σ∆ converters [9] coupled together by the linear transform A. If we assume that the norm of the matrix is bounded, ||A||∞ ≤ 1, it can be shown that ||w[n]||∞ < 1 + η. After N update steps the recursion given by equation (4) yields

Ax − (1/N) Σ_{n=1}^{N} d[n] = (1/(ηN)) (w[N] − w[0]) (6)

which, using the bounded property of w, asymptotically leads to

(1/N) Σ_{n=1}^{N} d[n] → Ax (7)

as N → ∞. Therefore, consistent with the theory of Σ∆ conversion [9], the moving average of the digital vector sequence d[n] converges to the transformed input vector Ax as the number of update steps N increases. It can also be shown that N update steps yield a digital representation which is accurate to log₂(N) bits.

Figure 2: Architecture of the proposed first-order MIMO Σ∆ converter.
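The recursion in equations (4)–(5) is simple enough to simulate directly. The sketch below is our own illustration in Python: it runs M coupled first-order Σ∆ loops and lets one check the convergence claim of equation (7) numerically.

```python
import numpy as np

def mimo_sigma_delta(x, A, n_steps=4096, eta=1.0):
    # Equations (4)-(5):
    #   d[n] = sgn(w[n-1]),  w[n] = w[n-1] + eta * (A x - d[n]).
    # Returns the digital time series d, an (n_steps x M) array in {-1, +1}.
    w = np.zeros(len(x))
    d = np.empty((n_steps, len(x)))
    target = A @ x
    for n in range(n_steps):
        d[n] = np.where(w >= 0, 1.0, -1.0)   # element-wise signum
        w = w + eta * (target - d[n])
    return d
```

Per equation (7), the running mean of d approaches Ax; by equation (6) the residual after N steps is bounded by ||w[N] − w[0]||/(ηN), i.e. on the order of 1/N.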
2.1 Online adaptation and compression

The next step is to determine the form of the matrix A, which parameterizes the family of linear transformations spanning the signal space. The aim of optimizing A is to find a multi-channel signal configuration that is maximally separated. For this purpose we designate one channel as a reference relative to which all distances/correlations are measured. This is unlike independent component analysis (ICA) based approaches [12], where the objective is to search for a maximally independent signal space including the reference channel. Even though several forms of the matrix A = [a_ij] could be chosen, for reasons which will be discussed later in this paper the matrix A is chosen to be lower triangular, with a_ij = 0 for i < j and a_ij = 1 for i = j. The choice of a lower triangular matrix ensures that A is always invertible. It also implies that the first channel is unaffected by the transform A and serves as the reference channel. The problem of compression, or redundancy elimination, is therefore to optimize the cross-elements a_ij, i ≠ j, such that the cross-correlation terms in the optimization function given by equation (1) are minimized. This can be written as a min-max optimization criterion, where an inner optimization performs analog-to-digital conversion while an outer loop adapts the linear transform matrix A so as to maximize the margin of separation between the respective signal spaces:

max_{a_ij, i≠j} (min_w f(w, A)) (8)

In conjunction with the gradient descent steps in equation (4), the update rule for the elements of A follows a gradient ascent step given by

a_ij[n] = a_ij[n−1] − ε w_i[n] x_j; ∀i > j (9)

where ε is a learning rate parameter. The update rule in equation (9) can be made amenable to hardware implementation by considering only the signs of the regression vector w[n] and the input vector x:

a_ij[n] = a_ij[n−1] − ε d_i[n] sign(x_j); ∀i > j.
(10)

The update rule in equation (10) bears a strong resemblance to online update rules used in independent component analysis (ICA) [12, 13]. The difference, however, is that the proposed technique integrates data conversion with spatial decorrelation/compression. The output of the MIMO Σ∆ converter is a digital stream whose pulse density is proportional to the transformed input data vector:

(1/N) Σ_{n=1}^{N} d[n] → A[n]x (11)

By construction the MIMO converter produces a digital stream whose pulse density contains only non-redundant information. To achieve compression, some of the digital channels can be discarded (based on a relative energy criterion) and can also be shut down to conserve power. The original signal can be reconstructed from the compressed digital stream by applying the inverse transformation A⁻¹:

x̂ = (1/N) A[n]⁻¹ Σ_{n=1}^{N} d[n]. (12)

An advantage of using a lower triangular form for the linear transformation matrix A with unit diagonal elements is that its inverse is always well defined. Thus signal reconstruction from the output of the analog-to-digital converter is also always well defined. Since the transformation matrix A is continually being updated, the information describing the linear transform also needs to be periodically transmitted to ensure faithful reconstruction at the external prosthetic controller. However, analogous to many naturally occurring signals, the underlying statistics of the multi-dimensional signal change much more slowly than the signal itself. Therefore the transmission of the matrix A can be performed at a much lower rate than the transmission of the compressed neural signals. Similar to conventional Σ∆ conversion [9], the framework for the MIMO Σ∆ can be extended to time-varying input vectors under a high-oversampling assumption [9].
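Putting equations (4), (10), and (12) together, the full convert-adapt-reconstruct loop can be sketched as follows. This is our own toy illustration of the update rules in Python, not the authors' implementation; function names and learning-rate defaults are ours.

```python
import numpy as np

def mimo_convert_adapt(X, eta=1.0, eps=1e-5):
    # X: M x N array of channel samples (assumed roughly stationary).
    # Runs the Sigma-Delta recursion (4) while adapting the strictly
    # lower-triangular entries of A with the sign-sign rule (10).
    M, N = X.shape
    A = np.eye(M)                       # lower triangular, unit diagonal
    w = np.zeros(M)
    d = np.empty((N, M))
    for n in range(N):
        x = X[:, n]
        d[n] = np.where(w >= 0, 1.0, -1.0)
        w = w + eta * (A @ x - d[n])
        for i in range(1, M):           # equation (10): only i > j updated
            for j in range(i):
                A[i, j] -= eps * d[n, i] * np.sign(x[j])
    return d, A

def reconstruct(d, A):
    # Equation (12): x_hat = A^{-1} * (1/N) * sum_n d[n].
    return np.linalg.solve(A, d.mean(axis=0))
```

Because A keeps a unit diagonal and zero upper triangle, the solve in `reconstruct` is always well posed, matching the invertibility argument in the text.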
For a MIMO A/D converter, the oversampling ratio (OSR) is defined as the ratio of the update frequency fs to the maximum Nyquist rate amongst all elements of the input vector x[n]. The resolution of the MIMO Σ∆ is likewise determined by the OSR, as log₂(OSR) bits, and during the oversampling period the input signal vector can be assumed to be approximately stationary. For a time-varying input vector x[n] = {x_j[n]}, j = 1, ..., M, the matrix update in equation (10) can be generalized after N steps as

(1/N) a_ij[N] = ε (1/N) Σ_{n=1}^{N} d_i[n] sgn(x_j[n]); ∀i > j. (13)

Thus, if the norm of the matrix A is bounded, then asymptotically as N → ∞ equation (13) implies that the cross-channel correlation between the digital output and the sign of the input signal approaches zero. This is similar to formulations in ICA where higher-order de-correlation is achieved using non-linear functions of random variables [12]. The architecture of the MIMO Σ∆ converter implementing recursions (4) and (11) is shown in Figure 2. As shown in Figure 2, the regression vector w[n] within the framework of the MIMO Σ∆ represents the output of the Σ∆ integrator. All the adaptation and linear transformation steps can be implemented in analog VLSI, with the adaptation steps implemented either using multiplying digital-to-analog converters or floating-gate synapses. Even though any channel can be chosen as the reference channel, our experiments indicate that the channel with maximum cross-correlation and maximum signal power serves as the best choice.

Figure 3: Functional verification of the MIMO Σ∆ converter on artificially generated multi-channel data. (a) Data presented to the MIMO Σ∆ converter. (b) Analog representation of the digital output produced by the MIMO converter.
Figure 4: Reconstruction performance in terms of mean square error computed using artificial data for different OSR.

3 Results

The functionality of the proposed MIMO sigma-delta converter was verified using artificially generated data and real multi-channel recorded neural data. The first set of experiments used artificially generated 8-channel data. Figure 3(a) illustrates the multi-channel data, where each channel was obtained by random linear mixing of two sinusoids with frequencies 20 Hz and 40 Hz. The multi-channel data was presented to a MIMO sigma-delta converter implemented in software. The equivalent analog representation of the pulse-density-encoded digital stream was obtained using a moving-window averaging technique with window size equal to the oversampling ratio (OSR). The resulting analog representation of the ADC output is shown in Figure 3(b). It can be seen in the figure that after the initial adaptation steps the output corresponding to the first two channels converges to the fundamental sinusoids, whereas the rest of the digital streams converge to an equivalent zero output. This simple experiment demonstrates the functionality of the MIMO sigma-delta in eliminating cross-channel redundancy. The first two digital streams were used to reconstruct the original recording using equation (12). Figure 4 shows the reconstruction error averaged over a time window of 2048 samples, showing that the error indeed converges to zero as the MIMO converter adapts. Figure 4 also shows the error curves for different OSR. It can be seen that even though a better reconstruction error can be achieved by using a higher OSR, the adaptation procedure compensates for errors introduced due to low resolution. In fact, the reconstruction performance is optimal for an intermediate OSR.
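The "analog representation" used in Figures 3(b) and 5(b) is just a moving-window average of the ±1 bitstream with window length equal to the OSR. A minimal sketch (function name ours):

```python
import numpy as np

def pulse_density_to_analog(d, osr):
    # Moving-window average of the digital stream d (n_steps x M),
    # with window size equal to the oversampling ratio (OSR).
    kernel = np.ones(osr) / osr
    smoothed = [np.convolve(d[:, m], kernel, mode="valid")
                for m in range(d.shape[1])]
    return np.stack(smoothed, axis=1)
```

With `mode="valid"` each output sample is the mean of a full OSR-length window, so a constant ±1 stream decodes to a constant ±1 level.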
Figure 5: Functional verification of the MIMO sigma-delta converter for multi-channel neural data: (a) original multi-channel data; (b) analog representation of the digital output produced by the converter.

The multi-channel experiments were repeated with eight-channel neural data recorded from the dorsal cochlear nucleus of adult guinea pigs. The data was recorded at a sampling rate of 20 kHz and at a resolution of 16 bits. Figure 5(a) shows a clip of the multi-channel recording for a duration of 0.5 seconds. It can be seen from the highlighted portion of Figure 5(a) that the data exhibits a high degree of cross-channel correlation. As in the first set of experiments, the MIMO converter eliminates spatial redundancy between channels, as shown by the analog representation of the reconstructed output in Figure 5(b). An interesting observation in this experiment is that even though the statistics of the input signals vary in time, as shown in Figures 5(a) and (b), the transformation matrix A remains relatively stationary during the conversion, which is illustrated by the reconstruction error graph in Figure 6. This validates the principle of operation of the MIMO conversion, whereby the multi-channel neural recordings lie on a low-dimensional manifold whose parameters are relatively stationary with respect to the signal statistics.

Figure 6: Reconstruction performance in terms of mean square error computed using neural data for different OSR.

Figure 7: Demonstration of common-mode rejection performed by the MIMO Σ∆: (a) original multi-channel signal at the input of the converter; (b) analog representation of the converter output; (c) a magnified clip of the output produced by the converter illustrating preservation of neural information.

The last set of experiments demonstrates the ability of the proposed MIMO converter to reject common-mode disturbances across all channels.
Rejection of common-mode signals is one of the most important requirements for processing neural signals, whose amplitudes range from 50 µV to 500 µV, whereas the common-mode interference resulting from EMG or electrical coupling can be as high as 10 mV [14]. Therefore most micro-electrode arrays use bio-potential amplifiers to enhance the signal-to-noise ratio and common-mode rejection. For this set of experiments, the recorded neural data obtained from the previous experiment was contaminated by an additive 60 Hz sinusoidal interference of amplitude 1 mV. The results are shown in Figure 7, illustrating that the reference channel absorbs all the common-mode disturbance whereas the neural information is preserved in the other channels. In fact, it can be shown theoretically that the common-mode rejection ratio of the proposed MIMO ADC depends only on the OSR and is given by 20 log₁₀(OSR).

4 Conclusion

In this paper we presented a novel MIMO analog-to-digital conversion algorithm with application to multi-channel neural prostheses. The roots of the algorithm lie within the framework of large margin principles, where the data converter maximizes the relative distance between the signal spaces corresponding to different channels. Experimental results with real multi-channel neural data demonstrate the effectiveness of the proposed method in eliminating cross-channel redundancy and hence reducing the data throughput and power dissipation requirements of a multi-channel biotelemetry sensor. Several open questions need to be addressed as a continuation of this research, including the extension of the algorithm to second-order Σ∆ architectures, the embedding of kernels into the ADC formulation, and the reformulation of the update rule to perform ICA directly in the ADC.

Acknowledgments

This work is supported by a grant from the National Institutes of Health (R21NS047516-01A2). The authors would also like to thank Prof.
Karim Oweiss for providing multi-channel neural data for the MIMO ADC experiments.

References

[1] Kennedy, P. R., R. A. Bakay, M. M. Moore, K. Adams, and J. Goldwaithe. Direct control of a computer from the human central nervous system. IEEE Trans. Rehabil. Eng., 8:198–202, 2000.
[2] J. Carmena, M. Lebedev, R. E. Crist, J. E. O'Doherty, D. M. Santucci, D. Dimitrov, P. Patil, C. S. Henriquez, and M. A. Nicolelis. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol., vol. 1, no. 2, pp. 193–208, Nov. 2003.
[3] G. Santhanam, S. I. Ryu, B. M. Yu, and K. V. Shenoy. High information transmission rates in a neural prosthetic system. In Soc. Neurosci., 2004, Program 263.2.
[4] K. Oweiss, D. Anderson, and M. Papaefthymiou. Optimizing signal coding in neural interface system-on-a-chip modules. IEEE Conf. on EMBS, pp. 2016–2019, Sept. 2003.
[5] K. Wise et al. Wireless implantable microsystems: high-density electronic interfaces to the nervous system. Proc. of the IEEE, vol. 92, no. 1, pp. 76–97, Jan. 2004.
[6] Maynard EM, Nordhausen CT, Normann RA. The Utah intracortical electrode array: a recording structure for potential brain-computer interfaces. Electroencephalogr. Clin. Neurophysiol., 102:228–239, 1997.
[7] T. M. Seese, H. Harasaki, G. M. Saidel, and C. R. Davies. Characterization of tissue morphology, angiogenesis, and temperature in adaptive response of muscle tissue to chronic heating. Lab Investigation, vol. 78, no. 12, pp. 1553–1562, Dec. 1998.
[8] R. R. Harrison. A low-power integrated circuit for adaptive detection of action potentials in noisy signals. In Proc. 25th Ann. Conf. IEEE EMBS, Cancun, Mexico, Sep. 2003, pp. 3325–3328.
[9] J. C. Candy and G. C. Temes. Oversampled methods for A/D and D/A conversion. In Oversampled Delta-Sigma Data Converters. Piscataway, NJ: IEEE Press, 1992, pp. 1–29.
[10] Vapnik, V. The Nature of Statistical Learning Theory. New York: Springer-Verlag, 1995.
[11] Girosi, F., Jones, M., and Poggio, T.
Regularization theory and neural networks architectures. Neural Computation, vol. 7, pp. 219–269, 1996.
[12] Hyvärinen, A. Survey on independent component analysis. Neural Computing Surveys, 2:94–128, 1999.
[13] A. Celik, M. Stanacevic, and G. Cauwenberghs. Gradient flow independent component analysis in micropower VLSI. Adv. Neural Information Processing Systems (NIPS 2005), Cambridge: MIT Press, 18, 2006.
[14] Pedram Mohseni and Khalil Najafi. A fully integrated neural recording amplifier with DC input stabilization. IEEE Transactions on Biomedical Engineering, vol. 51, no. 5, May 2004.
2006
77